After months of criticism over their refusal to share data on whether efforts to halt the spread of false news are working, Facebook officials told POLITICO this week that they may be ready to slowly open up.
Tessa Lyons, a product manager at Facebook, said that the company has invited representatives from the U.S. fact-checking groups it has partnered with to its Menlo Park, California, headquarters in early February to discuss, in part, what information could finally be shared. Doing so would represent a major shift for the social media giant.
While Facebook is unlikely to fully throw open the vault, the meeting could help thaw what has been, at times, a tense relationship with the fact-checking groups—FactCheck.org, PolitiFact, Snopes, The Weekly Standard, ABC News, and the AP—which it began enlisting shortly after the 2016 election to sweep the platform for misinformation.
Critics have said that Facebook’s refusal to share data on the year-old program not only makes it impossible for outsiders to gauge its effectiveness, but has robbed the fact-checkers of crucial information that could help them, ranging from how they should prioritize stories to check to which headline formulations work best on fact-check articles.
But at the meeting, Lyons said she expects to discuss, “What additional information can we provide to them to help them be more effective and increase their impact, and then what data can we all look at to understand the actual impact and results of the partnership.”
Alexios Mantzarlis, the director of the Poynter Institute's International Fact-Checking Network, has been particularly pointed in his criticism of Facebook’s stinginess with data, but said he was optimistic about the February summit.
“I am hopeful that this will be an opportunity to talk technical fixes, but above all to discuss transparency and data-sharing,” he said in an email, adding that he also hoped “that bringing all these people who care about the problem in one room will result in a commitment to share lessons from this fact-checking project with academics and the public.”
News of the meeting comes just two weeks after Facebook announced the first major change to the fact-checking program: dropping the red “disputed” flags that had been used to mark bogus stories. Instead, fact-checks from groups like PolitiFact and FactCheck.org are now posted as “Related Articles” in newsfeeds alongside the original false story, with bright “fact-check” badges calling attention to them.
Facebook first announced the change in a post on Medium, which said that internal tests showed users click on false stories with similar frequency under both approaches, but that the “related articles” treatment was more effective at stopping false stories from being shared. The post did not say how much more effective, though, or offer any underlying numbers or evidence.
Speaking to POLITICO, Lyons declined to elaborate, other than to say that the difference was “substantial.”
Fact-checkers and academics have greeted the new approach with cautious optimism, but also renewed frustration over Facebook’s refusal to be more open.
“From a gut perspective, I would think that having related articles is just a good thing, since it provides more information,” said Eugene Kiely, the director of FactCheck.org. He continued, though, “If you’re going to make changes like this, it would just be helpful to provide as much information as possible as to why it was done rather than just saying we found ‘related articles’ more effective than ‘disputed flags.’ How did you come to that conclusion? Show the work.”
Yale University psychologist David Rand, who has conducted research casting doubt on whether the disputed flags made people less likely to believe false reports, said that the change at least seemed to him like a good idea, though it was impossible to be sure without more information.
“The fact that they changed, that they’re responding to both external and internal evidence that the disputed tags weren’t great is very positive. We don’t know how positive, though, because we don’t know how well the new thing works,” he said. “It’s basically all taking their word for it.”
“If they release the real data,” he continued, “then we get a sense of the extent to which they’re actually helping fix the problem rather than just helping their brand.”
Facebook listed several reasons for the change in the Medium post, including that the disputed flags sometimes backfired, further entrenching beliefs. The post also said that the way the flags were presented required too many clicks before users actually got to the information refuting the false story.
Now that the fact-check story’s headline will appear right beside the false story, users will immediately be exposed to the correct information, said Grace Jackson, a Facebook user-experience researcher.
“We made it much more likely that people will find out quickly, hey, this is something that’s questionable,” she said.
Facebook also previously required two fact-checks to plant a flag next to a story—a high bar for overworked fact-checking staffs to reach. Now, just one fact-checking story will be required for a “related articles” link.
David Mikkelson, the publisher of Snopes, said in an email that he was pleased that the change would allow for more nuance.
Fact-checkers, he said, will now be able to better handle stories “that might contain a mixture of truth and falsity.”
He pointed to spurious reports that circulated following the recent Amtrak crash in Washington state claiming Antifa was possibly responsible, based on an earlier post on an anarchist message board.
“Such speculation is something that couldn't really be flagged as ‘false’ in Facebook under the previous system, but it can now be linked with explanatory articles detailing why such a claim is highly dubious,” he said.
Emily Vraga, a political communications professor at George Mason University, has in the past conducted research showing that the “related articles” approach is effective at correcting misinformation on Facebook. The difference, she said, is like leading a horse to water rather than forcing its head in.
“‘Disputed’ inherently tells us there’s conflict, that there are people fighting over something, and that kind of raises our hackles, our defenses if you will,” she said, “whereas just providing people more information could potentially get around that at least to some extent.”
Vraga said that she was “cautiously optimistic” about the new approach, but, like others, pointed to the lack of internal Facebook data. “It seems like it’s drawing from better practices,” she said, “but it’s really hard for us to know what’s going on in the black box of Facebook.”
Facebook’s Lyons said there were several reasons, including privacy, that more information has not yet been shared. For instance, she said, sometimes there have been requests for data that have been overly expansive, like a request “to see every post that has been used to share a hoax article.”
She said there were privacy issues involved in sharing that wide a swath of data, as well as practical problems with drawing meaning from so much information. One alternative, she said, may be disclosing data on the top pages that have shared a false story, “because there are trends you can get from that, and insights and understandings.”
As for data that would shed light on the overall effectiveness of the fact-checking program, she said, the issue is more complicated than it seems. It’s not as simple as measuring how stories are shared before and after being fact-checked, she said, because stories often spread the fastest when they’re first published, before they can be examined by fact-checkers.
Measuring effectiveness, she said, requires either a complex and still preliminary modeling system, or A/B testing.
In those A/B tests, after a false story is fact-checked, some users see the story with the disputed flag—or now, with a “related articles” fact check beside it—and it is downgraded in their newsfeed. For other users, the presentation is unaffected.
Using that test, Lyons said, Facebook found that, once a fact-checking group rates a story as false, future impressions on it are reduced by 80 percent, mainly because the fact-check is fed into Facebook’s algorithms, making the false story much less likely to appear in users’ newsfeeds.
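The headline figure from such an A/B test reduces to simple arithmetic: compare impressions on the same story between users who saw the fact-check treatment and users who did not. A minimal sketch of that calculation, with an illustrative function name and made-up figures (Facebook has not published its actual data or method):

```python
def impression_reduction(control_impressions: int, treated_impressions: int) -> float:
    """Percent drop in impressions for users shown the fact-check
    treatment (treated) versus users who saw the story unchanged (control)."""
    if control_impressions == 0:
        raise ValueError("control group recorded no impressions")
    return 100.0 * (control_impressions - treated_impressions) / control_impressions

# Illustrative only: a story seen 1,000 times by the control group but
# 200 times by the treated group matches the 80 percent reduction
# Facebook has cited.
print(impression_reduction(1000, 200))  # 80.0
```

This is exactly the kind of computation critics want to verify: without the underlying impression counts, outsiders cannot check how the 80 percent figure was derived.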
Facebook has put that number out publicly before, but not shared any of the underlying data that led to it.
At the February meeting with fact-checkers, Lyons said, she expects to discuss ways to improve the internal Facebook interface fact-checking groups use on the site, as well as how to address fact-checking images and videos, a major issue going forward.
“Misinformation is adapting and evolving,” she said. “One of the things that we’ll want to do is spend some time talking about what we think the challenges of 2018 will be.”
Kiely, the FactCheck.org director, said that he plans to attend the meeting and would like to see data that supports both the assertions that “related articles” works better than the “disputed” flags, and that impressions on false stories decline 80 percent after being fact-checked. He said he’d also like to know how shares and likes on fact-checks compare to shares and likes on the false stories that get debunked.
Lyons declined, however, to commit to a specific time by which Facebook would openly share more information.
“I would love it if there were one date that we were all going to solve this problem forever, but it’s just not the reality,” she said. “I think this is an ongoing increase in our efforts to continue to communicate and collaborate and share and improve. This isn’t the start of it, and it’s certainly not the stop of it.”