NEW YORK — For weeks, Facebook has been questioned about its role in spreading fake news. Now the company has mounted its most concerted effort to combat the problem.
Facebook said Thursday that it had begun a series of experiments to limit misinformation on its site. The tests include making it easier for its 1.8 billion members to report fake news and creating partnerships with outside fact-checking organizations to help it indicate when articles are false. The company is also changing some advertising practices to stop purveyors of fake news from profiting from it.
The social network is in a tricky position with these tests. It has long regarded itself as a neutral place where people can freely post, read, and view content, and it has said it does not want to be an arbiter of truth. But as its reach and influence have grown, it has had to confront questions about its moral obligations and ethical standards regarding what appears on the network.
Its experiments on curtailing fake news show that the company recognizes it has a deepening responsibility for what is on the site. But Facebook also must tread cautiously in making changes, because it is wary of exposing itself to claims of censorship.
“We really value giving people a voice, but we also believe we need to take responsibility for the spread of fake news on our platform,’’ said Adam Mosseri, a Facebook vice president who is in charge of its News Feed, the company’s method of distributing information to its global audience.
He said the changes — which, if successful, may be available to a wide audience — resulted from many months of internal discussion about how to handle false news articles shared on the network.
What effect Facebook’s moves will have on fake news is unclear. The issue is not confined to the social network, with a vast ecosystem of false news creators who thrive on online ads and who can use other social media and search engines to propagate their work. Google, Twitter, and message boards such as 4chan and Reddit have all been criticized for being part of that chain.
Still, Facebook has taken the most heat over fake news. The company has been under that spotlight since Nov. 8, when Donald Trump was elected the 45th president. Trump’s unexpected victory almost immediately led people to focus on whether Facebook had influenced the electorate, especially with the rise of hyperpartisan sites on the network and many examples of misinformation, such as a false article claiming Pope Francis had endorsed Trump for president, which was shared nearly 1 million times across the site.
Mark Zuckerberg, Facebook’s chief executive, has said he did not believe that the social network had influenced the election result, calling it “a pretty crazy idea.’’ Yet the intense scrutiny of the company on the issue has caused internal divisions and has pushed Zuckerberg to say he was trying to find ways to reduce the problem.
In an interview, Mosseri said Facebook did not think its News Feed had directly caused people to vote for a particular candidate, given that “the magnitude of fake news across Facebook is one fraction of a percent of the content across the network.’’
Facebook has changed the way its News Feed works before. In August, the company announced changes to marginalize what it considered “clickbait,’’ the sensational headlines that rarely live up to their promise. This year, Facebook also gave priority to content shared by friends and family, a move that shook some publishers that rely on the social network for much of their traffic. The company is also constantly fine-tuning its algorithms to serve what its users most want to see, an effort to keep its audience returning regularly.
This time, Facebook is making it easier to flag content that may be fake. Users can already report a post they dislike in their feed, but when Facebook asks for a reason, the site presents a list of limited or vague options, including the cryptic “I don’t think it should be on Facebook.’’ In the new experiment, users will be able to flag a post as fake news and, if they choose, message the friend who originally shared it to say the article is false.
If an article receives enough flags as fake, it can be directed to a coalition of groups that will fact-check it. The groups include Snopes, PolitiFact, The Associated Press, and ABC News. They will check the article and can mark it as a “disputed’’ piece, a designation that will be seen on Facebook.
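In rough outline, the reporting-and-review loop described above could be sketched as follows. This is an illustrative sketch only; the flag threshold, function names, and data structures are assumptions made for the example, not Facebook’s actual system.

```python
# Illustrative sketch of a flag-then-fact-check pipeline (hypothetical names and threshold).
from collections import Counter

FLAG_THRESHOLD = 100  # hypothetical number of fake-news reports before review
FACT_CHECKERS = ["Snopes", "PolitiFact", "The Associated Press", "ABC News"]

flag_counts = Counter()  # article_id -> number of fake-news flags received


def report_as_fake(article_id: str) -> None:
    """Record a user's fake-news flag; queue the article for review at the threshold."""
    flag_counts[article_id] += 1
    if flag_counts[article_id] == FLAG_THRESHOLD:
        send_to_fact_checkers(article_id)


def send_to_fact_checkers(article_id: str) -> None:
    """Hand the article to the partner coalition for review."""
    for organization in FACT_CHECKERS:
        print(f"Queued article {article_id} for review by {organization}")


def apply_verdict(article_id: str, disputed: bool) -> None:
    """Attach the 'disputed' label that readers would see on Facebook."""
    if disputed:
        print(f"Article {article_id} now carries a 'disputed' label")
```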
Partner organizations will not be paid, the companies said. Some characterized the fact-checking as an extension of their journalistic efforts.
“We actually regard this as a big part of our core mission,’’ James Goldston, president of ABC News, said in an interview. “If that core mission isn’t helping people regard the real from the fake news, I don’t know what our mission is.’’
Disputed articles will ultimately appear lower in the News Feed. If users still decide to share such an article, they will receive a pop-up reminding them that the accuracy of the piece is in question.
In another change to the News Feed, articles that many users read but do not share will rank lower in people’s feeds. Mosseri said a low ratio of shares to reads could be a signal that an article is misleading or of poor quality.
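A simplified sketch of how these ranking signals might combine is below. The weights, thresholds, and field names are assumptions made for illustration, not Facebook’s algorithm.

```python
# Illustrative sketch of ranking demotion and the share-time reminder (hypothetical values).
from dataclasses import dataclass


@dataclass
class Article:
    title: str
    reads: int
    shares: int
    disputed: bool  # marked "disputed" by the fact-checking coalition


def ranking_penalty(article: Article) -> float:
    """Return a demotion factor: disputed pieces and widely read but rarely
    shared pieces rank lower, as described above."""
    penalty = 0.0
    if article.disputed:
        penalty += 0.5  # hypothetical demotion for disputed articles
    share_ratio = article.shares / max(article.reads, 1)
    if article.reads > 1000 and share_ratio < 0.01:
        penalty += 0.25  # read often but seldom shared: possible quality signal
    return penalty


def confirm_share(article: Article) -> str:
    """The pop-up reminder shown before a user shares a disputed article."""
    if article.disputed:
        return f"The accuracy of '{article.title}' is in question. Share anyway?"
    return "Shared."
```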
Facebook must take something else into consideration: profits. Any action taken to reduce popular content, even if it is fake news, could hurt the company’s priority of keeping users engaged on the platform. People spend an average of more than 50 minutes per day on Facebook, and the company wants that number to grow.
Executives stressed that engagement is not the only consideration right now. “I recognize we have a greater responsibility than just building technology that information flows through,’’ Zuckerberg wrote in a post Thursday. “We have a responsibility to make sure Facebook has the greatest positive impact on the world.’’