SAN FRANCISCO (Reuters) – Instagram is adding an option for users to report posts they think are false, the company announced on Thursday, as the Facebook-owned (FB.O) photo-sharing site tries to stem misinformation and other abuses on its platform.
Posting false information is not banned on any of Facebook’s suite of social media services, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.
Facebook began using image-detection technology on Instagram in May to find content already debunked on its flagship app, and it also expanded its third-party fact-checking program to the photo-sharing service.
Results rated as false are removed from places where users seek out new content, like Instagram’s Explore tab and hashtag search results.
Facebook has 54 fact-checking partners working in 42 languages, but the program on Instagram is only being rolled out in the United States.
“This is an initial step as we work toward a more comprehensive approach to tackling misinformation,” said Stephanie Otway, a Facebook company spokeswoman.
Instagram has largely been spared the scrutiny associated with its parent company, which is in the crosshairs of regulators over alleged Russian attempts to spread misinformation around the 2016 U.S. presidential election.
But an independent report commissioned by the Senate Select Committee on Intelligence found that it was “perhaps the most effective platform” for Russian actors trying to spread false information since the election.
Russian operatives appeared to shift much of their activity to Instagram, where engagement outperformed Facebook, wrote researchers at New Knowledge, which conducted the analysis.
“Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” they said.
Instagram has also come under pressure to block health hoaxes, including posts trying to dissuade people from getting vaccinated.
Last month, UK-based charity Full Fact, one of Facebook’s fact-checking partners, called on the company to provide more data on how flagged content is shared over time, expressing concerns over the effectiveness of the program.
Reporting by Elizabeth Culliford and Katie Paul; Editing by Cynthia Osterman