Bug That Showed Violent Content in Instagram Feeds Has Been Fixed, Meta Says
Instagram's parent company Meta apologized on Thursday after some users saw violent, graphic content in their Instagram Reels feeds. Meta attributed the issue to an error the company said it has addressed.
"We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended," a Meta spokesperson said in a statement provided to CNET. "We apologize for the mistake."
Meta went on to say the incident was an error unrelated to any content policy changes made by the company. Earlier this year, Instagram made significant changes to its user and content-creation policies, but those changes did not specifically address the filtering of inappropriate content displayed in feeds.
Meta has made its own content moderation changes recently, dismantling its fact-checking operation in favor of community-driven moderation. Amnesty International warned earlier this month that Meta's changes could increase the risk of promoting violence.
Read more: Instagram Could Spin Off Reels as a Standalone App, According to a Report
Meta says most graphic or disturbing imagery it flags is removed or covered with a warning label that users must click through to view the image. According to Meta, some content is also filtered out for users under the age of 18. The company says developing and refining its policies on violent, graphic imagery is an ongoing process carried out with the help of international experts.
Users posted on social media and on message boards, including Reddit, about some of the unwanted imagery they saw on Instagram, apparently because of the glitch. It included shootings, beheadings, people being struck by vehicles and other acts of violence.
Brooke Erin Duffy, a social media researcher and associate professor at Cornell University, said she was unconvinced by Meta's claim that the violent content issue was unrelated to its policy changes.
"Content moderation systems are never failsafe, whether they rely on AI or human labor," Duffy told CNET. "And while many speculated that Meta's moderation overhaul (announced last month) would create heightened levels of risk and vulnerability, yesterday's 'glitch' provided direct evidence of the costs of a less-moderated platform."
Duffy added that while social media platforms are difficult to moderate, "platform moderation guidelines act as a safety mechanism for users, especially those in marginalized communities. Meta's replacement of its existing systems with the Community Notes feature represents a step backward in terms of user protection."