The Meta Safety Advisory Council has written the company a letter about its concerns with its recent policy changes, including its decision to suspend its fact-checking program. In it, the council said that Meta's policy shift "risks prioritizing political ideologies over global safety imperatives." It highlights how Meta's position as one of the world's most influential companies gives it the power to shape not just online behavior, but also societal norms. The company risks "normalizing harmful behaviors and undermining years of social progress... by dialing back protections for protected communities," the letter reads. <br /> Facebook's Help Center describes the Meta Safety Advisory Council as a group of "independent online safety organizations and experts" from various countries. The company formed it in 2009 and consults with its members on issues related to public safety. <br /> Meta CEO Mark Zuckerberg announced the massive shift in the company's approach to moderation and speech earlier this year. In addition to revealing that Meta is ending its third-party fact-checking program and implementing X-style Community Notes — a move X CEO Linda Yaccarino applauded — he also said that the company is killing "a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse." Shortly after his announcement, Meta changed its hateful conduct policy to "allow allegations of mental illness or abnormality when based on gender or sexual orientation." 
It also removed a policy that prohibited users from referring to women as household objects or property and from calling transgender or non-binary people "it."<br /> The council says it commends Meta's "ongoing efforts to address the most egregious and illegal harms" on its platforms, but it also stressed that addressing "ongoing hate against individuals or communities" should remain a top priority for Meta, as it has ripple effects that go beyond its apps and websites. And since marginalized groups, such as women, LGBTQIA+ communities and immigrants, are disproportionately targeted online, Meta's policy changes could take away whatever made them feel safe and included on the company's platforms. <br /> As for Meta's decision to end its fact-checking program, the council explained that while crowd-sourced tools like Community Notes can address misinformation, independent researchers have raised concerns about their effectiveness. One report last year showed that posts with false election information on X, for instance, didn't display proposed Community Notes corrections, even as they racked up billions of views. "Fact-checking serves as a vital safeguard — particularly in regions of the world where misinformation fuels offline harm and as adoption of AI grows worldwide," the council wrote. "Meta must ensure that new approaches mitigate risks globally." <br /> This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-safety-advisory-council-says-the-companys-moderation-changes-prioritize-politics-over-safety-140026965.html?src=rss