
Twitter Says Steps To Curb Election Misinformation Worked

Twitter said on Thursday it would maintain some changes it had made to slow down the spread of election misinformation, saying they were working as intended.

Before Election Day, Twitter, Facebook and other social networks had announced a cascade of measures billed as protecting the integrity of the voting process.

For Twitter, those included more prominent warning labels on misleading or disputed claims and limits on how such claims could be shared.

Twitter said on Thursday that between October 27 and November 11, it had labeled about 300,000 tweets as containing "disputed and potentially misleading" information about the election. That represented 0.2% of all tweets related to the U.S. election in that time frame. However, the company declined to say how that compared to the volume of tweets labeled before October 27.

Of those 300,000 tweets, Twitter hid almost 500 behind warnings that users had to click past to read. To reply to those tweets or share them, users had to add their own comments, a requirement intended to give people pause. Twitter also stopped its algorithms from recommending those tweets.

Perhaps the most noticeable impact was on President Trump's account. Twitter hid more than a dozen of his tweets and retweets behind warnings between Election Day and November 7, when major media outlets called the election for former Vice President Joe Biden. The platform has stopped using the more aggressive labels since then, but has continued this week to put notices on many of Trump's tweets in which he made unsupported claims of voter fraud.

Still, false claims and conspiracy theories continue to circulate online, even as Twitter and Facebook have aggressively applied their rules.

That has left experts who track online misinformation questioning how effective warning labels are, noting that social media companies do not provide much data to quantify their impact.

On Thursday, Twitter gave some insight into that question. It said it had seen a 29% reduction in "quote tweeting" of labeled tweets, in which users add their own commentary. The company attributed the drop to a prompt warning users who tried to share those tweets that they might be spreading misleading information.

Read more on the security measures Twitter is keeping.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.