Twitter algorithm amplifies right-leaning politics more than left, internal study finds
By Katherine Wiles · MarketWatch
While 90% of Republicans might believe that social media platforms censor political viewpoints, according to Pew Research Center, a new study from Twitter points to the opposite.
The study, published Thursday, was conducted by a team at Twitter that analyzed millions of tweets from April 1 to Aug. 15, 2020. The researchers focused on elected officials and news outlets from the U.S., Canada, France, Germany, Japan, Spain and the United Kingdom.
The study compared Twitter feeds set to display tweets in reverse chronological order with feeds set to display tweets ranked by an algorithm. Users have been able to choose between the two settings since 2016.
The researchers determined the party affiliation of elected officials based on public, third-party sources, like institutional websites. News outlets were placed on the political spectrum based on media bias ratings from two independent organizations, AllSides and Ad Fontes Media.
The study found that while political content of all stripes is amplified by the algorithm, content that leans toward the right is even more favored.
“In six out of seven countries — all but Germany — tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group,” Rumman Chowdhury, director of the META (Machine Learning Ethics, Transparency, and Accountability) team at Twitter, said in a statement.
Right-leaning news outlets also “see greater algorithmic amplification on Twitter compared to left-leaning news outlets,” she said.
The next step for her team is to determine why exactly the algorithm seems to promote more right-leaning content than left-leaning content. That is a much harder question to answer, Chowdhury said, because “it is a product of the interactions between people and the platform.”
“The ML Ethics, Transparency and Accountability (META) team’s mission, as researchers and practitioners embedded within a social media company, is to identify both, and mitigate any inequity that may occur,” she said.
Twitter got into hot water last year after users noticed its photo-cropping algorithm consistently cropped photos to focus on white faces over Black faces. The outcome was the same for stock photos, cartoon characters and even dogs.
The social media giant apologized for the algorithm, saying at the time that it had tested the feature extensively before releasing it and hadn’t found any racial or gender biases. Still, the company said it recognizes “that the way we automatically crop photos means there is a potential for harm. We should’ve done a better job of anticipating this possibility when we were first designing and building this product.”
Twitter has since changed the way its algorithm automatically crops photos.
Twitter is not the only social media platform that has struggled with algorithm bias.
A 2020 report from The Wall Street Journal revealed that when Facebook tweaked its news-feed algorithm in 2017 to display less political news, policy executives were concerned the change would have a disproportionate impact on right-leaning outlets, like the Daily Wire. Mark Zuckerberg reportedly approved a plan for engineers to change the algorithm so it would affect left-leaning outlets, like Mother Jones, more than previously planned.
The company said at the time that it “did not make changes with the intent of impacting individual publishers.”