Facebook did not act on own evidence of algorithm-driven extremism

According to a Wall Street Journal report, Facebook decided to take no significant action after internal research demonstrated that its algorithms were stoking extremism and division.

One of Facebook’s internal presentations from 2018 explicitly stated that its algorithms – which boost content that targeted users are more likely to interact with – were aggravating divisive behaviour and would continue to do so, the report said.

“Our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked, Facebook would feed users more and more divisive content in an effort to gain user attention and increase time on the platform,” one slide read.

A separate 2016 study written by Monica Lee, an internal research scientist, found that 64 per cent of people who had joined an extremist group on the platform did so because the group was promoted by Facebook’s automated recommendation tools.

In response to these findings, Facebook’s News Feed integrity team put forward proposals, including altering engagement-optimising algorithms and introducing other measures (such as temporary sub-groups for the most polarised discussions and 'Common Ground' groups to bring people together around politically neutral content). However, executives ultimately declined to make these changes, which risked reducing engagement and were therefore deemed “anti-growth”.

According to the report, Facebook policy head Joel Kaplan argued that the changes would disproportionately affect right-wing users, pages, and groups, which drive high levels of engagement on the platform. Kaplan served as White House Deputy Chief of Staff under President George W Bush and was widely perceived to have been hired by Facebook in an effort to warm relations with Republican lawmakers and fend off accusations of left-wing bias. He is thought to be largely responsible for Facebook’s controversial decision not to fact-check political ads.

In a statement to the media, Facebook said: “We’ve learned a lot since 2016 and are not the same company today. We’ve built a robust integrity team, strengthened our policies and practices to limit harmful content, and used research to understand our platform’s impact on society so we continue to improve.”

For years, Facebook has struggled to strike the right balance between free expression and safety on its platforms. Calls for stricter content moderation have intensified following cases of real-world violence associated with Facebook, such as the Christchurch terrorist attack, which was livestreamed on the platform and subsequently re-shared repeatedly. Facebook is in the process of establishing an independent oversight board to make final decisions on what content should be permitted on its platforms.
