
Echoes of an extreme past
On the 80th anniversary of the beginning of the Second World War, do we need to learn from history about how to deal with extremists who use communications technology to spread their hate?
At 11:15am on 3 September 1939, Prime Minister Neville Chamberlain told the British people that the nation was at war with Germany. The Nazi regime had sent a military force into Poland two days earlier and had not responded to the withdrawal demand from the British and French.
Ahead lay the most terrible war the world has ever known. Spread across six continents and all the world’s oceans, it would kill 70 to 85 million people, around 50 to 55 million of them civilians. Villages and towns were destroyed. Cities flattened. Nations devastated. Economies crippled.
Yet 10 years earlier, Adolf Hitler was just a fringe, extremist politician. Germany’s military and economy were still constrained by the Treaty of Versailles that had ended the First World War, and its government was led by politicians committed to democracy and cooperation with their European neighbours.
These politicians unwittingly did something that made it much easier for the Nazis – once Hitler was appointed Chancellor by President Hindenburg in January 1933 – to establish control in Germany, replace democracy with dictatorship, re-arm, invade three countries and essentially fight a world war on their terms, at a time of their choosing.
In the mid-1920s, the democratic politicians passed laws to regulate radio broadcasting in Germany – ironically, to stop extremists (including Nazis) using the technology to stir up trouble.
Back then, radio was what social media is today – a relatively new technology that made mass, instant communication possible for the first time. The democratic German government took regulation further in 1932 when they nationalised commercial radio into one Reich Broadcasting Corporation. Once in power, the Nazis almost instantly began to use the regulated system of radio broadcasting for their own ends.
Following white supremacist shootings in El Paso, US, and Christchurch, New Zealand, earlier this year, there have been renewed calls for governments to regulate social media and the internet in a similar fashion.
Regulation might make it harder for small groups of extremists to spread their hate online. But do we also need to learn from history and consider what might happen should such regulations fall into the hands of people who don’t adhere to democratic principles – populist politicians with barely concealed nationalist agendas, for example?
In August 1933, seven months after Hitler was appointed Chancellor, engineer Otto Griessing presented the first Volksempfänger (people’s receiver) at the Berlin International Radio Show.
Nazi Propaganda Minister Joseph Goebbels had commissioned this cheap radio, with only German and Austrian stations marked on the dial. “Now everyone in Germany could afford a radio and the Nazis could send their propaganda straight into the homes of ordinary people,” says Steven Luckert, curator of the United States Holocaust Memorial Museum. By the outbreak of the war, 70 per cent of German households owned a radio.
Luckert explains that Nazis were fascinated by technology’s potential to help them mobilise audiences, shape public opinion and get messages out to millions of people. Goebbels was on record at the time as saying that Nazis could not have taken power and used it as they did without radio.
Nazi propaganda experts targeted different groups – workers, farmers, women, families and children – with specific messages. They set up a national radio channel, ‘Großdeutscher Rundfunk’, which aired opera, operettas, light dancing music and classical concerts.
“The Nazis used entertainment programmes to lure more people into listening to Hitler’s speeches,” says University of British Columbia historian Heidi Tworek, author of ‘News from Germany: The Competition to Control World Communications, 1900-1945’.
Luckert adds that Germany’s new leaders realised this approach would be more effective than simply broadcasting direct propaganda. “Popular American jazz, pop and swing were banned as they were viewed by Nazi leaders as unsavoury African-American culture,” he says.
To the international community, the Nazis used radio broadcasts to present themselves as a peace-loving regime that wanted only equality and self-determination for Germany. Given Germany’s economic and military weakness in the early to mid-1930s, it was essential that the western powers be convinced the Nazis didn’t want war, merely redress for the perceived injustices of the Treaty of Versailles. “Even rearmament, strictly against the Peace Treaty’s terms, was presented as a way of putting millions of Germans back to work,” says Luckert.
Tworek says in the mid-1930s Nazi leaders also used radio broadcasts to counter criticism in the foreign press. “They would send out radio content in English, Arabic, Spanish... targeting messages to these audiences,” Luckert explains.
Propaganda broadcasts also went out to German-speaking people inside Austria, Czechoslovakia and Poland to stir up resentment against those governments and secure sympathy for Nazi ideals. Subsequent Nazi invasions could then be presented as attempts to protect disaffected peoples from oppressive rulers.
To justify the invasion of Poland in 1939, the Nazis claimed that Polish soldiers had attacked their radio tower at Gleiwitz, on the German side of the Polish border. At the end of the war, it became clear this attack was a false-flag operation, carried out by SS men dressed as Poles.
In August this year, soon after the El Paso shootings, the US government called together technology companies to discuss ways of dealing with online extremists. According to a report by CNN, President Trump was also contemplating giving the Federal Communications Commission new authorities to regulate social media companies.
Following the Christchurch attacks in March of this year, the New Zealand Herald reported that the NZ government was trying to convince world leaders to regulate social media globally. The Helen Clark Foundation, a New Zealand-based think tank, suggested any regulation should be carried out by an independent agency like those for mainstream media. By then, the Australian government had already passed laws to prohibit the hosting of violent material.
In the UK, the Conservative government has proposed giving Ofcom, the communications watchdog, powers to fine social media companies that don’t adequately protect children from harmful content. The European Union’s proposed Digital Services Act, due by the end of 2020, would give Brussels new powers to regulate hate speech, illegal content and political advertising. Germany’s NetzDG law, passed in 2017 and fully enforced since January 2018, forces internet companies to take down hateful or incendiary content that violates the country’s speech laws.
When NetzDG was announced, Heiko Maas, then German Justice Minister, stated that freedom of speech has boundaries. Tworek, though, warns that by treating what is really an international relations problem as an information problem, governments might end up harming freedoms in ways we could later regret.
From a regulatory perspective it can be difficult to distinguish hate speech from political or personal viewpoints, particularly when the hatred is couched in subtle language or smuggled in through manipulated images and memes. The line blurs further when supposedly legitimate politicians, commentators and even ordinary people voice similar sentiments, albeit in more moderate form, in their rants and missives.
“If authorities remove content, people could feel more justified in their beliefs and increasingly angry that their freedoms are being infringed in some way,” says Bjorn Ihler, an anti-extremism expert with the Kofi Annan Foundation. Ihler, who survived the 2011 Utøya mass shooting carried out by the neo-Nazi terrorist Anders Breivik, adds: “Extremists can then more easily present themselves as victims and their conspiracy theories become more believable.” An unscrupulous government might also, as the Nazis did, use regulations designed to deal with illegitimate extremist content to ban legitimate dissent and quash opposition.
Another concern is that too much regulation would push harmful content underground, onto the dark web, which is a network of unindexed sites.
“Extremist groups are using more sophisticated content and shifting tactics, so it can be difficult to keep up and harder to locate,” says Megan Squire, a computer scientist at Elon University who studies the online behaviour of the radical right.
One obvious answer is to get better at identifying extremist hate content.
In August, Facebook open-sourced algorithms it uses to identify terrorist imagery, graphic violence and child sexual exploitation. It is also working with the University of Maryland, Cornell University, the Massachusetts Institute of Technology and the University of California, Berkeley, to stop people making subtle alterations to banned photos and videos in order to evade safety systems.
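Systems like these typically rest on perceptual hashing: rather than an exact cryptographic checksum, each image is reduced to a compact fingerprint that barely changes when the picture is resized, re-compressed or lightly edited. Below is a minimal sketch of one classic fingerprint, the ‘difference hash’, written in Python with the Pillow imaging library – a toy stand-in for Facebook’s far more sophisticated matchers, included only to illustrate the principle.

    from PIL import Image

    def dhash(path: str, hash_size: int = 8) -> int:
        # Shrink to (hash_size+1) x hash_size greyscale pixels, discarding detail.
        img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                # Each bit records whether brightness rises or falls between
                # neighbouring pixels - a structural, not exact, description.
                bits = (bits << 1) | int(left < right)
        return bits

    def hamming(a: int, b: int) -> int:
        # Number of differing bits; a small distance means 'visually the same'.
        return bin(a ^ b).count("1")

    # Matching against a database of banned-image hashes tolerates small
    # distances, so cropping a border or tweaking colours does not evade it:
    # if hamming(dhash("upload.jpg"), banned_hash) <= 10: flag_for_review()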
AI can also be used to monitor hate speech, according to researchers at Cardiff University’s HateLab, who have been using it to track online abuse directed at Polish people living in the UK.
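HateLab’s models are not public, but the usual baseline for this kind of monitoring is a supervised text classifier trained on human-labelled posts. Here is a minimal sketch using Python’s scikit-learn library – the four training posts are invented placeholders, and a real system would train on thousands of labelled examples with human moderators in the loop.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented placeholder data (1 = abusive, 0 = benign).
    posts = ["you people should go back where you came from",
             "lovely weather in Cardiff today",
             "they are vermin and deserve what is coming",
             "great match last night, well played"]
    labels = [1, 0, 1, 0]

    # TF-IDF n-grams feeding a linear classifier: a simple, fast and
    # common baseline for abusive-language detection.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(posts, labels)

    # Posts scoring above a threshold go to a human moderator, because
    # classifiers confuse hate speech with heated-but-legitimate speech.
    score = model.predict_proba(["send them all home"])[0][1]
    print(f"abuse probability: {score:.2f}")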
Last year, researchers from Brandeis University, the US Army and the Massachusetts Institute of Technology announced they’d found a way to identify potential extremists before a person has even posted any content. Using statistical modelling and optimised search policies, they showed that people who become extremists tend to share similar online behaviours – which would also allow predictions about where a person will re-enter the network after being suspended.
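The study’s actual models are not reproduced here, but the underlying idea – scoring accounts on how they behave rather than on what they post – can be sketched with a simple classifier. Every feature name and number below is invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical behavioural features, none of them content-based:
    # [extremist accounts followed, posting burstiness, account age in days,
    #  fraction of activity late at night]
    X = np.array([[12, 0.9,   30, 0.7],   # later posted extremist content
                  [ 0, 0.1, 2000, 0.2],   # ordinary account
                  [ 8, 0.8,   60, 0.6],   # later posted extremist content
                  [ 1, 0.2, 1500, 0.3]])  # ordinary account
    y = np.array([1, 0, 1, 0])            # labels assigned retrospectively

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # A new account can then be risk-scored before it posts anything at all -
    # precisely what makes the approach both powerful and contentious.
    new_account = np.array([[10, 0.85, 45, 0.65]])
    print("risk score:", model.predict_proba(new_account)[0][1])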
London-based social enterprise Moonshot CVE has developed what it calls the redirect method: people searching for keywords linked to extremism are instead shown counter-information – adverts, videos – to challenge their perspectives, or links to mental-health counselling, since research shows violent extremists are more likely than the average person to have mental-health issues.
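Moonshot CVE’s keyword lists and ad placements are confidential, but the core matching logic is easy to sketch. The trigger phrases and URLs below are entirely invented placeholders.

    # Hypothetical redirect rules: risky search terms map to counter-content
    # or support services instead of extremist material.
    REDIRECTS = {
        "join white power movement": "https://example.org/former-extremist-stories",
        "race war manifesto": "https://example.org/counter-narratives",
        "feeling hopeless and angry": "https://example.org/mental-health-support",
    }

    def redirect(query: str) -> str | None:
        # Return counter-content for risky queries, None for everything else.
        q = query.lower()
        for trigger, destination in REDIRECTS.items():
            words = trigger.split()
            # Match when nearly all the trigger's words appear in the query;
            # real systems use curated keyword lists and ad-platform targeting.
            if sum(w in q for w in words) >= len(words) - 1:
                return destination
        return None

    print(redirect("how to join the white power movement near me"))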
Tworek thinks we need to better understand why social media networks enable certain content to go viral. Ihler suggests changing sorting algorithms on social-media platforms so they don’t automatically favour content with the most clicks, or constantly feed people similar content.
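Neither suggestion requires exotic technology. Here is a naive sketch of what such a re-weighted feed ranker might look like, damping raw click counts and penalising repetition – all the weights are picked arbitrarily for illustration.

    import math

    def rank(posts, recently_seen_topics):
        # Re-rank a feed so raw click counts matter less and topic
        # repetition matters more, per Ihler's suggestion.
        def score(post):
            # Log-damping stops runaway 'most-clicked wins' dynamics:
            # a million clicks is worth roughly twice a thousand, not 1,000x.
            engagement = math.log1p(post["clicks"])
            # Penalise topics the user has just been fed, breaking the loop
            # that keeps serving more of the same.
            penalty = 3.0 if post["topic"] in recently_seen_topics else 0.0
            return engagement - penalty
        return sorted(posts, key=score, reverse=True)

    feed = [{"id": 1, "topic": "politics", "clicks": 900_000},
            {"id": 2, "topic": "science", "clicks": 4_000},
            {"id": 3, "topic": "politics", "clicks": 50_000}]

    # The smaller science post now outranks the second politics post.
    print([p["id"] for p in rank(feed, recently_seen_topics={"politics"})])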
Last year, an organisation called ISD, which specialises in anti-extremist research, analysis and policy advice, suggested that any regulatory body in the UK might audit algorithms. The regulator would examine each algorithm’s purpose, constitution and policies, check the outcomes of the system, and identify what data is used to train the algorithm.
“As AI improves, experts will learn more about design flaws behind the algorithms and the biases, and use that knowledge to develop better algorithms,” says Ihler.
Squire thinks it will take a while to come up with a technology solution that can reliably distinguish between hate speech and legitimate content and, for now, we need many more trained people monitoring social media.
Ihler adds that we need to understand wider societal issues relating to online behaviour. “Online communities tend to separate and segregate into echo-chambers,” he says. “Here, similar ideas and views reinforce each other, and contrary ideas are not accepted. We need to create more diverse communities of ideas, where people can challenge each other in rational ways.”
This is the real problem. People join online communities to connect with like-minded individuals, express their feelings and vent their frustrations, without fear of ridicule and judgement. Extremists know this and use social media to stir up people’s emotions so they will take action that favours the extremist. Spread the word, align with a cause, join a group.
“Extremists will also give you a reason as to why you feel the way you do, someone to blame and a solution,” Squire explains.
In 1930s Germany, Nazi radio broadcasts blamed the West, whose greed, they said, had caused the Wall Street Crash and the economic depression that followed. They also accused the communists – who would supposedly capitalise on people’s hardship to divide and destroy the nation – and, most of all, the Jews, who, the Nazis claimed, pulled the strings of western capitalism and eastern Bolshevism for their own selfish ends. Today’s extremists use equally fabricated, twisted, irrational arguments to blame society’s ills on Muslims, foreigners, the West, liberal society, Jeremy Corbyn, Donald Trump, the European Union... the list of scapegoats is endless.
Luckert thinks the answer to all of this is for people to become more critical consumers of online information. “If people are not swayed by emotional outpourings, they will be more alert to the dangers of extremist speech,” he says.
However, if everyone becomes more internet-savvy and self-aware online, it won’t just be extremists who find it harder to influence us. Advertisers try to trigger us emotionally so we’ll buy their products, and politicians do the same to secure our votes or win support for a policy or idea.
The powers that be are unlikely to countenance any measures that would make this more difficult – for much the same reason their predecessors didn’t take a harder line against Hitler and the Nazis in the 1930s.
Back then, US commercial interests had realised that radio was perfect for advertising, and every country used it to encourage what those in charge believed to be correct sentiment, and to project a desired image and message to the world. Empowering people to see through these propaganda techniques, both then and now, might stop the extremists, but it would also be very bad for business.