Debate: Should we use technology to monitor social media?
Author & speaker
Profile: Andrea Kates
Andrea Kates is an authority on social media and author of the bestselling business innovation book ‘Find Your Next’. She is a member of the TED (Technology, Entertainment, Design) community and a featured 2012 TED speaker.
Author & broadcaster
Profile: Guy Clapperton
Guy Clapperton is a technology author and broadcaster specialising in social media and other forms of digital publishing. He is the author of ‘This is Social Media’ and his latest book - ‘The Smarter Working Manifesto’ - is published this month.
Following recent media announcements, we are becoming aware that people are developing tools to monitor the truth of online statements. But the first thing we need to discuss is whether this is scientifically possible to a degree where it could be implemented to any great effect. How do you triangulate information early enough, and piece it together well enough, to come up with a machine that can definitively recognise ‘the truth’?

When you think about the notion of ‘proxies for truth’ it’s worth remembering that in the United States we have been using the polygraph lie detector for decades, and yet only around a quarter of states accept the technology as reliable. We also have to ask whether people will accept the standard of such a technology, along with the more basic question: what will this technology look like?
The work that is being done may be in its infancy, but it is a step in the right direction. To really understand its value we need to go back to the era of the newspaper. The big difference between newspapers and the self-publishing end of the Internet - social media - is that newspapers had editors and fact checkers. These people filtered the facts, and so rather than crafting their own version of the truth, they at least attempted to arrive at some level of objective truth. In the era of the Internet, where there is little self-editing, ethical standards appear to have fallen to a low level. People act much more quickly on partial information than we were accustomed to with more traditional news outlets.
The danger of ‘living at the speed of Twitter’ is that harmful statements that are not true can spread unchecked. And so if there is the potential for frenzies over what is not true, you will end up with serious situations that could result in 1920s-style runs on financial markets, people ignoring natural disasters or misinformation campaigns such as those that operated during the revolution in Egypt.
It’s not our fault that we have been lured into life at the speed of Twitter, but it does come with its pitfalls. You can hit ‘send’ before you’ve done the self-filtering required to make a responsible statement. This rapidity of posting, with the potential to reach so many people so quickly, means that we are not always able to pause to reflect on what our natural ethical instincts are. It’s important to realise that we’re not talking about regrettable statements (which can be harmful enough). We’re talking about things that are demonstrably untrue, statements which filtering software will be able to identify. If it is a matter of putting in place something that stops people behaving antisocially I can’t see a downside. In addition we will have a level of security that protects us from our own failure to conduct adequate fact checking and a protection against the unintentional spreading of misinformation.
As citizens, it is tempting to feel that we are over-regulated, but needing technology to assist in that regulation is a downfall brought about by our own stupidity. We can see this lowering of ethical standards very clearly: most of us accept that it is wrong to walk into a music store and fill our pockets with CDs, yet ripping audio off the Internet is somehow treated as a lesser crime.
I’d like to think that the majority of us are self-aware to the point where we can recognise the potential for online misinformation to cause damage to others. But the evidence is we’re not, so we need the technology. Once the technology for scanning social media postings becomes reliable then I see no moral objection to it.
Most would accept that stealing and lying are wrong; so if the question comes down to whether we prefer people behaving in anarchic ways online or there being a form of monitoring, then I come down on the side of getting the technology in place, and the sooner the better.
Is the development of technology to determine the truth of social media statements a good thing? Well, it all depends on whether you think that social media is about truth in the first place. Personally, I don’t think that this is necessarily what it was developed for. More importantly, the first idea that emerges from the notion of having the veracity of our statements monitored by a machine is that it takes away our responsibility to do what all good journalists have hammered into them: we must do our own fact checking. As a journalist I take this very seriously: you have to be certain that what you put online, or into print, is correct. Social media users tend not to worry about this quite so much. And what this means is that you can sometimes get some really hot scoops on Twitter, for example. I found out that Michael Jackson had died on social media, although that was covered pretty much everywhere. But on the same day I also discovered that the actor Jeff Goldblum had died. Apparently, Jeff Goldblum discovered the same thing too, and he was understandably quite upset about it; he had to make a frantic round of phone calls to tell relatives that he was okay and was not, as had been reported, dead. For basic facts such as this, a facility that monitors the truth of your statements could be valuable.
On the other hand, the first few iterations of the technology are bound to be fallible, which is why I would rather trust people to take everything they see on social media with a pinch of salt and check their facts for themselves. Recently we’ve been asked to believe reports that Justin Bieber has died and Jony Ive has left Apple. These were pretty quickly dismissed as untrue because people looked at other sources for verification. At the moment there is a consortium of five universities researching how we can automate this process, and they have identified four fundamentally different types of misleading or untruthful statement. Interesting as this may be, I think that the whole thing is being taken too seriously.
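The cross-checking behaviour described above - dismissing a claim until other sources confirm it - can be sketched in a few lines of code. This is purely an illustrative toy, not the consortium's actual method: the function name, the source records and the agreement threshold are all assumptions made for the example, and real verification systems are far more sophisticated.

```python
def corroborated(claim: str, sources: list[dict], threshold: int = 2) -> bool:
    """Treat a claim as plausible only if at least `threshold`
    independent sources are reporting the same thing."""
    confirming = {s["name"] for s in sources if claim in s["reports"]}
    return len(confirming) >= threshold

# Hypothetical snapshot of what several outlets are currently reporting.
sources = [
    {"name": "wire-service", "reports": {"singer announces tour"}},
    {"name": "newspaper", "reports": {"singer announces tour"}},
    {"name": "anonymous-tweet", "reports": {"actor has died"}},
]

print(corroborated("singer announces tour", sources))  # two sources agree: True
print(corroborated("actor has died", sources))         # one unverified source: False
```

Even a crude rule like this captures why the Bieber and Ive rumours collapsed so quickly: readers who checked a second source found nothing to corroborate them.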
There are always online rumours that, say, Led Zeppelin are going to reform (again). Now, this sort of gossip hardly matters to anyone, with perhaps the exception of the members of the band itself, who presumably would have a deeper understanding of the situation than they could get from their Twitter accounts. And it might matter to the individual agents for the musicians too, although you have to remember that last year, when social media was awash with speculation about who would be the next Doctor Who, most of these rumours were spread by actors’ agents in a bid to raise their clients’ profiles.
Now, this is all harmless fun. But when it comes to serious issues such as the orchestration of riots, misinformation can be actively harmful. And so I can see the need in these cases for a control mechanism for truth on social media. I just don’t think that the problem will be solved by an automated system. The real solution is back where I started: get people not to take what they read on social media platforms as gospel, and encourage them to do their own fact checking.
If we reach a point where we are able to put such technology in place, then in cases where laws have been broken the best we can say is that it might produce contributory rather than conclusive evidence. This is no bad thing, but it is better still to remain cautious about anything presented as fact until we have reasonable proof. That way we stay responsible for our own actions without having to be watched by lie-detection technology.
All it will take to undermine online lie-detection is one high-profile instance of it going wrong. I would rather rely on people’s common sense, even though this is an area where it doesn’t appear to be terribly well applied at the moment.
Do you agree?
Social media postings should be automatically monitored
E&T magazine - Debate