‘You need to identify the tasks AI can help with’: Professor Katie Atkinson

Katie Atkinson, professor of computer science and dean of the School of Electrical Engineering, Electronics and Computer Science at the University of Liverpool, is pioneering the application of artificial intelligence to the legal process through ‘explainable AI’ systems that can make legal evaluations, reduce paperwork, save time and cut costs.

We probably all have an outdated mental image of the legal profession. Perhaps we think of ceiling-high bookcases overflowing with caramel-coloured, leather-bound tomes on case law. Possibly, subterranean archives with row upon row of rarely consulted metal filing cabinets. Even roll-top escritoires buried under sheaves of manuscript documents bundled together with lawyers’ red ribbon, and yellow notepads annotated in fountain pen. But these clichés are more the stylisations of television drama because, as Katie Atkinson explains, the legal profession is moving on and is starting to use artificial intelligence to do the grunt work.

Professor Atkinson, who is also Dean of the School of Electrical Engineering, Electronics and Computer Science at the University of Liverpool, thinks there has been an increase in public awareness about artificial intelligence technologies that are starting to be deployed in real-world applications. “Once it was in the realm of science fiction, but today it is in the here and now.” But what might not be so clear to the public, she maintains, is that the same thing has happened in law.

For Atkinson, whose work has centred on the relationship between AI and law for nearly two decades, the connection started to become more mainstream, “perhaps five years ago. Before that it was very much an academic field.” She explains that in academia there have been conferences on the subject since the 1980s, in the early days mostly attended by computer scientists. “There was a lot of concern at the time over where the end users were. They were asking why people weren’t using the solutions we were coming up with. But in the past five years, the field of legal technology has expanded, and that’s tied in with the maturing of technology and research solutions being turned into practical tools that can be used in a variety of fields, including law.”

There’s no need for lawyers to start thinking about a career switch just yet because, rather than the ‘robots are coming for our jobs’ scare that tabloids enjoy so much, this is more a case of technology creating the opportunity for ‘task automation’.

“These are the right words,” says Atkinson, “because when you look to apply these solutions in practice, you need to identify the task, or range of tasks, that AI can help with,” some of which are more cognitively challenging than others.

The most obvious of these, she says, is document processing. “Machines can process large volumes of paperwork and data faster than humans. However, it also depends on what you want that processing to do. If you want information extraction, a lawyer may have reams of documentation to go through when preparing a case… so you need to be able to pick out what the legal concepts are.” This knowledge is then built into a ‘computational model of argument’ that uses the extracted information in decision-support tools, enabling faster and more consistent outcomes than human processing alone can deliver.

Searching keywords is a start, but the game-changer comes with the ability to model and automate legal reasoning. The best way to think about this, says Atkinson, is “to look at the type of reasoning humans undertake when carrying out tasks in the legal field.”

When lawyers present cases they look for what the legal strong points are. When judges make decisions about cases brought before them, they must decide which are the winning arguments and why. “And that task is what my research is focused on specifically.”

The idea is that “our modelling tools are capable of replicating to a high degree of accuracy the actual outcomes of closed court cases in a variety of well-studied areas, reaching a 100 per cent success rate in certain scoped legal fields.”

She goes on to say that Liverpool researchers have recently been developing AI tools that provide ‘explainable’ decision support, addressing “significant concerns about the transparency of AI software. These methods open up the possibility for AI to assist legal professionals to take informed actions by advising them on legal outcomes, while displaying the arguments and justification processes of the AI software. Our recent projects have shown that the adoption of these technologies benefits law firms significantly: streamlining administration, cutting costs, and driving efficiencies.”

As well as doing fundamental research into legal reasoning modelling, Atkinson is applying it in practice. She cites a recent real-world collaboration with law firm Weightmans, where alongside commercial information extraction partner Kira Systems, “we have been using the modelling to analyse whether a case should be decided for the defendant or the claimant”. This work was done in the domain of hearing loss claims, in which you have, for example, “someone making a claim against their employer, saying their hearing loss is a result of negligence on the part of the employer. So you need to examine whether there is evidence to support actual hearing loss. Then, if so, you need to decide whether that loss is attributable to actions or negligence on the part of the employer.” Given that there is a body of case law on the subject, “what we need to do is capture that data and reasoning within the software and then feed new cases into it, against which we can decide the strength of the claim and whether the claimant is likely to have a winning case.”

The level of capture is a lot more involved than scanning a few documents “and out comes the result because of the magical AI machine,” says Atkinson. “The first task is to secure expert knowledge, which means knowing what the legally relevant factors in the domain are, such as extent of damage, provision of safety equipment and training, and how they all relate to one another. Then you need to analyse what’s either present or absent from the case. For this, you need to write an AI program that’s able to zoom into relevant information and perform the reasoning that says: when we’ve got facts A, B, C and D present, with E, F and G absent in a particular case, this is what we predict the outcome could be.” In effect, this simulates the reasoning process that humans would apply, but more objectively, efficiently and quickly.
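The reasoning Atkinson describes — checking which legally relevant factors are present or absent in a case and mapping those combinations to a predicted outcome — can be sketched as a toy rule-based model. To be clear, the factor names and rules below are invented for illustration (loosely inspired by the hearing-loss domain mentioned earlier); they are not the actual factors or logic of the Liverpool research software.

```python
# Toy sketch of factor-based legal decision support, in the spirit of the
# reasoning described above. Factors and rules are hypothetical; the real
# research models are far richer.

# Hypothetical legally relevant factors for a hearing-loss claim
FACTORS = {
    "hearing_loss_evidenced",     # medical evidence of hearing loss
    "noise_exposure_at_work",     # claimant worked in a noisy environment
    "safety_equipment_provided",  # employer supplied ear protection
    "training_given",             # employer trained staff on noise risk
}

# Each rule: (factors that must be present, factors that must be absent,
#             predicted outcome, human-readable justification)
RULES = [
    ({"hearing_loss_evidenced", "noise_exposure_at_work"},
     {"safety_equipment_provided", "training_given"},
     "claimant",
     "Loss is evidenced and the employer took no protective measures."),
    ({"hearing_loss_evidenced", "noise_exposure_at_work",
      "safety_equipment_provided", "training_given"},
     set(),
     "defendant",
     "Loss is evidenced but the employer met its duty of care."),
]

def predict(case_factors):
    """Return (outcome, justification) for the first matching rule,
    or an 'undecided' fallback when no rule fires."""
    for present, absent, outcome, reason in RULES:
        if present <= case_factors and absent.isdisjoint(case_factors):
            return outcome, reason
    return "undecided", "No rule matched; refer to a lawyer."

outcome, why = predict({"hearing_loss_evidenced", "noise_exposure_at_work"})
print(outcome, "-", why)
```

Note that the output is a prediction plus a justification: the explanation travels with the decision, which is the ‘explainable AI’ point made above.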

At this point, Atkinson is keen to stress that the entire process is to assist with decision support – analogous to the DRS (umpire ‘decision review system’) technology used in professional cricket – rather than AI fully replacing humans. “So, it’s a situation where humans can see the explanation for the AI system’s decision, say: ‘okay, this looks like a strong or weak case,’ and then make their own decision on the back of that.”

Before “we can even think about allowing” AI support tools anywhere near real-world legal decision making, the reasoning model needs to be evaluated by testing against historical cases where there is a consensus that safe decisions have been reached. To do that, “you need to model a particular domain of case law, capture the legally relevant factors and how they interact, akin to how a judge would. Then you look at past cases and say: ‘okay, these are the facts that were present in those cases. If we feed them into our tool, that kicks off the reasoning, which says that according to the law, the case should be found for the plaintiff or the defendant.’ Then we look at whether the software comes up with the same decision as the judge did.” The accuracy of the software can then in effect be gauged against the recorded decisions.
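The evaluation loop just described — feed the facts of closed cases into the tool and check whether it reaches the same decisions the judges did — amounts to measuring agreement against recorded judgments. A minimal sketch follows; the cases and the placeholder ‘model’ are invented for illustration, but the accuracy check is the point.

```python
# Minimal sketch of evaluating a reasoning model against closed cases.
# The case data and the toy model are placeholders; the pattern is the
# accuracy check against recorded judicial decisions.

# Each historical case: (facts present in the case, judge's recorded decision)
closed_cases = [
    ({"intent_to_capture", "capture_completed"}, "plaintiff"),
    ({"intent_to_capture"}, "defendant"),
    ({"capture_completed"}, "plaintiff"),
]

def model(facts):
    # Placeholder reasoning: possession requires completing the capture.
    return "plaintiff" if "capture_completed" in facts else "defendant"

def accuracy(cases, predictor):
    """Fraction of past cases on which the model agrees with the judge."""
    agree = sum(1 for facts, decision in cases if predictor(facts) == decision)
    return agree / len(cases)

print(f"Agreement with recorded decisions: {accuracy(closed_cases, model):.0%}")
```

A model that scores below full agreement on a well-studied, scoped domain signals that a legally relevant factor is missing or mis-weighted, which is exactly the feedback the researchers need before any real-world deployment.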

‘The adoption of these technologies benefits law firms significantly.’

Professor Katie Atkinson

One domain that Atkinson focuses on concerns the possession of wild animals, an area that “has been well studied in the annals of AI and law literature for a long time,” and forms one of the bases for testing the reasoning model. The volume of data is significant, as is the scale of the timeframe. An early case dates back to a 19th-century dispute over the legal rights to the ownership of a wounded fox, which rested on whether the animal was the legal possession of the hunter who had wounded it, or the opportunistic passer-by who poached the incapacitated animal: the former being able to demonstrate intent to capture, the latter unable to do so.

This theme was to re-emerge in a much-publicised dispute arising from a baseball match in the United States in October 2001, at which the ball was whacked into the crowd and a somewhat predictable scramble ensued. The legal dispute emerged when two spectators were equally convinced that ownership of the ball had transferred to them and, it being America, the whole thing went to court. Without a trace of irony, at this point Atkinson says she needs to choose her words carefully because they are ‘legally relevant’.

Alex Popov was deemed to have “stopped the forward motion of the ball in the upper webbing of his baseball glove. But so many other people were trying to get hold of the ball that he got thrown to the ground, and in the ruckus the ball became dislodged from his glove and went to the floor.” At which point Patrick Hayashi picked up the ball, and in so doing created the argument between the two parties as to who owned it (apparently it came out in the case that the original purchasers of the ball for the purpose of playing the game could no longer sustain their claim to ownership).

In the court case, “there was a lot of detail that had to be worked through concerning what it means to own a ball. Is it sufficient to have stopped the forward motion?” Umpires brought in to give evidence suggested that having the ball in the upper part of the glove did not provide sufficient certainty of completing the catch, even though Popov had shown intent to catch it. Yet, crucially, Popov had been forced to the ground by what the judge saw as a ‘mob engaged in violent illegal behaviour’, meaning he should have been allowed the opportunity to complete the catch. But importantly, Hayashi was not judged to have been a constituent member of the aforementioned mob and, having come into possession of the ball by retrieving it from the ground, was therefore found not to have acted illegally. As spellbinding as the further detail is, there is no room for it here: the case lurched to its predictable outcome that definitive ownership of the ball could not be established, and so the judge’s finding was that the ball be sold at auction with the proceeds split equally between Popov and Hayashi.

What’s interesting about this condensed version of a modern-day ‘wild animal ownership’ parable is not so much how mind-bogglingly trivial it will seem to engineers, but how complex something that appears so straightforward can become once legal reasoning is factored in. In fact, it gives a real insight into the scale of Atkinson’s task in producing software that can accurately and repeatably replicate the findings in such cases, which to the casual observer instinctively appear to be wholly subjective.

Atkinson remains steadfastly neutral. “I’m a computer scientist. So far as I’m concerned what the judge says, and what the legal scholars say, goes. All I’m trying to do is replicate their decisions so that I can assist them.”

What most lawyers will want to know is where we are on the maturity curve of this technology tool and how the seemingly inevitable path towards task automation within their profession will affect them. “There’s been a lot of hype around AI in general, as well as in the legal technology field,” says Atkinson, “and that can often inflate expectations. But at the same time, it shouldn’t take away from the achievements that have happened so far. I try to sit in the middle: on the one hand we are not going to see a scenario where there are swathes of people suddenly out of a job because of new technology. Yet, on the other, we are almost certainly going to see a change in the types of roles that exist in law firms as a direct result of adopting these new technologies.”

One such change will be the evolution of legal technical innovation managers – something that was broadly unthinkable in the 20th century – whose responsibilities will include horizon scanning, technology maturity analysis, in-house technology investment and procurement.

“There will also be data scientists being brought into law firms,” says Atkinson, “and while we won’t be seeing immediate mass unemployment in the profession, over time the employment profile will shift to include more technologists.”
