Linklaters Puts Robot Lawyers to the Legal Test
Ben Thomson, Legal affairs writer
Magic Circle firm Linklaters puts AI through rigorous legal exams – and the results might surprise you.
In a bold experiment that reads like science fiction but is very much our present reality, Magic Circle firm Linklaters has been subjecting artificial intelligence to law exams. The move does not herald the arrival of the ‘Robot Lawyer’; rather, it quite literally puts legal AI to the test.
These aren’t your garden-variety legal quizzes either – we’re talking specialist questions that would challenge human lawyers with two years of post-qualification experience.
The firm, where equity partners pocket a cool £1.9 million on average, has developed its bespoke “LinksAI English law benchmark” to determine just how far these silicon-based legal minds have come.
And come they have – with marked improvements over the past two years that might raise an eyebrow or two at your next partners’ meeting.
From “Often Wrong” to “Getting Useful”
Back in 2023, when Linklaters first put AI to the test, the results were rather underwhelming. Four systems – GPT-2, GPT-3, GPT-4, and Bard – were examined, with Bard emerging as the valedictorian with a mediocre 4.4 out of 10.
The common assessment? “Often wrong” with a troubling tendency to fabricate citations out of digital thin air.
Fast forward to today, and OpenAI’s o1 has scored an impressive 6.4 out of 10, with Google’s Gemini 2.0 not far behind at 6.0.
The improvement is substantial, primarily in substantive legal knowledge and citation accuracy. To put it in perspective, that’s like watching your trainee evolve from “concerning” to “promising” in the space of two years.
The AI Arms Race in Legal Circles
This experimentation doesn’t exist in a vacuum. While Linklaters tests the boundaries of artificial legal intelligence, other firms are erecting digital barricades.
Hill Dickinson – one of the UK’s 50 largest practices – recently cut off general access to AI technology, citing potential misuse as the primary concern.
It’s the classic law firm dichotomy: innovation versus risk management. Some see AI as the associate who never sleeps (or bills hours), while others view it as the trainee who might accidentally email privileged information to opposing counsel.
Human Supervision Still Required
Before you retire to your country estate and leave the practice to R2-D2, Esq., Linklaters has issued an important caveat: these AI systems “are still not always right and lack nuance.” They remain adamant that AI should not be trusted to dispense English law advice “without expert human supervision.”
However – and this is where managing partners might want to pay attention – with proper human oversight, these systems “are getting to the stage where they could be useful” for first drafts or document cross-checking.
The most tantalizing observation? AI appears particularly adept at “summarising relatively well-known areas of law” – precisely the work typically assigned to junior associates and trainees. One can almost hear the nervous shuffling of training contract applications.
What This Means for Law Firm Recruitment
Linklaters’ assessment forecasts potential seismic shifts in how legal talent is recruited and deployed. If AI can handle the grunt work traditionally assigned to junior lawyers, firms might reconsider their hiring patterns or redistribute human capital toward more complex, nuanced tasks that still confound our digital colleagues.
The future of law might not be an either/or proposition between human and artificial intelligence, but rather a carefully choreographed dance between the two – with humans leading on matters of judgment, strategy, and client relationships, while AI handles the heavy lifting of research, drafting, and document review.
So while the robots aren’t quite ready to argue before the Supreme Court, they’re increasingly capable of preparing the briefs – under human supervision, of course.