Legal Training For the AI Revolution
Tom Borman* – Lawyers find themselves in the midst of a transformative era, where their role in overseeing AI is pivotal for the future of the legal profession. In this dynamic landscape, lawyers must embrace the potential of AI while upholding ethical, professional, and legal standards.
While some major firms are allowing limited use of AI technology, the consensus is that lawyers and staff must be swiftly trained in AI.
Developing technical competence in AI has become increasingly crucial in the legal profession, as more reports, such as those from Reuters and Kira Systems, highlight the training and supervisory issues arising from AI’s development.
Some lawyers may not fully understand how AI works or how it can be used, which can make it difficult to train them to use and implement AI effectively.
Law firms require a comprehensive internal training approach to avoid pitfalls such as exposing sensitive client data or disseminating inaccurate information through AI-generated content.
Firms are adopting various training methods, including in-person seminars and video presentations. For instance, Orrick Herrington & Sutcliffe partnered with AltaClaro to develop a curriculum centered around “prompt engineering,” instructing individuals on structuring queries for AI-generated content.
Law firms are also crafting policies to mitigate risks associated with generative AI tools.
BakerHostetler, for instance, advised its staff not to use large language models combined with client data to prevent privacy breaches.
In the academic sphere, US law schools are responding to student demand for up-to-date courses on evolving AI technology.
The same thing is happening across other jurisdictions too, with major academic institutions such as Oxford University establishing programs to investigate the implications of AI for the law and legal work.
While some offer specialized courses like “Generative AI for Lawyers,” the challenge lies in integrating AI education into the existing curriculum.
Overall, law firms are navigating the complexities of AI integration, with a focus on training, policy development, and adaptation to technological advancements, along with a keen eye on ethical issues arising from AI and its use.
Fulfilling the Duty to Supervise AI
At the core of this shift is the responsibility to supervise AI, a duty that holds paramount importance.
Lawyers play a critical role in ensuring organizations harness AI’s advantages while maintaining their ethical obligations and serving the public interest, a matter of growing concern to regulators, academics and, of course, lawyers.

Even Microsoft, an AI leader, is taking an active role in the ethical use of AI, as we wrote about in our article on Microsoft lawyer Natasha Crampton.
Ethics, Standards, and Professional Responsibilities
This duty is an extension of lawyers’ commitment to competent representation and ethical practice.
In the United States, Rule 5.1 of the American Bar Association’s (ABA) Model Rules of Professional Conduct underscores the need for effective supervision to ensure both competence and ethical behavior, and Law Society rules in the UK, Australia and elsewhere impose similar requirements regarding legal training.
As the legal landscape evolves, so does the duty to supervise. With the integration of AI across businesses, lawyers’ responsibility extends to overseeing AI technologies and the way in which they are implemented.
Strengthening Organizations Through Supervision

Lawyers are facilitators of growth and excellence within organizations. They guide colleagues and executives, ensuring legal compliance, addressing conflicts, and fostering a culture of integrity.
While lawyers – in human form, rather than as robots – will continue to shape the law, the growth of AI and the development of the technology continue to dominate debate and law firm tech talk.
Quality Control and Ethical Oversight
A cornerstone of supervision is maintaining ethical and professional standards. Lawyers establish systems to uphold these standards, promote compliance, and resolve conflicts.
We recently reported on Australian law firm Allens developing their own version of ChatGPT for ethical and other reasons. Law firms are becoming increasingly proactive on the way they handle this technology.
But resolving the conflict between the ‘rush’ towards AI and the need to maintain suitable legal – let alone ethical – standards is of paramount importance.
Another issue is the cost and time involved in AI implementation. AI can be expensive, and some law firms may be hesitant to invest in new technologies without a clear return on investment.
Navigating AI with Transparency
Transparent communication and effective delegation are crucial for a productive work environment. Lawyers enhance their skills through continuous professional development to offer informed guidance.
In-house counsel, too, assume a pivotal role in ensuring responsible AI use within businesses and organizations. Their duty encompasses understanding AI tools, managing data, considering ethics, and prioritizing ongoing education.
Incorporating AI in Supervision

As AI becomes more prevalent, legal teams can integrate AI in their supervisory efforts. A comprehensive understanding of AI tools, including biases and limitations, is essential.
AI can perpetuate and even amplify biases that exist in the legal system. For example, if an AI system is trained on biased data, it may produce biased outcomes, which can lead to discrimination against certain groups and undermine the fairness of the legal system.
AI also has limitations and may not be able to provide accurate or complete information in all cases.
Understanding the nuances of legal language, or taking into account the unique circumstances of a particular case, is important, and AI is by no means always equipped to handle it adequately.
Developing Organizational Guidelines
In-house lawyers, for instance, also have specific obligations within their organizations, including the crafting of guidelines, procedures, and standards for AI implementation, data management, and compliance.
They need to ensure accountability and quality control in monitoring AI performance.
Collaboration with executives, managers, and AI experts is vital to embed legal and ethical concerns into AI initiatives. Discussing ethics keeps lawyers informed about AI developments, allowing them to adapt their advice.
Balancing Act: Addressing Risks and Ethical Considerations
Supervising AI requires risk mitigation, collaboration with AI specialists, and addressing ethical concerns such as transparency and fairness. Lawyers are responsible for assessing AI-generated work’s accuracy and reliability.
There are also key privacy issues. Lawyers must be trained to use AI in a way that complies with data privacy and security regulations, which can be challenging.
Striking the Balance
Adaptive lawyers who invest in education and professional development can harness AI’s power while upholding legal standards appropriately. This challenge marks a transformative juncture in the legal profession’s history, as major technological reform reshapes the way legal services are provided.
Conclusion
In this digital age, lawyers’ oversight of AI defines the trajectory of legal practice and is increasingly vital to ensure the appropriate standards are upheld, if not strengthened.
Embracing this responsibility empowers legal professionals to harness AI’s potential while safeguarding ethical and professional integrity.
Author –
Tom Borman is a freelance legal writer who has contributed several articles for LawFuel, including one on the lawyers indicted with Donald Trump in the Georgia election interference case.