Kyle Sutherland
Family Lawyer at LawY

AI policy for law firms: 2024 resources

Summary

Like any technological innovation, AI is a powerful tool, yet it has limitations. Reaping the benefits while minimising risks begins with first understanding these limitations, then developing robust policies to encourage effective and appropriate use. Explore these resources to help develop the AI policy for your firm.

What could you do if you could reclaim precious hours back in your week? As a powerful productivity enhancer, AI has the potential to transform legal practice for the better, freeing up more time for strategic client work.

A LexisNexis report from April 2024 surfaced a striking statistic: 1 in 2 lawyers are already using AI in their practice.

However, as with any technological innovation, there are limitations within AI’s capabilities. Reaping the benefits while minimising risks begins with first understanding these limitations, and then rolling out robust policies to shape appropriate AI guidelines for your firm.

To help you devise an effective policy, we have gathered a wide range of resources.

Resources to shape your firm’s AI policy

Stay updated about AI legal developments and key cases.

The Allens AI Australian Law Benchmark

Legal Insight / Report, May 2024

Australia

In May 2024, Allens published a paper benchmarking five LLMs (GPT-4, Gemini 1, Claude 2, Perplexity and LLaMa 2). The Allens AI Australian Law Benchmark tested the capabilities of LLMs to deliver legal advice on Australian law. LLMs continue to develop at a significant rate and could have profound implications for the future provision of legal services. The paper sought to identify the key risks of obtaining Australian legal advice from an LLM, and found that lawyers are still needed:

“An understanding of the law and an ability to apply it are two vastly different skill sets, the latter requiring a profound understanding both of the law and the business context in which it's being applied.”

Allens intends to rerun this benchmarking exercise in future months as new LLMs and other AI tools are released onto the market, including models specifically focused on the legal domain.

Introducing mandatory guardrails for AI in high-risk settings

Australian Government | Department of Industry, Science and Resources

Legislative, Date N/A

Australia

AI is a rapidly evolving landscape. The Australian Government's interim response on the safe and responsible use of AI (published in January 2024) identified that AI systems enable actions and decisions to be taken at a speed and scale previously unimaginable. The Government is considering several mandatory guardrails for some AI uses, including potential reforms to Australia's privacy laws, and is even exploring mandatory watermarking of AI-generated content. Submissions closed on 4 October 2024, and we're eager to see how the government responds.

Court Protocols on AI

Law Society of New South Wales | AI Hub

Resource, June 2024

International

In June 2024, the Law Society of New South Wales published a number of summaries of guidance issued in Australia and internationally on the use of AI by both lawyers and non-lawyers, with a focus on its use in relation to courts and tribunals.

Guidance for Judicial Office Holders

United Kingdom | Artificial Intelligence

Court Guidance, 12 December 2023

United Kingdom

This guidance was developed to assist judicial office holders in using Artificial Intelligence (AI). It outlines key risks and issues associated with AI use, along with suggestions for minimising them. Examples of potential uses are also included. It applies to all judicial office holders under the responsibility of the Lady Chief Justice and Senior President of Tribunals, as well as their clerks and other support staff.

Guidelines for use of generative artificial intelligence in courts and tribunals (Lawyers)

Courts of New Zealand | Ngā Kōti o Aotearoa

Court Guidance, 7 December 2023

New Zealand

The guidelines remind lawyers that their existing professional obligations apply to the use of these new technologies. They urge caution when using GenAI chatbots due to their inherent risks and limitations. The document provides practical suggestions for minimising these risks and helps lawyers comply with their professional obligations when using GenAI chatbots.

Guidelines for use of generative artificial intelligence in courts and tribunals (Non-lawyers)

Courts of New Zealand | Ngā Kōti o Aotearoa

Court Guidance, 7 December 2023

New Zealand

These guidelines address the use of generative artificial intelligence (GenAI) chatbots (such as ChatGPT, Bing Chat, or Google Bard) in court/tribunal proceedings. They have been developed to assist non-lawyers—including self-represented litigants, McKenzie friends, and lay advocates—who represent themselves or others. Noting that:

“GenAI chatbots are not a substitute for a qualified lawyer and cannot give tailored legal advice. Unlike GenAI chatbots, lawyers have professional obligations and must uphold ethical standards to their clients and to courts and tribunals.”

Practical Guidance Note No. 2 of 2023 Guidelines on the use of large language models and generative AI in proceedings before the DIFC Courts

Dubai International Financial Centre Courts

Court Guidance, 21 December 2023

Dubai

These guidelines apply to parties in proceedings before the DIFC Courts and must be considered when using Large Language Models ("LLMs") and Generative Content Generators ("GCGs"). The Courts recognize that LLMs and GCGs are increasingly common in the legal industry and can significantly enhance case preparation and presentation by saving time and costs. However, potential risks associated with these technologies must be carefully considered by all parties using or planning to use them. The Guidelines outline core principles and best practices to help parties and practitioners comply with their legal and professional obligations when relying on AI-generated content.

Generative AI exists because of the transformer

Financial Times

AI Knowledge, 12 September 2023

International

This article provides a clear introduction to how Large Language Models (LLMs) generate text and why these models are such versatile cognitive engines. For example:

“In order to grasp a word’s meaning, work in our example, LLMs first observe it in context using enormous sets of training data, taking note of nearby words. These datasets are based on collating text published on the internet, with new LLMs trained using billions of words.”

Practice Resource: Guidance on Professional Responsibility and Generative AI

Law Society of British Columbia

Guidance, October 2023

Canada

This practice resource provides guidance to lawyers in British Columbia on their professional responsibility obligations when using generative AI, and outlines considerations to help them comply with their professional and ethical duties when relying on AI-generated content.

Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings

Federal Court of Canada

Court Guidance, 20 December 2023

Canada

On 20 December 2023, the Federal Court of Canada issued a Notice setting out a Declaration and Principles concerning certain uses of AI, including large language models ("LLMs"). The Notice requires counsel, parties, and interveners in legal proceedings at the Federal Court to make a Declaration for AI-generated content, and to consider certain principles when using AI to prepare documentation filed with the Court. An example of the declaration required by the Court:

“Artificial intelligence (AI) was used to generate content in this document.” / “L'intelligence artificielle (IA) a été utilisée pour générer au moins une partie du contenu de ce document.”

General Practice Direction 29 (Use of Artificial Intelligence Tools)

Supreme Court of Yukon

Court Guidance, 26 June 2023

Canada

Requires that any counsel or party who relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for legal research or submissions in any matter, and in any form, before the Court must advise the Court of the tool used and for what purpose.

Practice Direction: Use of Artificial Intelligence in Court Submissions

Court of King’s Bench of Manitoba

Court Guidance, 23 June 2023

Canada

A short practice direction from the Court providing that when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.

Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions when using Large Language Models

Alberta Courts

Court Guidance, 6 October 2023

Canada

Practice guidance which requires that any AI-generated submissions must be verified with meaningful human control. Verification can be achieved through cross-referencing with reliable legal databases, ensuring that the citations and their content hold up to scrutiny.

Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence

New York State Bar Association

Report, April 2024

United States

An 85-page report submitted to the House of Delegates on April 6, 2024, which examined the legal, social, and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. The report includes several recommendations, covering adoption guidelines, education, risk management, and the role of law in AI governance.

Preliminary guidelines for the use of artificial intelligence

New Jersey Supreme Court Committee On Artificial Intelligence

Report, 25 January 2024

United States

The New Jersey Supreme Court Committee on Artificial Intelligence released preliminary guidelines for the use of artificial intelligence. The committee acknowledged the technology’s “significant capabilities as well as significant risks” and its aim to strike a balance between AI’s benefits and potential harms. The guidelines function as a reminder regarding compliance with the court’s existing Rules of Professional Conduct.

Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation

Supreme Court of Victoria

Court Guidance, May 2024

Australia

The Guidelines are to assist those conducting litigation in the Supreme Court of Victoria. They are designed to assist both legal practitioners and self-represented litigants and set out a number of principles for use of AI by all litigants.

AI Decision-Making and the Courts: a guide for Judges, Tribunal Members and Court Administrators

The Australasian Institute of Judicial Administration Inc

Report, June 2022

Australia

This report presents an overview of various AI and decision-making tools and raises the possible challenges and opportunities they present for our courts and tribunals. The report covers the different areas where AI can be used, as well as how such tools can impact core judicial values of open justice, accountability and equality before the law, procedural fairness, access to justice, and efficiency. It poses questions to be considered by court managers and administrators, judges, and tribunal members when contemplating the use of these technologies.

Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation

County Court of Victoria

Court Guidance, July 2024

Australia

These Guidelines are to assist those conducting litigation in the County Court of Victoria. They are designed to assist both legal practitioners and self-represented litigants. In providing these Guidelines, the Court acknowledges that developments in the use of Artificial Intelligence are fast-moving and may require further guidance over time. In this context, the Court has placed a high value on consistency of practice, including between Courts in this field. To that end the following Guidelines are given in the same terms as those published by the Supreme Court of Victoria (May 2024).

The use of Generative Artificial Intelligence (AI): Guidelines for responsible use by non-lawyers

Queensland Courts

Court Guidance, May 2024

Australia

These guidelines apply to civil and criminal proceedings in Queensland courts and tribunals, including the Supreme Court, District Court, Planning and Environment Court, Magistrates Courts, Land Court, Children's Court, Industrial Court, Queensland Industrial Relations Commission and Queensland Civil and Administrative Tribunal. These guidelines for the responsible use of generative AI chatbots in court and tribunal proceedings have been developed to assist non-lawyers (including self-represented litigants, McKenzie friends, lay advocates and employment advocates) who represent themselves or others, and note:

“… that Generative AI is not a substitute for a qualified lawyer and cannot give [non-lawyers]tailored legal advice. Currently available Generative AI chatbots have been known to provide inaccurate information on Australian law. Using Generative AI chatbots is not an alternative to seeking legal advice.”

A solicitor’s guide to responsible use of artificial intelligence

Law Society of New South Wales

Guidance, 10 July 2024

Australia

These guidelines assist lawyers to understand the issues arising from the use of generative AI and the relevant Conduct Rules to consider when using AI. They provide guidance on practice management and on how to use generative AI efficiently and safely when providing legal services. In summary, the guidelines provide:

“Generative AI is likely here to stay. Practitioners should neither avoid generative AI completely nor embrace it without first understanding its limitations and giving critical thought to maintaining their professional obligations while using it. Generative AI presents a host of new and positive opportunities for practitioners and their legal practice. Key to capitalising on these opportunities is understanding the inherent limitations of the ever-changing and evolving nature of generative AI and steadfast adherence to solicitor professional and ethical obligations.”

Issues Arising from the Use of AI Language Models (including ChatGPT) in Legal Practice

New South Wales Bar Association

Guidance, 12 July 2023

Australia

This document is intended to assist NSW barristers to understand their duties under the Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW) and the importance of their role in the administration of justice in the context of the consideration of whether to use artificial intelligence (AI) language models, such as ChatGPT, in their legal practice. It is a guide in relation to professional obligations that must be considered before, and not breached in the event of, such use.

Future of Professionals Report | AI-powered technology & the forces shaping professional work

Thomson Reuters

Legal Insight / Report, July 2024

International

A comprehensive report drawing from over 2,200 survey responses. It explores key trends including AI-related concerns (such as potential job displacement), the technology's impact on professional work, emerging AI-powered tools, and projections for AI adoption and utilization across various surveyed professions.

It will soon be negligent not to use AI, Master of the Rolls predicts

Legal Futures

Legal Insight, 5 March 2024

United Kingdom

An article discussing Sir Geoffrey Vos KC, Master of the Rolls, and his remarks at the LawtechUK events on generative AI. He stated:

“… to think of the day when there will be liability, legal liability, not for using AI, but for failing to use AI to protect the interests of the people we serve. I think that is undoubtedly a day that’s coming soon. When an accountant can use an AI tool to spot fraud in a major corporate situation and fails to do so, surely there might be liability. The same for employer liability to protect employees and in every other field you can possibly imagine.”

Fast law: why speed is the priority for lawyers using AI

LexisNexis

Legal Insight / Report, 24 September 2024

United Kingdom

The results of a follow-up survey of more than 800 UK legal professionals at law firms and in-house legal teams, which found, in summary: 82% of UK lawyers have adopted generative AI or have plans in motion; 71% said the biggest benefit of AI is delivering work faster; 70% feel more comfortable using AI tools grounded in legal content; and the number of lawyers using generative AI for work purposes has nearly quadrupled in just over a year, jumping from 11% in July 2023 to 41% in September 2024. The percentage of lawyers planning to use AI has also increased significantly, from 28% to 41% over the same period.

Generative AI tools trialed by 50% of legal practitioners across ANZ with in-house lawyers leading the way

LexisNexis

Legal Insight / Media Release, 16 April 2024

Australia & New Zealand

In this LexisNexis survey of over 560 lawyers in Australia and New Zealand, 1 in 2 lawyers had already used generative artificial intelligence to perform day-to-day tasks, and almost the entire profession believes it will change how legal work is carried out in future. Furthermore, 60% believed they will be left out if they don't use AI tools.

Mata v. Avianca, Inc.

United States District Court Southern District of New York

Legal Case, 22 June 2023

United States

The infamous "fake ChatGPT" case. This legal matter involved a lawyer who, while representing a man suing an airline, relied on artificial intelligence to help prepare a court filing. The outcome was unfavourable. However, it's noteworthy as one of the first instances where a court provided official guidance on AI use and existing solicitor ethical responsibilities, including this statement:

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.  But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.“

Risk On Air | Episode 36: AI unplugged

Lawcover

Guidance, March 2024

Australia

Host Julian Morrow discusses the impact of AI tools like ChatGPT on legal practice with Schellie-Jayne Price, AI Practice leader and partner at Stirling & Rose. Their conversation explores AI's applications in law, associated risks, and strategies for lawyers to safeguard themselves and their practices. The episode includes a recording and transcript link.

Limitations and risks of using AI in legal practice

Legal Practitioners’ Liability Committee

Guidance, 17 August 2023

Australia

This article examines the limitations and risk-management strategies related to legal practitioners' use of generative AI, illustrated by ChatGPT. It covers issues such as confidentiality, privacy, copyright and intellectual property, and risk management, concluding:

“ChatGPT and generative AI technologies specifically for lawyers are rapidly developing. Practitioners have their professional and ethical duties to their client, which requires them to ensure that the client’s confidentiality and privacy is protected. Practitioners need to ensure that the content is reviewed, considered, scrutinised, and then adapted. Whilst this article was based on research from a range of reliable sources as at August 2023, given the rapid development of technology, case law and legislation, practitioners should continue to stay in touch with current developments.”

70% Of Workers Using ChatGPT At Work Are Not Telling Their Boss; Overall Usage Among Professionals Jumps To 43%

Fishbowl Insights

Article, 1 February 2023

United States

A recent survey by Fishbowl, a professional social network for anonymous career discussions, revealed that 43% of professionals have used AI tools like ChatGPT for work-related tasks. Strikingly, nearly 70% of these users are doing so without their employer’s knowledge.

Artificial intelligence: Do you have a usage policy?

Queensland Law Society Proctor

Guidance / Article, 24 April 2023

Australia

This article looks at factors to consider when adopting an acceptable AI usage policy for law firms, to ensure ethical compliance.

“It is likely that AI tools specifically for lawyers will become commonplace from late 2023. No matter how useful the tool is, whether and how to use it should be a considered decision taken at practice level, not left up to the discretion of individual staff. This is not a suggestion that AI should not be used. The ability to do this well is likely to be an important professional and business skill in the near future. The reality is also that, as more software providers include AI functionality in their products, avoiding AI is going to become extremely difficult.”

Template for AI Use Policy

Queensland Law Society

Template, April 2023

Australia

A template for a firm's AI Use Policy, free for use by QLS members and member firms.

Generative AI and Lawyers

Victorian Legal Services Board + Commissioner

Article, 17 November 2023

Australia

This article offers insights into recent developments in generative AI, with a focus on ChatGPT but applicable to all generative AI platforms. It explores the nature and functionality of ChatGPT, providing lawyers with a foundational understanding of this transformative technology.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

Anthropic

Article, 23 October 2024

International

This article details upgrades to Claude 3.5 Sonnet and introduces a new model, Claude 3.5 Haiku.

Publications | Risk Outlook report: The use of artificial intelligence in the legal market

Solicitors Regulation Authority

Report, 20 November 2023

United Kingdom

This report examines the increasing adoption of AI in legal services, detailing its applications and strategies for risk mitigation. The authors conclude:

“The legal market’s use of AI is growing rapidly, and this is likely to continue. As systems become ever more available, firms that could not previously have used these tools will be able to do so. Indeed, it is likely that these systems will become a normal part of everyday life, automating routine tasks. Used well: they will free people to apply their knowledge and skills to tasks that truly need them, they will improve outcomes both for firms and for consumers; and the speed, cost and productivity benefits that they can provide could help to improve access to justice.”

Zhang v Chen

Supreme Court of British Columbia

Legal Case, 23 February 2024

Canada

In this case, a Canadian judge ordered a family law lawyer who submitted fake ChatGPT-generated cases to the court to personally pay the costs of the time opposing counsel spent trying to verify them. The court stated as a final comment:

“As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers.  Competence in the selection and use of any technology tools, including those powered by AI, is critical.  The integrity of the justice system requires no less.“

F Harber v HMRC [2023] UKFTT 1007 (TC)

United Kingdom First-tier Tribunal (FTT)

Legal Case, 4 December 2023

United Kingdom

A litigant representing herself, apparently using ChatGPT, submitted summaries of non-existent cases to the First-tier Tribunal (FTT) to support her defence against a penalty for failing to notify a Capital Gains Tax liability on the sale of a rented property. Although the FTT accepted that these false citations were provided innocently, the tribunal issued a stern warning about the dangers of litigants using AI-generated "hallucinations" in legal proceedings. The Tribunal was also assisted by the US case of Mata v Avianca, agreeing with Judge Kastel and finding:

“It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined. As Judge Kastel said, the practice also "promotes cynicism" about judicial precedents, and this is important, because the use of precedent is "a cornerstone of our legal system" and "an indispensable foundation upon which to decide what is the law and its application to individual cases…"

Conclusion

The question around AI is no longer "if" but "how" lawyers will use it. An effective policy will ensure everyone in your firm can use AI to enhance efficiency, reduce risk, and produce better client outcomes. We hope these resources help you formulate appropriate AI guidelines to equip your firm for effective adoption.
