Kyle Sutherland
Family Lawyer at LawY

20+ AI policy resources for your legal firm

Summary

Like any technological innovation, AI is a powerful tool, yet it has limitations. Reaping the benefits while minimising risks begins with first understanding these limitations, then developing robust policies to encourage effective and appropriate use. Explore these resources to help develop the AI policy for your firm.

December 6, 2024

Imagine reclaiming precious hours every week. As a powerful productivity enhancer, AI has the potential to transform legal practice for the better, freeing up more time for strategic client work.

However, as with any technological innovation, there are limitations to keep in mind. Reaping the benefits while minimising risks begins with first understanding these limitations, and then establishing appropriate AI policy guidelines for your firm.

We have gathered a wide range of AI guidelines and resources to help you develop an appropriate policy for your firm.

Explore 20+ resources and guidelines: AI in your legal firm

AI guidelines for Australia

National

Final Report

The Parliament of Australia Select Committee on Adopting Artificial Intelligence - Legislative, November 2024

The Select Committee's final report examines the opportunities and impacts arising from the uptake of AI technologies in Australia. The report contains excellent summaries of pertinent issues such as bias and discrimination, transparency, and the review of automated decision-making. It presents 13 recommendations, ranging from mandatory guardrails to managing AI infrastructure growth.

However, due to the broad language used in these recommendations, it remains unclear what specific measures the Government will implement.

AI Decision-Making and the Courts | A guide for Judges, Tribunal Members and Court Administrators

The Australasian Institute of Judicial Administration Inc - Guidance, December 2023

This guide sets out the key challenges and opportunities that AI and automated decision-making present for courts and tribunals. It draws on legislation, case law and rules in a range of jurisdictions. The guide is not intended to provide an exhaustive analysis of emerging technologies, AI tools and the courtroom. Instead, it provides an overview of some of the ways in which AI may be incorporated into domestic courtrooms and analyses some of the associated benefits and risks.

Given that technology continues to evolve, the guide starts with the function and purpose of the technology and its impact on foundational values which underpin the judicial system.

“AI is becoming popular in courts and tribunals across the globe. AI systems range from simple practices such as automated e-filing of documents to the complexity surrounding determining the likely risk that an offender will reoffend. The tension between the need for a judiciary to remain flexible as technology evolves and the normative principles of consistency and predictability in the resolution of justice is not a new phenomenon. In deciding whether any particular tool should be used in courts or tribunals, members of the judiciary or tribunal, as the persons overseeing the proper resolution of disputes, should be aware of the potential benefits which flow from the use of such technologies and their complex relationship with core judicial values.”

Limitations and risks of using AI in legal practice

Legal Practitioners’ Liability Committee - Guidance, 17 August 2023

This article examines the limitations and risk strategies related to legal practitioners’ use of generative AI, illustrated by ChatGPT. It covers issues such as confidentiality, privacy, copyright and intellectual property, and risk management, concluding:

“ChatGPT and generative AI technologies specifically for lawyers are rapidly developing. Practitioners have their professional and ethical duties to their client, which requires them to ensure that the client’s confidentiality and privacy is protected. Practitioners need to ensure that the content is reviewed, considered, scrutinised, and then adapted. Whilst this article was based on research from a range of reliable sources as at August 2023, given the rapid development of technology, case law and legislation, practitioners should continue to stay in touch with current developments.”

NSW

Issues Arising from the Use of AI Language Models (including ChatGPT) in Legal Practice

New South Wales Bar Association - Guidance, 12 July 2023

This document is intended to assist NSW barristers to understand their duties under the Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW), and the importance of their role in the administration of justice, when considering whether to use artificial intelligence (AI) language models, such as ChatGPT, in their legal practice. It is a guide to the professional obligations that must be considered before, and not breached in the event of, such use.

A solicitor’s guide to responsible use of artificial intelligence

Law Society of New South Wales - Guidance, 10 July 2024

These guidelines help solicitors understand the issues arising from the use of generative AI and the relevant Conduct Rules to consider when using it. They provide guidance on practice management and on how to use generative AI efficiently and safely when providing legal services. In summary, the guidelines state:

“Generative AI is likely here to stay.

Practitioners should neither avoid generative AI completely nor embrace it without first understanding its limitations and giving critical thought to maintaining their professional obligations while using it. Generative AI presents a host of new and positive opportunities for practitioners and their legal practice.

Key to capitalising on these opportunities is understanding the inherent limitations of the ever-changing and evolving nature of generative AI and steadfast adherence to solicitor professional and ethical obligations.”

Court Protocols on AI

Law Society of New South Wales | AI Hub - Resource, June 2024

In June 2024 the Law Society of New South Wales published a number of summaries of guidance issued in Australia and internationally on the use of AI by both lawyers and non-lawyers, with a focus on its use in relation to Courts and Tribunals (as at June 2024).

VIC

Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation

County Court of Victoria - Court Guidance, July 2024

These Guidelines are to assist those conducting litigation in the County Court of Victoria. They are designed to assist both legal practitioners and self-represented litigants. In providing these Guidelines, the Court acknowledges that developments in the use of Artificial Intelligence are fast-moving and may require further guidance over time.

In this context, the Court has placed a high value on consistency of practice, including between Courts in this field. To that end the following Guidelines are given in the same terms as those published by the Supreme Court of Victoria (May 2024).

Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation

Supreme Court of Victoria - Court Guidance, May 2024

The Guidelines are to assist those conducting litigation in the Supreme Court of Victoria. They are designed to assist both legal practitioners and self-represented litigants and set out a number of principles for use of AI by all litigants.

Generative AI and Lawyers

Victorian Legal Services Board + Commissioner - Article, 17 November 2023

This article offers insights into recent developments in generative AI, with a focus on ChatGPT but applicable to all generative AI platforms. It explores the nature and functionality of ChatGPT, providing lawyers with a foundational understanding of this transformative technology.

QLD

The use of Generative Artificial Intelligence (AI): Guidelines for responsible use by non-lawyers

Queensland Courts - Court Guidance, May 2024

These guidelines apply to civil and criminal proceedings in Queensland courts and tribunals, including the Supreme Court, District Court, Planning and Environment Court, Magistrates Courts, Land Court, Children's Court, Industrial Court, Queensland Industrial Relations Commission and Queensland Civil and Administrative Tribunal.

These guidelines for the responsible use of Generative AI chatbots in court and tribunal proceedings have been developed to assist non-lawyers (including self-represented litigants, McKenzie friends, lay advocates and employment advocates) who represent themselves or others. The guidelines note:

“… that Generative AI is not a substitute for a qualified lawyer and cannot give [non-lawyers] tailored legal advice. Currently available Generative AI chatbots have been known to provide inaccurate information on Australian law. Using Generative AI chatbots is not an alternative to seeking legal advice.”

Template for AI Use Policy

Queensland Law Society - Template, April 2023

Explore this template for a firm’s AI Policy, free for use by QLS members and member firms.

AI guidelines for the United Kingdom

Guidance for Judicial Office Holders

United Kingdom | Artificial Intelligence - Court Guidance, 12 December 2023

This guidance was developed to assist judicial office holders in using Artificial Intelligence (AI). It outlines key risks and issues associated with AI use, along with suggestions for minimising them. Examples of potential uses are also included.

It applies to all judicial office holders under the responsibility of the Lady Chief Justice and Senior President of Tribunals, as well as their clerks and other support staff.

AI guidelines for New Zealand

Guidelines for use of generative artificial intelligence in courts and tribunals (Lawyers)

Courts of New Zealand | Ngā Kōti o Aotearoa - Court Guidance, 7 December 2023

The guidelines remind lawyers that their existing professional obligations apply to the use of these new technologies. They urge caution when using GenAI chatbots due to their inherent risks and limitations.

The document provides practical suggestions for managing and reducing risks and helps lawyers comply with their professional obligations when using GenAI chatbots.

Guidelines for use of generative artificial intelligence in courts and tribunals (Non-lawyers)

Courts of New Zealand | Ngā Kōti o Aotearoa - Court Guidance, 7 December 2023

These guidelines address the use of generative artificial intelligence (GenAI) chatbots (such as ChatGPT, Bing Chat, or Google Bard) in court/tribunal proceedings. They have been developed to assist non-lawyers (including self-represented litigants, McKenzie friends, and lay advocates) who represent themselves or others, noting that:

“GenAI chatbots are not a substitute for a qualified lawyer and cannot give tailored legal advice. Unlike GenAI chatbots, lawyers have professional obligations and must uphold ethical standards to their clients and to courts and tribunals.”

AI guidelines for the United States

Copyright and Artificial Intelligence | Part 1 : Digital Replicas

United States Copyright Office - Legislative / Copyright, July 2024

This 72-page report, the first part of the Copyright Office's study on copyright and artificial intelligence, focuses on digital replicas, from AI-generated musical performances to robocall impersonations of political candidates to deepfake pornographic videos. While technologies for producing fake images or recordings have existed for some time, generative AI's ability to do so easily, quickly, and with uncanny realism has raised concerns among creators, legislators, and the general public. [Additional Parts will be published as they are completed].

“Based on all of this input, we have concluded that a new law is needed. The speed, precision, and scale of AI-created digital replicas calls for prompt federal action. Without a robust nationwide remedy, their unauthorized publication and distribution threaten substantial harm not only in the entertainment and political arenas, but also for private individuals.”

Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence

New York State Bar Association - Report, April 2024

This 85-page report, submitted to the House of Delegates on April 6, 2024, examines the legal, social, and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. It includes several recommendations, covering adoption guidelines, education, risk management, and the role of law in AI governance.

Preliminary guidelines for the use of artificial intelligence

New Jersey Supreme Court Committee On Artificial Intelligence - Report, 25 January 2024

The New Jersey Supreme Court Committee on Artificial Intelligence released preliminary guidelines for the use of artificial intelligence. The committee acknowledged the technology’s “significant capabilities as well as significant risks” and its aim to strike a balance between AI’s benefits and potential harms.

The guidelines function as a reminder regarding compliance with the court’s existing Rules of Professional Conduct.

Public Notice of General Order 23-01

United States Bankruptcy Court of Western District of Oklahoma - Court Guidance, September 2023

The United States Bankruptcy Court for the Western District of Oklahoma issued a Notice stating that any document filed with the court that has been drafted using a generative artificial intelligence program must be accompanied by an attestation. This attestation must detail how AI was used within that document as follows:

“(1) identifying the program used and the specific portions of text for which a generative artificial intelligence program was utilized; (2) certifying the document was checked for accuracy using print reporters, traditional legal databases, or other reliable means; and (3) certifying the use of such program has not resulted in the disclosure of any confidential information to any unauthorized party.”

AI guidelines for Canada

Guidelines for the Use of Artificial Intelligence in Canadian Courts

Canadian Judicial Council - Court Guidance, September 2024

In response to the growing use of AI, the Canadian Judicial Council (CJC) has recently released its 2024 Guidelines for the Use of Artificial Intelligence in Canadian Courts (the "Guidelines") for Canadian judges. These Guidelines aim to create a framework for using AI tools to support or enhance judicial functions and to raise awareness of the risks inherent in AI-assisted judicial decision-making.

Emphasising the core principles that form the ethical backbone of Canadian judicial conduct, the Guidelines address the ethical, legal, and operational implications of AI while considering the principles of judicial independence and transparency. They provide five key principles: Transparency and Explainability, Maintaining Judicial Independence and Ethics, Reducing Bias and Preserving Impartiality, Bolstering Privacy and Data Security, and Improving Judicial Education and Training.

“The adoption of AI cannot be a passive or reactive process. Some forms of AI are already embedded in everyday judicial applications for tasks such as translation, grammar checking, speech recognition and legal research. As generative AI becomes more prevalent, it becomes imperative that judges appreciate the implications, limitations, evolving risks, and mitigation strategies associated with its use.”

Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings

Federal Court of Canada - Court Guidance, 20 December 2023

On 20 December 2023, the Federal Court of Canada issued a Notice setting out a Declaration and Principles concerning certain uses of AI, including large language models (“LLMs”). The Notice requires counsel, parties, and interveners in legal proceedings at the Federal Court to make a Declaration for AI-generated content, and to consider certain principles when using AI to prepare documentation filed with the Court.

The Notice provides an example of the declaration required by the Court: “Artificial intelligence (AI) was used to generate content in this document.” / “L'intelligence artificielle (IA) a été utilisée pour générer au moins une partie du contenu de ce document.”

Practice Resource: Guidance on Professional Responsibility and Generative AI

Law Society of British Columbia - Practice Resource, October 2023

This practice resource provides guidance to British Columbia lawyers on their professional responsibilities when using generative AI. It outlines the core principles that practitioners should take into account and the best practices that help them comply with their legal and professional obligations when relying on AI-generated content.

Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions when using Large Language Models

Alberta Courts - Court Guidance, 6 October 2023

This practice guidance requires that any AI-generated submissions be verified with meaningful human control. Verification can be achieved through cross-referencing with reliable legal databases, ensuring that the citations and their content hold up to scrutiny.

General Practice Direction 29 (Use of Artificial Intelligence Tools)

Supreme Court of Yukon - Court Guidance, 26 June 2023

Requires that any counsel or party that relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions, in any matter and in any form before the Court, must advise the Court of the tool used and for what purpose.

Practice Direction: Use of Artificial Intelligence in Court Submissions

Court of King’s Bench of Manitoba - Court Guidance, 23 June 2023

The Court’s short practice direction provides that when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how it was used.


AI guidelines for Singapore

Registrar’s Circular No. 9 of 2024 Issue of the Guide on the Use of Generative Artificial Intelligence Tools by Court Users

Republic of Singapore State Courts - Guide, 23 September 2024

Effective from October 1, 2024, this Guide aims to help legal practitioners and self-represented litigants use AI tools responsibly during court proceedings. While permitting the use of generative AI tools, the Guide requires users to comply with specific stipulations and verify the accuracy of AI-generated content. It takes a neutral stance on AI use but places full responsibility for the output on the users.

The Guide emphasises the importance of fact-checking and proofreading AI-generated content to ensure accuracy, relevance, and compliance with intellectual property rights. It explicitly states that generative AI tools should not be used to generate evidence for court proceedings and highlights the need for proper source attribution and protection of confidential information. Users may be required to declare their AI tool usage and demonstrate compliance with the Guide; failure to do so may result in costs orders, document rejection, or disciplinary action.

Conclusion

The question around AI is no longer “if” but “how” lawyers will use it. An effective AI policy will ensure everyone in your firm can use AI effectively to enhance efficiency, reduce risk, and produce better client outcomes.

Are there any resources you have seen that you think may be useful to add to this list? Let us know! We hope these resources will help you formulate appropriate AI guidelines and equip your firm for effective adoption.
