Kyle Sutherland
Family Lawyer at LawY

Catch up on the latest AI legal news and prominent AI cases

Summary

AI is changing faster than ever. Lawyers must adapt to these changes to ensure ethical and effective use of AI in their practices, as highlighted by recent cases and reports on AI's implications in the legal field.

In an AI landscape innovating faster than ever, staying ahead of the curve can feel like a constant challenge. What was impossible a year ago, or even a month ago, is now becoming possible and even the norm.

In the legal space, accuracy is of paramount importance, which is why AI offers unique potential and risk. It’s crucial for lawyers to understand how AI works and how it’s changing in order to unlock the benefits of ethical and effective use.

To help you stay informed, we have gathered a wide range of legal AI news and reports, along with three prominent cases.

AI legal news and reports

Introducing mandatory guardrails for AI in high-risk settings

Australian Government | Department of Industry, Science and Resources - October 2024, Australia

The AI landscape is evolving rapidly. The Australian Government's interim response on safe and responsible AI (published in January 2024) identified that AI systems enable actions and decisions to be taken at a speed and scale previously unimaginable.

The Government is considering several mandatory guardrails for some AI uses, including potential reforms to Australia's privacy laws, and is even exploring mandatory watermarking of AI-generated content. Submissions closed on 4 October 2024, and we're eager to see how the government responds.

Generative AI exists because of the transformer

Financial Times - September 2023, International

This article provides a clear introduction to how Large Language Models (LLMs) generate text and why these models are such versatile cognitive engines.

It’s a fantastic visual guide that unpacks the unique way an LLM works, shedding light on limitations to keep in mind.

For example:

“In order to grasp a word’s meaning, work in our example, LLMs first observe it in context using enormous sets of training data, taking note of nearby words. These datasets are based on collating text published on the internet, with new LLMs trained using billions of words.”

The Allens AI Australian Law Benchmark

May 2024, Australia

In May 2024, Allens published a paper benchmarking five LLMs: GPT-4, Gemini 1, Claude 2, Perplexity and LLaMa 2. The Allens AI Australian Law Benchmark tested the capabilities of LLMs to deliver legal advice on Australian law. LLMs continue to develop at a significant rate and could have profound implications for the future provision of legal services. The paper sought to identify the key risks of obtaining Australian legal advice from an LLM. What the report found is that lawyers are still needed:

“An understanding of the law and an ability to apply it are two vastly different skill sets, the latter requiring a profound understanding both of the law and the business context in which it's being applied.”

Allens intends to rerun this benchmarking exercise in future months as new LLMs and other AI tools are released onto the market, including models specifically focused on the legal domain.

Future of Professionals Report | AI-powered technology & the forces shaping professional work

Thomson Reuters - July 2024, International

A comprehensive report drawing from over 2,200 survey responses. It explores key trends including AI-related concerns (such as potential job displacement), the technology's impact on professional work, emerging AI-powered tools, and projections for AI adoption and utilisation across various surveyed professions.

It will soon be negligent not to use AI, Master of the Rolls predicts

Legal Futures - March 2024, United Kingdom

An article discussing remarks by Sir Geoffrey Vos KC, Master of the Rolls, at LawtechUK events on generative AI. He stated:

“… to think of the day when there will be liability, legal liability, not for using AI, but for failing to use AI to protect the interests of the people we serve. I think that is undoubtedly a day that’s coming soon. When an accountant can use an AI tool to spot fraud in a major corporate situation and fails to do so, surely there might be liability. The same for employer liability to protect employees and in every other field you can possibly imagine.”

Fast law: why speed is the priority for lawyers using AI

LexisNexis - September 2024, United Kingdom

The results of a follow-up survey of more than 800 UK legal professionals at law firms and in-house legal teams, which found, in summary:

  • 82% of UK lawyers have adopted generative AI or have plans in motion
  • 71% said the biggest benefit of AI is delivering work faster
  • 70% feel more comfortable using AI tools grounded in legal content
  • The number of lawyers using generative AI for work purposes has nearly quadrupled in just over a year, jumping from 11% in July 2023 to 41% in September 2024
  • The percentage of lawyers planning to use AI has also increased significantly, from 28% to 41% over the same period

Generative AI tools trialed by 50% of legal practitioners across ANZ with in-house lawyers leading the way

LexisNexis - April 2024, Australia & New Zealand

This LexisNexis survey of over 560 lawyers in Australia and New Zealand found that 1 in 2 lawyers have already used generative artificial intelligence to perform day-to-day tasks, and almost the entire profession believes it will change how legal work is carried out in future. Furthermore, 60% believed they would be left out if they did not use AI tools.

Risk On Air | Episode 36: AI unplugged

Lawcover - March 2024, Australia

Host Julian Morrow discusses the impact of AI tools like ChatGPT on legal practice with Schellie-Jayne Price, AI Practice leader and partner at Stirling & Rose. Their conversation explores AI's applications in law, associated risks, and strategies for lawyers to safeguard themselves and their practices. The episode includes a recording and transcript link.

70% Of Workers Using ChatGPT At Work Are Not Telling Their Boss; Overall Usage Among Professionals Jumps To 43%

Fishbowl Insights - February 2023, United States

A recent survey by Fishbowl, a professional social network for anonymous career discussions, revealed that 43% of professionals have used AI tools like ChatGPT for work-related tasks. Strikingly, nearly 70% of these users are doing so without their employer’s knowledge.

Artificial intelligence: Do you have a usage policy?

Queensland Law Society Proctor - April 2023, Australia

This article looks at factors to consider when adopting an acceptable AI usage policy for law firms, to ensure ethical compliance.

“It is likely that AI tools specifically for lawyers will become commonplace from late 2023. No matter how useful the tool is, whether and how to use it should be a considered decision taken at practice level, not left up to the discretion of individual staff. This is not a suggestion that AI should not be used. The ability to do this well is likely to be an important professional and business skill in the near future. The reality is also that, as more software providers include AI functionality in their products, avoiding AI is going to become extremely difficult.”

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

Anthropic - October 2024, International

This article details upgrades to Claude 3.5 Sonnet and introduces a new model, Claude 3.5 Haiku.

Risk Outlook report: The use of artificial intelligence in the legal market

Solicitors Regulation Authority - November 2023, United Kingdom

This report examines the increasing adoption of AI in legal services, detailing its applications and strategies for risk mitigation. The authors conclude:

“The legal market’s use of AI is growing rapidly, and this is likely to continue. As systems become ever more available, firms that could not previously have used these tools will be able to do so. Indeed, it is likely that these systems will become a normal part of everyday life, automating routine tasks. Used well: they will free people to apply their knowledge and skills to tasks that truly need them, they will improve outcomes both for firms and for consumers; and the speed, cost and productivity benefits that they can provide could help to improve access to justice.”

Prominent cases

Mata v. Avianca, Inc.

United States District Court, Southern District of New York - 22 June 2023, United States

The infamous "fake ChatGPT" case. This legal matter involved a lawyer who, while representing a man suing an airline, relied on artificial intelligence to help prepare a court filing. The outcome was unfavourable: the filing cited non-existent cases and the lawyers involved were sanctioned.

However, it's noteworthy as one of the first instances where a court provided official guidance on AI use and existing solicitor ethical responsibilities, including this statement:

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Zhang v Chen

Supreme Court of British Columbia - 23 February 2024, Canada

In this case, a Canadian judge ordered a family law lawyer who submitted fake ChatGPT-generated cases to the court to personally pay the costs of the time opposing counsel spent trying to verify them. The judge stated as a final comment:

“As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.”

F Harber v HMRC [2023] UKFTT 1007 (TC)

First-tier Tribunal (FTT) - 4 December 2023, United Kingdom

A litigant representing herself, apparently using ChatGPT, submitted summaries of non-existent cases to the First-tier Tribunal (FTT) to support her defence against a penalty for failing to notify a Capital Gains Tax liability on the sale of a rented property. Although the FTT accepted that these false citations were provided innocently, the tribunal issued a stern warning about the dangers of litigants using AI-generated "hallucinations" in legal proceedings.

The Tribunal was also assisted by the US case of Mata v Avianca, agreeing with Judge Kastel and finding:

“It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined. As Judge Kastel said, the practice also "promotes cynicism" about judicial precedents, and this is important, because the use of precedent is "a cornerstone of our legal system" and "an indispensable foundation upon which to decide what is the law and its application to individual cases…"

Conclusion

While keeping up with the rapid changes in AI can be daunting, the right resources can make all the difference. We plan to regularly update this resource of AI legal news so you can stay informed, adapt quickly, and remain competitive in a constantly evolving environment. As more lawyers implement legal AI in their firms, the key question becomes not “if” but “how” to do so in an effective and ethical way.

Are there any news items or prominent cases you think should be included above? Let us know! We hope you found this list a helpful resource.
