In an AI landscape innovating faster than ever, staying ahead of the curve can feel like a constant challenge. What was impossible a year ago, or even a month ago, is fast becoming possible and even the norm.
In the legal space, accuracy is of paramount importance, which is why AI offers unique potential and risk. It’s crucial for lawyers to understand how AI works and how it’s changing in order to unlock the benefits of ethical and effective use.
To help you stay informed, we have gathered a wide range of legal AI news and reports, along with a number of prominent cases.
Australian Government | Department of Industry, Science and Resources - October 2024, Australia
AI is a rapidly evolving landscape. The Australian Government's interim response on safe and responsible AI (published in January 2024) identified that AI systems enable actions and decisions to be taken at a speed and scale previously unimaginable.
The Government is considering several mandatory guardrails for certain AI uses, including potential reforms to Australia's privacy laws, and is even exploring mandatory watermarking of AI-generated content. Submissions closed on 4 October 2024, and we're eager to see how the government responds.
Financial Times - September 2023, International
This article provides a clear introduction to how Large Language Models (LLMs) generate text and why these models are such versatile cognitive engines.
It's a fantastic visual guide that unpacks how these models work and sheds light on the limitations to keep in mind.
For example:
“In order to grasp a word’s meaning, work in our example, LLMs first observe it in context using enormous sets of training data, taking note of nearby words. These datasets are based on collating text published on the internet, with new LLMs trained using billions of words.”
Allens - May 2024, Australia
In May 2024, Allens published a paper benchmarking LLMs, namely GPT-4, Gemini 1, Claude 2, Perplexity and LLaMa 2. The Allens AI Australian law benchmark tested the capabilities of LLMs to deliver Australian legal advice. LLMs continue to develop at a significant rate and could have profound implications for the future provision of legal services. The paper sought to identify the key risks of obtaining Australian legal advice from an LLM. The report found that lawyers are still needed:
“An understanding of the law and an ability to apply it are two vastly different skill sets, the latter requiring a profound understanding both of the law and the business context in which it's being applied.”
Allens intends to rerun this benchmarking exercise in future months as new LLMs and other AI tools are released onto the market, including models specifically focused on the legal domain.
Thomson Reuters - July 2024, International
A comprehensive report drawing from over 2,200 survey responses. It explores key trends including AI-related concerns (such as potential job displacement), the technology's impact on professional work, emerging AI-powered tools, and projections for AI adoption and utilisation across various surveyed professions.
Legal Futures - March 2024, United Kingdom
An article discussing remarks on generative AI by Sir Geoffrey Vos KC, Master of the Rolls, at a LawtechUK event. He stated:
“… to think of the day when there will be liability, legal liability, not for using AI, but for failing to use AI to protect the interests of the people we serve. I think that is undoubtedly a day that’s coming soon. When an accountant can use an AI tool to spot fraud in a major corporate situation and fails to do so, surely there might be liability. The same for employer liability to protect employees and in every other field you can possibly imagine.”
LexisNexis - September 2024, United Kingdom
The results of a follow-up survey of more than 800 UK legal professionals at law firms and in-house legal teams.
LexisNexis - April 2024, Australia & New Zealand
This LexisNexis survey of over 560 lawyers in Australia and New Zealand found that 1 in 2 lawyers have already used generative artificial intelligence to perform day-to-day tasks, and that almost the entire profession believes it will change how legal work is carried out in future. Furthermore, 60% believed they will be left out if they don't use AI tools.
Lawcover - March 2024, Australia
Host Julian Morrow discusses the impact of AI tools like ChatGPT on legal practice with Schellie-Jayne Price, AI Practice leader and partner at Stirling & Rose. Their conversation explores AI's applications in law, associated risks, and strategies for lawyers to safeguard themselves and their practices. The episode includes a recording and transcript link.
Fishbowl Insights - February 2023, United States
A recent survey by Fishbowl, a professional social network for anonymous career discussions, revealed that 43% of professionals have used AI tools like ChatGPT for work-related tasks. Strikingly, nearly 70% of these users are doing so without their employer’s knowledge.
Queensland Law Society Proctor - April 2023, Australia
This article looks at factors to consider when adopting an acceptable AI usage policy for law firms, to ensure ethical compliance.
“It is likely that AI tools specifically for lawyers will become commonplace from late 2023. No matter how useful the tool is, whether and how to use it should be a considered decision taken at practice level, not left up to the discretion of individual staff. This is not a suggestion that AI should not be used. The ability to do this well is likely to be an important professional and business skill in the near future. The reality is also that, as more software providers include AI functionality in their products, avoiding AI is going to become extremely difficult.”
Anthropic - October 2024, International
This article details upgrades to Claude 3.5 Sonnet and introduces a new model, Claude 3.5 Haiku.
Solicitors Regulation Authority - November 2023, United Kingdom
This report examines the increasing adoption of AI in legal services, detailing its applications and strategies for risk mitigation. The authors conclude:
“The legal market’s use of AI is growing rapidly, and this is likely to continue. As systems become ever more available, firms that could not previously have used these tools will be able to do so. Indeed, it is likely that these systems will become a normal part of everyday life, automating routine tasks. Used well: they will free people to apply their knowledge and skills to tasks that truly need them, they will improve outcomes both for firms and for consumers; and the speed, cost and productivity benefits that they can provide could help to improve access to justice.”
United States District Court - District of Minnesota - 10 January 2025, United States
Minnesota recently passed a law criminalising certain “deepfakes” when they are disseminated with the intent to harm a political candidate or sway an election. The plaintiffs in the case claim that the statute violates their First Amendment rights and sought a preliminary injunction to stop its enforcement.
In this lawsuit that was literally about the perils of AI-generated misinformation, Professor Hancock, an expert on AI misuse, managed to misuse AI in spectacular fashion. He submitted a sworn declaration bolstered by sources that, as it turned out, were plucked straight out of GPT-4o’s imagination. The judge, with a hint of bemused exasperation, remarked:
“Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less.”
The irony was striking. The very expert tasked with explaining to the court how AI misuse could be hazardous had inadvertently demonstrated this point more effectively than any planned testimony could have. The Court was, understandably, unimpressed.
The judge stated that “AI, in many ways, has the potential to revolutionize legal practice for the better” but:
“… when attorneys and experts abdicate their independent judgment and critical thinking skills in favor of ready-made, AI-generated answers, the quality of our legal profession and the Court's decisional process suffer.”
The court ultimately excluded the expert's testimony in its entirety:
“Professor Hancock's citation to fake, AI-generated sources in his declaration—even with his helpful, thorough, and plausible explanation—shatters his credibility with this Court... [and while] the Court does not believe that Professor Hancock intentionally cited fake sources, and the Court commends Professor Hancock and Attorney General Ellison for promptly conceding and addressing the errors in the Hancock Declaration. But the Court cannot accept false statements — innocent or not — in an expert's declaration submitted under penalty of perjury. Accordingly, given that the Hancock Declaration's errors undermine its competence and credibility, the Court will exclude consideration of Professor Hancock's expert testimony.”
United States District Court | Eastern District of Texas - 25 November 2024, United States
In a recent ruling by the United States District Court for the Eastern District of Texas, the plaintiff's counsel Brandon Monk received sanctions for submitting a legal brief containing AI-generated citations to nonexistent cases and quotations. The court's decision followed a show cause hearing on November 21, 2024, where Monk admitted to using a generative AI tool to draft the response without verifying its content. Despite being alerted to these inaccuracies, Monk failed to address them until the court intervened.
The court emphasised the importance of Rule 11 of the Federal Rules of Civil Procedure and the Eastern District of Texas's Local Rules, which explicitly warn attorneys that generative artificial intelligence tools may produce factual and legal inaccuracies and require attorneys to verify all information submitted to the court. Consequently, Monk was sanctioned and ordered to pay a $2,000 penalty, attend a legal education course on AI, and inform his client of the court's decision.
United States District Court | Central District of California - 16 October 2024, United States
In this case, the plaintiff's counsel improperly relied on artificial intelligence to draft the commencing motion, resulting in citations to nonexistent allegations and case law, sanctionable conduct that the Court addressed separately.
Saratoga County Surrogate's Court - 10 October 2024, United States
This case included a discussion on the use of Artificial Intelligence, where testimony revealed that an expert witness relied on Microsoft Copilot. Despite using AI, the expert witness couldn't recall the input or prompt used for the Supplemental Damages Report. Moreover, the expert was unable to specify Copilot's sources or explain how it functions and generates outputs.
The testimony didn't address whether Copilot's calculations accounted for fund fees or tax implications. Interestingly, the judge decided to query Microsoft Copilot itself as to its reliability. The judge found that Copilot had less faith in its outputs than the expert witness seemingly did.
“In the instant case, the record is devoid of any evidence as to the reliability of Microsoft Copilot in general, let alone as it relates to how it was applied here. Without more, the Court cannot blindly accept as accurate, calculations which are performed by artificial intelligence… this Court holds that due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues that prior to evidence being introduced which has been generated by an artificial intelligence product or system, counsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing prior to its admission, the scope of which should be determined by the Court, either in a pre-trial hearing or at the time the evidence is offered.”
United States District Court for the Western District of Virginia, Harrisonburg Division - 24 July 2024, United States
U.S. District Judge Thomas Cullen ordered counsel to show cause why she shouldn't face sanctions and referral to the state bar for disciplinary proceedings. This action stems from the attorney's citation of multiple fake cases and use of fabricated quotations in a court filing.
Judge Cullen emphasised that attorneys who submit filings with inaccurate or fabricated case law or quotations should face close scrutiny.
United States Court of Appeals for the District of Columbia Circuit - Legal Case / Copyright, 3 June 2024, United States
The appellant, Dr. Stephen Thaler, applied to register a claim to a visual work. According to the application, an artificial intelligence machine autonomously generated the work without human involvement. The Copyright Office denied Dr. Thaler's application because the work lacked human authorship. After an unsuccessful administrative appeal, Dr. Thaler appealed, alleging that the Copyright Office should not have applied a human-authorship requirement.
The district court granted summary judgment to the government. The appeals court affirmed the decision, finding that the Copyright Office correctly denied the application because human authorship is a basic prerequisite to copyright. This conclusion reflects a straightforward application of the statutory text, history, and precedent.
United States Court of Appeals for the Ninth Circuit - 22 March 2024, United States
The Ninth Circuit struck an appellate brief that cited nonexistent case law and misrepresented facts. Although it's uncertain whether generative AI was used to create these fictitious cases or draft the brief, the situation bears a striking resemblance to other instances where lawyers have cited cases fabricated by ChatGPT.
“The panel noted that appellants filed an opening brief replete with misrepresentations and fabricated case law. The brief included only a handful of accurate citations, almost all of which were of little use to this Court because they were not accompanied by coherent explanations of how they supported appellants’ claims. No reply brief was filed. The deficiencies violated Federal Rule of Appellate Procedure 28(a)(8)(A). The panel was, therefore, compelled to strike appellants’ brief and dismiss the appeal.”
Supreme Court of British Columbia - Legal Case, 23 February 2024, Canada
In this case, a Canadian judge ordered a family law lawyer who submitted fake, ChatGPT-generated cases to the court to personally pay the costs of the time opposing counsel spent trying to verify them. The judge stated as a final comment:
“As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.”
United Kingdom First-tier Tribunal (FTT) - 4 December 2023, United Kingdom
A litigant representing herself, apparently using ChatGPT, submitted summaries of non-existent cases to the First-tier Tribunal (FTT) to support her defence against a penalty for failing to notify a Capital Gains Tax liability on the sale of a rented property. Although the FTT accepted that these false citations were provided innocently, the tribunal issued a stern warning about the dangers of litigants using AI-generated "hallucinations" in legal proceedings.
The Tribunal was also assisted by the US case of Mata v Avianca, agreeing with Judge Kastel and finding:
“It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined. As Judge Kastel said, the practice also "promotes cynicism" about judicial precedents, and this is important, because the use of precedent is "a cornerstone of our legal system" and "an indispensable foundation upon which to decide what is the law and its application to individual cases…"
United States District Court, D. New Hampshire - 16 October 2023, United States
In this case, Lidia Taranov, a blind and cognitively disabled self-represented litigant, was enrolled in a Medicaid program in New Hampshire. She sued several officials and agencies for terminating some of her services, claiming it violated her rights. However, the court dismissed her complaint due to the lack of a viable legal argument.
While it's unclear whether generative AI was used to draft the brief or create fictitious cases, the situation bears resemblance to other instances where lawyers have cited AI-fabricated cases.
“In her objection, Taranov cites to several cases that she claims hold "that a state's Single Medicaid Agency can be held liable for the actions of local Medicaid agencies" The cases cited, however, do no such thing. Most of the cases appear to be nonexistent. The reporter citations provided for Coles v. Granholm, Blake v. Hammon, and Rodgers v. Ritter are for different, and irrelevant, cases, and I have been unable to locate the cases referenced. The remaining cases are entirely inapposite.”
United States District Court Southern District of New York - 22 June 2023, United States
The infamous ChatGPT “fake cases” case. This matter involved a lawyer who, while representing a man suing an airline, relied on artificial intelligence to help prepare a court filing. The outcome was unfavourable.
However, it's noteworthy as one of the first instances where a court provided official guidance on AI use and solicitors' existing ethical responsibilities, including this statement:
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
United States District Court for the Southern District of Ohio Western Division - 9 May 2023, United States
In this case, the plaintiff, Elijah Whaley, was criticised for presenting a disorganised, contradictory argument that lacked clarity and relied extensively on fabricated authority. Whaley's status as a self-represented litigant did not exempt him from following procedural rules or providing factual support for his claims. The plaintiff later acknowledged using Liquid AI.
“… includes a number of invented citations to opinions which do not exist. As reflected in Plaintiff’s Table of Authorities, he supposedly cites to approximately 66 cases. At the very least, it appears Plaintiff made up every unreported decision he cites to … Even when citing actual cases, Plaintiff patently misrepresents them by inventing quotes which do not appear in those opinions. … It is possible that the extensive use of false authority could be the result of Plaintiff relying on Chat GPT or similar artificial intelligence language applications to draft arguments for his Opposition.”
While keeping up with the rapid changes in AI can be daunting, the right resources can make all the difference. We intend to regularly update this resource of legal AI news so you can stay informed, adapt quickly, and remain competitive in a constantly evolving environment. As more lawyers implement legal AI in their firms, the key question becomes not “if” but “how” to do this in an effective and ethical way.
Do you know of any news or prominent cases you think should be included above? Let us know! We hope you found this list a helpful resource.