
AI Is Replacing Lawyers Now

  • Writer: Nikita Silaech
  • Dec 5, 2025
  • 3 min read

Updated: Dec 10, 2025

Image generated with Gemini

Professionals using AI tools completed 12.2% more tasks, finished them 25.1% faster, and produced results of 40% higher quality than those who did not use AI (DocuPilot, 2025). That is the kind of measurement that gets the attention of law firm partners, who are constantly looking for ways to increase billable hours while reducing costs.


However, this measurement obscures something troubling that lawyers and paralegals have started to notice about the AI systems they have been asked to integrate into their workflows. These systems sometimes produce legal briefs that cite cases that do not exist and reference decisions that no court in any jurisdiction ever issued.


The problem lies in how language models work. They are trained to produce coherent-sounding text that follows patterns in their training data, not to verify that the information they generate is factually accurate or refers to real law.


A lawyer in New York drafted a legal brief using an AI tool and cited a case that sounded plausible. The citation was formatted like a real one and looked as though it came straight from a legal research database, but when opposing counsel looked it up, the case did not exist.


The lawyer had to explain to the judge that the AI had hallucinated the case. The judge ultimately did not sanction the lawyer because the hallucination did not materially affect the outcome. However, the incident revealed a systemic vulnerability in how the legal profession is adopting AI (Thomson Reuters, 2025).


AI systems are now being used to draft legal briefs that are submitted to courts without extensive human review: the efficiency gains mean firms are moving documents from draft to submission faster than humans can meaningfully check the work.


Law firms are cutting associate positions and reducing paralegal staff because AI can now handle tasks that traditionally required years of legal training and on-the-job experience. There is an undeniable economic incentive to deploy these systems even when their reliability is questionable.


An associate lawyer who used to spend 40 hours on contract review now spends roughly four hours, because the AI does the initial review and flags potential issues. Firms can therefore handle the same volume of work with fewer people.


If a law firm can reduce its workforce and maintain or increase output, profit margins expand dramatically. The speed at which this is happening suggests that legal education and legal hiring are shifting in real time as firms discover they need fewer junior lawyers.


But the safety problem remains unsolved. AI systems cannot reliably be trusted to generate accurate legal citations, summarize case law faithfully, or identify all the relevant precedents that should inform a legal argument.


Companies like Spellbook and Juro have attempted to address this by training their models specifically on legal texts. They are also building systems that check citations and flag when the model is uncertain about a legal principle (Juro, 2025).


Thomson Reuters and LexisNexis have integrated AI into their legal research platforms, which allows the AI to verify that citations exist before presenting them to lawyers.
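

To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of existence check such a platform might run before surfacing a citation. The toy citation set, the regular expression, and the verify_citations helper are assumptions made for illustration; this is not the actual Thomson Reuters or LexisNexis implementation.

```python
import re

# Hypothetical sketch of a citation existence check, loosely in the spirit of
# the verification features described above. The citation set, the regex, and
# the function name are assumptions for illustration only.

# Toy set of citations known to exist (real U.S. Reports citations).
KNOWN_CITATIONS = {
    "576 U.S. 644",   # Obergefell v. Hodges (2015)
    "410 U.S. 113",   # Roe v. Wade (1973)
}

# Rough pattern for a U.S. Reports citation such as "576 U.S. 644".
CITATION_PATTERN = re.compile(r"\b(\d{1,4}) U\.S\. (\d{1,4})\b")

def verify_citations(draft_text):
    """Return each U.S. Reports citation in the draft and whether it is known to exist."""
    findings = []
    for volume, page in CITATION_PATTERN.findall(draft_text):
        citation = f"{volume} U.S. {page}"
        findings.append((citation, citation in KNOWN_CITATIONS))
    return findings

if __name__ == "__main__":
    draft = (
        "Plaintiff relies on Obergefell v. Hodges, 576 U.S. 644, and on "
        "Smith v. Jones, 999 U.S. 321, a case the AI invented."
    )
    for citation, exists in verify_citations(draft):
        status = "found" if exists else "NOT FOUND - flag for human review"
        print(f"{citation}: {status}")
```

Even a check like this only confirms that a citation resolves to something real; it cannot tell whether that case actually supports the argument it is cited for.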


But the verification systems themselves are not foolproof, and there have been cases where even the AI-powered verification tools have missed hallucinations because the systems are complex and the checks are not exhaustive.


The economic incentives to deploy AI are extremely strong and immediate, but the risks from deploying unreliable AI are distributed across many cases, defendants and plaintiffs, and they only occasionally surface as a visible problem.


A lawyer who is pressured by their firm to move quickly through a document review using AI has an incentive to trust the system, because questioning its output slows down the workflow. And if nothing goes wrong, the efficiency appears to justify the deployment.


But the person bearing the risk of the AI hallucinating a legal citation is the client or the opposing party, not the lawyer or the firm that is using the AI.


There is also a knowledge problem: many lawyers using these tools do not fully understand how large language models work, why they hallucinate, or what kinds of errors are most likely, which means they cannot reliably judge when to trust the AI and when to verify everything manually.


The ethical constraints around AI in law are still being developed, and bar associations in different states are releasing guidance about when AI use is permissible and when lawyers have duties to verify AI output. But the guidance is uneven and often comes after widespread adoption has already begun (Thomson Reuters, 2025).
