
Law has a magic wand now

Some people think large language models will transform the practice of law. I think it's bigger than that.

There is a reasonable chance that large language models (LLMs), led by ChatGPT4 today but who knows what by next weekend, are going to change the world. Like, the actual world, spinning underneath you right now.

Bill Gates ranks AI with the internet and the mobile phone in terms of revolutionary impact. Microsoft’s GPT4‑powered Bing is doing things no search engine should be able to do. Reputable scientists are asking questions that normally get you removed from faculty mailing lists, like: Has GPT4 somehow developed theory of mind? Is it exhibiting the first signs of artificial general intelligence?

Answering those kinds of questions is so far above my pay grade I can’t even see it from here. So I’ll settle for a much less challenging query: What will LLMs do to the legal sector? Are we experiencing, as an NYU Law School professor declared yesterday, “the end of legal services as we know it”?

Well, let’s start with what we do know. ChatGPT4 has demonstrated not just competence with but outright mastery of the LSAT and the Uniform Bar Examination, scoring in the 88th and 90th percentiles respectively. “Large language models can meet the standard applied to human lawyers in nearly all jurisdictions in the US,” write Professor Dan Katz and his co-authors, “by tackling complex tasks requiring deep legal knowledge, reading comprehension, and writing ability.” That’s something.

ChatGPT4 can also do things that only lawyers (used to be able to) do. It can look up and summarize a court decision, analyze and apply sections of copyright law, and generate a statement of claim for breach of contract. (Have I mentioned that ChatGPT4 only emerged in March?) Those are just three quick examples I found on Twitter; with an announcement on March 23, 2023, that OpenAI is rolling out plugins to integrate ChatGPT with third‑party services and allow it to access the internet, those examples could soon number in the thousands.

Will ChatGPT4 and other LLMs replace lawyers? I keep hearing this question and it fascinates me, because I think it really speaks to the legal profession’s insecurities. Doctors and architects and engineers aren’t, for the most part, asking themselves whether GPT4 will replace them, because they are confident they possess skills and perform functions that AI can’t replicate. (Yet. I’m not selling this technology short.)

But for most lawyers, our entire professional functionality is rooted in our expertise with knowledge and our fluency with words. We understand the law, we apply the law to facts, and we analyze the results in order to reach an actionable conclusion. We create untold types of documentation and correspondence, with language precisely arranged, deployed, and manipulated to obtain for our clients the results they want.

That’s not all we do. It’s not all we can do. But it sure is the vast majority of what our billable time is spent on. And now someone has gone and invented a Knowledge and Words Machine that does all of those things, in hardly any time at all. Why would we not be alarmed? There’s a reason why “legal services” is #1 on this list of industries most at risk of disruption from generative AI.

Look at the naming conventions of LLMs for another clue. Casetext has released an incredibly powerful program it describes as a “legal assistant that does document review, legal research memos, deposition preparation, and contract analysis in minutes.” It looks awesome.

But this program is not called “AI Assistant”; it’s called “CoCounsel.” Just as Microsoft’s new GPT4-powered productivity tool for Word, Excel, and PowerPoint is called “Copilot.” These are not the names you give to assistants. They are the names you give to your colleagues, your partners, and your peers. “Co-” means equal.

Now, let’s be clear. LLMs are not people, and they’re not sentient beings (although they fake sentience alarmingly well). They don’t “think” the way humans think. But we don’t fully understand how they work (and their creators aren’t interested in sharing the details with us), and they perform their tasks with a speed and apparent ease that defies coherent explanation.

So this seems like a good time to remember Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” Note carefully: Clarke didn’t say the technology was magic – he said it couldn’t be differentiated from it. That’s what ChatGPT4 looks like to the legal sector today. In practical terms, it’s a magic wand for law.

What happens when you introduce a magic wand into the legal market? On the buyer side, you reduce by a staggering degree the volume of tasks that you need to pay lawyers (whether in‑house or outside counsel) to perform. It won’t happen overnight: developing, testing, revising, approving, and installing these sorts of systems in corporations will take time. But once that is done, the beauty of LLMs like ChatGPT4 is that they are not expert systems. Anyone can use them. Anyone will.

(The PeopleLaw market won’t be forgotten. As the strength of LLMs’ computational capacity intensifies and high‑quality datasets of legal knowledge for everyday problems are developed, we’ll also soon see ordinary people logging on to navigate a free, public, online hub of sound answers to legal questions and basic remedies to legal problems, of a type I described last spring.)

What about legal services sellers? Law firms will (and have already begun to) adopt legal LLMs – their clients will expect it, their lawyers will demand it (lawyers love intuitive technology, which is why they don’t like most legal tech), and their competitors will do it if they don’t. But a business that sells a single asset that a magic wand just made obsolete isn’t a business with long-term upside. Or medium-term. Or even next Christmas.

“Lawyer hours worked” is the inventory of law firms, and LLMs are going to reduce that inventory massively and permanently. But “lawyer hours worked” is also integral to how law firms price their offerings, generate their profits, measure their lawyers’ value, decide on promotions to partnership, and establish standards of organizational commitment. It is a core part of their business identity. There is no way LLMs will leave law firms unscathed.

As I said earlier, ChatGPT4 is still in its infancy. It is exceedingly foolish to try drawing any firm conclusions from such scant evidence, and I won’t try. But I can’t shake the feeling that, someday, we will divide the history of legal services into “Before GPT4” and “After GPT4.” I think it is that big.

I think law firms should fundamentally re‑examine what they are going to sell and what they will organize their culture around. And I think that lawyers will need to re‑imagine who we are, what we do, and what we’re for, because it shouldn’t be this easy for a machine to become a magic wand when pointed at the legal profession.

Will AI replace lawyers? I absolutely don’t believe so. But if somehow that happens, it will say more about us than it does about the AI.

Jordan Furlong is a speaker, author and legal market analyst who forecasts the impact of changing market conditions on lawyers and law firms. He has given dozens of presentations in the U.S., Canada, Europe and Australia to law firms, state bars, courts and legal associations. He is the author of Law is a Buyer’s Market: Building a Client-First Law Firm, and he writes regularly about the changing legal market at his website Law21.ca. Jordan recently began a Substack newsletter, where “Law has a magic wand now” appeared on March 24, 2023.

Information is current to April 20, 2023. The information contained in this release is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavour to provide accurate and timely information, there can be no guarantee that such information is accurate as of the date it is received or that it will continue to be accurate in the future. No one should act upon such information without appropriate professional advice after a thorough examination of the particular situation.
