Page 20 - ACTL Journal_Sum24
We have to make sure that lawyers are not using generative AI as a shortcut and as a proxy for learning how to do things themselves. It’s a tool.
And if it’s ChatGPT, I don’t think you can put anything that could be remotely traced to the client into that software without violating the rule. There are private companies that are offering generative AI that is not built on the large language model that ChatGPT is, where you can have contractual protections. But you need to have those contractual protections in order to make sure that for use like this you’re not violating your ethical rules.”
Frederick and Wolfe continued to describe Pro’s responses that identified favorable jurors for the plaintiff – individuals with personal loss, skeptical of autonomous technology, and likely older, or cyclists; and unfavorable jurors – those with a background in the automotive or related industries, tech enthusiasts and those with a strong belief in progress and innovation.
Frederick proceeded to do a live demonstration, asking Pro to give him jury voir dire questions to identify unfavorable jurors. Pro complied, but clearly missed some important areas for questions, proving again that human input remains invaluable to producing the best results.
When asked if Pro’s voir dire questions would be permitted in his court, Judge Thumma gave a resounding “NO.” A perfect segue to David Wolfsohn’s discussion about the proliferation of judges’ orders related to AI. The general apprehension is that these orders are too broad or unwieldy, and even unnecessary, if lawyers
simply follow the rules and ethics required. “If I want to comply with these orders, would I be able to? What am I supposed to not do and what am I permitted to do under these orders? And I think you’ll see that a lot of the terms are not defined and there are some definite problems with trying to apply these orders.
“I spent a lot of time on Westlaw’s AI tool that was rolled out maybe six or eight months ago. In one of the orders the judge says, ‘You can’t use any AI, but Westlaw and Nexis are okay’.”
But Wolfsohn was impressed with the extent of AI’s efforts to find answers to his questions. He posited whether lawyers might actually be obligated to use AI in the future, particularly for cost effectiveness, considering the twenty hours of billable time it would have taken an associate to produce the same responses AI gave in moments.
Judge Thumma himself has previously queried AI about how to improve access to justice. His AI app answered that it could provide “a more just legal outcome than a human.” A harsh answer to a judge. But Judge Thumma didn’t disagree completely. “There is a kernel of truth in that and I think that’s a good thing. And I’m not saying that it’s going to replace folks in this room, but I think there are opportunities to use generative AI for sort of large-volume dispute resolution.
“And what it will also do is change judges. It will change law students and recent law grads. It used to be when I was a kid long ago that you became indispensable by knowing the facts of the case, right? Big case stuff, if you knew the witnesses and the documents, you were in pretty good shape and you had to know some law and do some other things, as well. But you know, five years ago I was talking at a symposium about the need to know technology; that if you’re a young law grad and know technology, that’s how you become indispensable. I think today if you graduate with some knowledge of generative AI, you’re indispensable. And if you don’t have that knowledge, you’re going to be replaced, given competence and other things, as a brand-new attorney. Even if you decide not to use it, you’ll make a better decision on whether to use it if you know what it is.”
That led to the ethical issue of meeting the duty of competence for lawyers who are not familiar with AI. Fairless noted that familiarity with AI is both a competitive advantage and an ethical obligation: Rule 1.1, Comment 8 requires lawyers to keep abreast of technology that may impact their practice, and generative AI falls squarely within that.
Used as a tool, with its limitations understood and its output treated as a starting point rather than taken at face value, it can be really valuable. And for tasks that require digesting large amounts of information, it could be extraordinarily valuable. But one of the reasons judges are implementing all these different requirements is that they are being reactive. They are seeing lawyers make mistakes; they are seeing lawyers who don’t understand what they’re using, assuming that it’s giving them accurate information.
We don’t actually need to have any standing AI orders regarding fake cases and similar errors. Those things are a failure of lawyering, not a failure of the court not having the right order. The rules are there;