Florida Issues Proposed Advisory Opinion 24-1 on Lawyer Use of AI

The Florida Bar has issued a proposed advisory opinion on the use of generative AI in the legal field. The guidance is solid and practical, but it is not yet a final opinion.

Part I – The lawyer has a duty of confidentiality and a duty to understand the technology:

A lawyer’s first responsibility when using generative AI should be the protection of the confidentiality of the client’s information as required by Rule 4-1.6 of the Rules Regulating The Florida Bar. The ethical duty of confidentiality is broad in its scope and applies to all information learned during a client’s representation, regardless of its source. Rule 4-1.6, Comment. Absent the client’s informed consent or an exception permitting disclosure, a lawyer may not reveal the information. In practice, the most common exception is found in subdivision (c)(1), which permits disclosure to the extent reasonably necessary to “serve the client’s interest unless it is information the client specifically requires not to be disclosed[.]” Rule 4-1.6(c)(1). Nonetheless, it is recommended that a lawyer obtain the affected client’s informed consent prior to utilizing a third-party generative AI program if the utilization would involve the disclosure of any confidential information.

Rule 4-1.6(e) also requires a lawyer to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the client’s representation.” Further, a lawyer’s duty of competence requires “an understanding of the benefits and risks associated with the use of technology[.]” Rule 4-1.1, Comment.

When using a third-party generative AI program, lawyers must sufficiently understand the technology to satisfy their ethical obligations. For generative AI, this specifically includes knowledge of whether the program is “self-learning.” A generative AI that is “self-learning” continues to develop its responses as it receives additional inputs and adds those inputs to its existing parameters. Neeley, supra n. 2. Use of a “self-learning” generative AI raises the possibility that a client’s information may be stored within the program and revealed in response to future inquiries by third parties.

The opinion explains that the lawyer should understand the AI provider’s policies on confidentiality and data retention. Does the provider disclose information to third parties? Does the provider sell your search history to advertisers?

Part II – A lawyer also has a duty to verify the accuracy of any work product or text created by the AI software:

While Rule 4-5.3(a) defines a nonlawyer assistant as “a person,” many of the standards applicable to nonlawyer assistants provide useful guidance for a lawyer’s use of generative AI.

First, just as a lawyer must make reasonable efforts to ensure that a law firm has policies to reasonably assure that the conduct of a nonlawyer assistant is compatible with the lawyer’s own professional obligations, a lawyer must do the same for generative AI. Lawyers who rely on generative AI for research, drafting, communication, and client intake risk many of the same perils as those who have relied on inexperienced or overconfident nonlawyer assistants.

Second, a lawyer must always review the work product of a generative AI just as the lawyer must do so for the work of nonlawyer assistants such as paralegals. Lawyers are ultimately responsible for the work product that they create regardless of whether that work product was originally drafted or researched by a nonlawyer or generative AI.

Functionally, this means a lawyer must verify the accuracy and sufficiency of all research performed by generative AI. The failure to do so can lead to violations of the lawyer’s duties of competence (Rule 4-1.1), avoidance of frivolous claims and contentions (Rule 4-3.1), candor to the tribunal (Rule 4-3.3), and truthfulness to others (Rule 4-4.1), in addition to sanctions that may be imposed by a tribunal against the lawyer and the lawyer’s client.

Third, these duties apply to nonlawyers “both within and outside of the law firm.” ABA Comm. on Ethics and Prof’l Responsibility, Formal Op. 498 (2021); see Fla. Ethics Op. 07-2. The fact that a generative AI is managed and operated by a third party does not obviate the need to ensure that its actions are consistent with the lawyer’s own professional and ethical obligations.

Further, a lawyer should carefully consider what functions may ethically be delegated to generative AI. Existing ethics opinions have identified tasks that a lawyer may or may not delegate to nonlawyer assistants and are instructive. First and foremost, a lawyer may not delegate to generative AI any act that could constitute the practice of law such as the negotiation of claims or any other function that requires a lawyer’s personal judgment and participation.

Comments: AI is going to challenge the legal profession in ways we cannot fully understand. Just remember: (a) don’t divulge client confidences, and (b) monitor any outputs from the AI to make sure they are factually accurate and based on existing caselaw.

Colorado Lawyer Agrees to Discipline for AI Use

Colorado’s Presiding Disciplinary Judge suspended a lawyer for one year and one day, with 90 days to be served and the remainder stayed, for his use of ChatGPT in composing a motion that contained fictitious caselaw. The lawyer asked ChatGPT to draft the motion, and ChatGPT fabricated the case citations entirely. The lawyer filed the motion with the court before eventually revealing the problem to his supervising attorney and the court. The disciplinary notice reads as follows:

People v. Zachariah C. Crabill. 23PDJ067. November 22, 2023.

“The Presiding Disciplinary Judge approved the parties’ stipulation to discipline and suspended Zachariah C. Crabill (attorney registration number 56783) for one year and one day, with ninety days to be served and the remainder to be stayed upon Crabill’s successful completion of a two-year period of probation, with conditions. The suspension took effect November 22, 2023.

In April 2023, a client hired Crabill to prepare a motion to set aside judgment in the client’s civil case. Crabill, who had never drafted such a motion before working on his client’s matter, cited case law that he found through the artificial intelligence platform, ChatGPT. Crabill did not read the cases he found through ChatGPT or otherwise attempt to verify that the citations were accurate. In May 2023, Crabill filed the motion with the presiding court. Before a hearing on the motion, Crabill discovered that the cases from ChatGPT were either incorrect or fictitious. But Crabill did not alert the court to the sham cases at the hearing. Nor did he withdraw the motion. When the judge expressed concerns about the accuracy of the cases, Crabill falsely attributed the mistakes to a legal intern. Six days after the hearing, Crabill filed an affidavit with the court, explaining that he used ChatGPT when he drafted the motion.

Through this conduct, Crabill violated Colo. RPC 1.1 (a lawyer must competently represent a client); Colo. RPC 1.3 (a lawyer must act with reasonable diligence and promptness when representing a client); Colo. RPC 3.3(a)(1) (a lawyer must not knowingly make a false statement of material fact or law to a tribunal); and Colo. RPC 8.4(c) (it is professional misconduct for a lawyer to engage in conduct involving dishonesty, fraud, deceit, or misrepresentation).”

My comments: This case has been widely reported in the press. It is unfortunate that the attorney trusted the software to write the motion for him and then failed to check the case citations. Software can be corrected quickly so that it does not repeat the error; human beings, on the other hand, are prone to repeat these types of errors. When I was an associate, it was drummed into me to check the case citations and to check Shepard’s. This can all be done online now.

Ed Clinton, Jr.