
Ethical Considerations: AI in the Legal Profession

By now, many of us have heard about the New York attorney who was sanctioned for his use of ChatGPT in drafting a brief filed in a United States District Court. In Mata v. Avianca, Inc.,1 plaintiff’s attorney filed a brief that relied heavily on fictitious cases. Judge P. Kevin Castel found that while there is nothing “inherently improper” about using artificial intelligence (“AI”), lawyers must still ensure the accuracy of their filings.2 Judge Castel stated that the attorneys “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the AI tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”3 Mata demonstrates one reason why it is important for attorneys to consider the Rules of Professional Conduct before using AI in their practice.

Although the attorneys in Mata were not professionally disciplined, they arguably violated Rule 3.3(a) of the New York Rules of Professional Conduct.4 Rule 3.3(a) states, “A lawyer shall not knowingly … make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer[.]”5 By continuing to stand by the fake opinions when questioned by the judge, the attorneys failed to correct a false statement of law previously made to the tribunal, in violation of Rule 3.3(a).

This is not the only Rule of Professional Conduct potentially implicated by the use of AI in legal work. Under Ohio R. Prof’l Cond. 1.1, attorneys shall provide competent representation to a client. This includes “keep[ing] abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”6 One day, using AI may be standard practice among attorneys, and it may eventually be necessary to provide competent representation. At present, however, AI offers practitioners only limited benefits, and relying on it without proper attorney oversight could lead to a breach of the duty of competence. An attorney who chooses to use AI in their practice should always check the accuracy of any AI-generated document to ensure competent lawyering.

Next, under Ohio R. Prof’l Cond. 1.6, “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of or unauthorized access to information related to the representation of a client.”7 To adhere to this rule, a lawyer should be cognizant of the information they input into ChatGPT. OpenAI, the developer of ChatGPT, warns users not to share sensitive information with the tool. As a self-learning generative AI tool, ChatGPT learns from the information its users input into its system. This means that information disclosed by one user may be used when generating a response for another user. Accordingly, if an attorney inputs confidential client information into ChatGPT, that information could be accessed by a third party researching the same topic or fact pattern. Because of this risk, the ABA advises that a client must give informed consent before their attorney inputs information relating to the representation into a self-learning generative AI tool.8 In addition to these inherent risks, a lawyer should consider the security of the AI tool they are using before revealing any sensitive client information. For example, ChatGPT recently experienced a data leak that allowed some users to view titles from other users’ conversation histories.9 Because lawyers have a duty to implement measures that safeguard client data, they should use only online tools that adequately protect it.10

While there is no universal rule regarding AI use in the legal profession, some judges have issued standing orders governing the use of AI in their courtrooms. Shortly after Mata, U.S. District Judge Brantley Starr of the Northern District of Texas issued the first judicial standing order regarding AI. The order requires parties to file a certificate attesting either (1) that they will not use generative AI to draft any document filed in his court or (2) that any filing drafted with the assistance of generative AI has been checked for accuracy by a human being.11 While Judge Starr does not ban AI from his courtroom, this certification serves as a reminder to attorneys to avoid the mistake made by the attorneys in Mata. In the Southern District of Ohio, U.S. District Judge Michael Newman issued a standing order forbidding any party from using AI in the preparation of any court filing.12 Importantly, this order does not ban the use of AI to gather information from legal search engines, such as LexisNexis or Westlaw. Although not explicitly limited to generative AI, the order thus appears to target generative AI tools.

It is important to consider all the potential implications before implementing AI in one’s legal practice. An attorney should first consult the standing orders of the judge before whom they are practicing to determine whether any address the use of AI. Assuming there are no specific limitations on AI use in that judge’s courtroom, the attorney should consider whether they can use AI effectively. When deciding whether using AI is in a client’s best interest, an attorney should reflect on their own technological capabilities as well as the capabilities of the AI system they intend to use. A generative AI system is not trained to produce true information; it is trained to predict the text most likely to follow a prompt. While AI may be useful for starting a draft, it is not yet dependable enough to produce a court document without the oversight of a legal professional. Accordingly, an attorney who uses AI should check any AI-generated document for accuracy before relying on it, and especially before filing it with the court.


Shepard is a recent graduate of Salmon P. Chase College of Law and worked as a law clerk at the CBA in the Ethics and Grievance Department.  

 

1 Mata v. Avianca, Inc., No. 1:22-CV-1461-PKC, 2023 U.S. Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023).

2 Id.

3 Id.

4 Ohio R. Prof’l Cond. 3.3(a) and N.Y. R. Prof’l Cond. 3.3(a) are identical.

5 N.Y. R. Prof’l Cond. 3.3(a).

6 Ohio R. Prof’l Cond. 1.1, Comment 8.

7 Ohio R. Prof’l Cond. 1.6(c).

8 ABA Comm. on Ethics & Prof’l Responsibility, Formal Op. 512 (2024).

9 OpenAI, March 20 ChatGPT outage: Here’s what happened, https://openai.com/blog/march-20-chatgpt-outage (last visited April 8, 2024). 

10 Ohio R. Prof’l Cond. 1.6, Comment 18.

11 Judge Brantley Starr, United States District Court Northern District of Texas, https://www.txnd.uscourts.gov/judge/judge-brantley-starr (last visited April 8, 2024).

12 Standing Order Governing Civil Cases, Judge Michael J. Newman, United States District Court Southern District of Ohio, https://www.ohsd.uscourts.gov/sites/ohsd/files//MJN%20Standing%20Civil%20Order%20eff.%2012.18.23.pdf (July 14, 2023). 
