AI & the courts: judicial and legal ethics issues
This interim guidance on artificial intelligence (AI) was released by the CCJ/COSCA AI Rapid Response Team.
Courts need to anticipate the ethical issues that arise from the use of AI in the legal profession. Principles in the Model Code of Judicial Conduct (MCJC) and the Model Rules of Professional Conduct (MRPC) for lawyers are implicated when AI is used in the courts.
Competence in technology is an ethical requirement
Judicial officers and lawyers have a basic duty to be competent in the technology relevant to their profession. MCJC 2.5 imposes a duty of competence on judicial officers, including an obligation to keep current with technology and to know the benefits and risks associated with the types of technology relevant to service as a judicial officer. MRPC 1.1 states that lawyers must provide competent representation to their clients, which includes technical competence.
Judicial officers and lawyers must:
- Have a basic understanding of AI, including generative AI, and its capabilities. This includes knowledge of an AI tool's terms of use and of how the tool will use the data entered into it, as well as general familiarity with machine learning algorithms, natural language processing, and other AI techniques relevant to legal tasks
- Analyze the risks associated with using AI for research and drafting, such as bias or hallucinations (made-up responses)
- Determine which areas of practice or processes can be improved with AI
- Determine where AI may not be appropriate for use in the legal profession or the judicial system
- Learn how to optimize prompts to get better results when using generative AI models such as ChatGPT, Gemini, or Copilot
- Identify which issues may require new policies or rules for AI use in the court system
Ethical standards for consideration
Judicial ethics issues
Judicial officers should be aware of the potential for ethical issues arising from AI usage and keep the following rules in mind when using or considering AI.
Ex Parte Communication (MCJC 2.9)
The Rule prohibiting ex parte communication also bars considering "other communications made to the judge outside the presence of the parties or their lawyers" (MCJC 2.9[A]), and material generated by AI could arguably be viewed as information outside the case that is improperly introduced into the judicial decision-making process. AI-generated results often do more than merely review and summarize case law; many carry built-in biases. Relying on such information could also violate the Rule's provision barring independent investigation (MCJC 2.9[C]). External influences on judicial conduct (MCJC 2.4) could also be an issue when a judge relies on an AI program that sets forth an opinion on legal policy.
Confidentiality
Judicial officers have a duty of confidentiality. They must be cognizant of whether they, or their clerks or staff, are entering confidential, sensitive, or draft information into an open AI system when conducting legal research or drafting documents, and of how that information is retained and used by the AI tool. In an open system, the tool may use the shared information to train its model, potentially breaching confidentiality. Judges must avoid inadvertently releasing confidential information; lawyers have the same obligation under MRPC 1.6.
Impartiality and Fairness (MCJC 2.2)
The Rule requiring judges to perform their duties fairly and impartially could be triggered if a judge is influenced by an AI tool that produces results infected by bias or prejudice.
Bias, Prejudice, and Harassment (MCJC 2.3)
Judicial officers need to be aware of the bias or prejudice that may be inherent in certain AI technology. Using a tool whose algorithm or training data embeds bias could violate the Rule against acting with bias or prejudice.
Hiring and Administrative Appointments (MCJC 2.13)
Judicial officers should be aware of the risks of bias or discrimination when AI tools are used to screen prospective clerks or other staff or to otherwise assist in the hiring process. A biased algorithmic recruiting program could produce results or recommendations based on discriminatory information, which could violate the Rule requiring judges to make appointments impartially and on the basis of merit, as well as Title VII. Attorneys using AI technology in making hiring decisions should be mindful of a similar provision, MRPC 8.4(g), which forbids engaging in invidious discrimination in conduct related to the practice of law.
Duty to Supervise (MCJC 2.12)
Judicial officers have a duty to supervise staff and to make sure staff are aware of their obligations under the rules; that duty extends to ensuring staff use AI technologies appropriately.
Attorney ethics issues
Along with the Rules referenced above, lawyers should consider the following rules when using AI.
Responsibilities of a Partner or Supervisory Lawyer (MRPC 5.1)
Partners and other lawyers with "managerial authority" (MRPC 5.1[a]) will be held accountable for ensuring that other lawyers in the firm comply with the Rules of Professional Conduct. Firms therefore need both training in the ethical use of artificial intelligence and policies governing its use by their lawyers. Of course, this also presupposes competence with technology, as discussed earlier.
Responsibilities Regarding Nonlawyer Assistants (MRPC 5.3)
The Rule governing oversight of the work of nonlawyers could be triggered when a subordinate is tasked with selecting a particular AI tool and with implementing it. In addition, the AI technology itself arguably could be considered nonlawyer assistance.
Fees (MRPC 1.5)
Lawyers will have to navigate several fee issues: using AI to the financial benefit of the client, refraining from AI when a client specifically chooses not to have it used on their legal matters, and determining proper fee schedules for using, supervising, and editing a product that relies on generative AI.
Rules that may also be germane to the use of artificial intelligence in the practice of law include MRPC 5.5 (Unauthorized Practice of Law), MRPC 3.2 (Expediting Litigation), and MRPC 3.3 (Candor Toward the Tribunal), among others.
In sum, understanding AI's capabilities and risks, especially regarding bias and confidentiality, is a necessity for technological competence. Court professionals must stay up to date on developments in AI and the potential ethical implications of using it.
Additional information
- "Navigating AI in Court Systems: Ethics, Legal Frameworks, and Practical Tools" webinar (Oct. 16, 2024)
- "Ethics of Generative AI: A Guide for Judges and Legal Professionals" webinar (Sept. 18, 2024)