Generative artificial intelligence (GenAI) is transforming nearly every sector of society, including the practice of law. Legal professionals are increasingly using AI tools for research, drafting, contract review, and even predicting judicial outcomes, with as many as one-third of respondents to one survey reporting daily GenAI use.1 But with this rapid adoption come questions that go beyond efficiency and point to the core of legal ethics, including competence, confidentiality, and professional judgment.
ATTORNEY COMPETENCE
One of the main ethical challenges with GenAI, as with other forms of technology, is competence. Rule 1.1 of the American Bar Association Model Rules of Professional Conduct2 requires lawyers to provide competent representation. Historically, competence has meant being knowledgeable in the relevant areas of law and using traditional tools effectively. The ABA Commission on Ethics 20/20 (2009-2013) formally brought technology within the Rule 1.1 framework through the “Maintaining Competence” comment: “[A] lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology[.]”3 The Michigan Rules of Professional Conduct (MRPC) follow the ABA’s lead and include technological competence (“including the knowledge and skills regarding existing and developing technology that are reasonably necessary to provide competent representation for the client …”)4 within the meaning of Rule 1.1 competence.
While the competence mandate regarding technology originally would have covered using the internet, redacting PDFs, and working efficiently in Microsoft Word, it now encompasses the use of GenAI and, in the future, will reach technologies not yet available. How well do lawyers need to understand the algorithms behind their AI tools to use them competently? The question is difficult, especially given the “black box” nature of many machine-learning models, which makes complete, in-depth knowledge of these systems a near impossibility. The comment to ABA Rule 1.1 points to understanding the “benefits and risks” of a technology; a lawyer who truly has that understanding will recognize the additional steps needed to represent the client competently. Without it, lawyers may find themselves relying on tools that make predictions or generate content without fully grasping how those outputs are created. That reliance can lead to errors, as we saw early on with the Avianca Airlines case.5 Lawyers must be able to critically assess the reliability of AI systems and understand the implications of delegating parts of their work to an algorithm.
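One concrete safeguard the Avianca episode suggests is never letting an AI-supplied citation reach a filing unverified. Below is a minimal Python sketch of that idea, offered as an illustration rather than a production tool: it pulls anything that looks like a reporter citation out of an AI draft so a person can check each one in Westlaw, Lexis, or the court’s own records. The regex covers only a small illustrative subset of citation formats, and the case names in the demo are hypothetical.

```python
import re

# Matches a small illustrative subset of reporter citations,
# e.g., "925 F.3d 1339" or "598 U.S. 471"; real citation
# grammars are far more varied than this pattern.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|N\.W\.2d)\s+\d{1,4}\b"
)

def extract_citations(ai_output: str) -> list[str]:
    """Return every citation-like string in an AI draft so a human
    can confirm each cited case exists and supports the draft."""
    return sorted(set(CITATION_PATTERN.findall(ai_output)))

if __name__ == "__main__":
    # Hypothetical draft text for demonstration only.
    draft = ("Plaintiff relies on Doe v. Acme Airlines, 925 F.3d 1339 "
             "(2d Cir. 2019), and Roe v. Wren, 598 U.S. 471 (2023).")
    for cite in extract_citations(draft):
        print(f"VERIFY BEFORE FILING: {cite}")
```

A checklist like this does not judge whether a citation is genuine; it only guarantees that no citation skips human review, which is exactly the step that failed in Avianca.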
CONFIDENTIALITY
Confidentiality is another major concern for attorneys using GenAI. Under both the ABA Model Rules6 and the MRPC,7 lawyers must protect client information from unauthorized disclosure. AI tools, particularly those that rely on cloud computing or external data sources (e.g., ChatGPT), may expose sensitive client information to third parties, whether inadvertently or through security vulnerabilities.
For instance, using GenAI tools like chatbots to draft documents could mean that confidential data is sent to servers where the lawyer has limited control over how that information is processed or stored. Even when providers promise data security, the very act of transferring sensitive information introduces risks that require careful consideration. Additionally, GenAI systems trained on large datasets could, theoretically, learn from and retain information provided during client consultations. Lawyers need to take proactive steps to ensure the tools they use comply with ethical standards, including carefully reviewing the terms of service and privacy policies associated with GenAI technologies.
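What those steps look like will vary by firm and by tool, but one common pattern is scrubbing obvious client identifiers before any text leaves the firm’s systems. The following is a minimal Python sketch of that pattern, under stated assumptions: send_to_genai is a hypothetical stand-in for whatever vetted service a firm actually uses, and the regexes catch only a few obvious identifier formats, not the full range a dedicated redaction tool would handle.

```python
import re

# Illustrative patterns for obvious identifiers; a production
# workflow would rely on a vetted redaction tool, not ad hoc regexes.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL REDACTED]"),
    (re.compile(r"\(?\b\d{3}\)?[ .-]\d{3}[ .-]\d{4}\b"), "[PHONE REDACTED]"),
]

def scrub(text: str) -> str:
    """Replace obvious client identifiers before text leaves the firm."""
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text

def send_to_genai(prompt: str) -> str:
    # Hypothetical stand-in for a vetted GenAI service, included
    # only so the example runs end to end.
    return f"(model response to: {prompt!r})"

if __name__ == "__main__":
    memo = "Client Jane Roe (SSN 123-45-6789, jroe@example.com) asks ..."
    print(send_to_genai(scrub(memo)))
```

Redaction of this kind supplements, rather than substitutes for, reviewing a provider’s terms of service and data-retention practices.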
BIAS AND FAIRNESS
AI systems are trained on data, and that data carries the biases present in the real world and on the internet. This becomes an ethical issue when lawyers rely on AI tools for predictive analysis, sentencing recommendations, or even jury selection. If an AI system is trained on biased data, it will likely perpetuate unfair outcomes — contrary to a lawyer’s duty to uphold justice. For example, studies have shown that some AI algorithms used in criminal justice settings are more likely to misclassify individuals from marginalized communities, leading to biased policing,8 sentencing,9 or parole decisions.10 Lawyers using such tools must be vigilant, questioning the fairness of these algorithms and ensuring that they are not reinforcing systemic inequalities.
The challenge here is twofold: lawyers must educate themselves about how biases can infiltrate AI systems, and they must advocate for transparency in AI development. One possible improvement would be to require developers to disclose the datasets used to train AI models so that biases can be discovered and countered.
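To make the auditing idea concrete, the short Python sketch below shows one common fairness check: comparing false positive rates across groups, meaning how often people who did not in fact reoffend were nonetheless flagged as high risk. The data here is hypothetical, and a real audit would examine many more metrics, but a large gap between groups on even this one number is the kind of disparity the studies cited above describe.

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, reoffended).
    Returns, per group, the share of people who did not reoffend
    but were nonetheless flagged as high risk."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                 # person did not reoffend
            negatives[group] += 1
            if predicted:              # but the model flagged them
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

if __name__ == "__main__":
    # Hypothetical audit data, not results from any real study.
    data = [
        ("group_a", True, False), ("group_a", False, False),
        ("group_a", False, True), ("group_b", True, False),
        ("group_b", True, False), ("group_b", False, False),
    ]
    for group, fpr in false_positive_rates(data).items():
        print(f"{group}: false positive rate = {fpr:.0%}")
```

Running a check like this presupposes access to the model’s outputs and to outcome data, which is precisely why transparency from developers matters.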
PROFESSIONAL JUDGMENT
Human judgment is an essential part of a lawyer’s role. AI tools can automate the drafting of contracts, perform legal research, and even suggest litigation strategies — tasks that were once solely within the lawyer’s purview. While this automation can save time, there is a risk that overreliance on AI might erode the exercise of professional judgment.
Professional judgment is nuanced, context-sensitive, and deeply rooted in experience. AI, however, works by identifying patterns and making probabilistic predictions based on historical data. It lacks the ability to fully appreciate the subtleties that might inform a lawyer’s strategy or the ethical considerations that might come into play in a particular case. Lawyers must be cautious not to let AI make decisions for them, especially in areas that require nuanced judgment. AI should augment, not replace, the critical thinking and ethical considerations that lie at the heart of legal practice.
CONCLUSION
Artificial intelligence has the potential to transform the legal profession for the better by helping lawyers work more efficiently, reduce costs, and provide better service to clients. However, lawyers must remain vigilant, ensuring that they use AI tools in a manner consistent with their professional responsibilities. This means not only understanding the tools but also questioning their limitations, biases, and impact on the justice system.
The legal community needs to engage in ongoing dialogue about the role of AI in practice. Ethical frameworks must adapt to ensure that the core values of the profession are upheld even as technology reshapes the landscape. This might involve revisiting current ethical rules and issuing new guidelines that specifically address the challenges posed by AI. By integrating AI thoughtfully and ethically, lawyers can ensure that technology serves as a force for justice rather than a threat to it. As AI continues to evolve, so too must our understanding of what it means to be an ethical legal practitioner in the digital age.