Mata v. Avianca (Case No. 1:22-cv-01461), a case in the U.S. District Court for the Southern District of New York in which plaintiff’s counsel, after using generative artificial intelligence, cited nonexistent cases in opposition to a motion to dismiss, is a cautionary tale about the impact of modern technology on law firms. From predictive analytics in litigation to contract review, legal research, and even client communication, AI is transforming the legal sector. Despite this tremendous potential, the technology carries serious ethical implications that cannot be overlooked. As representatives of the legal system, law firms must thoughtfully address the many ethical challenges that AI presents, among them the problem of accountability for work done with AI and, relatedly, a lawyer’s duties of honesty, candor, and competence.
Accountability, Liability, Transparency, and Explicability.
When AI tools are employed for tasks that affect clients, it is not obvious who bears responsibility if the AI makes an error or oversight, as it is certainly capable of doing. This raises ethical and legal questions about accountability and liability. AI systems, particularly those using machine learning, often suffer from a “black box” problem, in which the decision-making process is opaque. In a sector like law, where decisions can have profound impacts on people’s lives, there is a need for transparency and explicability. Ensuring that AI systems are designed and used in ways that provide clarity about their decision-making is an ethical necessity.
Of course, an AI, by definition, cannot be morally or legally accountable for anything; it is not a person. The apparent solution, then, is for the legal system to hold either the programmers of the software or its user accountable for any harm that follows from its use. But this raises at least two further problems. First, if the system holds the programmer responsible, where does the buck ultimately stop? With the development team? That inquiry leads to the second issue: what about the person who uses the AI that returns an error? To what extent is the practitioner responsible for the harm the AI causes? The practitioner almost certainly does not understand the mechanism of the program, and probably could not understand it even if an attempt were made.
Although it may seem intuitively unjust to hold someone accountable for something that nobody could realistically understand, our system requires that someone be liable when false AI output is relied upon. Ultimately, the lawyer, as an officer of the court, is the one who swears an oath and holds the duties of candor and competence. Therefore, the lawyer operating AI tools has a duty to know the risks inherent in using them. To use artificial intelligence for a task like research, and then fail to check its work, falls short of the duty of competence. The legal community will be watching closely to see whether sanctions are levied against the attorneys in Mata v. Avianca for using ChatGPT to draft a brief that cited nonexistent cases.
A Lawyer’s Duties of Honesty and Candor.
The use of AI tools in the legal profession also raises concerns about a lawyer’s duty of honesty and candor toward a tribunal. When lawyers make statements before a court, whether orally or in writing, they are expected to affirm the truthfulness of those statements to the best of their knowledge. However, if a lawyer relies on an AI tool to perform tasks that are essentially inscrutable to them, making such a declaration confidently becomes challenging. The complexity of AI algorithms and the opacity of their decision-making make it difficult for lawyers to fully understand and explain the basis of the AI’s conclusions. None of this excuses the basic necessity of Shepardizing cases, which must still be done when relying on AI for legal authority such as case law and codes.
This situation prompts the question of how the legal system should interpret a lawyer’s duty of honesty and candor in the context of AI-generated work. One possibility is for the legal system to treat AI usage within a concept resembling “to the best of my knowledge,” acknowledging that the lawyer’s understanding may be limited by the involvement of AI tools. This approach would recognize the lawyer’s reliance on AI and the responsibility to make accurate statements based on the information the AI provides, while acknowledging the inherent limits on comprehending the AI’s internal workings. However, such an approach could also make lawyers feel absolved of the duty of due diligence and encourage over-reliance on AI as a crutch rather than a tool, which would be prohibited under ethical rules.
A Lawyer’s Duty of Competency.
The use of AI may nonetheless become necessary, because lawyers also have duties of competence and zealous advocacy. If AI tools are available and can enhance a lawyer’s ability to provide more accurate and comprehensive legal services, lawyers may have a professional obligation to use them; failing to do so could be seen as falling short of the standard of competence and could result in suboptimal representation of clients.
Thus, while AI presents tremendous opportunities for law firms to enhance efficiency and accuracy, it also introduces significant ethical challenges. Law firms must navigate these waters carefully, weighing not just the advantages of AI but also its potential risks and implications. As the legal sector continues to adopt and integrate AI, an ongoing dialogue around these ethical considerations will be crucial. Law firms, and indeed all stakeholders in the legal ecosystem, have a responsibility to ensure that the AI revolution in the legal industry is both ethically conscious and just.