Lawyering in the age of GenAI

by Ann T. Hollenbeck, Emily Tait, and Tyler Loveall   |   Michigan Bar Journal

The first public release of ChatGPT in November 2022 had an immediate, disruptive impact — seemingly overnight, the excitement and fear surrounding generative artificial intelligence (GenAI) could be felt across legal and business circles. Organizations across diverse industries ranging from art1 to health care2 to zoology3 grappled with the promise and peril of this breakthrough technology.

In our state, the University of Michigan became the first major university to develop its own GenAI tool, U-M GPT4; the Michigan Campaign Finance Act was amended to require disclosure when political ads are wholly or substantially generated by AI5; and Detroit-based Rocket Mortgage launched Pathfinder, a GenAI tool to help develop loan agreements.6

By training on voluminous data to learn language patterns, grammar, and context, GenAI can generate new, human-like content (including visual, written, and audio content) in response to a user’s prompt. While AI has been around (and evolving) for decades, GenAI places sophisticated technological tools in the hands of any individual with an account. For employers, that means these tools are potentially in the hands of their workforce. For in-house counsel, it means that evaluating the legal risks associated with GenAI is now a key responsibility that must be balanced against the business opportunities GenAI can create.7

While 2023 was marked by a flurry of activity as organizations scrambled to understand the implications of GenAI, 2024 marks a shift towards recognizing that GenAI is here to stay even as the law and technology continue to change. Here are a few tips for navigating this complex and constantly evolving area.

STAY TUNED

With the legal, regulatory, and technological aspects of GenAI continuously changing, staying up to date is critical. Expect continued regulation of AI at the state, federal, and international levels.8 Regulation may be generally applicable or industry-specific, such as the U.S. Department of Health and Human Services’ algorithm transparency rules.9

Agency activity is also expected. For example, the U.S. Copyright Office in 2023 published Registration Guidance for Works Containing Material Generated by AI and then issued a notice of inquiry regarding AI, which may lead to more guidance in 2024.10 Significant lawsuits involving GenAI are already underway, so judicial decisions and jury verdicts should be closely watched. Every attorney has a general duty to keep abreast of changes in the law and technology,11 but effective counsel relating to GenAI will require particular attention.

UNDERSTAND THE LEGAL EFFECT OF OUTPUT

GenAI users may be unaware of limitations on their rights to the content these tools generate in response to their inputs.

First, users may assume they have intellectual property rights in the output. But human authorship and inventorship are required under U.S. copyright and patent laws, so output autonomously generated by a GenAI tool is unlikely to be eligible for protection.12 With sufficient human assistance or modification, however, the output may qualify for protection.13 Just how much human involvement is necessary to qualify for inventorship and authorship is unclear and will be the subject of ongoing discussion. GenAI raises additional complexities regarding trade secrets. A trade secret is information that derives independent economic value from not being “generally known to” others or “readily ascertainable by proper means.”14 Some GenAI tools allow the user’s inputs and the generated outputs to serve as training data for the tool or to be otherwise disclosed to a third party, which can potentially eviscerate trade secret protection.

Second, rights in the output may be limited by third-party IP rights. Numerous lawsuits have already been filed alleging that GenAI tools have made unlawful use of copyrighted works.15 Though some GenAI providers offer a degree of indemnification,16 its scope may be limited — e.g., the indemnification may extend only to outputs autonomously created by the tool and not to any user modifications of the output — thus creating potential liability for the user.

KNOW THE EULAS

End-user license agreements (EULAs) governing GenAI tools vary in how they address user prompts, output rights, ownership, data privacy, compliance, liability, confidentiality, and more. As mentioned previously, the scope of indemnification (if any) must be evaluated. Since some tools may be governed by interconnected policies and EULAs are frequently updated, assessing the terms and conditions governing AI tools is an ongoing process. In some instances, a user may have the ability to pursue a specific enterprise version of a GenAI tool — even though it may be difficult to persuade a GenAI provider to meaningfully deviate from its standard EULA. If this option is available, however, it may offer better protection and more effectively address the user’s particular needs.

BEWARE OF BIAS AND HALLUCINATIONS

GenAI tools may “hallucinate” and present inaccurate information as fact or generate biased or otherwise harmful outputs.17 Some EULAs state that GenAI providers do not warrant the accuracy of output and may limit or disclaim liability for inaccuracies or other damages.

In a well-publicized case, a New York attorney was sanctioned after filing a brief, written with the assistance of a GenAI tool, that contained citations to non-existent cases.18 In late 2023, the U.S. District Court for the Eastern District of Michigan proposed rules requiring attorneys to disclose GenAI use in any filings and to verify that a human had reviewed the filing.19 Other courts have proposed and adopted similar rules.20 Human review and verification can reduce the risk of potential liability and embarrassment.

EDUCATE

Employee education and policies can help reduce risk while also allowing organizations to benefit from GenAI. Inadvertent disclosure of confidential information is less likely if employees understand what is (and is not) appropriate use of the technology.21 Education is also important to ensure that GenAI tools are being used in a legal and ethical manner consistent with the organization’s policies, goals, and values. An employee using GenAI in a manner that perpetuates bias or disseminates false information, for example, can expose the organization to liability and damage the employer’s reputation.

BALANCE RISK WITH OPPORTUNITY

Immediately following the launch of ChatGPT, some organizations reacted by banning GenAI tools altogether.22 Today, outright bans appear to be less common and may be seen as unrealistic, overly risk-averse, costly to enforce, and disadvantageous from a competitive and efficiency standpoint. Balancing risk against opportunity will be critical for any GenAI user.

CONCLUSION

GenAI is here to stay and will continue to evolve from a technological, legal, and regulatory perspective. As clients navigate uncharted waters and try to find a balance between risk and opportunity within their organizations, attorneys have a critical role in ensuring that GenAI tools are used in a legal and ethical manner.


“Best Practices” is a regular column of the Michigan Bar Journal, edited by George Strander for the Michigan Bar Journal Committee. To contribute an article, contact Mr. Strander at gstrander@yahoo.com.


ENDNOTES

1. Marr, The Amazing Ways Coca Cola Uses Generative AI in Art and Advertising, Forbes (September 8, 2023) [https://perma.cc/9NXA-LWTY] (all websites accessed April 15, 2024).

2. Toma, Senkaiahliyan, Lawler, Rubin, & Wang, Generative AI Could Revolutionize Health Care, Nature (November 30, 2023) [https://perma.cc/8BXL-JTTG].

3. Yovel & Rechavi, AI and the Doctor Dolittle Challenge, Current Biology (August 7, 2023) http://www.cell.com/current-biology/pdf/S0960-9822(23)00848-5.pdf.

4. Berg, University of Michigan to Provide Custom AI Tools to Campus Community, The Detroit News (August 21, 2023) [https://perma.cc/DX7X-UNYP].

5. 2023 HB 5141.

6. Rocket Pro TPO, What is Pathfinder by Rocket? [https://perma.cc/82WU-GUEP].

7. While not the subject of this article, courts, ADR tribunals, and outside lawyers are also faced with significant questions about how they can make legal and ethical use of GenAI tools so as to enhance outcomes, increase efficiencies, and promote access to justice.

8. See Executive Order 14110; G7 Hiroshima Process on Generative Artificial Intelligence, OECD (September 7, 2023) [https://perma.cc/8QLP-VJCH]; EU Strikes Political Deal on Landmark Artificial Intelligence Act, Jones Day [https://perma.cc/254T-JKJ6].

9. HHS Finalizes Rule to Advance Health IT Interoperability and Algorithm Transparency (December 13, 2023) [https://perma.cc/RJA4-6FKA].

10. 88 Fed Reg 59942 (August 30, 2023); 88 Fed Reg 16190 (March 16, 2023).

11. ABA MPRC 1.1, com 8.

12. Thaler v Vidal, 43 F4th 1207, 1210 (CA Fed, 2022); Thaler v Perlmutter, opinion of the United States District Court for the District of Columbia, issued August 18, 2023 (Case No. CV 22-1564 (BAH)).

13. 88 Fed Reg 16190, 16193 (March 16, 2023).

14. MCL 445.1902.

15. Silverman v OpenAI, Inc, filed July 7, 2023 (ND Cal); Getty Images (US) Inc v Stability AI Ltd, filed February 2, 2023 (D Del); New York Times v OpenAI, Inc, filed December 27, 2023 (SD NY).

16. Universal License Terms for Online Services, Microsoft [https://perma.cc/BWN4-TM52].

17. AI Concerns, United Nations (December 22, 2023) [https://perma.cc/QP3F-5S8N].

18. New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023) https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22.

19. Proposed LR 5.1 (ED MI).

20. Proposed R 32.3 (CA 5).

21. Radiant Global Logistics, Inc v Furstenau, 368 F Supp 3d 1112, 1128 (ED Mich 2019).

22. See Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak, Bloomberg (May 1, 2023) [https://perma.cc/98XH-6LQ6].