Unless you’ve been living in a cave since late last November, you, like the rest of us, have been inundated with news about the release of artificial intelligence content generators (AICGs) such as ChatGPT, Perplexity, Jasper, and similar platforms. While reports about these new tools have expressed enthusiasm for their capabilities, alarmists among us have warned of rampant plagiarism in academic settings and the decline of traditional content-based industries. The current Hollywood writers’ strike, with the writers’ ongoing row with major studios over AI-generated content, is the most recent high-profile example. Precious little, however, appears to have been written about “AICG best practices” for service professionals, including, of course, us lawyers.

Although empirical data are lacking, anecdotal evidence strongly suggests that both newly minted and seasoned legal practitioners across a variety of practice areas are turning to AICGs to aid them in their everyday tasks. Most obviously, these tasks include answering legal research questions and drafting documents such as legal briefs and operating agreements. It is also likely that many users are naïve in their use of AICGs because they lack the knowledge base to identify, much less resolve, the cavalcade of problems that can arise from using these tools as a lawyer. By developing an awareness of these potential pitfalls, however, the hope is that lawyers (and other service providers) can use AICGs to significantly aid in fulfilling, but never substitute for, their ethical (and perhaps moral) responsibilities to their clients. This article seeks to raise that awareness among practicing lawyers where it otherwise may be lacking.