This article appeared in Cybersecurity Law & Strategy, an ALM publication for privacy and security professionals, Chief Information Security Officers, Chief Information Officers, Chief Technology Officers, Corporate Counsel, Internet and Tech Practitioners, and In-House Counsel.

The use of Artificial Intelligence (AI) tools has increased dramatically in the past six months, fueled in no small part by the launch of ChatGPT last November. Although sophisticated algorithms are part of many commonly used technology solutions, company management and legal departments have often found themselves unprepared to assess and manage the risks associated with business use of generative AI and other AI technologies. Developing and implementing a policy and governance program for AI use requires understanding the specific use cases for AI; the inputs, the outputs, and how the processing works; how the AI affects the company, individuals, and other entities; which laws apply; and what notice and consent are required or prudent, and when. Having a policy that outlines acceptable use, and documenting assessments establishing that AI systems are used in a manner consistent with that policy and that the benefits outweigh potential harms, can go a long way toward managing legal and reputational risk.

Understand the Terms of Art