Businesses are racing to unlock the value of generative artificial intelligence (AI) tools. Broadly speaking, generative AI refers to AI tools that create new content, such as text, images, and video, from user prompts; examples include OpenAI’s ChatGPT, Google’s Bard, and Stability AI’s Stable Diffusion. Businesses can use generative AI to build, edit, and test software code, write ad copy, and create better customer service and support experiences through enhanced chatbot capabilities. But the use and deployment of generative AI bring privacy and intellectual property risks, as well as the risk of harm to consumers from inherent bias and unfair or deceptive commercial practices. Those risks grow as regulatory scrutiny and enforcement gain momentum and plaintiffs begin to seek redress for alleged harms caused by generative AI. Accordingly, businesses should operationalize best practices to help mitigate that risk.

Privacy Issues

Training generative AI tools requires massive data sets. Common sources include data scraped from the internet and licensed data sets, both of which can contain personal information.
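To make the privacy concern concrete, the following minimal Python sketch (all names and contact details are invented for illustration) shows how personal information, such as an email address published on a web page, can end up embedded in text gathered for a training corpus:

```python
import re

# Hypothetical page text, as might be collected by a web crawler
# assembling a training corpus (the name and email are invented).
scraped_page = """
Contact our sales lead, Jane Doe, at jane.doe@example.com
for a product demo.
"""

# A simple pattern illustrating that personal information like
# email addresses can sit inside scraped training data.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def find_personal_info(text):
    """Return email addresses found in a block of scraped text."""
    return EMAIL_RE.findall(text)

print(find_personal_info(scraped_page))
# Prints: ['jane.doe@example.com']
```

Filters like this can flag obvious identifiers, but much personal information (names, biographical details, images) is far harder to detect and remove, which is one reason scraped training data raises privacy risk.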