Employees around the world are looking for ways to make their jobs easier. With so much buzz surrounding artificial intelligence (AI), and generative AI in particular, many employees are turning to AI tools to help reduce their workloads. While traditional AI uses algorithms to process data inputs and generate expected results based on predefined rules, generative AI is a class of AI that takes the inputs provided and uses algorithms to create new content, including text, images and code. AI tools can analyze enormous volumes of data to synthesize concepts, spot anomalies, detect patterns and triangulate information.

AI tools can analyze such data far faster than humans. But, as with any innovation, there are risks to manage and best practices to employ to encourage responsible use of AI. In this installment of our ongoing series on e-discovery and information governance, we delve into the world of AI and discuss how organizations can mitigate legal and compliance risks while maintaining an evolving approach to an equally evolving technology.
