The massive creation of data by users of information technology is powering what some have called the Fourth Industrial Revolution. The current challenge is the collection, organization, and application of that data. One of the primary tools used to mine this avalanche of data is artificial intelligence (AI). To remain competitive in a rapidly evolving economy, most large organizations have either launched AI systems or are planning to deploy them in the near future. However, these disruptive technologies can have unintended consequences resulting from poor design, biased data, lack of transparency, and lack of accountability. Implementing an artificial intelligence system in any organization therefore requires an awareness of these risks and a commitment to creating an AI system that is both useful and ethical.

This article will review the key steps to follow for a successful implementation of an ethical AI system, highlighting the critical role of lawyers in the process. This column’s analysis continues a series of articles on Responsible AI published here over the past year. The problem of creating ethical AI systems has also been studied by hundreds of government and private organizations that have issued principles and guidelines for the implementation of AI, including the Singapore Model AI Governance Framework, the Pan-Canadian AI Strategy, the Australian AI Ethics Framework, UNESCO’s Recommendation on the Ethics of AI, and the IEEE’s Ethically Aligned Design V.2. This article will summarize some of the principles set forth in these more detailed commentaries.

Creating the Management Team