“The oldest and strongest emotion is fear, and the oldest and strongest kind of fear is fear of the unknown.” —H.P. Lovecraft

In science fiction, artificial intelligence is often portrayed as something incomprehensible to humans, including the humans who created it: something, if not to be outright feared, then at least not to be trusted. While we believe this "incomprehensibility" may ultimately come to pass, as many of the prognostications of good sci-fi do, there is much that needs to be understood in the here and now. The concepts of interpretability and explainability in AI are at the forefront of discussions of AI applications in medicine, finance, and defense, to name just a few. In many cases, you will find the definitions of interpretability and explainability conflated. Frankly, for many discussions this is not a serious breach, and it is one we will perpetuate here. However, clear and concise definitions at the outset are necessary to understand just how transparent we are attempting to coax that AI black box into being.