With each passing day, we are increasingly living in an algorithmic universe, driven by the easy accumulation of big data. In our personal lives, we inhabit a 24/7 world of “filter bubbles”: Facebook can tune how liberal or conservative one’s newsfeed appears based on prior postings; Google personalizes the ads that appear in Gmail based on the content of our conversations; and merchants like Amazon and Pandora serve up personalized recommendations based on our prior purchases and everything we click on.

While (at least in theory) we remain free in our personal lives to choose whether to keep using these applications, increasingly what we see is shaped by hidden bias in the software. Similarly, in the workplace, the use of black box algorithms has the potential to introduce certain types of bias without an employee’s or prospective employee’s knowledge. The question we wish to address here is this: From an information governance perspective, how can management provide some kind of check on the sometimes naïve, sometimes sophisticated use of algorithms in the corporate environment?

Algorithms in the Wild