History is filled with examples of unethical, unconsented research, and while some of these studies have contributed to society’s understanding of science, they also demonstrate another key fact: people really don’t like being part of studies they don’t consent to. Facebook is the most recent research platform to learn that lesson.
On June 29, Adam Kramer, a Facebook data scientist, apologized for research he and others had conducted in 2012 on over 600,000 people without their direct consent. The team behind the experiment slightly altered subjects' news feeds to be either more positive or more negative, then studied the users' corresponding moods. The study found that those exposed to more positive feeds were slightly more likely to leave positive posts, and those with more negative feeds were slightly more likely to leave negative posts.
While the results of the research show how readily people can be affected by the moods and views of others, the larger issue revolves around privacy. Critics have lashed out over the manipulation of users, but Facebook's Data Use Policy states that the company "may use the information we receive about you… for internal operations, including troubleshooting, data analysis, testing, research and service improvement," Huffington Post reports.
But many have argued that the Data Use Policy is a minimum bar in terms of consent, and while researchers have justified the exercise as minimally invasive, it does not appear to have complied with research standards outside of the use agreement. CBS News points out that the American Psychological Association requires that studies involving subject deception alert participants to that deception as soon as the research has wrapped.
As of yet, no legal action has been filed against Facebook. However, with citizens increasingly concerned about their private information, and with Facebook's spotty record on protecting user data, it's not hard to imagine further activity on this front.
It will be interesting to see just how far the company's own user agreement goes in shielding it in instances where it intentionally manipulates the user experience.
For more on the details of the research, check out Zach Warren's initial coverage.