The benefits of big data continue to grow across all sectors, driving innovation and efficiency and enabling more informed decisions, among many other positive developments. As with any nascent area, legal protections can sometimes be applied in triage fashion rather than proactively. This ad hoc approach raises the stakes for legal exposure and can trigger a brand crisis if a data disaster strikes. While the perfect should not stand in the way of progress, incorporating the key privacy and consumer protection considerations discussed below when shaping an organization’s big data practices can help avoid or mitigate big data bloopers.
Privacy considerations can be baked in efficiently as one of the early steps in exploring and designing big data strategies. Knowing the legal risk areas upfront supports informed strategic decisions about what types of data to collect, the risks associated with that collection, and what safeguards to apply to manage those risks. Matching the business case for collecting each data element against its risk profile also helps evaluate whether the business need is commensurate with the related risk. For example, certain types of data are more likely to bring an organization within the scope of federal and state laws that impose particular obligations (and legal and monetary exposure, and sometimes individual liability, for noncompliance). Legal obligations (and scrutiny) also are more likely to be triggered when working with sensitive classes of information, such as children’s, health, financial, or precise geolocation data (as well as other types of data or individual profiles that collectively may be considered very personal).
“De-Identified Data” Also Comes with Obligations
One common ground for dismissing a privacy assessment is that the big data at issue does not include any personally identifiable information (PII). Particularly with respect to big data (and the ability to analyze such data sets efficiently to identify users, individual behavioral patterns, and personalized forecasts), that is a dated concept. According to the Federal Trade Commission (FTC), privacy analysis is warranted for data that can be re-identified to become personal information, based on “technological advances and the ability to combine disparate pieces of data [to] lead to identification of a consumer, computer, or device even if the individual pieces of data do not constitute PII.” In a big data era, reasonable protection against such reverse engineering is likely to include a combination of technological fixes as well as administrative and contractual protections. While there may be no perfect firewall against re-identification, honestly assessing the capabilities and risks is more likely to lead to a calibrated, informed approach to managing those risks.
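The FTC’s point about combining disparate pieces of data can be illustrated with a short sketch. The records, field names, and `reidentify` function below are entirely hypothetical and greatly simplified; real re-identification attacks typically link a “de-identified” data set against publicly available records (such as voter rolls) on shared quasi-identifiers like ZIP code, birth date, and gender.

```python
# Hypothetical illustration: names have been stripped from this data set,
# but quasi-identifiers (ZIP code, birth date, gender) remain.
deidentified = [
    {"zip": "02138", "birth_date": "1945-07-31", "gender": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1982-01-15", "gender": "M", "diagnosis": "asthma"},
]

# A separately available, identified data set (e.g., a public registry).
public_registry = [
    {"name": "J. Doe", "zip": "02138", "birth_date": "1945-07-31", "gender": "F"},
    {"name": "R. Roe", "zip": "02140", "birth_date": "1990-03-02", "gender": "M"},
]

def reidentify(deid_rows, id_rows):
    """Link any records that share all three quasi-identifiers."""
    matches = []
    for d in deid_rows:
        for p in id_rows:
            if (d["zip"], d["birth_date"], d["gender"]) == (
                p["zip"], p["birth_date"], p["gender"]
            ):
                matches.append({"name": p["name"], "diagnosis": d["diagnosis"]})
    return matches

# The first "de-identified" record links to a named individual.
print(reidentify(deidentified, public_registry))
```

Even this toy join recovers a name-to-diagnosis link from data that contained no names, which is why the analysis should turn on whether data is reasonably linkable to a person or device, not merely on whether obvious identifiers have been removed.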
Attention to Self-Regulation
Big data participants also should be mindful of self-regulatory codes that may layer on obligations directly or through contract terms. For example, the Network Advertising Initiative (NAI) applies strict requirements for the ad tech industry, including requiring opt-in consumer consent before merging personal data with previously collected data that is linked or reasonably linkable to a particular computer or device (unless certain proprietary data is involved), and requiring members to pass those obligations through in contract terms with business customers and to enforce them. Other self-regulatory or industry standards may also apply. Failure to assess how such restrictions can affect the business case for big data can lead to a mismatch of expectations all around, in addition to potential legal exposure.
Data for Eligibility Purposes
Big data can create further surprises if resulting reports qualify as consumer or credit reports, which the Fair Credit Reporting Act defines very broadly to include “any written, oral, or other communication of any information by a consumer reporting agency [also broadly defined] bearing on a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer’s eligibility for” credit, insurance, employment, housing, or similar decisions about consumers’ eligibility for certain benefits and transactions. That statute and the Equal Credit Opportunity Act set forth a number of very specific obligations that require proactive processes to avoid costly missteps (and opportunities for class action claims). A privacy analysis can help steer clear of such minefields.
A further word of warning: contract defenses may provide little comfort in the face of a consumer protection-based action in such instances. Even if the vendor (or partner) contract terms expressly disclaim any credit reporting agency status, that doesn’t necessarily make it so, as others have learned. The legal exposure will depend on the information at issue, and how it is being shared and used.
Various reports put the average cost of a data breach at anywhere from $4 million to $7 million. If big data systems are compromised, whether intentionally by a hacker or inadvertently by an employee, the sheer number of records multiplies the possible exposure, not to mention the brand impact. While robust data security is appropriate any time a system involves protected information, expectations increase with the scope and sensitivity of the data at issue and the role of the party responsible for it. Business and legal expectations demand robust, sophisticated cyber protections that are in place and frequently updated to address ever-evolving threats and vulnerabilities.
* * *
Big data may be new(ish), but many of these legal considerations are not. Considering these concepts on the front end, and updating them as technology and the market evolve, is more likely to maximize the possibilities of big data on a sustainable path forward.