Data Privacy Monitor

Commentary on Data Privacy & Information Security Subjects

Safe Harbor Part Deux: The Privacy Shield

Posted in International Privacy Law

This week began like many others. An arbitrary deadline came and went – this one, January 31, 2016, set by the Article 29 Working Party for European and United States regulators to address the void created by the invalidation of the Safe Harbor Framework for EU-U.S. data transfers in the Schrems decision back in October. Many of us had long ago given up hope of any meaningful accord, given the short period of time. Business continued as usual, but speculation ran rampant. Then, as those of us on the West Coast had our first cup of coffee Tuesday, we began to receive a barrage of emails and social media notifications from the usual suspects indicating that something had changed. Was there a deal? A Safe Harbor 2? What exactly was it?

Just before 7:30 a.m. Pacific time, the news broke that negotiators had reached a “political agreement” on a new data transfer framework. A few minutes later, EU Commission Vice President for the Digital Single Market Andrus Ansip and Commissioner Vera Jourová held a press conference to announce a new framework, the “EU-U.S. Privacy Shield.” According to Commissioner Jourová:

“For the first time ever, the United States has given the EU binding assurances that the access of public authorities for national security purposes will be subject to clear limitations, safeguards and oversight mechanisms. Also for the first time, EU citizens will benefit from redress mechanisms in this area. In the context of the negotiations for this agreement, the US has assured that it does not conduct mass or indiscriminate surveillance of Europeans. We have established an annual joint review in order to closely monitor the implementation of these commitments.”

Legal Developments in Connected Car Arena Provide Glimpse of Privacy and Data Security Regulation in Internet of Things

Posted in Cybersecurity, Online Privacy

With the holiday season in the rear view, automobiles equipped with the newest technology connecting carmakers with their vehicles, vehicles with the world around them, and drivers with the consumer marketplace – Connected Cars – have moved from the lots to driveways. Automakers are remaking their fleets to offer unprecedented choice and convenience to drivers. However, as recent studies have shown, the connectivity inherent in Connected Cars, and the fast pace at which the industry is developing, raise privacy, data security, and physical safety concerns about the vulnerability of Connected Car computer systems. Lawmakers and regulators have begun to devote increased attention to this issue, while plaintiffs’ attorneys have been emboldened to haul automakers, manufacturers, and computer system developers into court. As one of the earliest entrants into, and fastest-growing components of, the Internet of Things (IoT), Connected Cars represent a testing ground for the development of consumer privacy rights and security standards for the IoT. The approach by Congress and the courts to the governance of Connected Cars will likely guide the development of standards and practices across the IoT spectrum.

Internet of Things

Connected Cars are part of the growing and evolving Internet of Things. The IoT describes the ecosystem of everyday products and services that are equipped with “smart” technology that allows them to connect to other products or services to communicate and transfer information about users to retailers, manufacturers, and the like, typically via a wireless network. The IoT currently includes devices we use every day such as Fitbits, connected appliances, smartphones and smart TVs. As the industry grows, IoT devices will continue to permeate the objects we use on a daily basis.


Encryption: The Battle Between Privacy and Counterterrorism

Posted in Cybersecurity

For privacy advocates, it is universally accepted that encryption is a very good thing. After all, encrypted data is deemed a safe harbor under HIPAA and state breach-notification laws, providing an “out” from potential fines and penalties when an encrypted device containing sensitive health or other personal information is lost. In addition to encouraging encryption via statutory safe harbors, federal and state regulators also use the stick of enforcement actions, costing organizations millions of dollars when sensitive information is put at risk due to the lack of encryption.

So the issue would appear to be settled. Use encryption, and more encryption, and the government will be happy – right? Well, not quite. 2016 brings a battle between privacy advocates, who want more security, and U.S. intelligence and law enforcement agencies, which deem encryption a threat.

Why would the government consider encryption to be a problem? It all comes down to counterterrorism. In the wake of the Paris and San Bernardino attacks and other ISIS-directed or -inspired plots, the U.S. government is concerned that terrorists can find refuge in encrypted communications.

For example, Apple’s iMessage system uses end-to-end encryption, meaning that text messages are encrypted on the sender’s device and are decrypted only on the recipient’s end. Even Apple cannot decrypt the messages, thwarting court orders to turn them over. Apple doesn’t hide this fact. Rather, it proudly trumpets its encryption capabilities in its privacy policy: “we can’t unlock your device for anyone because you hold the key — your unique password.”
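The idea can be illustrated with a deliberately simplified sketch. This is not Apple’s actual protocol (iMessage uses per-device public-key cryptography) and the toy XOR stream cipher below is not secure; it only demonstrates the property at issue: because the key lives only on the endpoints, the service relaying the message sees ciphertext it cannot decrypt.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key || nonce || counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Only the sender and recipient hold the key; the relay never sees it.
key = b"secret shared only by the endpoints"
nonce = b"msg-0001"  # must be unique per message

ciphertext = xor_cipher(key, nonce, b"meet at noon")
# The messaging service stores and forwards only the ciphertext.
assert ciphertext != b"meet at noon"
# The recipient, holding the key, recovers the plaintext.
assert xor_cipher(key, nonce, ciphertext) == b"meet at noon"
```

A court order served on the relay is futile in this model: the relay can hand over `ciphertext`, but without `key` it cannot produce the message, which is precisely the government’s complaint.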


Data Security in the Financial Industry: Five Key Developments to Keep An Eye on in 2016

Posted in Financial Privacy

According to a 2015 report on threats to the financial services sector, 41% of financial services organizations polled had experienced a data breach or failed a compliance audit in the previous year, and 57% listed preventing a data breach as their top IT priority. Reflecting the ever-increasing awareness of threats to financial data security, 2015 also saw a number of regulatory enforcement actions and legislative efforts directed at financial institutions. Below we outline some of the most significant developments of the past year.

  1. SEC Enforcement Action

In September 2015, the SEC reached a settlement with a St. Louis-based investment adviser on charges that it failed to establish required cybersecurity policies and procedures in advance of a breach affecting the personally identifiable information (“PII”) of 100,000 individuals.

The SEC has the power to bring enforcement actions against registered financial entities that fail to meet certain cybersecurity standards. Specifically, the SEC may bring enforcement actions for violations of SEC Regulation S-P (17 CFR § 248.30(a)) (commonly referred to as the “Safeguards Rule”). Under the Safeguards Rule, all registered entities must have written policies and procedures designed to:

  • Insure the security and confidentiality of customer records and information;
  • Protect against any anticipated threats to the security of customer information; and
  • Protect against unauthorized access to or use of customer information that could result in substantial harm or inconvenience to any customer.


Five Questions Clients Asked Most Often in 2015 About Incident Response

Posted in Incident Response

We provided incident response and incident response preparedness services to hundreds of companies in 2015. The questions we answered were as unique and varied as the incidents companies faced. Some were challenging, and occasionally they were easy to answer (e.g., Can we create a fake employee to sign the notification letter?), but often they focused on what practical steps companies can take to be better prepared to respond, how to make certain decisions during an incident, and what is likely to happen after disclosing the incident.

(1) If incidents and attacks are inevitable, what preparedness steps should be taken? We talk to companies about being “compromise ready”—a constant state of diligence focused on prevention and improvement of response capabilities. The areas of preparedness that go into becoming compromise ready include: (1) preventative and detective security capabilities; (2) threat information gathering; (3) personnel awareness and training; (4) proactive security assessments focusing on identifying the location of critical assets and data and implementing reasonable safeguards and detection capabilities around them; (5) assessing and overseeing vendors; (6) developing, updating, and practicing incident response plans; (7) understanding current and emerging regulatory hot buttons; and (8) evaluating cyber liability insurance.

Obviously, accepting that incidents are inevitable does not mean it is not worth trying to stop them. Companies still need to use preventative technologies to build the proverbial moat around their castle to protect their systems and comply with any applicable security requirements (e.g., statutory, contractual, or formal/informal precedent from enforcement actions by their regulators). The right technological safeguards may prove sufficient to prevent many attacks. But when companies find a way to stop one attack vector, attackers do not give up and look for a new line of work. Rather, they are repeatedly observed finding ways around technological barriers. Most security firms will tell you that a capable attacker will eventually find a way in. Why? Most networks are built, maintained, and used by people, and those people are both fallible (e.g., able to be phished) and subject to a range of constraints (e.g., budgets, production priorities). Thus, companies should assume that even if they install the most advanced technology solutions and receive certain security certifications, their security measures may fail and an unauthorized person may gain access to their environment.

That reality drives the next two areas of preparedness: (1) implementing detective capabilities (e.g., logging and endpoint monitoring tools and procedures) so that unauthorized access is detected quickly, and (2) developing and practicing a flexible incident response plan. Two key parts of incident response planning are identifying the companies you will work with to respond and then building those relationships before an incident arises. In a prior blog post, I discussed “How and Why to Pick a Forensic Firm Before the Inevitable Occurs.” Companies do not always get the “luxury” of having 30 days to investigate, determine who may be affected, and then mail letters. Spending a few days just negotiating and executing a master services agreement and a statement of work with a forensic firm so that the forensic firm can begin to investigate can make the difference between meeting or missing a 30-day disclosure deadline.

Companies can use the Law & Order approach to building a tabletop exercise—read disclosures from other companies and the security firm reports that detail the incidents they investigate. It is often beneficial to have the law firm, forensic firm, and crisis communications firm that will work with you during an incident participate in developing and leading the exercises. An experienced incident responder leading the exercise can provide helpful context if the CISO states that he or she will identify, contain, and fully investigate a significant incident in a few days, or if the communications team wants to promise notification no later than seven days after discovery or say that the company is implementing “state-of-the-art security measures” to make sure an incident never happens again.

The CFTC’s Proposed Standards Identify Cybersecurity Best Practices

Posted in Cybersecurity

The Commodity Futures Trading Commission (CFTC) offered several reasons for proposing five new cybersecurity testing requirements for the commodity trading platforms it regulates in its December 23, 2015, Notice of Proposed Rulemaking:

  • More than half of the securities exchanges surveyed in 2013 reported that they had been the victim of cyberattacks. 80 Fed. Reg. at 80140.
  • Attacks increasingly seek to disrupt financial systems rather than just steal data. Id.
  • Survey respondents reported 42.8 million cyberattacks in 2014, the equivalent of 117,000 attacks per day. Id. at 80141. One of the CFTC commissioners who approved the proposed new standards referred to a bank that faced 30,000 cyberattacks per week, which averages roughly one attack every 20 seconds. Id. at 80189.
  • More stealthy, “advanced persistent threat” attacks have escalated. Id. (See, for example, “How Russian Hackers Stole the Nasdaq.”)
  • Threats to the integrity of financial sector data rival threats to confidentiality. 80 Fed. Reg. at 80142.
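
The volume figures above reduce to simple arithmetic (a quick sanity check; the only assumptions are a 365-day year and a 7-day week):

```python
# Sanity-check the attack-volume figures cited in the Notice of Proposed Rulemaking.
attacks_2014 = 42_800_000
per_day = attacks_2014 / 365
print(f"{per_day:,.0f} attacks per day")  # consistent with "117,000 attacks per day"

attacks_per_week = 30_000
seconds_per_week = 7 * 24 * 60 * 60  # 604,800 seconds in a week
interval = seconds_per_week / attacks_per_week
print(f"one attack every {interval:.0f} seconds")
```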


FTC Report on Big Data Outlines Usage Limitations Under Federal Law

Posted in Big Data

On January 6, 2016, the Federal Trade Commission (“FTC”) issued the report Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues (“Report”), based on prior workshops and subsequent public comments on Big Data usage. The Report concentrates on data usage, not collection, and on the application of current law to such usage rather than on perceived gaps in regulation. Industry can take comfort in the Report’s non-expansive application of current law and its very limited application of the FTC’s unfairness authority to Big Data usage.

Data brokers, platforms, analytics companies, e-tailers, ad networks, advertisers, creditors, employers, and others that are looking to Big Data to help make decisions and to develop, offer, and market products and services should look to the Report for guidance. Key take-aways include:

  • Beware Application of the Fair Credit Reporting Act (“FCRA”): FCRA applies to companies that prepare and sell reports on consumers to be used for credit, employment, insurance, housing, and similar eligibility determinations, and to companies that obtain and use such reports. The FTC warns that use of Big Data (e.g., predictive analytics) in this manner is covered under FCRA, which imposes obligations to assure maximum possible accuracy and to provide consumers notice and an opportunity to correct errors. The FTC has brought recent enforcement actions regarding “risk-based” pricing of credit or insurance (e.g., related to cell phone services), and it makes clear that Big Data analytics used to establish such pricing triggers FCRA.
  • Consider Equal Employment Opportunity Laws and the Impact Big Data Analytics-Based Decision Making May Have on Protected Classes Such as Race, Gender, Disabilities, and Ethnic Origin. The federal nondiscrimination laws prohibit discrimination based on certain protected characteristics, including in many instances “disparate treatment” or “disparate impact” – the unintended discriminatory impact unexplained by neutral factors. The FTC warns, as examples, that use of zip codes to establish preferential treatment could implicate race and that certain data characteristics could implicate gender. The FTC also counsels that such data usage to make marketing decisions about what offers are made to what consumers (e.g., prime versus subprime mortgages) could violate equal opportunity laws. The key is whether opportunities are offered, and eligibility determined, based on a particular consumer’s qualifications, not class-based generalizations.
  • Authority Under the FTC Act Is Narrow, but the Scope of the FTC’s Deception and Unfairness Authority Needs to Be Considered. The Report reminds companies that making inaccurate statements regarding their Big Data practices will constitute a deceptive practice actionable under Section 5 of the FTC Act. It further warns that failure to disclose material information that consumers ought to know could lead to a deception claim. The Report also explains that although unfairness authority under Section 5 is limited, where Big Data practices create a likelihood of harm to consumers that is not outweighed by any benefits to consumers or competition, the FTC can prosecute for unfairness. As examples, drawing on recent enforcement actions, the Report describes situations where data brokers knowingly provided consumer data to fraudsters and identity thieves, or where reasonable data security was not maintained. Section 5 violations can be prevented by accurately describing all material data practices (transparency and accurate disclosure), taking reasonable steps to protect data (data security), and preventing its illegal use (diligence and contractual use restrictions).

The Report goes on to suggest best practices for companies to prevent violations of current federal law, including a list of questions designed to identify potentially problematic uses of Big Data.

Companies using Big Data should also consider the application of state laws. For a detailed analysis of potential state law issues regarding use of Big Data for dynamic pricing, see our prior blog post here.

FTC Prosecutes Serving of Behavioral Ads on Kids’ Apps

Posted in Children’s Privacy

The Federal Trade Commission reminded publishers and advertisers recently that the Children’s Online Privacy Protection Act (COPPA) prohibits data collection, absent verified parental consent, for behavioral (interest-based) advertising on websites or mobile apps directed at children under 13. App publisher TapBlaze paid $60,000 and entered into a 20-year consent order to settle charges.

The revised COPPA Rule is unequivocal: absent verified parental consent, unique identifiers (e.g., device IDs or ad IDs) cannot be used in connection with services directed to children for advertising that relies on data collection over time, or on third-party services, to build profiles on users and deliver them interest-based ads. This means not only no behavioral ads on kids’ online services, but also no “retargeting” by publishers (serving the user an ad for the publisher after the user leaves the service for a third-party service) and no data collection on kids’ services for the purpose of building behavioral profiles.

One solution publishers can consider is the development of mixed-use sites and apps where, based on age gating, they offer two versions of the service, with behavioral advertising on only the version for users who identify as 13 or over. However, even those older users need to have notice of behavioral advertising and the ability to opt out of it consistent with the Digital Advertising Alliance’s self-regulatory program.
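The age-gating branch described above can be sketched in a few lines. This is a hypothetical illustration, not legal advice or a production implementation: the function names are invented, and it assumes a self-declared birthdate and a DAA-style opt-out flag, serving only contextual (non-behavioral) ads to anyone under 13.

```python
from datetime import date

COPPA_AGE = 13  # COPPA applies to children under 13

def years_old(birthdate: date, today: date) -> int:
    # Completed years, subtracting one if this year's birthday hasn't passed.
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )

def ad_mode(birthdate: date, opted_out: bool, today: date) -> str:
    # Under-13 users: contextual ads only, no unique-identifier profiling.
    if years_old(birthdate, today) < COPPA_AGE:
        return "contextual"
    # 13 and over: behavioral ads allowed, but honor a DAA-style opt-out.
    return "contextual" if opted_out else "behavioral"

today = date(2016, 2, 1)
print(ad_mode(date(2005, 6, 1), False, today))  # under 13 -> contextual
print(ad_mode(date(1990, 6, 1), False, today))  # adult -> behavioral
print(ad_mode(date(1990, 6, 1), True, today))   # opted out -> contextual
```

Note that an age gate like this only sorts users; the publisher must still ensure the under-13 version collects no persistent identifiers for profiling purposes.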

For more on COPPA’s requirements, and other laws and self-regulatory requirements for children’s advertising and privacy, see our report: Enforcement Priorities and Trends in Children’s and Minors’ Advertising and Privacy.

LabMD and Wyndham Decisions Curtail FTC’s Data Privacy and Security Reach

Posted in Children’s Privacy, HIPAA/HITECH, Online Privacy

Both the administrative law judge’s decision in LabMD and the Third Circuit’s recent decision in Wyndham, which we previously blogged about, put the FTC on notice that it cannot assume that in the wake of a security breach, allegedly inadequate data security will necessarily constitute an unfair practice under Section 5 of the FTC Act. Further, the FTC’s body of data security consent orders – essentially private settlements of uncontested and unadjudicated cases (most of which also include deception claims), with remedies that include “fencing in” beyond what the law requires – are merely indications of best practices, not some sort of “common law,” as some have contended. Indeed, treating consent orders as precedential would fly in the face of Congress’ purposeful curtailment of the FTC’s rulemaking authority under the Magnuson-Moss Act, as compared with the APA standards applicable to other federal agencies. Finally, the decisions suggest that the application of Section 5 unfairness authority to consumer privacy, especially in the context of interest-based advertising, is limited.

The decisions are consistent with the history of Section 5. In the late 1970s, the FTC moved to prohibit or greatly limit advertising to children, known as “kid vid,” based on its unfairness authority. The Congressional backlash that followed resulted in the FTC’s unfairness authority being significantly curtailed by statute. To prevail on an unfairness claim arising out of a data security incident, the FTC has to prove that the allegedly unfair data security practices in effect during the relevant time period of a breach –

  • caused or are likely to cause substantial injury to consumers [not, e.g., to other businesses];
  • that this injury is not reasonably avoidable by consumers themselves; and
  • that this injury is not outweighed by countervailing benefits to consumers or to competition.

15 U.S.C. § 45(n). And to be considered substantial, the harm must be a real and significant injury, arguably even one with financial impact on consumers.


Five Practice Pointers: Risk Allocation in Enterprise Cloud Service Agreements

Posted in Cloud Computing

Outsourcing information technology functions to the cloud entails risk for both companies and cloud service providers, especially when sensitive data is stored in the cloud. Sensitive data carries business risk and may be subject to a host of legal and regulatory requirements. Cloud service agreements, which typically use the cloud service provider’s forms, do not by default align enterprise risks with provider obligations.

Risk allocation may shift based on a variety of factors, including the cloud service model (Software as a Service, Platform as a Service, or Infrastructure as a Service), deployment model (public, private, hybrid, or community cloud), and the data being hosted. The degree to which a cloud transaction can be negotiated likewise varies, so companies should involve legal counsel early in the procurement process to help tailor their agreement to fit the organization’s risk profile. At a minimum, such tailoring includes clearly documenting the cloud service provider’s responsibilities (particularly those related to both data privacy/security and allocations of liability), and providing for meaningful remedies in the event of a breach of contract.