Feeling BLU: What You Need to Know About Overseeing Your Service Providers

On April 30, 2018, the Federal Trade Commission (FTC) announced a settlement with mobile phone manufacturer BLU Products and its owner over allegations that the company failed to implement appropriate procedures to oversee its service providers' security practices. According to the FTC, that failure allowed a service provider to install software containing commonly known security vulnerabilities on consumers' mobile devices and to collect detailed personal information about consumers, such as text messages and location information, without consumers' notice and consent.

According to the FTC’s complaint, BLU and its owner contracted with China-based ADUPS Technology to preinstall certain security software on BLU devices. The complaint alleged that, unbeknownst to consumers, the ADUPS software on BLU devices transmitted their personal information to ADUPS servers, including contents of text messages, real-time location data, call and text message logs, contact lists, and a list of applications installed on the device. The FTC did not allege that ADUPS used or disclosed consumers’ personal information.


Facebook Biometric Suit Moves Forward

The U.S. District Court for the Northern District of California recently ruled that a certified class action on behalf of Illinois Facebook users alleging that the social network unlawfully collects biometric data from photo tagging will go forward, denying both parties' summary judgment motions. This case is one of the first major tests of the scope of Illinois's Biometric Information Privacy Act (BIPA). The litigation was originally filed in 2015, in response to Facebook's launch of its "Tag Suggestions" feature, which used facial recognition algorithms to deliver suggested names for individuals in photos. Specifically, Facebook's Tag Suggestions feature matched photos of an individual against other photos in which the individual was tagged in order to suggest the name of the individual in the photo.

Illinois's BIPA is one of only three state biometric privacy statutes on the books in the U.S., and the only one that allows for a private right of action. BIPA, generally speaking, prohibits an entity from collecting, capturing, purchasing, or otherwise obtaining a person's biometric information unless it satisfies certain notice, consent, and data retention requirements. For example, entities must notify the person that their biometric information is being collected and stored; state the purpose for collecting, storing, and using the biometric information; and state the length of time the biometric information will be retained. The entity must also obtain written consent from the individual before it obtains the biometric information. The statute defines a biometric identifier as a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry. BIPA authorizes damages of $1,000 per violation for negligent violations of the law, and $5,000 per violation for intentional or reckless violations. Damages in the Facebook case could amount to billions of dollars.


WashingTECH Tech Policy Podcast: Privacy Law After LabMD

In the latest episode of the WashingTECH Tech Policy Podcast, one of the leading national podcasts focused on tech law and policy debates driving the technology and communications sectors, Lydia Parnes, chair of the privacy and cybersecurity practice at Wilson Sonsini Goodrich & Rosati, discusses the state of privacy law after the Eleventh Circuit’s recent decision to vacate the Federal Trade Commission’s order directing LabMD to create and implement a variety of privacy protections.

Click here to hear the podcast.

Click here to read our complete WSGR Alert on the Eleventh Circuit’s LabMD decision.


California Enacts Sweeping Privacy Law to Avert Potential Ballot Measure

In a surprising twist, the California legislature rushed last week to pass one of the most comprehensive privacy laws in the country. The bill was introduced only a week prior, and within hours of passage, it was signed into law by Governor Jerry Brown. As strict as the act is, it was enacted to avoid an even more restrictive ballot initiative, which the initiative’s sponsor agreed to withdraw.

The California Consumer Privacy Act of 2018 requires covered businesses to make new disclosures to consumers about their data collection, use, and sharing practices; allows consumers to opt out of certain data sharing with third parties; and provides a new cause of action for consumers and the California Attorney General to bring lawsuits against companies that suffer data breaches. In some respects, the act may well go beyond the requirements of the European Union’s General Data Protection Regulation (GDPR), which recently came into force. The act takes effect on January 1, 2020, and, without revisions, may upend the ad-supported business model that underlies much of the modern digital economy.

Click here to read our complete WSGR Alert discussing the new law.

Eleventh Circuit LabMD Decision Significantly Restrains FTC’s Remedial Powers in Data Security and Privacy Actions

The U.S. Court of Appeals for the Eleventh Circuit recently released its highly anticipated decision in the long-running case pitting the now-defunct medical laboratory LabMD against the Federal Trade Commission (FTC), vacating the FTC’s data security order. In reaching its conclusion, the court held that the order’s requirement that LabMD establish a comprehensive information security program was unenforceable. This holding has broad implications for the FTC’s remedial powers in data security and privacy actions going forward, as requirements to establish a comprehensive security or privacy program have become common in FTC security and privacy settlements over the past 16 years. If the court’s decision stands, the FTC will likely need to enjoin specific acts or practices in its security and privacy orders, rather than relying on broad requirements that companies implement comprehensive security or privacy programs.

Click here to read our complete WSGR Alert on the Eleventh Circuit’s LabMD decision.

What’s Old Is New Again: FTC Takes Rare Step of Withdrawing and Reissuing Expanded Data Security Settlement with Uber in Light of 2016 Data Breach

On April 12, 2018, the Federal Trade Commission (FTC) announced that it was withdrawing its proposed August 2017 privacy and data security settlement with Uber Technologies and issuing a new and expanded proposed settlement. According to the FTC, the reason for this extraordinary step was to address additional allegations of misconduct by the ride-sharing company in connection with a data breach it suffered in 2016. The revised complaint includes new factual allegations regarding that breach, and the revised consent order includes significant new reporting obligations for the company regarding future breaches, new obligations for the order's mandated privacy program, and additional reporting and recordkeeping obligations that will last for longer periods of time.

Those who closely follow the FTC know that modifications to consumer protection settlements after the FTC has proposed them are extremely rare, so it is worth taking a closer look at what triggered this unusual action and the important new insight it provides into the FTC's current thinking on what it considers unreasonable security practices. Additionally, the FTC's revised complaint provides, for the first time, concrete guidance on what the agency considers "legitimate" uses of a bug bounty program.
