Privacy & Data Security

On 11 November 2020, the European Data Protection Board (“EDPB”) issued two draft recommendations relating to the rules on how organizations may lawfully transfer personal data from the EU to countries outside the EU (“third countries”).  These draft recommendations, which are non-final and open for public consultation until 30 November 2020, follow the decision of the Court of Justice of the European Union (“CJEU”) in Case C-311/18 (“Schrems II”).  (For a more in-depth summary of the CJEU decision, please see our blog post here and our audiocast here.  The EDPB also published FAQs on the Schrems II decision on 24 July 2020, available here.)

The two recommendations adopted by the EDPB are:

- Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data; and
- Recommendations 02/2020 on the European Essential Guarantees for surveillance measures.

Continue Reading EDPB adopts recommendations on international data transfers following Schrems II decision

FCC Chairman Pai announced today that the FCC will move forward with a rulemaking to clarify the meaning of Section 230 of the Communications Decency Act (CDA).  To date, Section 230 generally has been interpreted to shield social media companies, ISPs, and other “online intermediaries” from liability for content posted by their users.

On July 27, the Trump Administration—acting through the National Telecommunications and Information Administration—submitted a Petition for Rulemaking on Section 230, and Chairman Pai announced on August 3 that the FCC would seek public comment on the petition.  That petition asked the FCC to adopt rules to “clarify” the circumstances under which the liability shield of Section 230 applies.  Citing the FCC General Counsel’s reported position that the Commission has the legal authority to interpret Section 230, Chairman Pai today stated that a forthcoming agency rulemaking will strive to “clarify its meaning.”

Continue Reading FCC Announces Section 230 Rulemaking

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there is not yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate.  As we discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

Continue Reading AI Update: EU High-Level Working Group Publishes Self Assessment for Trustworthy AI

In this update, we detail the key legislative developments from the second quarter of 2020 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), cybersecurity as it relates to AI and IoT, and connected and automated vehicles (“CAVs”). The volume of legislation on these topics has slowed but not ceased, as lawmakers increasingly focus on the pandemic and the upcoming national election. As Congress works through appropriations bills, it continues to look for opportunities to support and fund these technologies. We will continue to update you across our blogs on meaningful developments between these quarterly updates.
Continue Reading U.S. AI, IoT, and CAV Legislative Update – Second Quarter 2020

Senators Lindsey Graham (R-S.C.), Tom Cotton (R-Ark.) and Marsha Blackburn (R-Tenn.) have introduced the Lawful Access to Encrypted Data Act, a bill that would require tech companies to assist law enforcement in executing search warrants that seek encrypted data.  The bill would apply to law enforcement efforts to obtain data at rest as well as data in motion.  It would also apply to both criminal and national security legal process.  This proposal comes in the wake of the Senate Judiciary Committee’s December 2019 hearing on encryption and lawful access to data.  According to its sponsors, the purpose of the bill is to “end[] the use of ‘warrant-proof’ encrypted technology . . . to conceal illicit behavior.”
Continue Reading Lawful Access to Encrypted Data Act Introduced

On June 2, 2020, the French Supervisory Authority (“CNIL”) published a paper on algorithmic discrimination prepared by the French independent administrative authority known as “Défenseur des droits”.  The paper is divided into two parts: the first discusses how algorithms can lead to discriminatory outcomes, and the second offers recommendations on how to identify and minimize algorithmic biases.  It builds on a 2017 paper published by the CNIL on “Ethical Issues of Algorithms and Artificial Intelligence”.
Continue Reading French CNIL Publishes Paper on Algorithmic Discrimination

The COVID-19 pandemic has created both speed bumps and accelerants for connected and automated vehicle (“CAV”) developments in the United States.  In our Quarterly Update earlier this month, we covered recent legislative and regulatory activity around CAVs, both specifically targeted efforts and those impacting AI and IoT technologies generally.  Although some CAV legislative efforts have been sidelined due to the government’s focus on COVID-19, the pandemic is incentivizing policymakers at the federal and state levels to support CAV-related initiatives.

Continue Reading IoT Update: COVID-19 Drives Forward Connected and Automated Vehicle Legislative and Regulatory Efforts

On April 6, 2020, Tapplock, Inc., a Canadian maker of internet-connected smart locks, entered into a settlement with the Federal Trade Commission (“FTC”) to resolve allegations that the company deceived consumers by falsely claiming that it had taken reasonable steps to secure user data and that its locks were “unbreakable.”  The FTC alleged that these representations amounted to deceptive conduct under Section 5 of the FTC Act.  In its press release accompanying the settlement, the FTC provided guidance for IoT companies on designing and implementing privacy and security measures for “smart” devices, as discussed further below in this post.

Continue Reading IoT Update: FTC Settles with Smart Lock Manufacturer and Provides Guidance for IoT Companies

On February 4, 2020, the United Kingdom’s Centre for Data Ethics and Innovation (“DEI”) published its final report on “online targeting” (the “Report”), examining practices used to monitor a person’s online behaviour and subsequently customize their experience. In October 2018, the UK government appointed the DEI, an expert committee that advises the government on how to maximize the benefits of new technologies, to explore how data is used to shape people’s online experiences. The Report sets out the DEI’s findings and recommendations.
Continue Reading Centre for Data Ethics and Innovation publishes final report on “online targeting”

U.S. federal policymakers continued to focus on artificial intelligence (“AI”) and the Internet of Things (“IoT”) in the fourth quarter of 2019, both by introducing substantive bills that would regulate the use of these technologies and by supporting bills aimed at further study of how they may impact different sectors. In our fourth AI & IoT Quarterly Legislative Update, we detail the notable legislative events from this quarter on AI, IoT, cybersecurity as it relates to AI and IoT, and connected and automated vehicles (“CAVs”).
Continue Reading U.S. AI and IoT Quarterly Legislative Update: Fourth Quarter 2019