On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there isn’t yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we’ve discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

Continue Reading AI Update: EU High-Level Expert Group Publishes Self Assessment for Trustworthy AI

Senators Lindsey Graham (R-S.C.), Tom Cotton (R-Ark.) and Marsha Blackburn (R-Tenn.) have introduced the Lawful Access to Encrypted Data Act, a bill that would require tech companies to assist law enforcement in executing search warrants that seek encrypted data.  The bill would apply to law enforcement efforts to obtain data at rest as well as data in motion.  It would also apply to both criminal and national security legal process.  This proposal comes in the wake of the Senate Judiciary Committee’s December 2019 hearing on encryption and lawful access to data.  According to its sponsors, the purpose of the bill is to “end[] the use of ‘warrant-proof’ encrypted technology . . . to conceal illicit behavior.”
Continue Reading Lawful Access to Encrypted Data Act Introduced

On April 6, 2020, Tapplock, Inc., a Canadian maker of internet-connected smart locks, entered into a settlement with the Federal Trade Commission (“FTC”) to resolve allegations that the company deceived consumers by falsely claiming that it had implemented reasonable steps to secure user data and that its locks were “unbreakable.”  The FTC alleged that these representations amounted to deceptive conduct under Section 5 of the FTC Act.  In its press release accompanying the settlement, the FTC provided guidance for IoT companies regarding the design and implementation of privacy and security measures for “smart” devices, as discussed further below in this post.

Continue Reading IoT Update: FTC Settles with Smart Lock Manufacturer and Provides Guidance for IoT Companies

On 19 February 2020, the new European Commission published two Communications relating to its five-year digital strategy: one on shaping Europe’s digital future, and one on its European strategy for data (the Commission also published a white paper proposing its strategy on AI; see our previous blogs here and here).  In both Communications, the Commission sets out a vision of the EU powered by digital solutions that are strongly rooted in European values and EU fundamental rights.  Both Communications also emphasize the intent to strengthen “European technological sovereignty”, which in the Commission’s view will enable the EU to define its own rules and values in the digital age.  The Communications set out the Commission’s plans to achieve this vision.

Continue Reading AI Update: European Commission’s plans on data and Europe’s digital future (Part 3 of 4)

On 19 February 2020, the European Commission presented its long-awaited strategies for data and AI.  These follow Commission President Ursula von der Leyen’s commitment upon taking office to put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within the new Commission’s first 100 days.  Although the papers published this week do not set out a comprehensive EU legal framework for AI, they do give a clear indication of the Commission’s key priorities and anticipated next steps.

The Commission strategies are set out in four separate papers—two on AI, and one each on Europe’s digital future and the data economy.  Read together, it is clear that the Commission seeks to position the EU as a digital leader, both in terms of trustworthy AI and the wider data economy.

Continue Reading AI Update: European Commission Presents Strategies for Data and AI (Part 1 of 4)

On February 4, 2020, the United Kingdom’s Centre for Data Ethics and Innovation (“CDEI”) published its final report on “online targeting” (the “Report”), examining practices used to monitor a person’s online behaviour and subsequently customize their experience. In October 2018, the UK government tasked the CDEI, an expert committee that advises the UK government on how to maximize the benefits of new technologies, with exploring how data is used in shaping people’s online experiences. The Report sets out its findings and recommendations.
Continue Reading Centre for Data Ethics and Innovation publishes final report on “online targeting”

On December 3, 2019, the EU’s new Commissioner for the Internal Market, Thierry Breton, suggested that a change of approach to the proposed e-Privacy Regulation may be necessary.  At a meeting of the Telecoms Council, Breton indicated that the Commission would likely develop a new proposal, following the Council’s rejection of a compromise text on November 27.

The proposed Regulation is intended as a replacement for the existing e-Privacy Directive, which sets out specific rules for traditional telecoms companies, in particular requiring that they keep communications data confidential and free from interference (e.g., preventing wiretapping).  It also sets out rules that apply regardless of whether a company provides telecoms services, including restrictions on unsolicited direct marketing and on accessing or storing information on users’ devices (e.g., through the use of cookies and other tracking technologies).

Continue Reading New E-Privacy Proposal on the Horizon?

Earlier this month, Covington’s Brussels, Frankfurt and London offices hosted a webinar on EU regulatory developments impacting connected and automated vehicles (CAVs). The seminar attracted participants from across the globe, predominantly from the tech and automotive industries. This post provides an overview of the webinar’s introduction, together with sections on data access and competition, data protection, and cybersecurity. Part 2 will focus on other important CAV areas in the EU.
Continue Reading AI/IoT Update: Connected and Automated Vehicles Webinar Series: EU Key Developments PART 1

On August 27, 2019, the U.S. Patent and Trademark Office (“USPTO”) published a Request for Comments on Patenting Artificial Intelligence Inventions in the Federal Register. The Request follows Director Iancu’s statement that America’s national security and economic prosperity depend on the United States’ ability to maintain a leadership role in Artificial Intelligence (AI) and other emerging technologies, as explained in another post on an artificial intelligence conference held by the USPTO earlier this year.

Recent Rapid Advances in AI Technologies

The recent confluence of big data, increasingly faster and more specialized hardware, improved algorithms, and increased investment has led to rapid advancement in AI technologies and applications such as computer vision, natural language processing, medical diagnostics, robotics, autonomous vehicles, and drug development, among others. And while the Request does not define the term “artificial intelligence,” the USPTO’s patent classification scheme does provide a class definition used in examining AI inventions and patent applications: Class 706 identifies several technologies encompassed by AI.

Continue Reading AI Update: USPTO Publishes Request for Comments on Patenting Artificial Intelligence Inventions

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.

The ICO invites organizations with experience of considering these complex issues to provide their views.  This recent blog post on trade-offs is part of its ongoing Call for Input on developing a new framework for auditing AI.  See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.
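The ICO’s worked examples are not reproduced here, but the kind of trade-off it describes can be illustrated with a standard technique from differential privacy. The sketch below is purely illustrative (it is not the ICO’s method): it uses the Laplace mechanism, where a smaller privacy budget (epsilon) means stronger privacy protection but a larger expected error in the released statistic. The function names and parameters are our own, chosen for the example.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def tradeoff_curve(epsilons, sensitivity=1.0, trials=2000, seed=0):
    """For each privacy budget epsilon, estimate the mean absolute error
    that Laplace noise (scale = sensitivity / epsilon) adds to a released
    statistic.  Smaller epsilon = stronger privacy but larger error:
    the privacy/accuracy tension underlying the ICO's trade-off examples."""
    rng = random.Random(seed)
    curve = []
    for eps in epsilons:
        scale = sensitivity / eps
        err = sum(abs(laplace_noise(scale, rng)) for _ in range(trials)) / trials
        curve.append((eps, err))
    return curve

# Weakening privacy (raising epsilon) shrinks the expected error.
curve = tradeoff_curve([0.5, 1.0, 2.0])
```

Plotting such a curve makes the trade-off visible, and choosing a point on it (e.g., the smallest epsilon that keeps error within an acceptable bound) is one simple way to frame the “mathematical minimization” the ICO discusses.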

Continue Reading ICO publishes blog post on AI and trade-offs between data protection principles