On July 24, 2019, the European Parliament published a study entitled “Blockchain and the General Data Protection Regulation: Can distributed ledgers be squared with European data protection law?” The study explores the tension between blockchain technology and compliance with the General Data Protection Regulation (the “GDPR”), the EU’s data protection law. The study also explores how blockchain technology can be used as a tool to assist with GDPR compliance. Finally, it recommends the adoption of certain policies to address the tension between blockchain and the GDPR, to ensure that “innovation is not stifled and remains responsible”. This blog post highlights some of the key findings in the study and provides a summary of the recommended policy options.
On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”). The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions. The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.
The ICO invites organizations with experience of considering these complex issues to provide their views. This recent blog post on trade-offs is part of its ongoing Call for Input on developing a new framework for auditing AI. See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.
On July 29, 2019, the Court of Justice of the European Union (“CJEU”) handed down its judgment in the Fashion ID case (Case C-40/17). The CJEU found that when a website operator embeds Facebook’s “Like” button on its website, Facebook and the website operator become joint controllers. The case clarifies the relationship between website operators and social networking sites whose plug-ins are embedded into websites for user tracking and online marketing purposes. The ruling is expected to influence the contractual terms that companies will need to have in place when embedding such social plug-ins to their websites, and may also have ramifications for adtech practices more generally.
On July 16, 2019, the UK’s Information Commissioner’s Office (“ICO”) released a new draft Data sharing code of practice (“draft Code”), which provides practical guidance for organizations on how to share personal data in a manner that complies with data protection laws. The draft Code focuses on the sharing of personal data between controllers, with a section referring to other ICO guidance on engaging processors. The draft Code reiterates a number of legal requirements from the GDPR and the UK Data Protection Act 2018 (the “DPA”), while also including good practice recommendations to encourage compliance. The draft Code is currently open for public consultation until September 9, 2019, and once finalized, it will replace the existing Data sharing code of practice (“existing Code”).
IoT Update: Federal Lawmakers Focus on Smart Cities
Since the beginning of the year, lawmakers in this Congress have introduced a number of proposals to study, cultivate, and guide the growth of smart cities. This blog post summarizes seven smart cities bills introduced in this Congress. Some bills focus broadly on Federal efforts to prioritize smart cities, whereas others focus on specific topics, like transportation and smart utilities.
Interestingly, most smart cities legislation introduced this year includes a grant program, which could reflect Congressional interest in demonstrating best practices capable of being replicated, as well as interest in providing financial support to accelerate smart cities growth. These grant programs typically range from $20 million to $50 million, though some bills leave the funding amount to the grant administrator.
Federal and state policymakers continued to focus on artificial intelligence (“AI”) and the Internet of Things (“IoT”) in the second quarter of 2019, including by introducing substantive measures that would regulate the use of these technologies and by supporting funding bills aimed at increasing investment. In our second AI & IoT Quarterly Legislative Update, we detail the notable legislative events from this quarter on AI, IoT, cybersecurity as it relates to AI and IoT, and connected and autonomous vehicles (“CAVs”).
In the second quarter of 2019, members in both the House and Senate introduced legislation focused on issues at the core of President Trump’s February 11, 2019, Executive Order, “Maintaining American Leadership in Artificial Intelligence” (the “AI Executive Order”). In particular:
- Senator Martin Heinrich introduced the Artificial Intelligence Initiative Act (S.1558), which would establish a coordinated federal initiative to accelerate research and development on AI.
- Representative Dan Lipinski introduced the Growing Artificial Intelligence Through Research Act (“GrAITR Act”) (H.R. 2202), which would direct the President to establish and implement the “National Artificial Intelligence Initiative,” to create a comprehensive research and development strategy and increase coordination among federal agencies.
- Representative Jerry McNerney introduced the AI in Government Act of 2019 (H.R.2575), which would create an “AI Center of Excellence” to advise and promote the efforts of the federal government in developing innovative uses of AI, and require the Director of the Office of Management and Budget to issue guidance to federal agencies on developing AI governance plans.
Like the AI Executive Order, these bills address AI research and development, the development of AI technical standards, the needs of an AI workforce, and governance frameworks for AI technologies. (You can find more information on the AI Executive Order in our prior blog post.)
Internet of Things
Federal lawmakers have also introduced a number of measures to encourage development of smart cities, which collect, use, and analyze data gathered through IoT technologies to efficiently manage assets and resources. For example, the Smart Cities and Communities Act of 2019 was introduced in the House by Representative Suzan DelBene (H.R. 2636) and in the Senate by Senator Maria Cantwell (S.1398) to promote the advancement of smart cities technologies. The legislation would establish a council of federal agencies to prioritize activities that demonstrate the value of smart cities, and establish a grant program to facilitate the adoption of smart city technologies, including in small- and medium-sized cities. Other smart city proposals focus on specific types of technologies used in smart cities, including:
- smart transportation, addressed in the Smart Technologies Advancing Reliable Transportation Act (H.R. 3156), introduced by Representative Yvette Clarke; the Less Traffic with Smart Stop Lights Act of 2019 (H.R. 3261), introduced by Representative Tony Cardenas; and the Moving and Fostering Innovation to Revolutionize Smarter Transportation Act (S. 1939), introduced by Senator Catherine Cortez Masto, and (H.R. 3388), introduced by Representative Mark DeSaulnier;
- smart utilities, addressed in the Distributed Energy Demonstration Act of 2019 (S. 1742), introduced by Senator Ron Wyden; and
- smart buildings, addressed in the Smart Building Acceleration Act (H.R. 2044), introduced by Representative Peter Welch.
Legislators are also focused on the state of the IoT industry broadly, with Representative Robert Latta in May re-introducing the SMART IoT Act, which passed the House last year. The new bill (H.R. 2644) would direct the Secretary of Commerce to conduct a study and submit to Congress a report on the state of the internet-connected devices industry in the United States.
Cybersecurity – Relating to AI and IoT
Last month, committees in both chambers considered and advanced amended versions of the IoT Cybersecurity Improvement Act (S. 734, H.R. 1668), which was introduced in the Senate by Senators Mark Warner and Cory Gardner and in the House by Representative Robin Kelly. The bills seek to strengthen cybersecurity requirements for IoT devices purchased by the federal government, with the goal of affecting IoT cybersecurity standards more broadly, as detailed in our prior blog post.
The Senate amendment removes the definition of “covered devices” subject to the Act and instead refers to “Internet of Things devices,” without defining them. It also requires the Office of Management and Budget (OMB) to issue only “principles” for federal agencies on the use of IoT devices, rather than policies, principles, standards, or guidance. The Senate amendment would also authorize agencies to waive compliance with those principles when use of an IoT device is (1) “necessary for national security or for research purposes”; (2) “appropriate to the function of the covered device”; (3) “secured using alternative and effective methods”; or (4) “of substantially higher quality or affordability than a product that meets such policies, principles, standards, or guidelines.”
In addition, several measures introduced in the second quarter focus on supply chain and infrastructure cybersecurity. They include:
- The Leading Infrastructure for Tomorrow’s America Act (H.R. 2741), a wide-ranging bill introduced by Representative Frank Pallone that incorporates cybersecurity requirements throughout. The bill aims to “rebuild and modernize the Nation’s infrastructure to expand access to broadband and Next Generation 9–1–1, rehabilitate drinking water infrastructure, modernize the electric grid and energy supply infrastructure, redevelop brownfields, strengthen health care infrastructure, create jobs, and protect public health and the environment,” among other things.
- The SUPPLY CHAIN Act (S. 1457), introduced by Senator Marsha Blackburn and co-sponsored by Senators John Cornyn and Marco Rubio, aims to coordinate federal agencies to secure the American communications equipment supply chain.
- The U.S.-China Economic and Security Review Act of 2019 (H.R. 2565, S. 987), a bipartisan measure introduced in the House by Representatives Brad Sherman and Mike Gallagher and in the Senate by Senators Christopher Coons, Tim Kaine, and Mitt Romney. The bill aims to address IoT supply chain vulnerabilities associated with China by requiring the Chief Information Officers Council to submit an annual report to Congress.
Connected and Autonomous Vehicles
Federal lawmakers have yet to reintroduce two comprehensive bills that died in the previous Congress: the Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution (“SELF DRIVE”) Act (H.R. 3388) and the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (“AV START”) Act (S. 1885). Instead, new legislation has focused on CAV-related grant programs. The Preparing Localities for an Autonomous and Connected Environment (“PLACE”) Act (H.R. 2542), introduced by Representative Earl Blumenauer, would direct the Secretary of Transportation to make grants to study secondary influences of CAVs on communities (e.g., influences on land use, urban design, transportation, real estate, and municipal budgets). Another measure introduced by Representative Mark DeSaulnier (H.R. 3388) would direct the Secretary of Transportation to establish the Strengthening Mobility and Revolutionizing Transportation (“SMART”) Challenge Grant Program, to encourage technological innovation, including with respect to CAVs, in communities nationwide.
In the absence of comprehensive federal legislation on CAVs, states continue to introduce measures to foster innovation in the CAV industry and protect consumers and communities. In the second quarter of this year, Washington State enacted a law governing delivery robots on sidewalks (H.B. 1325), which goes into effect September 1, 2019. California legislators continue to consider a number of CAV bills, including a new measure that would establish a working group on autonomous passenger vehicle policy development (S.B. 59) and another to require transit operators to ensure certain automated transit vehicles are staffed by employees (S.B. 336). California’s Department of Motor Vehicles is also considering a proposed rule to allow the testing and deployment of certain autonomous motor trucks. And in Florida, the state legislature recently enacted a law (H.B. 311) to allow CAVs without human operators, effective July 1, 2019.
This is the second installment in Covington’s quarterly update on AI and IoT legislative developments.
If you have any questions concerning the material discussed in this post, please contact the following members of our Artificial Intelligence and Internet of Things initiatives:
As the policy debate concerning government oversight of artificial intelligence evolves, public procurement regulations have become a potential entry point for regulating artificial intelligence. Earlier this year, the White House issued an Executive Order on AI mandating that the National Institute of Standards and Technology develop a guide to federal engagement on AI technical standards. While the federal government’s actions have understandably garnered significant attention, state and local governments are also undertaking preliminary efforts to engage on the technical standards for AI procured and utilized by their agencies.
On July 10, 2019, the White House Office of Management and Budget (“OMB”) published a Request for Information (“RFI”) in the Federal Register, requesting comments on how to improve Federal data sets and models for artificial intelligence (“AI”) research and development (“R&D”) and testing. The RFI is a part of the White House’s AI Initiative, as kicked off by the Executive Order on Maintaining American Leadership in Artificial Intelligence.
On June 25, 2019, as part of its continuing work on the AI Auditing Framework, the UK Information Commissioner’s Office (ICO) published a blog setting out its views on human bias and discrimination in AI systems. The ICO has also called for input on specific questions relating to human bias and discrimination, set out below.
The ICO explains in its blog how flaws in training data can result in algorithms that perpetuate or magnify unfair biases. The ICO identifies three broad approaches to mitigate this risk in machine learning models:
- Anti-classification: making sure that algorithms do not make judgments based on protected characteristics such as sex, race or age, or on proxies for protected characteristics (e.g., occupation or post code);
- Outcome and error parity: comparing how the model treats different groups. Outcome parity means all groups should receive positive and negative outcomes at equal rates. Error parity means all groups should experience equal error rates (such as false positive or false negative rates). A model is fair if it achieves outcome parity and error parity across members of different protected groups.
- Equal calibration: comparing the model’s estimate of the likelihood of an event and the actual frequency of said event for different groups. A model is fair if it is equally calibrated between members of different protected groups.
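The three approaches above can be made concrete with a small, hypothetical sketch. The function and toy data below are invented for illustration and do not come from the ICO's blog; they simply compute, for each group, the positive-prediction rate (outcome parity), the misclassification rate (error parity), and the gap between mean predicted probability and actual positive frequency (equal calibration) for a binary classifier.

```python
# Hypothetical illustration of the ICO's three fairness checks.
# Nothing here is from the ICO; the function and data are made up.

def fairness_report(y_true, y_pred, y_score, group):
    """Per-group outcome rate, error rate, and calibration gap."""
    report = {}
    for g in sorted(set(group)):
        idx = [i for i, gi in enumerate(group) if gi == g]
        n = len(idx)
        report[g] = {
            # Outcome parity: do groups receive positive predictions at equal rates?
            "positive_rate": sum(y_pred[i] for i in idx) / n,
            # Error parity: are groups misclassified at equal rates?
            "error_rate": sum(y_pred[i] != y_true[i] for i in idx) / n,
            # Equal calibration: does the mean predicted probability match
            # the actual frequency of positives within the group?
            "calibration_gap": (sum(y_score[i] for i in idx) / n
                                - sum(y_true[i] for i in idx) / n),
        }
    return report

# Toy data for two protected groups, "A" and "B" (entirely fictional).
y_true = [1, 0, 1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.2, 0.8, 0.6, 0.7, 0.4, 0.3, 0.1]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

report = fairness_report(y_true, y_pred, y_score, group)
```

In this toy data, group A receives positive predictions three times as often as group B (0.75 vs. 0.25), failing outcome parity, even though both groups have identical error rates of 0.25 — a small illustration of the ICO's point that different fairness criteria can pull in different directions.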
The blog stresses the importance of appropriate governance measures to manage the risks of discrimination in AI systems. Organizations may take different approaches depending on the purpose of the algorithm, but they should document the approach adopted from start to finish. The ICO also recommends that organizations adopt clear, effective policies and practices for collecting representative training data to reduce discrimination risk; that organizations’ governing bodies be involved in approving anti-discrimination approaches; and that organizations continually monitor algorithms by testing them regularly to identify unfair biases. Organizations should also consider using a diverse team when implementing AI systems, which can provide additional perspectives that may help to spot areas of potential discrimination.
The ICO seeks input from industry stakeholders on two questions:
- If your organisation is already applying measures to detect and prevent discrimination in AI, what measures are you using or have you considered using?
- In some cases, if an organisation wishes to test the performance of their ML model on different protected groups, it may need access to test data containing labels for protected characteristics. In these cases, what are the best practices for balancing non-discrimination and privacy requirements?
The ICO also continues to seek input from industry on the development of an auditing framework for AI; organizations should contact the ICO if they wish to provide feedback.
On June 10, 2019, the UK Government’s Digital Service and the Office for Artificial Intelligence released guidance on using artificial intelligence in the public sector (the “Guidance”). The Guidance offers practical advice for public sector organizations implementing artificial intelligence (AI) solutions.
The Guidance will be of interest to companies that provide AI solutions to UK public sector organizations, as it will influence what kinds of AI projects public sector organizations will be interested in pursuing, and the processes that they will go through to implement AI systems. Because the UK’s National Health Service (NHS) is a public sector organization, this Guidance is also likely to be relevant to digital health service providers that are seeking to provide AI technologies to NHS organizations.
The Guidance consists of three sections: (1) understanding AI; (2) assessing, planning, and managing AI; and (3) using AI ethically and safely, each summarized below. The Guidance also links to summaries of examples where AI systems have been used in the public sector and elsewhere.