A recent AAA study revealed that, although the pandemic has resulted in fewer cars on the road, traffic deaths have surged. Speeding, alcohol impairment, and reckless driving have caused the highest levels of crashes seen in decades, and the National Safety Council estimates a 9% increase in roadway fatalities from 2020. Autonomous vehicles (AVs) have the potential to increase traffic safety, and the California Public Utilities Commission (CPUC) just took a step to advance their commercialization and deployment.
In 2021, European lawmakers and agencies issued a number of proposals to regulate artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAV”), and data privacy, as well as reports and funding programs to advance these emerging areas. From the adoption of more stringent cybersecurity standards for IoT devices to the deployment of standards-based autonomous vehicles, lawmakers and agencies at the EU and national levels have also promulgated new rules and guidance to promote consumer awareness and safety. While our team tracks developments across EMEA, this roundup summarizes the key developments in Europe in 2021 and previews what is likely to happen in 2022.
Part I: Internet of Things
With digital policy being a core priority for the current European Commission, the EU has pursued a range of initiatives in the area of IoT. These developments tend to be interspersed throughout a range of policy and legislative decisions, which are highlighted below.
Connecting Europe Facility and IoT Funding
In July 2021, the European Parliament and Council of the EU adopted a regulation establishing the Connecting Europe Facility (€33.7 billion for 2021-2027) to accelerate investment in trans-European networks while respecting technological neutrality. In particular, the regulation noted that the viability of “Internet of Things” services will require uninterrupted cross-border coverage with 5G systems, to enable users and objects to remain connected while on the move. Given that 5G deployment in Europe is still sparse, road corridors and train connections are expected to be key areas for the first phase of new applications in the area of connected mobility and therefore constitute vital cross-border projects for funding under the Connecting Europe Facility. The Parliament had also called earlier for “stable and adequate funding” for investments in AI and IoT, as well as for building transport and ICT infrastructure for intelligent transport systems (ITS), to ensure the success of the EU’s data economy.
In May 2021, the Council adopted a decision establishing a specific research funding programme (€83.4 billion for 2021-2027) under Horizon Europe. In specifying the EU’s priorities, the decision identified the importance of IoT in health care, cybersecurity, key digital technologies including quantum technologies, next generation Internet, space, and satellite communications.
Safety and Security of IoT
In June 2021, the Parliament adopted a resolution calling for tighter EU cybersecurity standards for connected devices, apps and operating systems, amid recent cyberattacks on critical infrastructure in the EU. It recommended that connected products and associated services, including supply chains, be made secure-by-design, resilient to cyber incidents, and quickly patched if vulnerabilities are discovered.
The resolution welcomed the European Commission’s plans to propose horizontal legislation on cybersecurity requirements for connected products and associated services, and recommended that the Commission harmonize national laws in order to avoid fragmentation of the Single Market. The text also demanded legislation imposing cybersecurity requirements for apps, software, embedded software (software that controls devices and machines other than computers) and operating systems (software that runs a computer’s basic functions) by 2023.
In January 2022, the Commission published the results of its inquiry into the consumer IoT sector, launched in July 2020. The report aims to assess the sector’s competitive landscape, emerging trends and potential competition issues. It notes that European smart home revenue is expected to more than double between 2020 and 2025 (from €17 billion to €38.1 billion). While the consumer IoT sector is still developing, the sector inquiry was prompted by indications of company behavior that may distort competition. The Commission’s report will contribute to its standardization strategy and to upcoming legislative and non-legislative initiatives aimed at clarifying and improving the standard essential patent (SEP) framework. It will also feed into the ongoing legislative debate on the scope of the Digital Markets Act (DMA), and specifically into some of the obligations proposed.
Part II: Connected and Automated Vehicles
In 2021, the groundwork was laid for regulating the CAV sector, with legislative developments at the national level the main focus. At the EU level, however, substantial legislative changes are also on their way, so we can expect further developments in 2022: authorities are paving the way for increased regulation of the automotive sector through funding programs and standards.
The pace of development in automated, autonomous and connected driving is evident. In 2021, German federal lawmakers focused their legislative proposals on adopting a legal framework for the use of autonomous vehicles. As a pioneer, the German government enacted the German Autonomous Driving Act on 12 July 2021, intended as an interim solution until harmonized rules are in place at the EU level (to date, Regulation (EU) 2018/858 always requires a person in charge of the vehicle, and thus full steerability of the vehicle). The Act regulates the technical requirements for the manufacturing, design and equipment of motor vehicles with autonomous driving functions; the inspection and procedure for the granting of an operating licence by the Federal Motor Vehicle Transport Authority (Kraftfahrt-Bundesamt); the obligations of persons involved in operating autonomous vehicles; the data processing needed for their operation; and the adaptation and creation of uniform provisions to facilitate autonomous vehicle testing.
Also worth mentioning in this respect is the French decree of 1 July 2021 amending the provisions of the Highway Code and the Transport Code, which allows a driver to disclaim liability when the automated driving system operates in accordance with its conditions of use. It further regulates the interaction between the driver and the automated driving system, including the attention expected from the driver while the system is engaged, and permits autonomous vehicles to operate on predefined routes and zones starting in September 2022.
The European Commission just recently adopted the first Work Programme for the digital part of the Connecting Europe Facility (CEF Digital), which defines the scope and objectives of the EU-supported actions that are necessary to improve Europe’s digital connectivity infrastructures. These actions will receive more than €1 billion in funding between 2021 and 2023. A key action that CEF Digital supports is the implementation of digital connectivity infrastructures related to cross-border projects in the areas of transport or energy and supporting operational digital platforms directly associated to transport or energy infrastructures.
In addition, the International Organization for Standardization (ISO) and SAE International published a joint standard on cybersecurity engineering of electrical and electronic (E/E) systems within road vehicles (ISO/SAE 21434), which is intended to help manufacturers keep abreast of changing technologies and cyberattack methods.
In terms of legislative proposals, the European Commission plans to adopt a new directive, “Adapting liability rules to the digital age and circular economy.” The initiative was prompted by an evaluation of the Product Liability Directive 85/374/EEC and addresses challenges that arise when liability rules are applied to new and emerging technologies (e.g., AI, IoT, CAV). The Inception Impact Assessment proposes to adapt the framework to account for the transition to a circular and digital economy in terms of liability for damage caused by new and refurbished products, and to address challenges associated with artificial intelligence. This includes gaps and limitations that could limit the scope and effectiveness of the Product Liability Directive if applied to mobility systems such as CAVs.
Furthermore, the European Commission published a proposal for a directive amending Directive 2010/40/EU on the framework for the deployment of Intelligent Transport Systems in the field of road transport and for interfaces with other modes of transport (the “proposed ITS Directive”). The proposed ITS Directive is intended to cover new developments such as connected and automated mobility and online platforms that allow users to access several modes of transport. The ecosystem envisaged in the proposed ITS Directive would be based on a set of standards and aims to enable interoperability and continuity of ITS applications, systems and services, and therefore connectivity and data exchange between vehicles, transport providers and infrastructure operators. This would be implemented by making essential ITS services mandatory throughout the EU.
Apart from the EU developments, the UK Centre for Connected and Autonomous Vehicles has issued a series of reports about research projects regarding CAV issues, for example, on future transport innovations, and a market forecast capturing the latest changes in the global CAV market and advances in technology.
* * *
We will continue to closely monitor the regulatory and policy developments on IoT and CAV in EMEA – please watch this space for further updates.
Part III: Artificial Intelligence and Data Privacy
The EU was particularly active in furthering its digital strategy in 2021, and will likely continue this high level of activity into 2022. Below, we briefly summarize last year’s key legislative and regulatory updates from the EU across the following areas:
- data transfers;
- cookies (and similar technologies) and unsolicited marketing communications;
- open data;
- intermediary services; and
- artificial intelligence.
Finally, in point seven (below), we list a number of guidance documents issued by the European Data Protection Board (“EDPB”) in related areas.
1. Data Transfers
The GDPR’s rules on data transfers were one of the main areas of policy and regulatory focus in 2021, and we expect more developments in this area in 2022. Below, we outline what we consider to be last year’s five main developments.
First, the European Commission issued two adequacy decisions: one for the UK and another for South Korea. The Commission granted the UK adequacy decision on June 28, 2021, three days before the expiry of the EU-UK trade agreement’s six-month period during which personal data could continue to flow freely between the EU and UK. The UK adequacy decision covers transfers governed by (1) the GDPR and (2) the Law Enforcement Directive (as explained in more detail in our blog post here). On December 17, 2021, the European Commission issued an adequacy decision for South Korea, which had earlier that year obtained a favorable opinion of the EDPB (see our blog post here). As a result, EU controllers and processors may freely transfer personal data to the UK and South Korea without having to implement any of the transfer mechanisms of Chapter V of the GDPR.
Second, on June 4, 2021, the European Commission published the final version of its new standard contractual clauses (“SCCs”) for the international transfer of personal data, as well as standard Article 28 GDPR clauses for contracts between controllers and processors. The international transfer standard clauses entered into force on June 27, 2021. However, organizations may continue using the old SCCs in new agreements until September 27, 2021, and have until December 27, 2022 to introduce the new SCCs into existing agreements that relied on the old SCCs. (Find more information about the SCCs in our blog post here.)
Following the release of the new SCCs, a number of regulators announced that they would start enforcing the implementation of the new clauses. Notably, on June 1, 2021, the German supervisory authorities announced the launch of a “nationwide investigation” into German companies transferring personal data outside of the European Economic Area (see our blog post here).
Third, on June 18, 2021, the EDPB released a final version of its recommendations on measures that supplement transfer tools to ensure compliance with the GDPR where organizations transfer personal data from the EEA to a country outside the EEA. The recommendations set out the following six-step process for handling transfers: (1) know your transfers; (2) identify the transfer tools you are relying on; (3) assess whether the GDPR Article 46 transfer tool you are relying on is effective in light of all circumstances of the transfer; (4) adopt supplementary measures; (5) take procedural steps if you have identified effective supplementary measures; and (6) re-evaluate all transfers at appropriate intervals (see a summary of the recommendations in our blog post here).
Fourth, on July 14, 2021, the EDPB issued draft guidelines on codes of conduct as tools for transfers. These guidelines complement the EDPB’s earlier guidelines on codes of conduct and monitoring bodies. They focus on the requirements for a code of conduct to be approved as a legal mechanism for transferring personal data outside the EEA to third countries that do not provide an adequate level of data protection (see a summary of the guidelines in our blog post here).
Fifth, on November 19, 2021, the EDPB published draft guidelines on the interplay between the application of the GDPR’s territorial scope and its provisions on international transfers. The guidelines clarify the meaning of the term “transfers” under the GDPR (see a summary of the guidelines in our blog post here).
2. Cookies (and Similar Technologies) and Unsolicited Marketing Communications
The EU ePrivacy Directive currently regulates cookies and similar technologies, as well as unsolicited marketing. The EU seeks to replace the ePrivacy Directive with an ePrivacy Regulation, which aims to achieve a greater level of harmonization. The European Commission approved a first draft of the ePrivacy Regulation in January 2017. The draft regulation has since been under discussion in the Council of the EU.
In 2021, we saw some progress in the Council’s discussions concerning adoption of a final version of the ePrivacy Regulation. However, these discussions were not as fruitful as one may have hoped and, thus, the ePrivacy Regulation continues to be in draft form.
The Council released a new version of the draft ePrivacy Regulation on January 5, 2021. This draft substantially amended the previously rejected drafts (find a summary of the main changes in our blog post here and more information in our podcast here). But on March 9, 2021, the EDPB issued a statement on the draft ePrivacy Regulation pointing out several deficiencies.
After several months of standstill, on November 4, 2021, the Council and the European Parliament agreed to a number of amendments to the draft ePrivacy Regulation, in particular to the sections concerning: (1) direct marketing; and (2) remedies, liability and penalties (see our blog post here). Germany appears to have given up waiting for the draft ePrivacy Regulation to be enacted and has in the meantime implemented new rules on cookies and direct marketing (see our blog post here and here; the German law went into force on December 1, 2021).
3. Cybersecurity
Amid a flurry of activity in the cybersecurity space last year, below we highlight the three most notable regulatory developments relating to the EU’s rules in this area.
First, throughout 2021, the Council of the EU and the European Parliament negotiated the adoption of new EU cybersecurity rules, proposed on December 16, 2020 (see our blog post here). The EU plans to adopt a revised Directive on measures for a high common level of cybersecurity across the Union (“NIS2”) and a Directive on the resilience of critical entities (“Critical Entities Resilience Directive”). The Directives aim to (1) reflect the technological developments of recent years and (2) provide a better response to new and emerging cybersecurity threats. On December 3, 2021, the Council agreed on a draft version of the NIS2 Directive (see latest available version here). Notably, this latest draft reduced the maximum fines Member States must permit competent authorities to impose from EUR 10 million to EUR 4 million (in each case, or 2% of the worldwide annual turnover of the undertaking involved, whichever is higher). The Council and the Parliament are now negotiating a compromise text, which is planned for 2022.
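For readers unfamiliar with how a “fixed amount or percentage of turnover, whichever is higher” cap operates in practice, a minimal sketch (the function name and example figures are ours, purely for arithmetic illustration; the defaults mirror the Council’s December 2021 NIS2 draft as described above):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float,
                 fixed_cap_eur: float = 4_000_000,
                 turnover_share: float = 0.02) -> float:
    """Applicable maximum fine under a 'fixed amount or % of worldwide
    annual turnover, whichever is higher' rule (illustrative only)."""
    return max(fixed_cap_eur, turnover_share * worldwide_annual_turnover_eur)

# For an undertaking with EUR 2 billion in worldwide annual turnover,
# 2% (EUR 40 million) exceeds the EUR 4 million floor:
print(max_fine_eur(2_000_000_000))  # → 40000000.0

# For a smaller undertaking with EUR 100 million turnover, 2% is only
# EUR 2 million, so the fixed EUR 4 million amount governs:
print(max_fine_eur(100_000_000))
```

In other words, the turnover-based limb only bites for undertakings whose worldwide annual turnover exceeds the fixed amount divided by the percentage (here, EUR 200 million).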
Second, on January 19, 2021, the EDPB issued for public consultation draft guidelines on examples regarding data breach notification. The guidelines aim to assist data controllers in responding to and assessing the risk of personal data breaches, providing “practice-oriented, case-based guidance” which draws from the experiences of European supervisory authorities since the GDPR went into effect (see our blog post here). The EDPB released a final version of these guidelines on January 3, 2022, which includes limited changes to the draft version.
Third, the EDPB issued two statements on the new draft provisions of the second additional protocol to the Council of Europe Convention on Cybercrime (also known as the Budapest Convention): one on February 2, 2021 (see here) and another on May 4, 2021 (see here). The second additional protocol aims to strengthen the convention’s capability to combat cybercrime. Among other provisions, it “provides a legal basis for disclosure of domain name registration information and for direct co-operation with service providers for subscriber information, effective means to obtain subscriber information and traffic data, immediate co-operation in emergencies, mutual assistance tools, as well as personal data protection safeguards” (find more information here).
4. Open Data
Below, we outline the two main open data regulatory initiatives in the EU last year.
First, throughout 2021, the Council and the European Parliament further negotiated the adoption of the draft Data Governance Act, which had been published by the European Commission on November 25, 2020 (see our blog post here). The proposed act aims to facilitate data sharing across the EU and between sectors. In particular, it sets out rules relating to the following: (1) conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection; (2) obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services; (3) the newly-introduced concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and (4) establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission. On March 11, 2021, the EDPB and the European Data Protection Supervisor issued an opinion on the draft Data Governance Act pointing out several deficiencies. After further amendments to the draft, on November 30, 2021, the Council and the European Parliament reached a provisional agreement on the Data Governance Act. The provisional agreement is subject to the Council’s and the European Parliament’s formal approval, which is expected in spring 2022.
Second, to complement the Regulation on Data Governance (see above), the European Commission is currently preparing a proposal for a so-called Data Act, which was initially expected towards the end of 2021. With this Data Act, the Commission intends to create a data economy that fosters data flows between countries and sectors. Several aspects in the planned proposal could potentially have an impact on the research sector, such as: (1) making private-sector data available for use by public sector; (2) investigating the potential benefits of B2B data sharing for the research sector; (3) revising intellectual property rights in the Database Directive; (4) providing safeguards for non-personal data in an international context; and (5) establishing more competitive markets for cloud computing services.
The Commission conducted a public consultation on its Inception Impact Assessment of the Data Act, the results of which were released on December 6, 2021. The impact assessment was rejected by a committee within the Commission at the end of 2021, and the Commission is still working on a new draft.
5. Intermediary Services
Throughout 2021, the Council of the EU and the European Parliament further negotiated the adoption of the draft Digital Services Act and the draft Digital Markets Act.
The draft acts lay down rules for intermediary service providers (e.g., Internet access providers, cloud providers, search engines, social networks, and online marketplaces) covering areas such as: (1) liability of mere conduit, caching and hosting services; (2) content moderation; (3) transparency of services and electronic communications; (4) transparency of online advertising; (5) openness and interoperability of the services to businesses and consumers; and (6) fair competition between service providers (see our blog post here).
On November 25, 2021, the Council reached an agreement on the draft acts. Also in November 2021, the EDPB issued a statement on the draft acts citing a number of deficiencies. On January 20, 2022, the European Parliament agreed on several amendments to the draft version of the Digital Services Act (see our blog post here). As a next step, the Parliament will discuss these amendments with the Council, with the goal of reaching a compromise text that both can adopt.
6. Artificial Intelligence
We have addressed last year’s developments with respect to artificial intelligence in a separate post: see here.
7. Guidance in Other Areas
Health Data
On February 2, 2021, the European Data Protection Board issued a response to the request from the European Commission for clarifications on the consistent application of the GDPR to health research (see our blog post here). The Commission’s questions covered the following seven topics: (1) legal basis for processing of health-related data for scientific research purposes; (2) further processing of previously collected health data; (3) the notion of broad consent; (4) transparency of data processing; (5) anonymization; (6) processing of special categories of data on a large scale; and (7) international cooperation.
On February 26, 2021, the European Commission released a report on the EU Member States’ laws governing the processing of health data. The report discusses three general types of health data uses: (1) primary use for health care services; (2) secondary use for public health purposes; and (3) secondary use for scientific research purposes (see our blog post here).
Virtual Voice Assistants
On July 7, 2021, the EDPB adopted its final guidelines on virtual voice assistants, which discuss how the GDPR and the ePrivacy Directive apply to these devices (including the software they integrate).
Connected Vehicles
On March 9, 2021, the EDPB adopted its final guidelines on processing personal data in the context of connected vehicles and mobility-related applications. The guidelines discuss how the GDPR and the ePrivacy Directive apply to connected vehicles (including the software they integrate).
Targeting of Social Media Users
On April 13, 2021, the EDPB adopted guidelines on the targeting of social media users. The Guidelines aim to clarify the roles and responsibilities of social media providers and “targeters” with regard to the processing of personal data for the purposes of targeting social media users (see a summary of the guidelines in our blog post here).
Storing Credit Card Data
On May 19, 2021, the EDPB adopted recommendations on the legal basis for storing credit card data for the sole purpose of facilitating further online transactions. The recommendations discuss how the GDPR and the ePrivacy Directive apply to the storage of this information.
* * *
We will continue to closely monitor the regulatory and policy developments in the EU – please watch this space for further updates.
In 2021, countries in EMEA continued to focus on legal frameworks for artificial intelligence (“AI”), and the momentum continues in 2022. The EU has been particularly active on AI—from its proposed horizontal AI regulation to recent enforcement and guidance—and will remain active going into 2022. The UK follows closely behind with its AI strategy and recent reports and standards. While our team monitors developments across EMEA, this roundup focuses on the leading developments within Europe in 2021 and what they mean for 2022.
The Proposed EU AI Act
In April 2021, the European Commission published its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Commission Proposal”). The Commission Proposal sets out a horizontal approach to AI regulation that establishes rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU (see our previous blog post here). The proposal is currently under negotiation between the co-legislators, the European Parliament and the Council of the European Union (“Council”).
Slovenia held the Council Presidency for the last six months of 2021, and France assumed the Presidency in January 2022. During its Presidency, Slovenia published a partial compromise text of the EU AI Act, focusing on edits to the classification of high-risk AI systems. The French Presidency circulated additional proposed amendments on 13 January 2022, focusing on the requirements for high-risk AI systems. Notable amendments in each version include:
Slovenian Council Presidency:
- Scope. New Article 52a (and corresponding Recital 70a) would clarify that “general purpose AI systems” do not fall within the scope of the Act. Although the compromise text does not define this term, Recital 70a states they are “understood as AI system[s] that are able to perform generally applicable functions such as image / speech recognition, audio / video generation, pattern detection, question answering, translation etc.”
- Social scoring. Article 5(1)(c) (and corresponding Recital 17) would extend the prohibition on AI systems used for social scoring as set out in the Commission Proposal, which is limited to public authorities, to private actors as well. Also, while the Commission Proposal limits the prohibition to social scoring used to evaluate the “trustworthiness” of natural persons, the Slovenian Presidency text removes this limitation, which would thereby broaden the scope of the prohibition.
- Biometric identification. Amendments to Article 3(33) would broaden the definition of “biometric data” to include systems that do not “uniquely” identify people, while other amendments would make the Act apply not only to “remote” biometric identification systems, but to biometric identification systems broadly. For instance, Article 5 would prohibit law enforcement use of any biometric identification systems in publicly available spaces, subject to certain exceptions.
- High risk AI systems. Annex III would add to the list of AI systems qualifying as “high risk” those that are intended to be used to control “digital infrastructure” or “emissions and pollution.”
French Council Presidency:
- Risks. Amendments to Article 9 (Risk management system) would clarify that high-risk AI systems must have a risk-management system allowing for the identification of known / foreseeable risks “most likely to occur to health, safety and fundamental rights in view of the [system’s] intended purpose.”
- Trade-offs. Amendments to Article 9(3) specify that risk-management measures must aim to “minimis[e] risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.”
- Error tolerance. Amendments to Article 10(3)—concerning training, validation, and testing data—slightly relax the requirement that the data be “free of errors and complete”, now requiring data sets to be so “to the best extent possible.”
- Human oversight. Amendments to Article 14(4) make clear that the supplier of high-risk AI systems must enable the system to allow for human oversight by natural persons.
The EU AI Act is also being considered by the European Parliament. Although it is listed as a high-priority piece of legislation in the Commission’s 2022 work program (see here), it may be some time before it is finalized.
EU Recommendations, Consultations and Reports on AI
In addition to activity on the EU AI Act, the EU has published additional recommendations, consultations and reports on AI:
- The Council of Europe published a Recommendation (see here) responding to changes in profiling techniques over the last decade. It recognizes that profiling can affect individuals by placing them in predetermined categories without their knowledge, and that this lack of transparency can pose significant risks to human rights. The Recommendation encourages member states to promote and make legally binding a ‘privacy by design’ approach in the context of profiling, and sets out additional safeguards that should apply to profiling.
- The European Commission published a public consultation (see here) to adapt product liability rules to ensure that they sufficiently protect consumers against the harms of new technologies, including AI. The consultation is split into two parts and gathers views on: (i) how to ensure that consumers and users continue to be protected against the harm caused by AI systems, particularly with respect to compensation, and (ii) how to address the problems purportedly linked to certain types of AI (e.g., where there is difficulty with identifying the potentially liable person, or proving that person’s fault or proving a product’s defect and the causal link with damage). The consultation period has ended, and the Commission intends to propose an update to the Product Liability Directive by the end of the third quarter of 2022.
- On 6 October 2021, the European Parliament voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces (see our previous blog post here). The resolution forms part of a non-legislative report on the use of AI by the police and judicial authorities in criminal matters (“AI Report”) published by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.
Enforcement on Clearview AI
From an enforcement perspective, in 2021 a number of EU data protection authorities (“DPAs”) took enforcement actions on specific AI use cases, particularly relating to FRT. The most significant action has been the investigation into Clearview AI Inc. (“Clearview AI”) in relation to its personal information handling practices, especially the company’s use of data scraped from the internet and its use of biometrics for facial recognition. The UK Information Commissioner’s Office (“ICO”) and the Office of the Australian Information Commissioner (“OAIC”) conducted a joint investigation. In November 2021, the ICO issued a provisional intention to fine Clearview AI over £17 million for its breach of data protection laws; its final decision is expected in 2022 (see here). Additionally, the French privacy regulator ordered Clearview AI to cease collecting images from the internet and to delete existing data within two months (see here in French). Given the significant processing of personal data involved in AI, DPAs have taken a keen interest in applying the GDPR to AI.
AI Activity in the United Kingdom
Following the end of the Brexit transition period on 1 January 2021, the UK government announced plans to reform UK data protection law and published its own National AI Strategy in September 2021 (see here and our previous blog post here). According to the UK's AI strategy, the Office of AI is expected to publish a White Paper on regulating AI in early 2022. Further to this, the UK government has published a number of reports and standards relating to AI, for example:
- The UK government’s Central Digital and Data Office (“CDDO”) published the Algorithmic Transparency Standard (see here) as part of the UK AI Strategy’s commitment to delivering greater transparency on algorithm-assisted decision making in the public sector. The Algorithmic Transparency Standard seeks to help public sector organizations provide clear information about the algorithmic tools they use, and why they use them.
- The UK government’s Centre for Data Ethics and Innovation (“CDEI”) published an independent report setting out the roadmap to an effective AI assurance ecosystem (see here).
- A new AI Standards Hub was launched by the Office of AI, supported by the British Standards Institution, in January 2022 (see here) to develop AI standards.
* * *
We will continue to closely monitor the regulatory and policy developments on AI in EMEA – please watch this space for further updates. For more information on developments related to AI and data privacy, please visit our AI Toolkit and our Data Privacy and Cybersecurity website.
On January 27, 2022, the Federal Communications Commission (“FCC”) adopted a Notice of Proposed Rulemaking (“NPRM”) that would require internet service providers (“ISPs”) to display labels disclosing certain service information, including prices, introductory rates, data allowances, broadband speeds, and network management practices. Notably, the NPRM proposes to adopt—with some modifications—the labels developed by an advisory committee and published by the Commission in a 2016 Public Notice.
On November 15, 2021, the Infrastructure Investment and Jobs Act (“IIJA”) became law, authorizing $65 billion in federal broadband investments with the goal of connecting all Americans to reliable, high speed, and affordable broadband. The IIJA directed the National Telecommunications and Information Administration (“NTIA”) to oversee the distribution of $48.2 billion in infrastructure grants to states, Tribal governments, and companies through four programs: the Broadband Equity, Access, and Deployment (“BEAD”) Program ($42.45 billion), Tribal Broadband Connectivity Program ($2 billion), Digital Equity Act Programs ($2.75 billion), and Enabling Middle Mile Broadband Infrastructure Program ($1 billion).
NTIA has already held two of its five planned virtual listening sessions designed to solicit information from interested stakeholders, and the third session – billed as a "deep dive" on the $42.45 billion BEAD Program – is scheduled for January 26. NTIA also recently announced a Request for Comment ("RFC") on the implementation of the following broadband programs of the IIJA:
- Broadband Equity, Access, and Deployment Program;
- State Digital Equity Planning Grant Program; and
- Enabling Middle Mile Broadband Infrastructure Program.
The RFC seeks comment on how NTIA should generally administer IIJA funds as well as “program design, policy issues, and implementation considerations” of each initiative listed above. Comments are due by 5 p.m. EST on February 4, 2022.
NTIA has explained that it will use the comments submitted in response to the RFC, along with other sources of public input (such as the virtual listening sessions), to “improve the number and quality of ideas under consideration” as the agency develops the Notice of Funding Opportunity (“NOFO”) that will be issued for each program. However, it bears emphasis that NTIA’s operational and grant-funding decisions are explicitly excluded from Administrative Procedure Act challenges, so this stakeholder input process is strictly advisory.
The RFC’s 36 questions are divided into four sections: an initial section with questions about general IIJA administration and then one section for each of the IIJA programs listed above.
- On general IIJA administration issues, NTIA appears to be interested in what methods, data collection processes, and standards it should use to support the IIJA's broader goal of connecting all Americans to broadband. NTIA also requests comment on the structure and format of the state subgrant award process, as well as on workforce shortages and supply chain issues and their effect on the IIJA's goal of ensuring broadband infrastructure is made and installed by U.S. workers.
- The BEAD Program questions focus on technical requirements for project service speeds, security, reliability, and sustainable service, as well as criteria for connecting unserved and underserved communities. NTIA also requests comment from stakeholders on what speeds, throughput, and latencies will be required to connect all Americans over the next five, ten, and twenty years. Other BEAD Program questions cover how to assess "served" areas vis-à-vis unfinished broadband projects, the definitions of "high-cost area" and "eligible subscriber," whether NTIA should define a baseline standard so providers are not required to offer disparate plans in each state, and what additional factors NTIA should adopt to drive affordability beyond the low-cost option.
- Regarding the State Digital Equity Planning Grant Program, the RFC asks how NTIA should advise states as they produce their plans for the program, particularly how programs can achieve the goals of the IIJA and ensure states consult with historically marginalized communities.
- The RFC's questions on the Enabling Middle Mile Broadband Infrastructure Program center on how NTIA should ensure middle-mile investments are targeted to areas where middle-mile service is non-existent or expensive, as well as on the prioritization and scalability of projects.
As 2021 comes to a close, we will be sharing the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy this month. Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy as well as appropriate funds to study developments in these emerging spaces. In addition, from developing a consumer labeling program for IoT devices to requiring the manufacturers and operators of CAVs to report crashes, federal agencies have promulgated new rules and issued guidance to promote consumer awareness and safety. We are providing this year-end round up in four parts. In this post, we detail IoT updates in Congress, the states, and federal agencies.
The remaining posts in this series detail CAV updates in Congress and federal agencies; data privacy updates in Congress and federal agencies; and AI updates in Congress, state legislatures, and federal agencies.