On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not provide the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.


On February 4, 2020, the United Kingdom’s Centre for Data Ethics and Innovation (“DEI”) published its final report on “online targeting” (the “Report”), examining practices used to monitor a person’s online behaviour and subsequently customize their experience. In October 2018, the UK government appointed the DEI, an expert committee that advises the UK government on how to maximize the benefits of new technologies, to explore how data is used in shaping people’s online experiences. The Report sets out its findings and recommendations.

On 19 September 2019, the European Parliamentary Research Service (“EPRS”)—the European Parliament’s in-house research service—released a briefing paper that summarizes the current status of the EU’s approach to developing a regulatory framework for ethical AI.  Although not a policymaking body, the EPRS can provide useful insights into the direction of EU policy on an issue.  The paper summarizes recent calls in the EU for adopting legally binding instruments to regulate AI, in particular to set common rules on AI transparency, set common requirements for fundamental rights impact assessments, and provide an adequate legal framework for facial recognition technology.

The briefing paper follows publication of the European Commission’s high-level expert group’s Ethics Guidelines for Trustworthy Artificial Intelligence (the “Guidelines”), and the announcement by incoming Commission President Ursula von der Leyen that she will put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within her first 100 days in office.


On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the same core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as on interaction with existing legislation and reconciling the tension between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

On February 27th, Reps. Brenda Lawrence (D-Mich.) and Ro Khanna (D-Calif.) introduced a resolution emphasizing the need to ethically develop artificial intelligence (“AI”). H. RES. 153, titled “Supporting the development of guidelines for ethical development of artificial intelligence,” calls on the government to work with stakeholders to ensure that AI is developed in a “safe, responsible, and democratic” fashion. The resolution has nine Democratic sponsors and was referred to the House Committee on Science, Space, and Technology.


On 18 December 2018, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published new draft guidance on “AI Ethics” (the “guidance”).  The AI HLEG is a European Commission-backed working group made up of representatives from industry, academia and NGOs, and was formed as part of the Commission’s ongoing work to develop EU policy responses to the development, challenges and new opportunities posed by AI technologies.  Stakeholders are invited to comment on the draft through the European AI Alliance before it is finalized in March 2019.

The guidance recognizes the potential benefits of AI technologies for Europe, but also stresses that AI must be developed and implemented with a “human-centric approach” that results in “Trustworthy AI”. The guidance then explains in detail the concept of “Trustworthy AI” and the issues stakeholders should navigate in order to achieve it.  A more detailed summary of the guidance is set out below.

This guidance is not binding, but it is likely to influence EU policymakers as they consider whether and how to legislate in the AI space going forward. The AI HLEG also envisages that the final version of the guidance, due in March 2019, will include a mechanism to allow stakeholders to voluntarily endorse its principles.  The guidance also states that the AI HLEG will consider making legislative recommendations in its separate deliverable on “Policy & Investment Recommendations,” due in May 2019.


On 20 November 2018, the UK government published its response (the “Response”) to the June 2018 consultation (the “Consultation”) regarding the proposed new Centre for Data Ethics and Innovation (“DEI”). First announced in the UK Chancellor’s Autumn 2017 Budget, the DEI will identify measures needed to strengthen the way data and AI are used and regulated, advising on how to address potential gaps in regulation and outlining best practices in the area. The DEI is described as being the first of its kind globally, and represents an opportunity for the UK to take the lead in the debate on how data is regulated.

            On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  Sam Jungyun Choi, Associate in Covington’s Technology Regulatory Group, and Anna Oberschelp, Associate in Covington’s Data Privacy & Cybersecurity Practice Group, discussed global regulatory trends that affect robotics, highlights of which are captured here.  A recording of the forum is available here until May 31, 2022.

Trends on Regulating Artificial Intelligence

            According to the Organization for Economic Cooperation and Development (“OECD”) Artificial Intelligence Policy Observatory, since 2017 at least 60 countries have adopted some form of AI policy, a torrent of government activity that nearly matches the pace of modern AI adoption.  Countries around the world are establishing governmental and intergovernmental strategies and initiatives to guide the development of AI.  These AI initiatives include: (1) AI regulation or policy; (2) AI enablers (e.g., research and public awareness); and (3) financial support (e.g., procurement programs for AI R&D).  The anticipated introduction of AI regulations raises concerns about looming challenges for international cooperation.

United States

            The U.S. has not yet enacted comprehensive AI legislation, though many AI initiatives have emerged at both the state and federal level.  The number of proposed federal bills with AI provisions grew from 2 in 2012 to 131 in 2021.  Despite the dramatic increase in bills introduced, the number actually enacted by the U.S. Congress remains low, with only 2% of the proposed bills ultimately becoming law.

            At the same time, U.S. state bills, whether focused on AI technologies or comprehensive privacy bills with AI provisions, have passed at much higher rates than their federal counterparts.  Some states have proposed bills, and passed laws, that regulate AI technologies in the context of a broader data protection framework, such as the laws recently passed in Virginia, Colorado, and Connecticut, which set forth requirements for certain profiling activities that could implicate AI.  In addition, states have introduced bills and passed laws that directly regulate AI technologies, such as Colorado’s statute setting forth requirements for the use of AI technologies in the insurance space.  In contrast to the 2% pass rate at the federal level, 20% of the 131 state-proposed bills with AI provisions were passed into law in 2021.  Massachusetts proposed the most AI-related bills in 2021 with 20, followed by Illinois with 15, and Alabama with 12.

            Another emerging trend in the U.S. is to regulate the use of AI at the sector-specific level, such as the use of AI by financial institutions, healthcare organizations, or in other regulated contexts.  For example, the Food and Drug Administration (“FDA”) has outlined a plan with the agency’s intended actions to further develop a regulatory framework for applications of AI and machine learning within the FDA’s authority.

European Union

            On April 22, 2021, the European Commission published a proposal for AI regulation as part of its broader “AI package,” which includes (i) a legal framework (the EU Artificial Intelligence Act, proposed in April 2021) to address rights and safety risks, (ii) a review of the existing rules on liability (e.g., product liability in the EU) that could apply to AI systems, and (iii) revisions to sector-specific safety regulations (e.g., the EU Machinery Regulation).

            The material scope of the proposal covers “AI systems,” which are defined as systems that (i) receive machine or human inputs or data; (ii) infer how to achieve certain objectives using specified “techniques and approaches,” defined as machine learning (“ML”), logic- or knowledge-based, and statistical processes; and (iii) generate outputs such as content (audio, video, or text), recommendations, or predictions.  The proposal would be relevant for the entire chain of actors, from providers and manufacturers of AI systems to distributors, importers, and users of AI.  The territorial scope of the proposal extends to AI systems “placed” or “used” in the EU, or to AI systems used outside of the EU but whose “outputs” are used in the EU.

            The EU model adopts a “risk-based” approach to regulating AI systems, creating four categories of risk: (1) unacceptable, (2) high, (3) limited, and (4) minimal.  AI systems posing unacceptable risk, deemed to present a “clear threat to safety, livelihood, and rights,” would be banned.  AI systems with high risk would be heavily regulated, including through pre-market conformity assessments.  AI systems with limited risk would be subject to transparency obligations toward users, and AI systems with minimal risk could be used freely, although adherence to codes of conduct would be encouraged.

United Kingdom

            The UK is taking an innovation-friendly approach to AI regulation.  On September 22, 2021, the UK Government published the “UK AI Strategy,” a 10-year strategy with three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all UK industry sectors and geographic regions; and (3) ensuring that the UK gets “right” the national and international governance of AI technologies.

            The UK AI Strategy’s pro-innovation outlook aligns with the UK Government’s “Plan for Digital Regulation,” published in July 2021.  The UK AI Strategy notes that, while the UK currently regulates many aspects of the development and use of AI through cross-sectoral legislation (including competition, data protection, and financial services), this sector-led approach can lead to overlaps or inconsistencies.  To remove potential inconsistencies, the UK AI Strategy’s third pillar proposes publishing a white paper on regulating AI by early 2022, which will set out the risks and harms of AI and outline proposals to address them.

Brazil

            On March 30, 2022, Brazil’s Senate announced the creation of a commission tasked with drafting new regulation on AI.  The Commission will study existing approaches, such as the EU’s, for inspiration in applying similar concepts within Brazil.  This echoes Brazil’s approach to its General Data Protection Law (“LGPD”), which mirrors the GDPR.  On April 4, 2022, Brazil’s Senate opened a public consultation on its AI strategy; interested stakeholders could submit responses until May 13, 2022.

India

            On February 22, 2022, the Indian Department of Telecommunications published a request for comment on a potential framework for fairness assessments in relation to AI and ML systems.  Citing concerns about bias and the need for ethical principles in the design, development, and deployment of AI, the Department noted in particular that it seeks to establish voluntary fairness assessment procedures.

Jordan

            On February 9, 2022, Jordan’s Minister of Digital Economy and Entrepreneurship launched a public consultation on the National Charter of AI, which includes principles and guidelines intended to support the ethical application of AI, responsibly promote innovation and creativity, and ensure an investment-stimulating economy.

China

            China is one of the first countries in the world to regulate AI algorithms.  China’s AI algorithm regulations took effect on March 1, 2022; they require businesses to provide explainable AI algorithms that are transparent about their purpose.  The regulations also prohibit businesses that rely on AI algorithms from offering different prices to different people based on personal data that they collect.

International Organizations

OECD

            On February 22, 2022, the OECD published the “Framework for the Classification of Artificial Intelligence Systems.”  The Framework’s primary purpose is to characterize the application of an AI system deployed in a specific project and context, although some aspects are also relevant to general AI systems.  Additionally, the Framework provides a baseline to:

  • promote a common understanding of AI by identifying the features of AI systems that matter most, helping governments and developers tailor policies to specific AI applications and identify or develop metrics to assess subjective criteria;
  • support sector-specific frameworks by providing the basis for more detailed applications or domain-specific catalogues of criteria in sectors such as healthcare and finance; and
  • support risk assessments by providing the basis to develop a risk assessment framework.

UNESCO

            On November 25, 2021, all UN Educational, Scientific and Cultural Organization (“UNESCO”) member states adopted the first global agreement on the ethics of AI.  In particular, the agreement defines AI as technological systems that have the capacity to process information in a manner that resembles intelligent behavior, typically including aspects of reasoning, learning, perception, prediction, planning, or control.  Specifically, the agreement focuses on the broader ethical implications of AI systems in relation to UNESCO’s central domains of education, science, culture, communication, and information, and highlights core principles and values such as diversity and inclusiveness, fairness and non-discrimination, privacy, and human oversight and determination.

Trends on Regulating Robotics

            There has been an uptick in regulations imposed by countries around the world with direct relevance to robotics.  These regulations fall into several broad categories:

  • Data Protection
    • The United Nations International Children’s Emergency Fund (“UNICEF”) issued a Memorandum on Artificial Intelligence and Child Rights, which discusses how AI strategies impact children’s rights, including rights relating to the portability of personal data and to automated data processing.
  • Product Safety and Liability
    • The EU is reviewing its product liability rules, alongside its legal framework for the safety of robotics, to ensure that robotics are covered.
    • Japan’s government has adopted a bill that will make driverless cars legal. 
    • Germany has adopted a bill that will allow driverless vehicles on public roads by 2022, laying the groundwork for companies to deploy “robotaxis” and delivery services in the country at scale.  While autonomous vehicle testing is currently permitted in Germany, the bill will allow operations of driverless vehicles without a human safety operator behind the wheel. 
  • Facial Recognition
    • In 2021, the Supreme People’s Court of China issued regulations for use of facial recognition technology by private businesses.
    • The European Data Protection Board has published draft guidelines on the use of facial recognition technology in the area of law enforcement.

Trends on Regulating Cybersecurity

            While 156 countries (80% of all countries) have enacted cybercrime legislation, the pattern varies significantly by region.

United States

            Every U.S. state has its own breach notification statute, which prescribes notice requirements for the unauthorized access or disclosure of certain types of personal information.  Additionally, there are efforts in Congress to create a uniform federal framework.  On March 2, 2022, the Senate unanimously passed the Strengthening American Cybersecurity Act of 2022, which would impose a 72-hour notification requirement on certain entities that own or operate critical infrastructure in the event of substantial cybersecurity incidents, as defined in the bill.  The bill has not yet been passed by the House of Representatives.  On March 23, 2022, the Senate introduced the Healthcare Cybersecurity Act of 2022, which would direct the Cybersecurity and Infrastructure Security Agency (“CISA”) and the Department of Health and Human Services (“HHS”) to collaborate on how to improve cybersecurity measures across healthcare providers.

European Union

            In 2022, the EU is expected to adopt the Proposal for a Directive on Measures for a High Common Level of Cybersecurity Across the Union (“NIS2 Directive”).  The NIS2 Directive would apply to entities providing services falling within the following sectors:

  • Essential Entities – Energy; transportation; banking; financial market infrastructure; drinking water; waste water; public administration; space; health; research and manufacture of pharmaceutical products; manufacture of medical devices critical during public health emergencies; and digital infrastructure sectors such as cloud computing providers, DNS service providers, and content delivery network providers.
  • Important Entities – Postal and courier services; waste management; chemicals; food; manufacturing of medical devices, computers and electronics, machinery equipment, and motor vehicles; and digital providers such as online market places, search engines, and social networking service platforms.

            Each of these entities would have to implement various measures set out in the Directive to ensure that they can detect and manage the security risks to their networks and information systems.  The European Commission and member states may require these entities to obtain European cybersecurity certifications, and the Directive would oblige them to notify regulators and the recipients of their services of incidents having a significant impact on the provision of those services.  Under the Directive, essential entities would be subject to ex ante regulation, while important entities would be subject to ex post regulation.

            Under the NIS2 Directive, member states would have to establish national cybersecurity frameworks that include a cybersecurity strategy, a crisis management framework, and competent authorities and computer security incident response teams.  The authorities would have to maintain a list of known vulnerabilities in network and information systems and pool them in a centralized database.  Authorities could also impose fines of up to the higher of €10 million or 2% of the worldwide annual turnover of the “undertaking” in the preceding financial year.

United Kingdom

            As part of the UK’s National Cyber Strategy 2022, on January 19, 2022, the UK Government launched a public consultation on a proposal for legislation to improve the UK’s cyber resilience (the “UK Cyber Security Proposal”).  The objectives for the consultation are based on two pillars: (1) to expand the scope of digital services under the UK Network and Information Systems (“NIS”) Regulations in response to gaps and evolving threats to cybersecurity, and (2) to update and future-proof the UK NIS Regulations in order to more easily manage future risks.  The feedback period ended on April 10, 2022.

Australia

            On March 31, 2022, the Security Legislation Amendment Bill of 2022 passed both houses of Australia’s Parliament.  The bill sets out a number of additional measures, including the obligation to adopt and maintain a Risk Management Program, the ability to declare Systems of National Significance, and enhanced cybersecurity obligations that may apply to these systems.  Australia’s Cyber and Infrastructure Security Centre (“CISC”) highlighted that the bill seeks to make risk management, preparedness, prevention, and resilience “business as usual” for the owners and operators of critical infrastructure assets and to improve information exchange between industry and the government. 

International Organizations

            On January 28, 2022, the Association of Southeast Asian Nations’ (“ASEAN”) Digital Ministers’ Meeting announced the launch of the ASEAN Cybersecurity Cooperation Strategy 2021-2025.  The Ministers welcomed the draft strategy as an update to its predecessor, noting that the update is needed to respond to new cyber developments since 2017.

* * *

            We will continue to report on developments related to robotics on our blog.  To learn more about the work discussed in this post, please visit the Technology Industry and Data Privacy & Cybersecurity pages of our web site.  For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.

Technology equity markets took a sharp turn in the last two months of Q1 2022, with the S&P Technology Index down more than 18% in mid-March before closing the quarter off 7%.  In the last month, across all sectors, Russia’s attack on Ukraine has rattled markets and dented investor appetite amid increased volatility and uncertainty.  The decline in valuations reflects the combined headwinds of rising inflation and interest rates, as well as geopolitical uncertainty.

Russia’s invasion of Ukraine triggered an unprecedented phenomenon: global technology firms responded to the invasion by suspending or terminating business operations, effectively self-sanctioning beyond regulatory requirements, often at great expense to their bottom lines.  This trend will likely continue: in 2022, decisions about where to invest and from whom to accept investment will be driven by ethical concerns as well as shifting geopolitical risks.  However, as this article shows, many tech businesses struggle to fully abandon their presence in Russia.

This article highlights some of the ways in which the Ukraine crisis is changing tech M&A.

Expanded Scope of Due Diligence

As tech companies embark on M&A deals, proactive and effective risk management will be more essential than ever.  Enhanced focus on these issues is likely to translate into longer transaction timelines.

  • Sanctions:  The evolving sanctions regime froze the cross-border M&A market for Russian assets and non-Russian assets owned or part-owned by Russian parties.  The first question any M&A team should ask is whether the deal is permitted under the current sanctions regime.  That requires looking carefully at the ownership structure.  Buyers should look for situations where sanctioned individuals may hold their shares through proxies or non-sanctioned family members.  Recent changes in share ownership are a clear diligence red flag.
    • It is common for transactions relating to Russian assets to be structured as overseas joint ventures, commonly established in Cyprus, the Netherlands, Luxembourg, Malta or Switzerland.  Tech companies looking to divest their stakes in these entities will need to consider the impact of EU/UK/U.S. sanctions, as well as increasing Russian counter-sanctions.  The restricted number of potential buyers is likely to have a knock-on effect on valuations.
    • The sanctions regime is evolving rapidly, so this particular diligence exercise will need to be repeated regularly throughout the deal process as new sanctions measures are introduced.
  • Business Continuity:  Greater uncertainty and risk are also sharpening focus on the impact of the conflict in Ukraine on business continuity.  For tech companies that were already struggling with talent retention, the conflict has significantly reduced the talent available in the Ukraine tech hub.  The concern extends more broadly to Russia as the “brain drain” of top talent continues.  Tech companies looking to leave the Russian market will need to consider (i) whether it is possible to relocate existing employees; (ii) the availability of local talent in the new location; and (iii) the impact on the bottom line of severance costs, which in line with market practice are not insignificant.  The availability of talent in the region will likely be affected for some time after the end of the conflict.
  • Commercial Contracts:  Buyers will be concerned about the enforceability of the target’s contractual arrangements.  Provisions such as material adverse change, change in law, and force majeure will be the focus of any diligence review of material contracts.  Even where a commercial agreement provides for arbitration as the dispute resolution mechanism, arbitration awards may need to be enforced by a court.  Parties will need to consider such provisions carefully in the context of the legislative and regulatory response to the Ukraine crisis.

Deal Execution:  A (Simplified) Way Forward

Even where a transaction is permitted under the current sanctions regime, tech companies expanding their businesses through acquisitions should ensure that all contractual payments are front-loaded to the maximum extent permissible, to minimize the risk that new sanctions may make certain payments unlawful.  Other risk mitigation strategies include minimizing the gap between signing and completion and avoiding deferred consideration or significant holdbacks or escrow. 

Tech buyers should not assume that a MAC clause will give them a walk-away right if circumstances change between signing and closing a deal.  MAC clauses are rarely invoked and even more rarely upheld by courts.  They are unlikely to offer a buyer an “easy exit” unless the target is disproportionately impacted by the conflict.  A few weeks into the conflict, sellers will increasingly argue that this is a known and assessable risk for the buyer that should be carved out of the scope of the MAC provision.

Tech M&A transactions continue to face intense regulatory scrutiny, in part due to the political pressure on regulators to safeguard technology assets.  In recent years, a hot tech M&A market drew sharp focus to very significant break fees.  Parties will need to consider how to address this trend in a world where a break fee may not be permitted to be paid to a sanctioned entity.  Transfers of funds to a Russia-connected entity will require careful analysis for compliance with the sanctions regime.

*           *           *

Ukraine Crisis: Resources for Responding to the Impact of the Escalating Conflict

Our lawyers are actively engaged in advising clients on the full range of implications of the current conflict for their business and operations in Russia, Ukraine and globally. This includes advice on potential acquisitions and disposals of assets in Ukraine and Russia in light of the evolving sanctions regime, mitigating exposure to investments in Russia, managing legal and reputational risks for joint venture partners, and commercial advice with respect to the impact on current business operations in the region. Our team includes lawyers with extensive experience representing clients on complex transactions and challenging situations in the region across a broad set of asset classes, as well as excellent relations with trusted local lawyers. Please visit the Ukraine and M&A pages on our web site to learn more about this work.

If you have any questions concerning the material discussed in this client alert, please contact the following members of our Mergers and Acquisitions practice:
Louise Nash                                       +44 20 7067 2028                  lnash@cov.com
Peter Laveran-Stiebar                       +1 212 841 1024                    plaveran@cov.com
Philipp Tamussino                            +49 69 768063 392                ptamussino@cov.com
Luciana Griebel                                 +44 20 7067 2268                  lgriebel@cov.com

In 2021, countries in EMEA continued to focus on the legal constructs around artificial intelligence (“AI”), and the momentum continues in 2022. The EU has been particularly active in AI—from its proposed horizontal AI regulation to recent enforcement and guidance—and will continue to be active going into 2022. Similarly, the UK follows closely behind with its AI strategy and recent reports and standards. While our team monitors developments across EMEA, this roundup will focus on summarizing the leading developments within Europe in 2021 and what that means for 2022.

The Proposed EU AI Act

In April 2021, the European Commission published its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Commission Proposal”). The Commission Proposal sets out a horizontal approach to AI regulation that establishes rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU (see our previous blog post here). The proposal is currently under negotiation between the co-legislators, the European Parliament and the Council of the European Union (“Council”).

Slovenia held the Council Presidency for the last six months of 2021, and France assumed the Presidency in January 2022. During its Presidency, Slovenia published a partial compromise text of the EU AI Act, focusing on edits to the classification of high-risk AI systems. The French Presidency circulated additional proposed amendments on 13 January 2022, focusing on the requirements for high-risk AI systems. Notable amendments in each version include:

Slovenian Council Presidency:

  • Scope. New Article 52a (and corresponding Recital 70a) would clarify that “general purpose AI systems” do not fall within the scope of the Act. Although the compromise text does not define this term, Recital 70a states they are “understood as AI system[s] that are able to perform generally applicable functions such as image / speech recognition, audio / video generation, pattern detection, question answering, translation etc.”
  • Social scoring. Article 5(1)(c) (and corresponding Recital 17) would extend the prohibition on AI systems used for social scoring as set out in the Commission Proposal, which is limited to public authorities, to private actors as well. Also, while the Commission Proposal limits the prohibition to social scoring used to evaluate the “trustworthiness” of natural persons, the Slovenian Presidency text removes this limitation, which would thereby broaden the scope of the prohibition.
  • Biometric identification. Amendments to Article 3(33) would broaden the definition of “biometric data” to include data that do not “uniquely” identify people, while other amendments would make the Act apply not only to “remote” biometric identification systems, but to biometric identification systems broadly. For instance, Article 5 would prohibit law enforcement use of any biometric identification systems in publicly available spaces, subject to certain exceptions.
  • High risk AI systems. Annex III would add to the list of AI systems qualifying as “high risk” those that are intended to be used to control “digital infrastructure” or “emissions and pollution.”

French Council Presidency:

  • Risks. Amendments to Article 9 (Risk management system) would clarify that high-risk AI systems must have a risk-management system allowing for the identification of known / foreseeable risks “most likely to occur to health, safety and fundamental rights in view of the [system’s] intended purpose.”
  • Trade-offs. Amendments to Article 9(3) specify that risk-management measures must aim to “minimis[e] risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.”
  • Error tolerance. Amendments to Article 10(3)—concerning training, validation, and testing data—slightly relax the requirement that the data be “free of errors and complete”, now requiring data sets to be so “to the best extent possible.”
  • Human oversight. Amendments to Article 14(4) make clear that the supplier of high-risk AI systems must enable the system to allow for human oversight by natural persons.

The EU AI Act is also being considered by the European Parliament. Although it is listed as a high-priority piece of legislation in the Commission’s 2022 work program (see here), it may be some time before it is finalized.

EU Recommendations, Consultations and Reports on AI

In addition to activity on the EU AI Act, the EU has published additional recommendations, consultations and reports on AI:

  • The Council of Europe published a Recommendation (see here) that responds to the changes in profiling techniques in the last decade. It recognizes that profiling can impact individuals by placing them in predetermined categories without their knowledge and that the lack of transparency can pose significant risks to human rights. The recommendation encourages member states to promote and make legally binding the use of a ‘privacy by design’ approach in the context of profiling, and sets out additional safeguards that should be imposed on profiling.
  • The European Commission published a public consultation (see here) to adapt product liability rules to ensure that they sufficiently protect consumers against the harms of new technologies, including AI. The consultation is split into two parts and gathers views on: (i) how to ensure that consumers and users continue to be protected against the harm caused by AI systems, particularly with respect to compensation, and (ii) how to address the problems purportedly linked to certain types of AI (e.g., where there is difficulty with identifying the potentially liable person, or proving that person’s fault or proving a product’s defect and the causal link with damage). The consultation period has ended, and the Commission intends to propose an update to the Product Liability Directive by the end of the third quarter of 2022.
  • On 6 October 2021, the European Parliament voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces (see our previous blog post here). The resolution forms part of a non-legislative report on the use of AI by the police and judicial authorities in criminal matters (“AI Report”) published by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.

Enforcement on Clearview AI

From an enforcement perspective, in 2021 a number of EU data protection authorities (“DPAs”) took enforcement actions on specific AI use cases, particularly relating to FRT. The most significant action has been the investigation into Clearview AI Inc. (“Clearview AI”) in relation to its personal information handling practices, especially the company’s use of data scraped from the internet and its use of biometrics for facial recognition. The UK Information Commissioner’s Office (“ICO”) and the Office of the Australian Information Commissioner (“OAIC”) conducted a joint investigation. In November 2021, the ICO issued a provisional intention to fine Clearview AI over £17 million for its breach of data protection laws, and its final decision is expected in 2022 (see here). Additionally, the French privacy regulator ordered Clearview AI to cease collecting images from the internet and to delete existing data within two months (see here in French). Given the significant processing of personal data involved in AI, DPAs have taken an interest in applying the GDPR to AI.

AI Activity in the United Kingdom

Following the end of the Brexit transition period on 31 December 2020, the UK government announced plans to reform UK data protection law and published its own National AI Strategy in September 2021 (see here and our previous blog post here). According to the UK AI Strategy, the Office of AI is expected to publish a White Paper on regulating AI in early 2022. Further to this, the UK government has published a number of reports and standards relating to AI, for example:

  • The UK government’s Central Digital and Data Office (“CDDO”) published the Algorithmic Transparency Standard (see here) as part of the UK AI Strategy’s commitment to delivering greater transparency on algorithm-assisted decision making in the public sector. The Algorithmic Transparency Standard seeks to help public sector organizations provide clear information about the algorithmic tools they use, and why they use them.
  • The UK government’s Centre for Data Ethics and Innovation (“CDEI”) published an independent report setting out the roadmap to an effective AI assurance ecosystem (see here).
  • A new AI Standards Hub was launched by the Office of AI, supported by the British Standards Institution, in January 2022 (see here) to develop AI standards.

*          *          *

We will continue to closely monitor the regulatory and policy developments on AI in EMEA – please watch this space for further updates. For more information on developments related to AI and data privacy, please visit our AI Toolkit and our Data Privacy and Cybersecurity website.