On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  Sam Jungyun Choi, Associate in Covington’s Technology Regulatory Group, and Anna Oberschelp, Associate in Covington’s Data Privacy & Cybersecurity Practice Group, discussed global regulatory trends that affect robotics, highlights of which are captured here.  A recording of the forum is available here until May 31, 2022.

Trends on Regulating Artificial Intelligence

            According to the Organization for Economic Cooperation and Development (“OECD”) Artificial Intelligence Policy Observatory, at least 60 countries have adopted some form of AI policy since 2017, a pace of government activity that nearly matches the rapid adoption of AI itself.  Countries around the world are establishing governmental and intergovernmental strategies and initiatives to guide the development of AI.  These AI initiatives include: (1) AI regulation or policy; (2) AI enablers (e.g., research and public awareness); and (3) financial support (e.g., procurement programs for AI R&D).  The anticipated wave of AI regulation raises concerns about looming challenges for international cooperation.

United States

            The U.S. has not yet enacted comprehensive AI legislation, though many AI initiatives have emerged at both the state and federal levels.  The number of proposed federal bills with AI provisions grew from 2 in 2012 to 131 in 2021.  Despite the dramatic increase in bills introduced, the number actually enacted by the U.S. Congress remains low, with only 2% of the proposed bills ultimately becoming law. 

            At the same time, U.S. state legislation, whether focused on AI technologies or taking the form of comprehensive privacy bills with AI provisions, has passed at much higher rates than its federal counterparts.  Some states regulate AI technologies within a broader data protection framework, such as the laws recently passed in Virginia, Colorado, and Connecticut, which set forth requirements for certain profiling activities that could implicate AI.  States have also introduced bills and passed laws that regulate AI technologies directly, such as Colorado’s statute setting forth requirements for the use of AI technologies in the insurance space.  In contrast to the 2% pass rate at the federal level, 20% of the 131 state-proposed bills with AI provisions were passed into law in 2021.  Massachusetts proposed the most AI-related bills in 2021 with 20, followed by Illinois with 15 and Alabama with 12.

            Another emerging trend in the U.S. is regulation of AI at the sector-specific level, such as the use of AI by financial institutions, healthcare organizations, or in other regulated contexts.  For example, the Food and Drug Administration (“FDA”) has outlined a plan setting out the agency’s intended actions to further develop a regulatory framework for applications of AI and machine learning within the FDA’s authority.

European Union

            On April 21, 2021, the European Commission published a proposal for AI regulation as part of its broader “AI package,” which includes (i) a legal framework (the proposed EU Artificial Intelligence Act) to address rights and safety risks, (ii) a review of the existing rules on liability (e.g., product liability in the EU) that could apply to AI systems, and (iii) revisions to sector-specific safety regulations (e.g., the EU Machinery Regulation). 

            The material scope of the proposal would cover “AI systems,” defined as systems that (i) receive machine or human inputs or data; (ii) infer how to achieve certain objectives using specified “techniques and approaches,” namely machine learning (“ML”), logic- or knowledge-based, and statistical processes; and (iii) generate outputs such as content (audio, video, or text), recommendations, or predictions.  The proposal would reach the entire chain of actors, from providers of AI systems to manufacturers, distributors, importers, and users.  Its territorial scope extends to AI systems “placed” or “used” in the EU, as well as to AI systems used outside the EU whose “outputs” are used in the EU.

            The EU model adopts a “risk-based” approach that sorts AI systems into four categories of risk: (1) unacceptable, (2) high, (3) limited, and (4) minimal.  AI systems posing unacceptable risk, deemed to present a “clear threat to safety, livelihood, and rights,” would be banned.  AI systems with high risk would be heavily regulated, including through pre-market conformity assessments.  AI systems with limited risk would be subject to transparency obligations toward users, and AI systems with minimal risk could be used freely, though providers would be encouraged to adhere to voluntary codes of conduct.
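            To make the tiering concrete, below is a minimal sketch of how a compliance team might prototype the four categories as a simple lookup table.  It is illustrative only: the obligations paraphrase this post’s summary of the proposal, the example use-cases are simplified illustrations drawn from public summaries of the proposal, and all identifiers are hypothetical.

```python
# Illustrative sketch only: models the proposal's four risk tiers as data.
# Tier assignments below are simplified examples, not legal conclusions.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "heavily regulated, incl. pre-market conformity assessment"
    LIMITED = "transparency obligations toward users"
    MINIMAL = "free use; voluntary codes of conduct encouraged"

# Hypothetical triage table mapping example use-cases to tiers:
EXAMPLE_TRIAGE = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TRIAGE.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```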

United Kingdom

            The UK is taking an innovation-friendly approach to AI regulation.  On September 22, 2021, the UK Government published the “UK AI Strategy,” a 10-year strategy with three main pillars: (1) investing in and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all UK industry sectors and geographic regions; and (3) ensuring that the UK gets the national and international governance of AI technologies right.

            The UK AI Strategy’s pro-innovation outlook aligns with the UK Government’s “Plan for Digital Regulation,” published in July 2021.  The Strategy notes that, while the UK currently regulates many aspects of the development and use of AI through cross-sectoral legislation (including competition, data protection, and financial services rules), this sector-led approach can lead to overlaps or inconsistencies.  To address them, the Strategy’s third pillar proposes publishing a white paper on regulating AI in early 2022 that will set out the risks and harms posed by AI and outline proposals to address them.

Brazil

            On March 30, 2022, Brazil’s Senate announced the creation of a commission tasked with drafting new AI regulation.  The commission will study existing regulatory models, such as the EU’s, for concepts that could be applied in Brazil.  This approach mirrors the one Brazil took with its General Data Protection Law (“LGPD”), which is modeled on the GDPR.  On April 4, 2022, Brazil’s Senate opened a public consultation on its AI strategy; interested stakeholders could submit responses until May 13, 2022.

India

            On February 22, 2022, the Indian Department of Telecommunications published a request for comment on a potential framework for fairness assessments of AI and ML systems.  Citing the risk of bias and the need for ethical principles in the design, development, and deployment of AI, the Department noted in particular that it seeks to establish voluntary fairness assessment procedures.

Jordan

            On February 9, 2022, Jordan’s Minister of Digital Economy and Entrepreneurship launched a public consultation on the National Charter of AI, which includes principles and guidelines intended to keep applications of AI within ethical bounds, responsibly promote innovation and creativity, and ensure an investment-stimulating economy.

China

            China is one of the first countries in the world to regulate AI algorithms.  China’s AI algorithm regulations, which took effect on March 1, 2022, require businesses to provide explainable AI algorithms that are transparent about their purpose.  The regulations also prohibit businesses that rely on AI algorithms from offering different prices to different people based on the personal data they collect.

International Organizations

OECD

            On February 22, 2022, the OECD published the “Framework for the Classification of Artificial Intelligence Systems.”  The Framework’s primary purpose is to characterize the application of an AI system deployed in a specific project and context, although some aspects are also relevant to general AI systems.  Additionally, the Framework provides a baseline to:

  • promote a common understanding of AI by identifying the features of AI systems that matter most, helping governments and developers tailor policies to specific AI applications and identify or develop metrics to assess subjective criteria;
  • support sector-specific frameworks by providing the basis for more detailed applications or domain-specific catalogues of criteria in sectors such as healthcare and finance; and
  • support risk assessments by providing the basis to develop a risk assessment framework.

UNESCO

            On November 25, 2021, all member states of the UN Educational, Scientific and Cultural Organization (“UNESCO”) adopted the first global agreement on the ethics of AI.  The agreement characterizes AI as technological systems with the capacity to process information in a manner resembling intelligent behavior, typically including aspects of reasoning, learning, perception, prediction, planning, or control.  It focuses on the broader ethical implications of AI systems in relation to UNESCO’s central domains of education, science, culture, communication, and information, and highlights core principles and values such as diversity and inclusiveness, fairness and non-discrimination, privacy, and human oversight and determination.

Trends on Regulating Robotics

            There has been an uptick in regulation around the world with direct relevance to robotics.  These regulations fall into several broad categories:

  • Data Protection
    • The United Nations International Children’s Emergency Fund (“UNICEF”) issued a Memorandum on Artificial Intelligence and Child Rights, which discusses how AI strategies impact children’s rights, including the right of portability of personal data and automated data processing.
  • Product Safety and Liability
    • The EU is reviewing its product liability rules so that they cover robotics, as part of its broader legal framework for the safety of robotics.
    • Japan’s government has adopted a bill that will make driverless cars legal. 
    • Germany has adopted a bill that will allow driverless vehicles on public roads by 2022, laying the groundwork for companies to deploy “robotaxis” and delivery services in the country at scale.  While autonomous vehicle testing is already permitted in Germany, the bill will allow the operation of driverless vehicles without a human safety operator behind the wheel. 
  • Facial Recognition
    • In 2021, the Supreme People’s Court of China issued regulations for use of facial recognition technology by private businesses.
    • The European Data Protection Board has published draft guidelines on the use of facial recognition technology in the area of law enforcement.

Trends on Regulating Cybersecurity

            While 156 countries (80% of all countries) have enacted cybercrime legislation, adoption patterns vary significantly by region.

United States

            Every U.S. state has its own breach notification statute, which prescribes notice requirements for the unauthorized access or disclosure of certain types of personal information.  There are also efforts in Congress to create a uniform federal framework.  On March 2, 2022, the Senate unanimously passed the Strengthening American Cybersecurity Act of 2022, which would impose a 72-hour notification requirement on certain entities that own or operate critical infrastructure in the event of substantial cybersecurity incidents, as defined in the bill.  The bill has not yet been passed by the House of Representatives.  On March 23, 2022, the Senate introduced the Healthcare Cybersecurity Act of 2022, which would direct the Cybersecurity and Infrastructure Security Agency (“CISA”) and the Department of Health and Human Services (“HHS”) to collaborate on improving cybersecurity measures across healthcare providers.

European Union

            In 2022, the EU is expected to adopt the Proposal for a Directive on Measures for a High Common Level of Cybersecurity Across the Union (“NIS2 Directive”).  The NIS2 Directive would apply to entities providing services in the following sectors:

  • Essential Entities – Energy, transportation, banking, financial market infrastructure, drinking water, waste water, public administration, and space; health, including research and manufacture of pharmaceutical products and manufacture of medical devices critical during public health emergencies; and digital infrastructure sectors such as cloud computing providers, DNS service providers, and content delivery network providers.
  • Important Entities – Postal and courier services; waste management; chemicals; food; manufacturing of medical devices, computers and electronics, machinery equipment, and motor vehicles; and digital providers such as online marketplaces, search engines, and social networking service platforms.

            Each of these entities would have to implement various measures set out in the Directive to ensure that it can detect and manage security risks to its networks and information systems.  The European Commission and member states could require these entities to obtain European cybersecurity certifications, and the Directive would oblige them to notify regulators and recipients of their services of incidents having a significant impact on the provision of those services.  Under the Directive, essential entities would be subject to ex ante (before-the-fact) regulation, while important entities would be subject to ex post (after-the-fact) regulation.

            Under the NIS2 Directive, member states would have to establish national cybersecurity frameworks that include a cybersecurity strategy, a crisis management framework, and competent authorities and computer security incident response teams.  The authorities would have to maintain a list of known vulnerabilities in network and information systems and pool them in a centralized database.  Authorities could also impose fines of up to the higher of €10 million or 2% of the worldwide annual turnover of the “undertaking” in the preceding financial year.
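            As a worked example of that ceiling, the snippet below computes the “higher of” formula; the turnover figures are hypothetical.

```python
# Worked example of the NIS2 fine ceiling described above:
# the higher of EUR 10 million or 2% of worldwide annual turnover.
def nis2_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum fine under the 'higher of' formula."""
    return max(10_000_000, 0.02 * worldwide_annual_turnover_eur)

print(nis2_fine_ceiling(200_000_000))    # 10,000,000 -- the EUR 10M floor applies
print(nis2_fine_ceiling(2_000_000_000))  # 40,000,000 -- 2% of turnover exceeds the floor
```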

United Kingdom

            As part of the UK’s National Cyber Strategy of 2022, on January 19, 2022, the UK Government launched a public consultation on proposed legislation to improve the UK’s cyber resilience (“UK Cyber Security Proposal”).  The consultation’s objectives rest on two pillars: (1) expanding the scope of digital services covered by the UK Network and Information Systems (“NIS”) Regulations in response to gaps and evolving threats to cybersecurity, and (2) updating and future-proofing the UK NIS Regulations in order to more easily manage future risks.  The feedback period ended on April 10, 2022.

Australia

            On March 31, 2022, the Security Legislation Amendment Bill of 2022 passed both houses of Australia’s Parliament.  The bill sets out a number of additional measures, including the obligation to adopt and maintain a Risk Management Program, the ability to declare Systems of National Significance, and enhanced cybersecurity obligations that may apply to these systems.  Australia’s Cyber and Infrastructure Security Centre (“CISC”) highlighted that the bill seeks to make risk management, preparedness, prevention, and resilience “business as usual” for the owners and operators of critical infrastructure assets and to improve information exchange between industry and the government. 

International Organizations

            On January 28, 2022, the Association of Southeast Asian Nations (“ASEAN”) Digital Ministers’ Meeting announced the launch of the ASEAN Cybersecurity Cooperation Strategy 2021-2025.  The ministers welcomed the draft strategy as an update to its predecessor, noting that the update is needed to respond to new cyber developments since 2017.

* * *

            We will provide other developments related to robotics on our blog.  To learn more about the work discussed in this post, please visit the Technology Industry and Data Privacy & Cybersecurity pages of our web site.  For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.

Technology equity markets took a sharp turn in the last two months of Q1 2022: the S&P Technology Index was down more than 18% in mid-March before closing the quarter down 7%.  Over the last month, Russia’s attack on Ukraine has rattled markets across all sectors and dented investor appetite amid increased volatility and uncertainty.  The decline in valuations reflects the combined headwinds of rising inflation and interest rates, as well as geopolitical uncertainty. 

Russia’s invasion of Ukraine triggered an unprecedented phenomenon: global technology firms responded by suspending or terminating business operations, effectively self-sanctioning beyond regulatory requirements, often at great expense to their bottom lines.  This trend will likely continue: in 2022, decisions about where to invest and from whom to accept investment will be driven by ethical concerns as well as shifting geopolitical risks.  As this article shows, however, many tech businesses struggle to fully abandon their presence in Russia.

This article highlights some of the ways in which the Ukraine crisis is changing tech M&A.

Expanded Scope of Due Diligence

As tech companies embark on M&A deals, proactive and effective risk management will be more essential than ever.  Enhanced focus on these issues is likely to lengthen transaction timelines.

  • Sanctions:  The evolving sanctions regime froze the cross-border M&A market for Russian assets and for non-Russian assets owned or part-owned by Russian parties.  The first question any M&A team should ask is whether the deal is permitted under the current sanctions regime.  That requires looking carefully at the ownership structure.  Buyers should watch for situations where sanctioned individuals may hold shares through proxies or non-sanctioned family members.  Recent changes in share ownership are a definite diligence red flag.
    • It is common for transactions relating to Russian assets to be structured as overseas joint ventures, commonly established in Cyprus, the Netherlands, Luxembourg, Malta, or Switzerland.  Tech companies looking to divest their stakes in these entities will need to consider the impact of EU/UK/U.S. sanctions, as well as increasing Russian counter-sanctions.  The restricted pool of potential buyers is likely to have a knock-on effect on valuations.
    • The sanctions regime is evolving rapidly, so this particular diligence issue will need to be repeated regularly throughout the deal process as new sanctions measures are introduced.
  • Business Continuity:  Greater uncertainty and risk are also sharpening focus on the conflict’s impact on business continuity.  For tech companies that were already struggling with talent retention, the conflict has significantly reduced the talent available in the Ukraine tech hub.  The concern extends to Russia as well, as the “brain drain” of top talent continues.  Tech companies looking to leave the Russian market will need to consider (i) whether existing employees can be relocated; (ii) the availability of local talent in the new location; and (iii) the bottom-line impact of severance costs, which, in line with market practice, are not insignificant.  The availability of talent in the region will likely be affected for some time after the conflict ends.
  • Commercial Contracts:  Buyers will be concerned about the enforceability of the target’s contractual arrangements.  Provisions such as material adverse change, change in law, and force majeure will be the focus of any diligence exercise covering material contracts.  Even where a commercial agreement provides for arbitration as the dispute resolution mechanism, arbitration awards may need to be enforced by a court.  Parties will need to consider such provisions carefully in the context of the legislative and regulatory response to the Ukraine crisis.

Deal Execution:  A (Simplified) Way Forward

Even where a transaction is permitted under the current sanctions regime, tech companies expanding their businesses through acquisitions should ensure that all contractual payments are front-loaded to the maximum extent permissible, to minimize the risk that new sanctions may make certain payments unlawful.  Other risk mitigation strategies include minimizing the gap between signing and completion and avoiding deferred consideration or significant holdbacks or escrow. 

Tech buyers should not assume that a MAC clause will give them a walk-away right if circumstances change between signing and closing.  MAC clauses are rarely invoked and even more rarely upheld by courts, and they are unlikely to offer a buyer an “easy exit” unless the target is disproportionately impacted by the conflict.  Several weeks into the conflict, sellers will increasingly argue that it is a known and assessable risk for the buyer that should be carved out of the scope of the MAC provision.

Tech M&A transactions remain under intense regulatory scrutiny, in part due to political pressure on regulators to safeguard technology assets.  In recent years, amid a hot tech M&A market, very significant break fees drew sharp focus.  Parties will need to consider how to address this trend in a world where a break fee may not lawfully be paid to a sanctioned entity.  Transfers of funds to a Russia-connected entity will require careful analysis for compliance with the sanctions regime.

*           *           *

Ukraine Crisis: Resources for Responding to the Impact of the Escalating Conflict

Our lawyers are actively engaged in advising clients on the full range of implications of the current conflict on their business and operations in Russia, Ukraine and globally. This includes advice on potential acquisitions and disposals of assets in Ukraine and Russia in light of the evolving sanctions regime, mitigating exposure to investments in Russia, managing legal and reputational risks for joint venture partners and commercial advice with respect to the impact on current business operations in the region. Our team includes lawyers with extensive experience representing clients on complex transactions and challenging situations in the region across a broad set of asset classes, as well as excellent relations with trusted local lawyers. Please visit the Ukraine and M&A pages on our web site to learn more about this work.

If you have any questions concerning the material discussed in this client alert, please contact the following members of our Mergers and Acquisitions practice:
Louise Nash                                       +44 20 7067 2028                  lnash@cov.com
Peter Laveran-Stiebar                       +1 212 841 1024                    plaveran@cov.com
Philipp Tamussino                            +49 69 768063 392                ptamussino@cov.com
Luciana Griebel                                 +44 20 7067 2268                  lgriebel@cov.com

            On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  One segment of the Robotics Forum covered risks of automation and AI, highlights of which are captured here.  A full recording of the Robotics Forum is available here until May 31, 2022.

            As AI and robotics technologies mature, their use-cases are expected to grow into increasingly complex areas and to pose new risks.  Because lawsuits have settled before courts could decide liability questions, no settled case law yet identifies where liability rests among robotics engineers, AI designers, and manufacturers.  Scholars and researchers have proposed addressing these issues through products liability and discrimination doctrines, including by creating new legal remedies specific to AI technology and particular use-cases, such as self-driving cars.  Proposed approaches to liability under existing doctrines include:

  • Strict Liability Approach – Manufacturer Liability
    • Courts could apply the “consumer expectations” test where manufacturers would be responsible for defects in design or software that create unreasonably dangerous conditions.  Under this approach, there would be no need to show a reasonable alternative design.  Some argue that this approach would dampen innovation.
  • Negligence Approach
    • Courts could apply the “risk-utility” test, under which plaintiffs must show that adopting a reasonable alternative design could have reduced the foreseeable risks of harm the product posed.  Courts also could perform a cost-benefit analysis that balances the manufacturer’s cost of an alternative design against the amount of harm reduced (a simplified sketch of this balancing appears after this list).
  • Breach of Warranty Approach
    • Commercial remedies could apply to robotics-related accidents.  The Uniform Commercial Code (“UCC”) governs many aspects of product warranties and commercial transactions, and some have argued that it also could govern robotics liability.  Express warranties are created when a seller promises something to a prospective buyer in association with the sale of goods.
  • Multiple Actor – Joint Liability
    • Under this approach, various parties involved in the design and use of a robotics product could be held liable for harms associated with the product’s performance or malfunction.  Such an approach could prove particularly challenging for complex technologies, such as self-driving cars.
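
            Here is the deliberately simplified sketch of the risk-utility balancing promised above.  It is illustrative only: real liability analysis is fact-specific and not reducible to arithmetic, and every name and number below is hypothetical.

```python
# Toy model of the "risk-utility" cost-benefit balancing described above.
# Illustrative only; not a statement of legal doctrine.

def risk_utility_favors_liability(
    alternative_design_cost: float,  # cost of adopting a reasonable alternative design
    risk_reduction: float,           # estimated reduction in probability of harm (0-1)
    potential_harm: float,           # estimated magnitude of harm if it occurs
) -> bool:
    """True if the expected harm avoided exceeds the cost of the safer design."""
    return risk_reduction * potential_harm > alternative_design_cost

# Hypothetical: a $2M design change cuts a 1% accident risk on $500M of
# potential harm, avoiding $5M in expected harm -- the balance favors liability.
print(risk_utility_favors_liability(2_000_000, 0.01, 500_000_000))  # True
```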

            Stakeholders also must be mindful of how human bias can affect robotics and AI.  Bias in AI can arise through statistical bias, where an algorithm produces results that are not representative of the true population, or through social bias, where an algorithm treats groups unequally within a system.  A number of data practices can result in AI bias, such as: (1) relying on past biased data in a machine learning algorithm; (2) collecting data for use in AI that is non-representative or not impartial; (3) making broad generalizations with respect to data inputs or results; (4) relying on factors that become a proxy for protected classes based on correlations in society; and (5) using the neutral face of AI to mask intentional discrimination.  The good news is that companies can proactively remedy potential bias or discrimination by avoiding these pitfalls, testing algorithms on diverse population sets, and following evolving legal developments and best practices.
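            One minimal sketch of what “testing algorithms on diverse population sets” can look like in practice appears below.  It computes a single, simple disparity measure, the gap in favorable-outcome rates across groups; the data and function names are hypothetical, and no single metric establishes or rules out unlawful bias.

```python
# Illustrative sketch: measure group-level disparity in an algorithm's outputs
# (a demographic-parity-style gap). Data and names are hypothetical.
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, int]]) -> dict[str, float]:
    """outcomes: (group_label, decision) pairs, where decision 1 = favorable."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favorable[group] += decision
    return {group: favorable[group] / totals[group] for group in totals}

def parity_gap(outcomes: list[tuple[str, int]]) -> float:
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions labeled by group:
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(decisions))  # {'A': 0.67, 'B': 0.33} (approx.)
print(parity_gap(decisions))       # ~0.33 -- a gap that warrants investigation
```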

            We will provide additional updates about the 2022 Covington Robotics Forum and other developments related to robotics on our blog.  To learn more about our commercial litigation work, please visit the Commercial Litigation page of our web site.  For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.

            On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  The global robotics market has been undergoing a significant transformation, with robotics growing beyond traditional industrial uses and taking on an ever-increasing range of new roles, such as personal assistants, surgical assistants, connected and automated vehicles, delivery vehicles, lawnmowers, and autonomous aircraft.  The global robotics market was valued at $55.8 billion in 2021 and is expected to grow to $91.8 billion by 2026.

            David Fagan, Co-Chair of Covington’s Cross-Border Investment and National Security Practice Group, and Steve Bartenstein, Partner in Covington’s International Trade Controls Practice Group, discussed the national security issues raised by foreign investments in robotics transactions, highlights of which are captured here.  A full recording of the Robotics Forum is available here until May 31, 2022.

CFIUS Reviews of Robotics and Artificial Intelligence Transactions

            As a quick primer, the Committee on Foreign Investment in the United States (“CFIUS”) is an interagency committee, chaired by the U.S. Department of the Treasury, charged by statute with reviewing foreign investments in U.S. businesses for their effect on U.S. national security.  CFIUS includes nine executive branch agencies that are full-time voting members, along with several other agencies in advisory or ex officio capacities.  Historically, CFIUS review has been triggered when a “foreign person” acquires “control” over an existing “U.S. business” and there is a nexus to “U.S. national security.”  “National security” is not precisely defined; CFIUS decides it on a case-by-case basis.  Amendments in 2018 expanded CFIUS’s authority over certain types of non-controlling but non-passive investments in certain U.S. businesses, which can include emerging technology areas such as robotics and AI.  All transactions are now examined by CFIUS through the lens of U.S.-China military and industrial competition and its impact on U.S. national security, and CFIUS is actively monitoring transactions in the robotics and AI areas in particular.

            In our experience, the following technologies are of particular interest in CFIUS reviews:

  • Autonomous vehicles (on-road, off-road, aerial);
  • Industrial robots;
  • Service robots;
  • Industrial automation (warehousing and packaging); and
  • Supporting technologies, including software (AI, decision-making, and sensor integration) and hardware (control systems, sensors, motors, actuators, and power systems).

            Within these areas, CFIUS is likely to focus on whether the transaction implicates any of the following issues (in addition to considering whether the transaction presents any nexus to a country of concern): 

  • Technology Transfer – whether the U.S. business at issue includes technology that is sensitive from a U.S. national security standpoint, including taking into consideration that “robotics” has been identified as an area of emerging technology that could be subject to enhanced export control protections under the Export Control Reform Act of 2018 (“ECRA”), companion legislation to the 2018 CFIUS reform legislation. 
  • Protection of Data – whether the transaction includes intellectual property that was developed jointly with the U.S. government or funded in part by the U.S. government, and similarly whether there is any customer data, including data from government customers, that could be sensitive from a national security standpoint.
  • Industrial Policy/Security and Supply Chain – the extent to which the transaction includes capabilities in the United States that are important to U.S. national security, and how the government can maintain access to those capabilities in the United States, while denying those capabilities to potential adversaries.
  • Supply Assurance – whether the U.S. business is an important supplier to the U.S. government or government contractors, and whether it is necessary to seek commitments to maintain that supply.

Trade Controls and Robotics

            Robotics companies operating globally face a range of trade control risks, including:

  • Existing export control restrictions on certain robotics-related hardware, software, and technology;
  • New emerging technology controls on the horizon for robotics, AI/ML, and IC technologies;
  • Broad restrictions on access to U.S. technologies for end-users and end-uses of concern (military applications and parties on the Entity List); and
  • Increasing use of sanctions, which create compliance challenges.

            It is important for companies to be attentive to these risks, including in the context of M&A and investment transactions.  In the M&A/investment context, trade controls due diligence by buyers and investors can reduce compliance risks (including successor liability) and identify issues that may be “deal-killers” or affect transaction value.  Trade controls may also affect post-closing integration, technology transfer efforts, and other business activities.  Monitoring these issues creates an opportunity to identify and correct ongoing compliance issues. 

            We will provide additional updates about the 2022 Covington Robotics Forum and other developments related to robotics on our blog.  To learn more about the work discussed in this post, please visit the Technology Industry, CFIUS, and Trade Controls pages of our web site.  For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.

            On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  Winslow Taub, Partner in Covington’s Technology Transactions Practice Group, and Jennifer Plitsch, Chair of Covington’s Government Contracts Practice Group, discussed the robotics issues presented in private transactions and government contracts, highlights of which are captured here.  A recording of the forum is available here until May 31, 2022.

            A business in the robotics space may acquire, develop, and use a variety of technology assets, including specialized hardware, control software, and AI models that depend on large training data sets.  Understanding these assets, and the forms of IP protection available for them, is critical when engaging in a transaction in the robotics space, whether an M&A transaction, a commercial transaction, or a transaction with the government.

            By way of example, in an M&A transaction the seller may rely on disparate data sets (including data acquired from its customers) in developing robotics products.  Verifying that the seller has sufficient rights to that data is a critical part of diligence.  The data is also often heavily processed to make it useful for a robotics application.  Because data is generally not protectable under patent and copyright laws, it is important to verify that adequate contractual protections are in place to secure the seller’s exclusive use.

            As another example, in commercial agreements for the deployment of robotics technology, it is important to take special care in negotiating the rights and obligations during the “support” phase of the project, after the solution is put into operation.  The technology provider will often require access to data from the production environment in order to fix bugs and improve performance—including sets of production data that can be used to further train the relevant AI models.

            The U.S. Government frequently collaborates in the development of specialized technology and tests or procures finished products from the private sector.  The Government’s specialized procurement rules are often complicated and can impose significant compliance obligations, so they must be carefully considered before entering into any transaction with the U.S. Government.  Collaboration or development agreements warrant particular care, as many standard government procurement requirements do not translate easily to new and emerging technologies.

            U.S. Government rights in intellectual property and data arising under government agreements should be carefully considered before entering into any U.S. Government agreement or acquiring any technology funded by the U.S. Government.  If a company accepts U.S. Government funding to develop robotics technology, there may be significant intellectual property implications if patentable technology is conceived or first actually reduced to practice in the performance of the government agreement.  The inventor will hold the patent, but the U.S. Government will receive a license, and potentially greater rights if certain reporting and other requirements are not met.  This issue can also arise for companies acquiring already-developed technologies that may have been developed with U.S. Government funding, as the Government’s rights survive the transaction. 

            The U.S. Government also generally obtains unlimited rights in data first produced or delivered in the performance of a U.S. Government contract.  Companies should consider what data might be produced or delivered in the course of performing a government agreement, and what consequences could follow from the U.S. Government receiving the right to use, share, and even publish that data.  This issue is particularly important given its potential impact on trade secret protection for data in which the U.S. Government receives unlimited rights.

            We will provide additional updates about the topics covered in the 2022 Covington Robotics Forum and other developments related to robotics on our blog.  To learn more about our work related to this post, please visit the Technology Industry, Technology Transactions, and Government Contracts pages of our web site.  For more information on developments related to AI, IoT, and connected and automated vehicles, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.

Last Friday, the National Telecommunications and Information Administration (“NTIA”) took a major step in furtherance of the Biden Administration’s goal of connecting all Americans to broadband by releasing its widely anticipated Notice of Funding Opportunity (“NOFO”) for the landmark $42.5 billion Broadband Equity, Access, and Deployment (“BEAD”) Program, along with NOFOs for two smaller programs.  All of these programs were created by the Infrastructure Investment and Jobs Act (“IIJA” or “the Act”) which was signed into law in November 2021.  The NOFO marks the beginning of the BEAD Program’s implementation and provides important guidance to states on the process for obtaining funds that they, in turn, will award to service providers and other qualified recipients for building out broadband to unserved and underserved areas. 

On timing, the NOFO clarifies that states will be eligible to receive some initial funding in the near-term, and that the bulk of funding will become available after the FCC releases broadband maps identifying broadband access across the country.  Commerce Secretary Gina Raimondo and FCC Chairwoman Jessica Rosenworcel have indicated that the FCC expects to release the maps as early as November 2022, meaning that BEAD funds will begin to flow to states and service providers and other qualified recipients in 2023. 

On substance, the NOFO expresses a clear preference (but not an absolute requirement) for fiber-to-the-home projects by defining “Priority Broadband Projects” as those “that will provision service via end-to-end fiber-optic facilities.”  Further, the NOFO retains the IIJA’s requirement that service providers offer broadband service at a speed of at least 100 Mbps downstream and 20 Mbps upstream, with latency less than or equal to 100 milliseconds.  Last, the NOFO defines the “eligible subscribers” to whom providers must offer a low-cost broadband service option as any household seeking to subscribe to broadband internet access that (1) qualifies for the Affordable Connectivity Program (ACP) or any successor program, or (2) is at or below 200 percent of the Federal poverty line or qualifies under a low-income program such as Medicaid.
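As a minimal illustration of the service thresholds summarized above, the snippet below checks a hypothetical service tier against the 100/20 Mbps and 100 ms figures; the function name and inputs are assumptions for illustration, not anything defined in the NOFO.

```python
# Illustrative check of the BEAD service minimums described above:
# at least 100 Mbps down, 20 Mbps up, and latency of at most 100 ms.
def meets_bead_service_minimums(
    download_mbps: float, upload_mbps: float, latency_ms: float
) -> bool:
    """True if a proposed service tier satisfies the NOFO's minimums."""
    return download_mbps >= 100 and upload_mbps >= 20 and latency_ms <= 100

print(meets_bead_service_minimums(100, 20, 100))  # True
print(meets_bead_service_minimums(940, 10, 50))   # False -- upload below 20 Mbps
```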

Despite this guidance, and despite supply chain concerns raised by broadband providers, the NOFO leaves unanswered whether there will be any waiver of the “Build America, Buy America” requirement that funded broadband networks be deployed using at least 55% domestic materials.

For an expanded discussion of the BEAD Program, please visit our Client Alert here.

On Friday, April 22, 2022, the National Telecommunications and Information Administration (NTIA), which is part of the Department of Commerce, issued a request for comment (RFC) on the state of competition in the mobile app marketplace.  According to the RFC, the record developed will be used to inform the Biden Administration’s competition agenda, including a report on competition in the mobile app ecosystem.  Comments are due on May 23, 2022.

Continue Reading NTIA Seeks Comment on Competition in the Mobile App Marketplace

This quarterly update summarizes key federal legislative and regulatory developments in the first quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in the States.  In the first quarter of 2022, Congress and the Administration focused on required assessments and funding for AI, restrictions on targeted advertising using personal data collected from individuals and connected devices, creating rules to enhance CAV safety, and children’s privacy topics.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – First Quarter 2022

A recent AAA study revealed that, although the pandemic has resulted in fewer cars on the road, traffic deaths have surged.  Speeding, alcohol impairment, and reckless driving have caused the highest levels of crashes seen in decades, and the National Safety Council estimates a 9% increase in roadway fatalities from 2020.  Autonomous vehicles (AVs) have the potential to increase traffic safety, and the California Public Utilities Commission (CPUC) just took a step to advance their commercialization and deployment.

Continue Reading CPUC Issues First Autonomous Vehicle Drivered Deployment Permits

In 2021, European lawmakers and agencies issued a number of proposals to regulate artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAV”), and data privacy, as well as reports and funding programs to support developments in these emerging areas.  From more stringent cybersecurity standards for IoT devices to the deployment of standards-based autonomous vehicles, lawmakers and agencies have also promulgated new rules and guidance to promote consumer awareness and safety.  While our team tracks developments across EMEA, this roundup summarizes the key developments in Europe in 2021 and what is likely to happen in 2022.

Part I: Internet of Things

With digital policy a core priority for the current European Commission, the EU has pursued a range of initiatives in the area of IoT.  These developments tend to be interspersed across policy and legislative decisions, which are highlighted below.

Connecting Europe Facility and IoT Funding

In July 2021, the European Parliament and Council of the EU adopted a regulation establishing the Connecting Europe Facility (€33.7 billion for 2021-2027) to accelerate investment in trans-European networks while respecting technological neutrality.  In particular, the regulation noted that the viability of “Internet of Things” services will require uninterrupted cross-border coverage with 5G systems, to enable users and objects to remain connected while on the move.  Given that 5G deployment in Europe is still sparse, road corridors and train connections are expected to be key areas for the first phase of new applications in the area of connected mobility and therefore constitute vital cross-border projects for funding under the Connecting Europe Facility.  The Parliament had also called earlier for “stable and adequate funding” for investments in AI and IoT, as well as for building transport and ICT infrastructure for intelligent transport systems (ITS), to ensure the success of the EU’s data economy.

In May 2021, the Council adopted a decision establishing a specific research funding programme (€83.4 billion for 2021-2027) under Horizon Europe.  In specifying the EU’s priorities, the decision identified the importance of IoT in health care, cybersecurity, key digital technologies including quantum technologies, next generation Internet, space, and satellite communications.

Safety and Security of IoT

In June 2021, the Parliament adopted a resolution calling for tighter EU cybersecurity standards for connected devices, apps and operating systems, amid recent cyberattacks on critical infrastructure in the EU.  It recommended that connected products and associated services, including supply chains, be made secure-by-design, resilient to cyber incidents, and quickly patched if vulnerabilities are discovered.

The resolution welcomed the European Commission’s plans to propose horizontal legislation on cybersecurity requirements for connected products and associated services, and recommended that the Commission harmonize national laws in order to avoid fragmentation of the Single Market.  The text also demanded legislation imposing cybersecurity requirements by 2023 for apps, software, embedded software (which controls devices and machines that are not computers), and operating systems (the software that runs a computer’s basic functions).

Consumer IoT

In January 2022, the Commission published the results of its inquiry into the consumer IoT sector, launched in July 2020.  The report’s aim was to assess the sector’s competitive landscape, emerging trends, and potential competition issues.  It noted that European smart home revenue is expected to more than double between 2020 and 2025 (from €17 billion to €38.1 billion).  While the consumer IoT sector is still developing, the sector inquiry was prompted by indications of company behavior that may distort competition.  The Commission’s report will contribute to its standardization strategy and to upcoming legislative and non-legislative initiatives aimed at clarifying and improving the standard essential patent (SEP) framework.  It will also feed into the ongoing legislative debate on the scope of the Digital Markets Act (DMA), and specifically into some of the obligations proposed there.

Part II:  Connected and Automated Vehicles

In 2021, the groundwork was laid for regulating the CAV sector, with legislative developments at the national level the main focus.  Substantial legislative changes are also on their way at the EU level, where authorities are paving the way for increased regulation of the automotive sector through funding programs and standards, so we can expect further developments in 2022.

Legislative Updates

The pace of development in automated, autonomous, and connected driving is evident.  In 2021, national lawmakers focused their legislative proposals on adopting a legal framework for the use of autonomous vehicles.  Germany was a pioneer, enacting the German Autonomous Driving Act on 12 July 2021, which is intended as a temporary solution until harmonized rules are in place at the EU level (to date, Regulation (EU) 2018/858 always requires a person in charge of the vehicle, and thus full steerability of the vehicle).  The law regulates the technical requirements for the manufacture, design, and equipment of motor vehicles with autonomous driving functions; the inspection of, and the procedure for granting, an operating licence by the Federal Motor Vehicle Transport Authority (Kraftfahrt-Bundesamt); the obligations of the persons involved in operating autonomous vehicles; the data processing needed for their operation; and the adaptation and creation of uniform provisions to facilitate autonomous vehicle testing.

Also worth mentioning in this respect is the French decree of 1 July 2021 amending provisions of the Highway Code and the Transport Code, which allows a driver to disclaim liability when the automated driving system operates in accordance with its conditions of use.  The decree further regulates the interaction between the driver and the automated driving system, as well as the attention expected of the driver when the system is engaged, and allows autonomous vehicles to operate on predefined routes and zones starting in September 2022.

Regulatory Updates

The European Commission recently adopted the first Work Programme for the digital part of the Connecting Europe Facility (CEF Digital), which defines the scope and objectives of the EU-supported actions necessary to improve Europe’s digital connectivity infrastructures.  These actions will receive more than €1 billion in funding between 2021 and 2023.  A key action that CEF Digital supports is the implementation of digital connectivity infrastructures related to cross-border projects in the areas of transport or energy, and support for operational digital platforms directly associated with transport or energy infrastructures.

In addition, the International Organization for Standardization (ISO) and SAE International published a standard (ISO/SAE 21434) addressing cybersecurity in the engineering of electrical and electronic (E/E) systems within road vehicles, which is intended to help manufacturers keep abreast of changing technologies and cyberattack methods.

Legislative Proposals

In terms of legislative proposals, the European Commission plans to adopt a new directive, “Adapting liability rules to the digital age and circular economy.”  The initiative was prompted by an evaluation of the Product Liability Directive 85/374/EEC and addresses challenges that arise when liability rules are applied to new and emerging technologies (e.g., AI, IoT, CAV).  The initiative’s Inception Impact Assessment proposes adapting the framework to account for the transition to a circular and digital economy in terms of liability for damage caused by new and refurbished products, and to address challenges associated with artificial intelligence, including gaps and limitations that could limit the scope and effectiveness of the Product Liability Directive if applied to the mobility systems of CAVs.

Furthermore, the European Commission published a proposal for a directive amending Directive 2010/40/EU on the framework for the deployment of Intelligent Transport Systems in the field of road transport and for interfaces with other modes of transport (the “proposed ITS Directive”).  The proposed ITS Directive is meant to cover new developments such as connected and automated mobility and online platforms that allow users to access several modes of transport.  The ecosystem it envisages would be based on a set of standards and aims to enable interoperability and continuity of ITS applications, systems, and services, and therefore connectivity and data exchange among vehicles, transport providers, and infrastructure operators.  This would be implemented by making essential ITS services mandatory throughout the EU.

UK Developments

Apart from the EU developments, the UK Centre for Connected and Autonomous Vehicles has issued a series of reports on research projects regarding CAV issues, including on future transport innovations, as well as a market forecast capturing the latest changes in the global CAV market and advances in technology.

Part III:  Artificial Intelligence and Data Privacy

We have addressed the developments with respect to Artificial Intelligence and data privacy separately.

*          *          *

We will continue to closely monitor the regulatory and policy developments on IoT and CAV in EMEA – please watch this space for further updates.