Policymakers and candidates of both parties have increased their focus on how technology is changing society, blaming platforms and other participants in the tech ecosystem for a range of social ills even while recognizing them as significant contributors to U.S. economic success globally.  Republicans and Democrats have significant interparty—and intraparty—differences in the form of their grievances and in many of the remedial measures they propose to combat the purported harms.  Nonetheless, the growing inclination to do more on tech has apparently driven one key congressional committee to compromise on previously intractable issues involving data privacy.  Rules around the use of algorithms and artificial intelligence, which have attracted numerous legislative proposals in recent years, may be the next area of convergence.

Continue Reading Artificial Intelligence and Algorithms in the Next Congress

On July 13, 2022, the Federal Trade Commission published a notice of proposed rulemaking regarding the Motor Vehicle Dealers Trade Regulation Rule, which is aimed at combating certain unfair and deceptive trade practices by dealers and at promoting pricing transparency.  Comments on the proposed rule are due on or before September 12, 2022.

The proposed rule:

  • Prohibits dealers from making certain misrepresentations in the sales process, enumerated in proposed § 463.3.  The list of prohibited misrepresentations includes misrepresentations regarding the “costs or terms of purchasing, financing, or leasing a vehicle” or “any costs, limitation, benefit, or any other Material aspect of an Add-on Product or Service.”
  • Includes new disclosure requirements regarding pricing, financing and add-on products and services.  Notably, the proposed rule would obligate dealers to disclose the offering price in many advertisements and communications with consumers.
  • Prohibits charges for add-on products and services that confer no benefit to the consumer and prohibits charges for items without “Express, Informed Consent” from the consumer (which, notably, as defined, excludes any “signed or initialed document, by itself”).  The proposed rule outlines a specific process for presenting charges for add-on products and services to the consumer, which obligates the dealer to disclose and offer to close the transaction for the “Cash Price without Optional Add-Ons” and obtain confirmation in writing that the consumer has rejected that price.
  • Imposes additional record-keeping requirements on the dealer, in order to demonstrate compliance with the rule.  The record-keeping requirements apply for a period of 24 months from the date the applicable record is created.

The proposed rulemaking focuses only on “Dealers,” at a time when Tesla is selling direct-to-consumer, Ford has announced plans to launch its own e-commerce platform, and companies such as BMW have begun to unbundle services from vehicle sales and create new standalone offerings (see this recent article on subscription seat warmers).  Under the proposed rule, to meet the definition of a “Dealer,” a person or entity must be “predominantly engaged in the sale and servicing of motor vehicles, the leasing and servicing of motor vehicles, or both” (emphasis added).

Gesturing at some of the developments in automotive sales models, Commissioner Christine Wilson dissented, expressing her concern that despite the “best of intentions”, a complex regulatory scheme could “stifle innovation”. She requested comment on (among other items) “Anticipated changes in the automobile marketplace with respect to technology, marketing, and sales, and whether it is possible to future-proof the proposed Rule so that it avoids inhibiting beneficial changes in these areas.”

On July 5, 2022, the Cybersecurity and Infrastructure Security Agency (“CISA”) and the National Institute of Standards and Technology (“NIST”) strongly recommended that organizations begin preparing to transition to a post-quantum cryptographic standard.  Post-quantum cryptography, often referred to as “quantum-resistant cryptography,” includes “cryptographic algorithms or methods that are assessed not to be specifically vulnerable to attack by” a cryptanalytically relevant quantum computer (“CRQC”) or a classical computer.  NIST “has announced that a new post-quantum cryptographic standard will replace current public-key cryptography, which is vulnerable to quantum-based attacks.”  NIST does not intend to publish the new post-quantum cryptographic standard for commercial products until 2024, but urges companies to begin preparing now by following the Post-Quantum Cryptography Roadmap.
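
An early step in the Roadmap is inventorying where an organization’s systems rely on quantum-vulnerable public-key cryptography.  The sketch below is a minimal illustration of that inventory idea, assuming a Python codebase; the algorithm-name patterns are illustrative rather than exhaustive, and this is not an official CISA or NIST tool.

```python
# Minimal sketch of a cryptographic inventory pass: flag source lines that
# reference classical public-key primitives a CRQC could eventually break.
# The pattern list and the *.py glob are illustrative assumptions.
import re
from pathlib import Path

QUANTUM_VULNERABLE = re.compile(
    r"\b(RSA|DSA|ECDSA|ECDH|DiffieHellman|X25519|Ed25519)\b"
)

def inventory(root: str) -> dict[str, list[int]]:
    """Map each file under root to the line numbers referencing a vulnerable primitive."""
    findings: dict[str, list[int]] = {}
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if QUANTUM_VULNERABLE.search(line):
                findings.setdefault(str(path), []).append(lineno)
    return findings

if __name__ == "__main__":
    for file, lines in inventory(".").items():
        print(f"{file}: lines {lines}")
```

A real inventory would also cover TLS configurations, certificates, protocols, and hardware, but even a simple scan like this helps scope the eventual migration.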

Continue Reading CISA and NIST Urge Companies to Prepare to Transition to a Post-Quantum Cryptographic Standard

This quarterly update summarizes key federal legislative and regulatory developments in the second quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things, connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in U.S. state legislatures.  In brief, in the second quarter of 2022, Congress and the Administration focused on addressing algorithmic bias and other AI-related risks, and a bipartisan federal privacy bill was introduced in Congress.

Continue Reading U.S. AI, IoT, CAV, and Data Privacy Legislative and Regulatory Update – Second Quarter 2022

Recent months have seen a growing trend of data privacy class actions asserting claims for alleged violations of federal and state video privacy laws.  In this year alone, plaintiffs have filed dozens of new class actions in courts across the country asserting claims under the federal Video Privacy Protection Act (“VPPA”), Michigan’s Preservation of Personal Privacy Act (“MPPPA”), and New York’s Video Consumer Privacy Act (“NYVCPA”).

Continue Reading Emerging Trends: Renewed Wave of Video Privacy Class Actions

On June 3, the New York State legislature passed its version of a right-to-repair bill—titled the “Digital Fair Repair Act”—that would allow consumers to repair their digital electronic equipment without involving the manufacturer.

Continue Reading Right to Repair: New York State Passes Right to Repair Law

Facial recognition technology (“FRT”) has attracted a fair amount of attention over the years, including in the EU (e.g., see our posts on the European Parliament vote and CNIL guidance), the UK (e.g., ICO opinion and High Court decision) and the U.S. (e.g., Washington state and NTIA guidelines). This post summarizes two recent developments in this space: (i) the UK Information Commissioner’s Office (“ICO”)’s announcement of a £7.5-million fine and enforcement notice against Clearview AI (“Clearview”), and (ii) the EDPB’s release of draft guidelines on the use of FRT in law enforcement.

Continue Reading Facial Recognition Update: UK ICO Fines Clearview AI £7.5m & EDPB Adopts Draft Guidelines on Use of FRT by Law Enforcement

            On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  Sam Jungyun Choi, Associate in Covington’s Technology Regulatory Group, and Anna Oberschelp, Associate in Covington’s Data Privacy & Cybersecurity Practice Group, discussed global regulatory trends that affect robotics, highlights of which are captured here.  A recording of the forum is available here until May 31, 2022.

Trends on Regulating Artificial Intelligence

            According to the Organization for Economic Cooperation and Development (“OECD”) Artificial Intelligence Policy Observatory, since 2017 at least 60 countries have adopted some form of AI policy, a torrent of government activity that nearly matches the pace of modern AI adoption.  Countries around the world are establishing governmental and intergovernmental strategies and initiatives to guide the development of AI.  These initiatives include: (1) AI regulation or policy; (2) AI enablers (e.g., research and public awareness); and (3) financial support (e.g., procurement programs for AI R&D).  The anticipated introduction of AI regulations raises concerns about looming challenges for international cooperation.

United States

            The U.S. has not yet enacted comprehensive AI legislation, though many AI initiatives have emerged at both the state and federal level.  The number of proposed federal bills with AI provisions grew from 2 in 2012 to 131 in 2021.  Despite the dramatic increase in bills introduced, the number actually enacted by the U.S. Congress remains low, with only 2% of the proposed bills ultimately becoming law.

            At the same time, U.S. state legislation, whether focused on AI technologies or taking the form of comprehensive privacy bills with AI provisions, has passed at much higher rates than its federal counterparts.  Some states have proposed bills that would regulate AI technologies as part of a broader data protection framework, such as the laws recently passed in Virginia, Colorado, and Connecticut, which set forth requirements for certain profiling activities that could implicate AI.  States have also introduced bills and passed laws that directly regulate AI technologies, such as Colorado’s statute setting forth requirements for the use of AI technologies in the insurance space.  In contrast to the 2% pass rate at the federal level, 20% of the 131 state-proposed bills with AI provisions were passed into law in 2021.  Massachusetts proposed the most AI-related bills in 2021 with 20, followed by Illinois with 15 and Alabama with 12.

            Another emerging trend in the U.S. is to regulate the use of AI at the sector-specific level, such as the use of AI by financial institutions, healthcare organizations, or in other regulated contexts.  For example, the Food and Drug Administration (“FDA”) has outlined a plan with the agency’s intended actions to further develop a regulatory framework for applications of AI and machine learning within the FDA’s authority.

European Union

            On April 22, 2021, the European Commission published a proposal for AI regulation as part of its broader “AI package,” which includes (i) a legal framework (the EU Artificial Intelligence Act, proposed in April 2021) to address rights and safety risks, (ii) a review of the existing rules on liability (e.g., product liability in the EU) that could apply to AI systems, and (iii) revisions to sector-specific safety regulations (e.g., the EU Machinery Regulation).

            The proposal would apply to “AI systems,” defined as systems that (i) receive machine or human inputs or data; (ii) infer how to achieve certain objectives using specified “techniques and approaches,” defined as machine learning (“ML”), logic- or knowledge-based, and statistical processes; and (iii) generate outputs such as content (audio, video, or text), recommendations, or predictions.  The proposal would reach the entire chain of actors, from providers and manufacturers of AI systems to distributors, importers, and users.  Its territorial scope extends to AI systems “placed” or “used” in the EU, as well as AI systems used outside the EU whose “outputs” are used in the EU.
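
To make this three-part definition concrete, the toy data model below (our own illustration, not text drawn from the proposal) represents an “AI system” as inputs, one of the enumerated techniques, and generated outputs.  All class and field names are hypothetical.

```python
# Toy model of the draft EU AI Act's three-part "AI system" definition:
# (i) inputs, (ii) an enumerated technique, (iii) generated outputs.
# Purely illustrative; the proposal itself defines these in legal text.
from dataclasses import dataclass
from enum import Enum

class Technique(Enum):
    MACHINE_LEARNING = "machine learning"
    LOGIC_OR_KNOWLEDGE_BASED = "logic- or knowledge-based"
    STATISTICAL = "statistical"

@dataclass
class AISystem:
    inputs: list[str]      # machine or human inputs or data
    technique: Technique   # one of the specified "techniques and approaches"
    outputs: list[str]     # content, recommendations, or predictions

# Example: a recommender system plainly falls within the definition.
recommender = AISystem(
    inputs=["user clickstream data"],
    technique=Technique.MACHINE_LEARNING,
    outputs=["product recommendations"],
)
print(recommender)
```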

            The EU model adopts a “risk-based” approach, sorting AI systems into four categories of risk: (1) unacceptable, (2) high, (3) limited, and (4) minimal.  AI systems posing unacceptable risk, deemed a “clear threat to safety, livelihood, and rights,” would be banned.  High-risk AI systems would be heavily regulated, including through pre-market conformity assessments.  Limited-risk AI systems would be subject to transparency obligations, and minimal-risk AI systems could be used freely, though providers would be encouraged to adhere to codes of conduct.

United Kingdom

            The UK is taking an innovation-friendly approach to AI regulation.  On September 22, 2021, the UK Government published the “UK AI Strategy,” a 10-year strategy with three main pillars: (1) investing in and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all UK industry sectors and geographic regions; and (3) ensuring that the UK gets the national and international governance of AI technologies “right.”

            The UK AI Strategy’s pro-innovation outlook aligns with the UK Government’s “Plan for Digital Regulation,” published in July 2021.  The UK AI Strategy notes that, while the UK currently regulates many aspects of the development and use of AI through cross-sectoral legislation (including competition, data protection, and financial services law), this sector-led approach can lead to overlaps or inconsistencies.  To remove potential inconsistencies, the Strategy’s third pillar proposes publishing a white paper on regulating AI by early 2022, which will set out the risks and harms posed by AI and outline proposals to address them.

Brazil

            On March 30, 2022, Brazil’s Senate announced the creation of a commission tasked with drafting new AI regulation.  The commission will study existing frameworks, such as the EU’s, as inspiration for applying similar concepts in Brazil.  This approach parallels Brazil’s General Data Protection Law (“LGPD”), which mirrors the GDPR.  On April 4, 2022, Brazil’s Senate opened a public consultation on its AI strategy; interested stakeholders could submit responses until May 13, 2022.

India

            On February 22, 2022, the Indian Department of Telecommunications published a request for comment on a potential framework for fairness assessments of AI and ML systems.  Citing the risk of bias and the need for ethical principles in the design, development, and deployment of AI, the Department noted in particular that it seeks to establish voluntary fairness assessment procedures.

Jordan

            On February 9, 2022, Jordan’s Minister of Digital Economy and Entrepreneurship launched a public consultation on the National Charter of AI, which includes principles and guidelines intended to keep the application of AI within ethical bounds, responsibly promote innovation and creativity, and ensure an investment-stimulating economy.

China

            China is one of the first countries in the world to regulate AI algorithms.  Its AI algorithm regulations took effect on March 1, 2022, and require businesses to provide explainable AI algorithms that are transparent about their purpose.  The regulations also prohibit businesses that rely on AI algorithms from offering different prices to different people based on the personal data they collect.
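
By way of illustration only (the regulations prescribe no particular compliance test), a business might verify the no-personalized-pricing rule with a simple property check: two requests that differ only in personal data should return the same price.  The function and field names below are hypothetical.

```python
# Hypothetical check that a pricing function ignores personal data: quotes
# that differ only in user attributes must come back with identical prices.
from dataclasses import dataclass

@dataclass
class Request:
    product_id: str
    list_price: float
    user_age: int              # personal data: must not affect the price
    purchase_history_len: int  # personal data: must not affect the price

def price(req: Request) -> float:
    # A compliant rule keys the price to the product alone.
    return req.list_price

def check_no_personalized_pricing() -> None:
    a = Request("sku-1", 99.0, user_age=24, purchase_history_len=0)
    b = Request("sku-1", 99.0, user_age=63, purchase_history_len=250)
    assert price(a) == price(b), "price varies with personal data"

check_no_personalized_pricing()
```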

International Organizations

OECD

            On February 22, 2022, the OECD published the “Framework for the Classification of Artificial Intelligence Systems.”  The Framework’s primary purpose is to characterize the application of an AI system deployed in a specific project and context, although some aspects are also relevant to general AI systems.  Additionally, the Framework provides a baseline to:

  • promote a common understanding of AI by identifying the features of AI systems that matter most, helping governments and developers tailor policies to specific AI applications and identify or develop metrics to assess subjective criteria;
  • support sector-specific frameworks by providing the basis for more detailed applications or domain-specific catalogues of criteria in sectors such as healthcare and finance; and
  • support risk assessments by providing the basis to develop a risk assessment framework.

UNESCO

            On November 25, 2021, all UN Educational, Scientific and Cultural Organization (“UNESCO”) member states adopted the first global agreement on the ethics of AI.  The agreement classifies AI as technological systems that have the capacity to process information in a manner resembling intelligent behavior, typically including aspects of reasoning, learning, perception, prediction, planning, or control.  It focuses on the broader ethical implications of AI systems for UNESCO’s central domains of education, science, culture, communication, and information, and highlights core principles and values such as diversity and inclusiveness, fairness and non-discrimination, privacy, and human oversight and determination.

Trends on Regulating Robotics

            There has been an uptick in regulations imposed by countries around the world with direct relevance to robotics.  These broad categories of regulations include:

  • Data Protection
    • The United Nations International Children’s Emergency Fund (“UNICEF”) issued a Memorandum on Artificial Intelligence and Child Rights, which discusses how AI strategies impact children’s rights, including the right of portability of personal data and automated data processing.
  • Product Safety and Liability
    • The EU is reviewing its product liability rules to cover robotics through its legal framework for the safety of robotics.
    • Japan’s government has adopted a bill that will make driverless cars legal. 
    • Germany has adopted a bill that will allow driverless vehicles on public roads by 2022, laying the groundwork for companies to deploy “robotaxis” and delivery services in the country at scale.  While autonomous vehicle testing is currently permitted in Germany, the bill will allow operations of driverless vehicles without a human safety operator behind the wheel. 
  • Facial Recognition
    • In 2021, the Supreme People’s Court of China issued regulations for use of facial recognition technology by private businesses.
    • The European Data Protection Board has published draft guidelines on the use of facial recognition technology in the area of law enforcement.

Trends on Regulating Cybersecurity

            While 156 countries (80% of all countries) have enacted cybercrime legislation, the pattern varies significantly by region.

United States

            Every U.S. state has its own breach notification statute, which prescribes notice requirements for the unauthorized access or disclosure of certain types of personal information.  There are also efforts in Congress to create a uniform federal framework.  On March 2, 2022, the Senate unanimously passed the Strengthening American Cybersecurity Act of 2022, which would impose a 72-hour notification requirement on certain entities that own or operate critical infrastructure in the event of substantial cybersecurity incidents, as defined in the bill.  The bill has not yet been passed by the House of Representatives.  On March 23, 2022, the Healthcare Cybersecurity Act of 2022 was introduced in the Senate; it would direct the Cybersecurity and Infrastructure Security Agency (“CISA”) and the Department of Health and Human Services (“HHS”) to collaborate on improving cybersecurity measures across healthcare providers.

European Union

            In 2022, the EU is expected to adopt the Proposal for a Directive on measures for a high common level of cybersecurity across the Union (“NIS2 Directive”).  The NIS2 Directive would apply to entities providing services in the following sectors:

  • Essential Entities – Energy, transportation, banking, financial market infrastructure, drinking water, waste water, public administration, space, health (including research and manufacture of pharmaceutical products, and manufacture of medical devices critical during public health emergencies), and digital infrastructure sectors such as cloud computing providers, DNS service providers, and content delivery network providers.
  • Important Entities – Postal and courier services; waste management; chemicals; food; manufacturing of medical devices, computers and electronics, machinery equipment, and motor vehicles; and digital providers such as online marketplaces, search engines, and social networking service platforms.

            Each of these entities would have to implement various measures set out in the Directive to ensure that it can detect and manage the security risks to its networks and information systems.  The European Commission and member states may require these entities to obtain European cybersecurity certifications, and the Directive would oblige them to notify regulators and recipients of their services of incidents having a significant impact on the provision of those services.  Under the Directive, essential entities would be subject to ex ante regulation, while important entities would be subject to ex post regulation.

            Under the NIS2 Directive, member states would have to establish national cybersecurity frameworks that include a cybersecurity strategy, a crisis management framework, and competent authorities and computer security incident response teams.  The authorities would have to maintain a list of known vulnerabilities in network and information systems and pool them in a centralized database.  Authorities may also impose fines of up to the higher of €10 million or 2% of the worldwide annual turnover of the “undertaking” in the preceding financial year; for an undertaking with, say, €2 billion in annual turnover, 2% comes to €40 million, so that higher figure would govern.

United Kingdom

            As part of the UK’s National Cyber Strategy of 2022, on January 19, 2022, the UK Government launched a public consultation for a proposal for legislation to improve the UK’s cyber resilience (“UK Cyber Security Proposal”).  The objectives for the consultation are based on two pillars: (1) to expand the scope of digital services under the UK Network and Information Systems (“NIS”) Regulations in response to gaps and evolving threats to cybersecurity and (2) to update and future-proof the UK NIS Regulations in order to more easily manage future risks.  The feedback period ended on April 10, 2022.

Australia

            On March 31, 2022, the Security Legislation Amendment Bill of 2022 passed both houses of Australia’s Parliament.  The bill sets out a number of additional measures, including the obligation to adopt and maintain a Risk Management Program, the ability to declare Systems of National Significance, and enhanced cybersecurity obligations that may apply to these systems.  Australia’s Cyber and Infrastructure Security Centre (“CISC”) highlighted that the bill seeks to make risk management, preparedness, prevention, and resilience “business as usual” for the owners and operators of critical infrastructure assets and to improve information exchange between industry and the government. 

International Organizations

            On January 28, 2022, the Association of Southeast Asian Nations (“ASEAN”) Digital Ministers’ Meeting announced the launch of the ASEAN Cybersecurity Cooperation Strategy 2021-2025.  The ministers welcomed the draft strategy as an update to its predecessor, noting that an updated strategy was needed to respond to new cyber developments since 2017.

* * *

            We will provide other developments related to robotics on our blog.  To learn more about the work discussed in this post, please visit the Technology Industry and Data Privacy & Cybersecurity pages of our web site.  For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.

Technology equity markets took a sharp turn in the last two months of Q1 2022, with the S&P Technology Index down more than 18% in mid-March before closing the quarter down 7%.  Over the past month, Russia’s attack on Ukraine has rattled markets across all sectors and dented investor appetite amid increased volatility and uncertainty.  The decline in valuations reflects the combined headwinds of rising inflation and interest rates, as well as geopolitical uncertainty.

Russia’s invasion of Ukraine triggered an unprecedented phenomenon: global technology firms responded by suspending or terminating business operations, effectively self-sanctioning beyond regulatory requirements, often at great expense to their bottom lines.  This trend will likely continue: in 2022, decisions about where to invest and from whom to accept investment will be driven by ethical concerns as well as shifting geopolitical risks.  Yet, as this article shows, many tech businesses are struggling to fully abandon their presence in Russia.

This article highlights some of the ways in which the Ukraine crisis is changing tech M&A.

Expanded Scope of Due Diligence

As tech companies embark on M&A deals, proactive and effective risk management will be more essential than ever.  Enhanced focus on these issues is likely to translate into longer transaction timelines.

  • Sanctions:  The evolving sanctions regime has frozen the cross-border M&A market for Russian assets and for non-Russian assets owned or part-owned by Russian parties.  The first question for any M&A team is whether the deal is permitted under the current sanctions regime, which requires looking carefully at the ownership structure.  Buyers should watch for situations where sanctioned individuals hold their shares through proxies or non-sanctioned family members.  Recent changes in share ownership will be a definite diligence red flag.
    • It is common for transactions relating to Russian assets to be structured as overseas joint ventures, typically established in Cyprus, the Netherlands, Luxembourg, Malta, or Switzerland.  Tech companies looking to divest their stakes in these entities will need to consider the impact of EU/UK/U.S. sanctions, as well as increasing Russian counter-sanctions.  The restricted pool of potential buyers is likely to have a knock-on effect on valuations.
    • The sanctions regime is evolving rapidly, so this diligence check will need to be repeated regularly throughout the deal process as new sanctions measures are introduced.
  • Business Continuity:  Greater uncertainty and risk are also sharpening focus on the impact of the conflict in Ukraine on business continuity.  For tech companies that were already struggling with talent retention, the conflict has significantly reduced the talent available in the Ukraine tech hub, and the concern extends more broadly to Russia as the “brain drain” of top talent continues.  Tech companies looking to leave the Russian market will need to consider (i) whether it is possible to relocate existing employees; (ii) the availability of local talent in the new location; and (iii) the impact on the bottom line of severance costs, which in line with market practice are not insignificant.  The availability of talent in the region will likely be affected for some time after the conflict ends.
  • Commercial Contracts:  Buyers will be concerned about the enforceability of the target’s contractual arrangements.  Provisions such as material adverse change, change in law, and force majeure will be the focus of any diligence exercise over material contracts.  Even where a commercial agreement provides for arbitration as the dispute resolution mechanism, arbitration awards may need to be enforced by a court.  Parties will need to consider such provisions carefully in the context of the legislative and regulatory response to the Ukraine crisis.

Deal Execution: A (Simplified) Way Forward

Even where a transaction is permitted under the current sanctions regime, tech companies expanding their businesses through acquisitions should ensure that all contractual payments are front-loaded to the maximum extent permissible, to minimize the risk that new sanctions may make certain payments unlawful.  Other risk mitigation strategies include minimizing the gap between signing and completion and avoiding deferred consideration or significant holdbacks or escrow. 

Tech buyers should not assume that a MAC clause will give them a walk-away right if circumstances change between signing and closing a deal.  MAC clauses are rarely invoked and even more rarely upheld by courts, and they are unlikely to offer a buyer an “easy exit” unless the target is disproportionately impacted by the conflict.  Now that the conflict is several weeks old, sellers will increasingly argue that it is a known and assessable risk for the buyer that should be carved out of the scope of the MAC provision.

Tech M&A transactions continue to face intense regulatory scrutiny, in part due to political pressure on regulators to safeguard technology assets.  In recent years, a hot tech M&A market drew sharp focus to very significant break fees.  Parties will need to consider how to address this trend in a world where a break fee may not lawfully be paid to a sanctioned entity; any transfer of funds to a Russia-connected entity will require careful analysis for compliance with the sanctions regime.

*           *           *

Ukraine Crisis: Resources for Responding to the Impact of the Escalating Conflict

Our lawyers are actively engaged in advising clients on the full range of implications of the current conflict on their business and operations in Russia, Ukraine and globally. This includes advice on potential acquisitions and disposals of assets in Ukraine and Russia in light of the evolving sanctions regime, mitigating exposure to investments in Russia, managing legal and reputational risks for joint venture partners and commercial advice with respect to the impact on current business operations in the region. Our team includes lawyers with extensive experience representing clients on complex transactions and challenging situations in the region across a broad set of asset classes, as well as excellent relations with trusted local lawyers. Please visit the Ukraine and M&A pages on our web site to learn more about this work.

If you have any questions concerning the material discussed in this client alert, please contact the following members of our Mergers and Acquisitions practice:
Louise Nash                                       +44 20 7067 2028                  lnash@cov.com
Peter Laveran-Stiebar                       +1 212 841 1024                    plaveran@cov.com
Philipp Tamussino                            +49 69 768063 392                ptamussino@cov.com
Luciana Griebel                                 +44 20 7067 2268                  lgriebel@cov.com

            On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  One segment of the Robotics Forum covered risks of automation and AI, highlights of which are captured here.  A full recording of the Robotics Forum is available here until May 31, 2022.

            As AI and robotics technologies mature, their use cases are expected to expand into increasingly complex areas and to pose new risks.  Because lawsuits to date have settled before courts could decide liability questions, no settled case law yet identifies where liability rests among robotics engineers, AI designers, and manufacturers.  Scholars and researchers have proposed addressing these issues through products liability and discrimination doctrines, including the creation of new legal remedies specific to AI technology and particular use cases, such as self-driving cars.  Proposed approaches to liability under existing doctrines include:

  • Strict Liability Approach – Manufacturer Liability
    • Courts could apply the “consumer expectations” test where manufacturers would be responsible for defects in design or software that create unreasonably dangerous conditions.  Under this approach, there would be no need to show a reasonable alternative design.  Some argue that this approach would dampen innovation.
  • Negligence Approach
    • Courts could apply the “risk-utility” test where plaintiffs must show that adopting a reasonable design alternative could have reduced the foreseeable risks of harm the product posed.  Courts also could perform a cost-benefit analysis that balances the cost to the manufacturer for an alternative design in relation to the amount of harm reduced.
  • Breach of Warranty Approach
    • Commercial remedies could apply to robotics-related accidents.  The Uniform Commercial Code (“UCC”) governs many aspects of product warranties and commercial transactions, and some have argued that it also could govern robotics liability.  Express warranties are created when a seller promises something to a prospective buyer in association with the sale of goods.
  • Multiple Actor – Joint Liability
    • Under this approach, various parties involved in the design and use of a robotics product could be held liable for harms associated with the product’s performance or malfunction.  Such an approach could prove particularly challenging for complex technologies, such as self-driving cars.

            Stakeholders also must be mindful of how human bias can affect robotics and AI.  Bias in AI can arise via statistical bias, where an algorithm produces results that are not representative of the true population, or via social bias, where an algorithm treats groups unequally within a system.  A number of data practices can result in AI bias, such as: (1) relying on past biased data in a machine learning algorithm; (2) collecting data for use in AI that is non-representative or not impartial; (3) making broad generalizations with respect to data inputs or results; (4) relying on factors that become proxies for protected classes based on correlations in society; and (5) using the neutral face of AI to mask intentional discrimination.  The good news is that companies can proactively remedy potential bias or discrimination by avoiding these pitfalls, testing algorithms on diverse population sets, and following evolving legal developments and best practices.  One simple form of such testing is sketched below.
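
As a hedged sketch of what “testing algorithms on diverse population sets” might look like in practice, the snippet below computes a demographic parity gap, i.e., the difference in positive-outcome rates between groups, on fabricated data.  The metric choice, the data, and any threshold for concern are our assumptions, not legal requirements.

```python
# Sketch of a simple bias test: compare an algorithm's positive-outcome rate
# across population groups (demographic parity). Data here is fabricated.
from collections import defaultdict

def demographic_parity_gap(groups, outcomes):
    """Return (gap, per-group rates): gap is max minus min positive-outcome rate."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions (1 = approved) for two groups.
groups   = ["A", "A", "A", "B", "B", "B", "B"]
approved = [ 1,   1,   0,   1,   0,   0,   0 ]
gap, rates = demographic_parity_gap(groups, approved)
print(rates, f"gap={gap:.2f}")  # a large gap flags the model for closer review
```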

            We will provide additional updates about the 2022 Covington Robotics Forum and other developments related to robotics on our blog.  To learn more about our commercial litigation work, please visit the Commercial Litigation page of our web site.  For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.