In its August 5, 2022 affirmance of the district court’s grant of summary judgment, the Federal Circuit in Thaler v. Vidal ruled that the Patent Act unambiguously and directly answers the question of whether an AI software system can be listed as the inventor on a patent application. Since an inventor must be a human being, AI cannot be.

Judge Stark’s first authored precedential opinion since confirmation to the Federal Circuit aligns the U.S. position on whether AI can be listed as an inventor on a patent application with that of other major jurisdictions. Left for another day are questions such as the rights, if any, of AI systems, and whether AI systems can contribute to the conception of an invention.

PTO and Litigation Background of the DABUS Patent Applications

In July 2019, two patent applications were filed in the United States Patent and Trademark Office (PTO) that identified an AI system called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) as the sole inventor and Stephen L. Thaler as the Applicant and Assignee. DABUS, which was characterized as “a particular type of connectionist artificial intelligence” known as a “Creativity Machine” during prosecution and as “a collection of source code or programming and a software program” before the U.S. District Court for the Eastern District of Virginia, allegedly generated the subject matter of the two patent applications.

The filed patent applications specifically stated that the inventions were conceived by DABUS, and that DABUS should accordingly be named as the inventor. The PTO subsequently issued Notices stating that the applications did not identify each inventor by his or her legal name. In response to filed Petitions requesting that the PTO vacate the issued Notices, the PTO issued Petition Decisions refusing to vacate, explaining that a machine does not qualify as an inventor under the patent laws, and providing additional time to identify inventors by their legal name to avoid abandonment of the applications.

Thaler then sought judicial review under the Administrative Procedure Act in the Eastern District of Virginia, requesting an order compelling the PTO to reinstate the DABUS patent applications, and a declaration that a patent application for an AI-generated invention should not be rejected on the basis that no natural person is identified as an inventor. After briefing and oral argument, the district court issued an order denying Thaler’s requested relief and granting the PTO’s motion for summary judgment, recognizing the Federal Circuit’s consistent holdings under current patent law requiring inventors to be natural persons.

Continue Reading Federal Circuit Rules That Under The Patent Act An Inventor Must Be Human: So What Can Be Done To Patent AI Inventions?

On August 25, 2022, President Biden announced a new Executive Order (“EO”) addressing the Implementation of the CHIPS Act of 2022 (“CHIPS Act”).  The CHIPS Act was signed by President Biden on August 9, 2022, and, among other things, authorizes $39 billion in funding for new projects to establish semiconductor production facilities within the United States.  The new EO identifies the Administration’s implementation priorities for this CHIPS Act funding and creates the CHIPS Implementation Steering Council to aid with the rollout of administrative guidance.  In connection with the EO, the Department of Commerce launched a new website intended to serve as a centralized resource for potential applicants for CHIPS funding.  The EO and new website reflect the Administration’s intent to swiftly implement the CHIPS Act and increase the domestic production of semiconductors.

Continue Reading Biden Administration Announces Priorities for the Implementation of the CHIPS Act of 2022

Policymakers and candidates of both parties have increased their focus on how technology is changing society, including by blaming platforms and other participants in the tech ecosystem for a range of social ills even while recognizing them as significant contributors to U.S. economic success globally.  Republicans and Democrats have significant interparty—and intraparty—differences in the form of their grievances and on many of the remedial measures to combat the purported harms.  Nonetheless, the growing inclination to do more on tech has apparently driven one key congressional committee to compromise on previously intractable issues involving data privacy.  Rules around the use of algorithms and artificial intelligence, which have attracted numerous legislative proposals in recent years, may be the next area of convergence. 

Continue Reading Artificial Intelligence and Algorithms in the Next Congress

On July 13, 2022, the Federal Trade Commission published a notice of proposed rulemaking regarding the Motor Vehicle Dealers Trade Regulation Rule.  The Motor Vehicle Dealers Trade Regulation Rule is aimed at combating certain unfair and deceptive trade practices by dealers and promoting pricing transparency.  Comments on the proposed rule are due on or before September 12, 2022.

The proposed rule:

  • Prohibits dealers from making certain misrepresentations in the sales process, enumerated in proposed § 463.3.  The list of prohibited misrepresentations includes misrepresentations regarding the “costs or terms of purchasing, financing, or leasing a vehicle” or “any costs, limitation, benefit, or any other Material aspect of an Add-on Product or Service.”
  • Includes new disclosure requirements regarding pricing, financing and add-on products and services.  Notably, the proposed rule would obligate dealers to disclose the offering price in many advertisements and communications with consumers.
  • Prohibits charges for add-on products and services that confer no benefit to the consumer and prohibits charges for items without “Express, Informed Consent” from the consumer (which, notably, as defined, excludes any “signed or initialed document, by itself”).  The proposed rule outlines a specific process for presenting charges for add-on products and services to the consumer, which obligates the dealer to disclose and offer to close the transaction for the “Cash Price without Optional Add-Ons” and obtain confirmation in writing that the consumer has rejected that price.
  • Imposes additional record-keeping requirements on the dealer, in order to demonstrate compliance with the rule.  The record-keeping requirements apply for a period of 24 months from the date the applicable record is created.

The proposed rulemaking focuses only on “Dealers,” at a time when Tesla is now selling direct-to-consumer, Ford has announced its own plans to launch an e-commerce platform, and companies such as BMW have begun to unbundle services from vehicle sales and create new standalone offerings (see this recent article on subscription seat warmers).  Under the proposed rule, to meet the definition of a “Dealer,” a person or entity must be “predominantly engaged in the sale and servicing of motor vehicles, the leasing and servicing of motor vehicles, or both” (emphasis added). 

Gesturing at some of the developments in automotive sales models, Commissioner Christine Wilson dissented, expressing her concern that despite the “best of intentions”, a complex regulatory scheme could “stifle innovation”. She requested comment on (among other items) “Anticipated changes in the automobile marketplace with respect to technology, marketing, and sales, and whether it is possible to future-proof the proposed Rule so that it avoids inhibiting beneficial changes in these areas.”

On July 5, 2022, the Cybersecurity and Infrastructure Security Agency (“CISA”) and the National Institute of Standards and Technology (“NIST”) strongly recommended that organizations begin preparing to transition to a post-quantum cryptographic standard.  The term “post-quantum cryptography,” often referred to as “quantum-resistant cryptography,” covers cryptographic algorithms or methods that are assessed not to be specifically vulnerable to attack by either a cryptanalytically relevant quantum computer (“CRQC”) or a classical computer.  NIST has announced that a new post-quantum cryptographic standard will replace current public-key cryptography, which is vulnerable to quantum-based attacks.  NIST does not intend to publish the new post-quantum cryptographic standard for commercial products until 2024 but urges companies to begin preparing now by following the Post-Quantum Cryptography Roadmap.

Continue Reading CISA and NIST Urge Companies to Prepare to Transition to a Post-Quantum Cryptographic Standard

This quarterly update summarizes key federal legislative and regulatory developments in the second quarter of 2022 related to artificial intelligence (“AI”), the Internet of Things, connected and automated vehicles (“CAVs”), and data privacy, and highlights a few particularly notable developments in U.S. state legislatures.  To summarize, in the second quarter of 2022, Congress and the Administration focused on addressing algorithmic bias and other AI-related risks and introduced a bipartisan federal privacy bill.

Continue Reading U.S. AI, IoT, CAV, and Data Privacy Legislative and Regulatory Update – Second Quarter 2022

Recent months have seen a growing trend of data privacy class actions asserting claims for alleged violations of federal and state video privacy laws.  In this year alone, plaintiffs have filed dozens of new class actions in courts across the country asserting claims under the federal Video Privacy Protection Act (“VPPA”), Michigan’s Preservation of Personal Privacy Act (“MPPPA”), and New York’s Video Consumer Privacy Act (“NYVCPA”).

Continue Reading Emerging Trends: Renewed Wave of Video Privacy Class Actions

On June 3, the New York State legislature passed its version of a right to repair bill—titled the “Digital Fair Repair Act”—that would allow consumers to repair their digital electronic equipment without involving the manufacturer.

Continue Reading Right to Repair: New York State Passes Right to Repair Law

Facial recognition technology (“FRT”) has attracted a fair amount of attention over the years, including in the EU (e.g., see our posts on the European Parliament vote and CNIL guidance), the UK (e.g., ICO opinion and High Court decision) and the U.S. (e.g., Washington state and NTIA guidelines). This post summarizes two recent developments in this space: (i) the UK Information Commissioner’s Office (“ICO”)’s announcement of a £7.5-million fine and enforcement notice against Clearview AI (“Clearview”), and (ii) the EDPB’s release of draft guidelines on the use of FRT in law enforcement.

Continue Reading Facial Recognition Update: UK ICO Fines Clearview AI £7.5m & EDPB Adopts Draft Guidelines on Use of FRT by Law Enforcement

            On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics.  Sam Jungyun Choi, Associate in Covington’s Technology Regulatory Group, and Anna Oberschelp, Associate in Covington’s Data Privacy & Cybersecurity Practice Group, discussed global regulatory trends that affect robotics, highlights of which are captured here.  A recording of the forum is available here until May 31, 2022.

Trends on Regulating Artificial Intelligence

            According to the Organization for Economic Cooperation and Development (“OECD”) Artificial Intelligence Policy Observatory, since 2017, at least 60 countries have adopted some form of AI policy, a torrent of government activity that nearly matches the pace of modern AI adoption.  Countries around the world are establishing governmental and intergovernmental strategies and initiatives to guide the development of AI.  These AI initiatives include: (1) AI regulation or policy; (2) AI enablers (e.g., research and public awareness); and (3) financial support (e.g., procurement programs for AI R&D).  The anticipated introduction of AI regulations raises concerns about looming challenges for international cooperation.

United States

            The U.S. has not yet enacted comprehensive AI legislation, though many AI initiatives have emerged at both the state and federal levels.  The number of proposed federal bills with AI provisions grew from 2 in 2012 to 131 in 2021.  Despite the dramatic increase in bills introduced, the number actually enacted by the U.S. Congress remains low, with only 2% of the proposed bills ultimately becoming law. 

            At the same time, U.S. state legislation, whether focused on AI technologies or comprehensive privacy bills with AI provisions, has passed at much higher rates than its federal counterparts.  Some states have proposed bills that would regulate AI technologies within a broader data protection framework, such as the laws recently passed in Virginia, Colorado, and Connecticut, which set forth requirements for certain profiling activities that could implicate AI.  In addition, states have introduced bills and passed laws that directly regulate AI technologies, such as Colorado’s statute setting forth requirements for the use of AI technologies in the insurance space.  In contrast to the 2% pass rate at the federal level, 20% of the 131 state-proposed bills with AI provisions were passed into law in 2021.  Massachusetts proposed the most AI-related bills in 2021 with 20, followed by Illinois with 15 and Alabama with 12.

            Another emerging trend in the U.S. is to regulate the use of AI at the sector-specific level, such as the use of AI by financial institutions, healthcare organizations, or in other regulated contexts.  For example, the Food and Drug Administration (“FDA”) has outlined a plan with the agency’s intended actions to further develop a regulatory framework for applications of AI and machine learning within the FDA’s authority.

European Union

            On April 22, 2021, the European Commission published a proposal for AI regulation as part of its broader “AI package,” which includes (i) a legal framework (the EU Artificial Intelligence Act, proposed in April 2021) to address rights and safety risks, (ii) a review of the existing rules on liability (e.g., product liability in the EU) that could apply to AI systems, and (iii) revisions to sector-specific safety regulations (e.g., the EU Machinery Regulation). 

            The material scope of the proposal would apply to “AI systems,” which are defined as systems that (i) receive machine or human inputs or data; (ii) infer how to achieve certain objectives using specified “techniques and approaches,” defined as machine learning (“ML”), logic- or knowledge-based, and statistical processes; and (iii) generate outputs such as content (audio, video, or text), recommendations, or predictions.  The proposal would be relevant to the entire chain of actors: providers, manufacturers, distributors, importers, and users of AI systems.  The territorial scope of the proposal extends to AI systems “placed” or “used” in the EU, or to AI systems used outside of the EU but whose “outputs” are used in the EU.

            The EU model adopts a “risk-based” approach to regulate AI systems by creating four categories of risk: (1) unacceptable, (2) high, (3) limited, and (4) minimal.  AI systems with unacceptable risk would be banned and deemed to present a “clear threat to safety, livelihood, and rights.”  AI systems with high risk would be heavily regulated — including through pre-market conformity assessments.  AI systems with limited risk would be subject to transparency obligations toward users, and AI systems with minimal risk could be freely used, though adherence to codes of conduct would be encouraged.

United Kingdom

            The UK is taking an innovation-friendly approach to AI regulation.  On September 22, 2021, the UK Government published the “UK AI Strategy,” a 10-year strategy with three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all UK industry sectors and geographic regions; and (3) ensuring that the UK gets “right” the national and international governance of AI technologies.

            The UK AI Strategy’s pro-innovation outlook aligns with the UK Government’s “Plan for Digital Regulation,” published in July 2021.  The UK AI Strategy notes that, while the UK currently regulates many aspects of the development and use of AI through cross-sectoral legislation (including competition, data protection, and financial services law), the sector-led approach can lead to overlaps or inconsistencies.  To remove potential inconsistencies, the UK AI Strategy’s third pillar proposes publishing a white paper on regulating AI by early 2022 that will set out the risks and harms of AI and outline proposals to address them.


Brazil

            On March 30, 2022, Brazil’s Senate announced the creation of a commission tasked with drafting new regulation on AI.  The Commission will study existing regulatory approaches, such as the EU’s, for concepts that could be applied in Brazil.  Brazil’s approach to AI is similar to that taken with Brazil’s General Data Protection Law (“LGPD”), which mirrors the GDPR.  On April 4, 2022, Brazil’s Senate opened a public consultation on its AI strategy, and interested stakeholders could submit responses until May 13, 2022.


India

            On February 22, 2022, the Indian Department of Telecommunications published a request for comment on a potential framework for fairness assessments of AI and ML systems.  In light of bias and the need for ethical principles in the design, development, and deployment of AI, the Department noted in particular that it seeks to establish voluntary fairness assessment procedures.


Jordan

            On February 9, 2022, Jordan’s Minister of Digital Economy and Entrepreneurship launched a public consultation on the National Charter of AI, which includes principles and guidelines that support the application of AI within ethical principles, responsibly promote innovation and creativity, and ensure an investment-stimulating economy.


China

            China is one of the first countries in the world to regulate AI algorithms.  China’s AI algorithm regulations took effect on March 1, 2022; they require businesses to provide explainable AI algorithms that are transparent about their purpose.  The regulations also prohibit businesses that rely on AI algorithms from offering different prices to different people based on personal data that they collect.

International Organizations


OECD

            On February 22, 2022, the OECD published the “Framework for the Classification of Artificial Intelligence Systems.”  The Framework’s primary purpose is to characterize the application of an AI system deployed in a specific project and context, although some aspects are also relevant to general AI systems.  Additionally, the Framework provides a baseline to:

  • promote a common understanding of AI by identifying the features of AI systems that matter most, helping governments and developers tailor policies to specific AI applications and identify or develop metrics to assess subjective criteria;
  • support sector-specific frameworks by providing the basis for more detailed applications or domain-specific catalogues of criteria in sectors such as healthcare and finance; and
  • support risk assessments by providing the basis to develop a risk assessment framework.


UNESCO

            On November 25, 2021, all UN Educational, Scientific and Cultural Organization (“UNESCO”) member states adopted the first global agreement on the ethics of AI.  The agreement defines AI as technological systems that have the capacity to process information in a manner resembling intelligent behavior, typically including aspects of reasoning, learning, perception, prediction, planning, or control.  The agreement focuses on the broader ethical implications of AI systems in relation to UNESCO’s central domains of education, science, culture, communication, and information, and highlights core principles and values such as diversity and inclusiveness, fairness and non-discrimination, privacy, and human oversight and determination.

Trends on Regulating Robotics

            There has been an uptick in regulations imposed by countries around the world with direct relevance to robotics.  These broad categories of regulation include:

  • Data Protection
    • The United Nations Children’s Fund (“UNICEF”) issued a Memorandum on Artificial Intelligence and Child Rights, which discusses how AI strategies affect children’s rights, including the right to portability of personal data and automated data processing.
  • Product Safety and Liability
    • The EU is reviewing its product liability rules so that its legal framework covers the safety of robotics.
    • Japan’s government has adopted a bill that will make driverless cars legal. 
    • Germany has adopted a bill that will allow driverless vehicles on public roads by 2022, laying the groundwork for companies to deploy “robotaxis” and delivery services in the country at scale.  While autonomous vehicle testing is currently permitted in Germany, the bill will allow operations of driverless vehicles without a human safety operator behind the wheel. 
  • Facial Recognition
    • In 2021, the Supreme People’s Court of China issued regulations for use of facial recognition technology by private businesses.
    • The European Data Protection Board has published draft guidelines on the use of facial recognition technology in the area of law enforcement.

Trends on Regulating Cybersecurity

            While 156 countries (80% of all countries) have enacted cybercrime legislation, the pattern varies significantly by region.

United States

            Every U.S. state has its own breach notification statute, which prescribes notice requirements for the unauthorized access or disclosure of certain types of personal information.  Additionally, there are efforts to create a uniform federal framework in Congress.  On March 2, 2022, the Senate unanimously passed the Strengthening American Cybersecurity Act of 2022, which would impose a 72-hour notification requirement on certain entities that own or operate critical infrastructure in the event of substantial cybersecurity incidents, as defined in the bill.  The bill has not yet been passed by the House of Representatives.  On March 23, 2022, the Senate introduced the Healthcare Cybersecurity Act of 2022, which would direct the Cybersecurity and Infrastructure Security Agency (“CISA”) and the Department of Health and Human Services (“HHS”) to collaborate on how to improve cybersecurity measures across healthcare providers.

European Union

            In 2022, the EU is expected to adopt the Proposal for a Directive on Measures for a High Common Level of Cybersecurity Across the Union (“NIS2 Directive”).  The NIS2 Directive would apply to entities providing services in the following sectors:

  • Essential Entities – Energy; transportation; banking; financial market infrastructure; drinking water; waste water; public administration; space; health; research and manufacture of pharmaceutical products; manufacture of medical devices critical during public health emergencies; and digital infrastructure sectors such as cloud computing providers, DNS service providers, and content delivery network providers.
  • Important Entities – Postal and courier services; waste management; chemicals; food; manufacturing of medical devices, computers and electronics, machinery equipment, and motor vehicles; and digital providers such as online marketplaces, search engines, and social networking service platforms.

            Each of these entities would have to implement various measures set out in the Directive to ensure that they can detect and manage the security risks to their networks and information systems.  The European Commission and member states may require these entities to obtain European cybersecurity certifications, and the Directive would oblige them to notify regulators and recipients of their services of incidents having a significant impact on the provision of those services.  Under this Directive, essential entities are subject to ex ante regulation, while important entities are subject to ex post regulation.

            Under the NIS2 Directive, member states would have to establish national cybersecurity frameworks that include a cybersecurity strategy, a crisis management framework, and competent authorities and computer security incident response teams.  The authorities must maintain a list of known vulnerabilities in network and information systems and pool them in a centralized database.  Authorities may also impose fines of up to the higher of €10 million or 2% of the worldwide annual turnover of the “undertaking” for the preceding financial year.

United Kingdom

            As part of the UK’s National Cyber Strategy of 2022, on January 19, 2022, the UK Government launched a public consultation for a proposal for legislation to improve the UK’s cyber resilience (“UK Cyber Security Proposal”).  The objectives for the consultation are based on two pillars: (1) to expand the scope of digital services under the UK Network and Information Systems (“NIS”) Regulations in response to gaps and evolving threats to cybersecurity and (2) to update and future-proof the UK NIS Regulations in order to more easily manage future risks.  The feedback period ended on April 10, 2022.


Australia

            On March 31, 2022, the Security Legislation Amendment Bill of 2022 passed both houses of Australia’s Parliament.  The bill sets out a number of additional measures, including the obligation to adopt and maintain a Risk Management Program, the ability to declare Systems of National Significance, and enhanced cybersecurity obligations that may apply to these systems.  Australia’s Cyber and Infrastructure Security Centre (“CISC”) highlighted that the bill seeks to make risk management, preparedness, prevention, and resilience “business as usual” for the owners and operators of critical infrastructure assets and to improve information exchange between industry and the government. 

International Organizations

            On January 28, 2022, the Association of Southeast Asian Nations’ (“ASEAN”) Digital Ministers’ Meeting announced the launch of the ASEAN Cybersecurity Cooperation Strategy of 2021-2025.  The Ministers welcomed the draft strategy as an update to the previous strategy, noting that the update is needed to respond to new cyber developments since 2017.

* * *

            We will provide updates on other developments related to robotics on our blog.  To learn more about the work discussed in this post, please visit the Technology Industry and Data Privacy & Cybersecurity pages of our website.  For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.