Today, the Supreme Court issued its decision in Barr v. American Association of Political Consultants, which addressed the constitutionality of the Telephone Consumer Protection Act (TCPA). Although the Court splintered in its reasoning—producing four separate opinions—the justices nevertheless coalesced around two core conclusions: (1) the TCPA’s exception for government debt collection calls is unconstitutional, and (2) the exception can be severed from the rest of the TCPA. Six justices determined that the TCPA’s government-debt exception violates the First Amendment, and seven justices concluded that the exception is severable from the rest of the statute. The end result is that the government-debt exception is invalid but the rest of the TCPA—including its general prohibition on automated calls and text messages to mobile numbers—remains intact. The narrow scope of this ruling suggests that it may have limited practical effect for most parties.
On June 4, 2020, Representatives Anna Eshoo (D-CA-18), Anthony Gonzalez (R-OH-16), and Mikie Sherrill (D-NJ-11) introduced the National AI Research Resource Task Force Act. This bipartisan bill would create a task force to propose a roadmap for developing and sustaining a national research cloud for AI. The cloud would help provide researchers with access to computational resources and large-scale datasets to foster the growth of AI.
“AI is shaping our lives in so many ways, but the true potential of it to improve society is still being discovered by researchers,” explained Rep. Eshoo. “I’m proud to introduce legislation that reimagines how AI research will be conducted by pooling data, compute power, and educational resources for researchers around our country. This legislation ensures that our country will retain our global lead in AI.”
Earlier this week, the Federal Communications Commission’s (FCC’s) Consumer and Governmental Affairs Bureau released a Declaratory Ruling clarifying the agency’s interpretation of the “Automatic Telephone Dialing System” (an “autodialer” or “ATDS”) definition in the Telephone Consumer Protection Act (TCPA). The Ruling clarified that, in the context of a call or text message platform, the definition does not turn on whether the platform is used by others to transmit a large volume of calls or text messages; instead, the relevant inquiry is whether the platform is capable of transmitting calls or text messages without a user manually dialing each one.
Earlier this month, the Federal Communications Commission (“FCC”) asked for comment on a Petition for Rulemaking filed by the Consumer Technology Association (“CTA”) that proposes to modify the FCC’s device authorization rules to allow the importation and conditional, preauthorization marketing and sales of radiofrequency (“RF”) devices that have not yet been approved under the FCC’s rules. The deadline for filing comments supporting or opposing the petition is July 9, 2020.
The COVID-19 pandemic is accelerating the digital transition and the adoption of artificial intelligence (“AI”) tools and Internet of Things (“IoT”) devices in many areas of society. While there has been significant focus on leveraging this technology to fight the pandemic, the technology also will have broader and longer-term benefits. As the New York Times has explained, “social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation.”
For businesses proceeding with reopenings over the coming weeks and months, and for sectors that have continued to operate, AI and IoT technologies can greatly improve the way they manage their operations, safely engage with customers, and protect employees during the COVID-19 crisis and beyond. But businesses also should take steps to ensure that their use of AI and IoT technologies complies with the evolving legal requirements that can vary based on several factors, including the industry sector where the technology is deployed and the jurisdiction where it is used. Businesses also will want to have mechanisms in place to help ensure that the technology is used appropriately, including appropriate oversight and workforce training and other measures.
On June 2, 2020, the French Supervisory Authority (“CNIL”) published a paper on algorithmic discrimination prepared by the French independent administrative authority known as “Défenseur des droits”. The paper is divided into two parts: the first part discusses how algorithms can lead to discriminatory outcomes, and the second part includes recommendations on how to identify and minimize algorithmic biases. This paper follows from a 2017 paper published by the CNIL on “Ethical Issues of Algorithms and Artificial Intelligence”.
Senators Maria Cantwell (D-WA) and Bill Cassidy (R-LA) introduced bipartisan legislation this week to address privacy issues in the COVID-19 era. The proposal, entitled the “Exposure Notification Privacy Act,” would regulate “automated exposure notification services” developed to respond to COVID-19. This bipartisan legislation comes on the heels of dueling privacy proposals from both political parties. We previously analyzed on this blog the Republican “COVID-19 Consumer Data Protection Act” proposal introduced by Senate Commerce Chairman Roger Wicker (R-MS), as well as the Democratic “Public Health Emergency Privacy Act” proposal. Below are descriptions of the notable provisions in the Exposure Notification Privacy Act:
- In contrast to the Wicker proposal and the proposal introduced by House and Senate Democrats, both of which would cover symptom tracking and other apps, this new bipartisan proposal would be narrower, regulating only operators of so-called “automated exposure notification services.” This is defined as any website or mobile application designed for use, or marketed, to digitally notify “an individual who may have become exposed to an infectious disease.” Operators can be both for-profit and non-profit entities.
- However, the definition of covered personal data is broader than some earlier proposals that only covered certain categories of health and location data. The new proposal covers all data linked or reasonably linkable to any individual or device that is “collected, processed, or transferred in connection with an automated exposure notification service.” This definition is broader than the Republican proposal, which defined covered data to include health information, geolocation data, and proximity data. It is also broader than the Democratic proposal, which included the same data elements as the Republican proposal while also covering certain medical testing data and contact information.
- Under the bipartisan bill, operators may not enroll individuals in automated exposure notification services without their affirmative express consent, which is the same as both the Democratic and Republican proposals.
- However, the new proposal would limit the ability of technologies to collect, process, or share an actual, potential, or presumptive positive diagnosis of an infectious disease except when such diagnosis is confirmed by a public health authority or a licensed health provider.
- The proposal requires operators to “collaborate with a public health authority in the operation” of their notification service.
- The bill includes certain transfer restrictions. Covered data may only be transferred for certain enumerated purposes, such as to notify enrolled individuals of potential exposure to an infectious disease, or to public health authorities or contracted service providers.
- The bill obligates operators to delete all covered data upon request of the individual, as well as within 30 days of the receipt of such data, on either a rolling basis or “at such times as is consistent with a standard published by a public health authority within an applicable jurisdiction.” Such deletion requirements do not apply to data retention for public health research purposes.
- The bill distinguishes between operators and service providers, and only a subset of obligations—such as data deletion requirements—apply to service providers. Service providers with “actual knowledge” that an operator has failed to adhere to certain standards required under the proposal would be obligated to notify the operator of the potential violation.
- Similar to the Democratic proposal, this bill makes it unlawful for “any person or entity” to discriminate on the basis of “covered data collected or processed through an automated exposure notification service” or their choice “to use or not use” such a service.
- While the Democratic and Republican proposals imposed public reporting obligations on covered entities, this bipartisan proposal would impose such an obligation on the federal Privacy and Civil Liberties Oversight Board. Under the proposal, the Board would be required to issue a report within one year after enactment that assesses “the impact on privacy and civil liberties of Government activities in response to the public health emergency related to” COVID-19 and makes recommendations for the future.
As with both the Republican and Democratic proposals, the Exposure Notification Privacy Act’s enforcement provisions name both the Federal Trade Commission and state Attorneys General. Notably, the Act preserves the right for individuals to bring claims arising under various state laws, including consumer protection laws, health privacy or infectious diseases laws, civil rights laws, state privacy and data breach notification laws, and under contract or tort law.
In February 2020, the Trump Administration released the American Artificial Intelligence Initiative’s One Year Annual Report, detailing the Administration’s progress since launching its “American Artificial Intelligence Initiative” by signing Executive Order 13859 on February 11, 2019. The Administration’s Report highlights areas where it identifies progress in advancing the United States’ competitive position in fostering advancement in AI technology, including limiting new regulations, increasing funding and tracking of federal AI research and development, increasing access to federal data and computing resources, and promoting international collaboration.
Artificial intelligence (“AI”) is expanding in many industries and could add approximately $13 trillion to the global economy by 2030. Many organizations, both public and private, have invested substantial resources in AI research and development (R&D). The United States, the European Union, Canada, China and many other countries have developed, or are developing, a national AI strategy that, in many cases, contemplates significant government investment in AI. Global investment in AI start‑ups has increased steadily, from $1.3B in 2010 to over $40.4B in 2018, at an average annual growth rate exceeding 48%. While the global pandemic has dampened economic growth, focus continues on maximizing AI to address COVID-19 and other important needs.
Not surprisingly, investment in AI R&D has given rise to a substantial increase in AI‑related intellectual property (“IP”). The U.S. Patent and Trademark Office (“USPTO”) published over 27,000 AI‑related patent applications since 2017, with over 16,000 of them published within the past eighteen months. The World Intellectual Property Organization (“WIPO”) has reported similar increases in AI‑related patent filings globally. Additionally, organizations continue to invest in developing AI algorithms, software, and data assets.
AI also has emerged as an important tool for IP development. For example, many pharmaceutical companies use AI in drug discovery. Advertisers and others leverage AI to create content. These and other activities can result in AI outputs, such as new drugs or content, and incremental improvements to AI algorithms, all of which may be valuable IP.
10 Best Practices for AI-Related Intellectual Property
Organizations should protect their AI‑related IP given its potential value. For S&P 500 companies in 2018, IP and other intangibles represented 84% of company value. However, developing a strategy for harnessing this value may face some hurdles as the AI‑IP legal landscape continues to evolve. For example, WIPO, the European Patent Office (“EPO”), the USPTO, the U.S. Copyright Office and other governmental agencies are examining many AI‑related IP issues, including AI inventorship, patent eligibility, written description and enablement requirements, data issues, and AI‑related copyright issues.
To maximize protection for AI‑related IP while policy deliberations continue, organizations can follow these 10 best practices.
1. Develop an IP Strategy and Procedures.
Organizations should have a written IP strategy, and procedures for implementing it, that streamline (1) the identification of IP assets, (2) assessment of their importance to the business, and (3) determination of how best to protect the IP. Options for protection include patent, copyright, trade secret, trademark, and contract, and organizations frequently employ a combination of protections. For instance, algorithms often are protected by copyright, trade secret, and contract. The IP strategy and procedures should prioritize protection for valuable IP, take into account that existing laws may change, and be modified, as needed, as those laws evolve. They also should include steps for reducing the risk of third-party infringement and other IP claims, and should address trademark, social media, and other IP matters.
2. Assess Whether Inventions are Patent-Eligible.
When considering patenting AI-related inventions, organizations must carefully answer the threshold question of whether such inventions qualify for patent protection. This analysis may be complicated because patents are territorial, and patent subject matter eligibility requirements vary among jurisdictions, particularly for AI-related inventions. For example, in the U.S., the broad statutory patent eligibility language has been interpreted by the Supreme Court to exclude abstract ideas, laws of nature, and natural phenomena (including products of nature). Recent cases have established a two-step test, known as the Alice/Mayo framework, for determining whether a patent claim is directed to patent-eligible subject matter. In Europe, while a computer program may not be patentable, artificial intelligence and machine learning that serves or achieves a technical purpose may qualify. To address these issues, organizations should identify the countries where they desire patent protection for their AI inventions and assess whether such inventions satisfy the applicable subject matter eligibility criteria. If so, patent applications must be prepared to address such criteria and the organization’s objectives. If patent protection seems infeasible, the organization should consider trade secret protection or another alternative.
3. Determine Inventorship and Secure Ownership of AI‑Related Inventions.
Patenting inventions developed using AI, such as those that may arise in the drug discovery context mentioned above, raises new issues. Specifically, patent applications must identify the inventors. However, the United Kingdom Intellectual Property Office (“UKIPO”), the EPO, and the USPTO have recently stated that inventors must be human, and do not allow AI tools to be named as inventors. Consequently, when preparing patent applications for AI-related inventions, organizations should consider the particular circumstances pertaining to the conception and reduction to practice of the inventions in order to identify who should be named as inventor(s). Identifying inventorship can have important implications for patent ownership. In the U.S., the inventor(s) own the patent application, absent an agreement or other arrangement to the contrary. Given the potential difficulties in identifying the inventors and the evolving nature of the law, organizations should ensure that all potential inventors have assigned or otherwise conveyed, in many cases by contract, any rights they may have in the patent application to the organization.
4. Comply with Written Description and Enablement Requirements.
When preparing AI-related patent applications, organizations should consider how to disclose the invention. Under U.S. law, patent applications must include a written description that demonstrates that the inventor(s) had possession of the invention at the time of filing and that enables persons of “ordinary skill in the art” to make and use the invention. This written description is intended to advance public knowledge in exchange for granting a monopoly. How best to comply with the written description requirement may depend upon various factors, including the nature of the invention and the information that is available. For example, if the patent application relates to an improvement to pre-existing AI that is not well-known or widely available, then a relatively detailed disclosure may be needed to describe and enable the invention. However, if the pre-existing AI is widely known or available, a higher-level description may suffice.
5. Protect Trade Secrets.
Trade secrets typically represent an important part of an organization’s IP portfolio. Trade secrets may be preferable to patents in several circumstances, such as when (1) the patentability requirements, including those mentioned above, may not be satisfied, (2) the cost of pursuing patent protection outweighs the benefits, or (3) the need for potential IP protection extends beyond the available patent term. Organizations should have policies to protect the confidentiality and security of their trade secrets. These policies should take into account the amount of remote access and work, such as during the pandemic, and include measures to guard against unauthorized disclosure and use of trade secrets and to investigate and remediate actual or suspected misappropriations. Organizations often implement these policies by using various measures, including physical and technical controls, non‑disclosure agreements, training, audits, and other procedures.
6. Determine Authorship and Ownership of AI‑Generated Copyrighted Works.
For copyrights, determining authorship, and in turn securing ownership of copyrights in AI-generated works, presents novel questions analogous to those raised in the patent context. For instance, the UK’s Copyright, Designs and Patents Act 1988 provides that when “there is no human author” of a computer-generated work, the author “shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” Similarly, the U.S. Copyright Office has stated that it will register copyrights only for original works of authorship created by humans. However, identifying the human authors of AI-generated works is not necessarily easy. For example, some U.S. case law suggests that the author of an AI program will be deemed to be the author of outputs generated by such program if the program, as opposed to the end user, did the “lion’s share of the work” to generate such outputs. Depending upon the circumstances, determining whether the program did the “lion’s share of the work” may be challenging. Authorship also can have important implications for copyright ownership. In the U.S., authors own the copyright, absent an agreement, work-for-hire, or another arrangement to the contrary. As with patents, securing rights from all potential authors, including in many cases by contract, can be important for addressing ownership, including for AI outputs and trained algorithms.
7. Protect Data Rights.
Protecting rights in training data, AI data outputs, and other important data also requires careful attention. Under U.S. law, data is not copyrightable because “facts” are not original works of authorship. However, limited copyright protection may be available for how the data is selected, coordinated, or arranged. Similarly, EU law affords copyright protection to databases that are “original” in the selection or arrangement of their contents. Europe also provides for a sui generis database right, which provides limited protection to databases if significant investments have been made to obtain, verify, or present their contents. Organizations can rely on trade secret or similar laws to protect data, so long as appropriate measures are implemented to protect the confidentiality of the data. Organizations also commonly utilize contracts to protect data.
8. Manage Text and Data Mining and Similar Activities.
Organizations increasingly are using text and data mining (TDM) and similar means to obtain AI training data and should ensure that these activities do not violate third party rights or applicable laws or agreements. In the EU, the 2019 Digital Single Market Directive defines “text and data mining” as “any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations.” This Directive requires EU Member States to implement certain exceptions to copyright infringement for these activities. Organizations can rely on these exceptions, so long as IP owners have not exercised their rights to prohibit TDM. In the U.S., a patchwork of laws potentially may apply to TDM and similar activities, such as the fair use copyright exception, trademark law, contract law, the Computer Fraud and Abuse Act, and state law. In sum, organizations engaging in TDM and similar activities should familiarize themselves with applicable laws and agreements and tailor their practices to comply with them.
9. Evaluate Broader Data Policies.
Organizations also should evaluate the broader legal landscape pertaining to data. For instance, the European Commission recently issued a communication on “A European strategy for data”. This communication focuses on enabling the EU to realize its potential in the data economy by (1) introducing a cross‑sectoral governance framework for data access and use, (2) improving the EU’s data‑processing infrastructure and creating interoperability standards, (3) investing in skills and small and medium enterprises, and (4) creating common European data spaces in strategic sectors, such as health, finance, agriculture and energy. The developments that follow this communication could impact how organizations protect their data.
10. Maximize Contracts.
As mentioned above, contracts can help secure and allocate IP rights, including for training data, AI outputs, and algorithms. Consequently, organizations should evaluate how best to utilize contracts to achieve their objectives and carefully craft appropriate contractual terms. In addition, organizations should familiarize themselves with the growing number of “free” standard form agreements used to make certain IP available, such as open source and Creative Commons licenses. There are many versions of these licenses with varying terms. Open source licenses often are used for making software freely available, while Creative Commons licenses often are used to make other copyrighted works and databases available on a no‑cost basis. Organizations should assess the various forms of these licenses and consider how they might be used on an in‑bound and out‑bound basis to further their business objectives.
While there is no “one-size-fits-all” approach to protecting and maintaining AI-related IP rights, by following the best practices outlined above, organizations should be able to develop and implement IP strategies and procedures that further their business objectives.
*Lee Tiedrich is a partner at Covington & Burling LLP and Co-Chair of the global and multi‑disciplinary Artificial Intelligence Initiative. Gregory Discher is Of Counsel and Fredericka Argent and Daniel Rios are associates at the firm. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
Reflecting the heightened interest in 5G and related cybersecurity concerns, the National Telecommunications and Information Administration (NTIA) has requested public comment on the implementation of its National Strategy to Secure 5G. Stakeholders with interests in telecommunications infrastructure and security—and any parties interested in 5G generally—currently have the opportunity to provide input on the plan that will carry out the Administration’s 5G strategy.
From now until June 18, 2020, the NTIA will accept public comments as part of its efforts to develop an implementation plan for its National Strategy to Secure 5G. The plan is being developed pursuant to the Secure 5G and Beyond Act of 2020, which President Trump signed into law on March 23. The NTIA published its National Strategy the same day.