Companies have increasingly leveraged artificial intelligence (“AI”) to facilitate decisions in the extension of credit and financial lending, as well as in hiring. AI tools have the potential to make these processes more efficient, but they have also recently faced scrutiny for AI-related environmental, social, and governance (“ESG”) risks. Such risks include ethical issues related to the use of facial recognition technology or embedded biases in AI software that may perpetuate racial inequality or have a discriminatory impact on minority communities. ESG and diversity, equity, and inclusion (“DEI”) advocates, along with federal and state regulators, have begun to examine the potential benefit and harm of AI tools vis-à-vis such communities.
On November 3, the FTC announced that it entered into a significant $100 million settlement with Vonage to resolve allegations relating to the internet phone service provider’s sales and autorenewal practices. The FTC alleged that Vonage violated both the FTC Act and the Restore Online Shoppers’ Confidence Act (ROSCA) by failing to provide a simple cancellation mechanism, failing to disclose material transaction terms prior to obtaining consumers’ billing information, and charging consumers without consent.
Last week, Federal Communications Commission (“FCC”) Chairwoman Jessica Rosenworcel announced plans to reorganize the agency’s International Bureau by creating a new Space Bureau and a standalone Office of International Affairs. The announcement, which marks the latest in a string of space-focused actions over the last several months, is a further indication of the FCC’s commitment to leadership in the growing space economy.
This quarterly update summarizes key legislative and regulatory developments in the third quarter of 2022 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.
This quarter, Congress has continued to focus on the American Data Privacy Protection Act (“ADPPA”) (H.R. 8152), which would regulate the collection and use of personal information and includes specific requirements for AI systems. Disagreements over the legislation’s preemption of state laws and creation of a private right of action continue to stall its progress. Separately, the Federal Trade Commission (“FTC”) announced an Advance Notice of Proposed Rulemaking to solicit input on questions related to privacy and automated decision-making systems. The notice cites to the FTC’s prior guidance related to IoT devices.
Regulators and the White House have expressed increased interest in setting forth requirements and best practice expectations around the operation of AI systems. For example, the FTC announced an Advance Notice of Proposed Rulemaking in August that asks for comments on a number of topics related to automated decision-making systems. In particular, the FTC is requesting comments on the prevalence of error in automated decision-making systems, discrimination based on protected categories facilitated by algorithmic decision-making systems (and whether the FTC should consider recognizing additional categories of protected classes), and how the FTC should address algorithmic discrimination that occurs through the use of proxies.
In early October, the White House also released its Blueprint for an AI Bill of Rights. Discussed in further detail here, the Blueprint outlines recommended best practices for entities using AI, which include measures to provide a safe and effective system, protections against algorithmic discrimination, attention to data privacy, notice and explanation, and the provision of human alternatives and consideration.
Congress continues to weigh in on the regulation of AI systems. The latest version of the ADPPA would require a covered entity or service provider that “knowingly develops” a covered algorithm processing covered data “in furtherance of a consequential decision” to evaluate the design, structure, and inputs of the covered algorithm. In addition, entities of a certain size, which the bill calls “large data holders,” must conduct an impact assessment that describes the design process and methodologies of the covered algorithm, assesses the necessity and proportionality of the algorithm in relation to its stated purpose, and identifies the steps the entity will take to mitigate the risk of harm.
Internet of Things
This quarter, federal lawmakers introduced and advanced several bills related to the Internet of Things (“IoT”), including two bills imposing requirements on manufacturers of devices with cameras or microphones. One of these bills is the Earning Approval of Voice External Sound Databasing Retained on People (“EAVESDROP”) Act (H.R. 8543), introduced by Representative Steve Scalise (R-LA) in July. The bill would require manufacturers of connected devices with microphones to provide notices to consumers regarding the devices’ collection of certain consumer information. Manufacturers would also have to provide an easy way for consumers to deactivate the ability of the device to collect information. The EAVESDROP Act exempts devices solely marketed as microphones and provides a safe harbor for manufacturers that comply with a set of self-regulatory guidelines to be developed by the FTC. In contrast, the Informing Consumers about Smart Devices Act (H.R. 4081) would require manufacturers of connected devices equipped with a camera or microphone to disclose to consumers that a camera or microphone is part of the device, and would not apply to mobile phones, laptops, or other devices that consumers would reasonably expect to include a camera or microphone. The Informing Consumers about Smart Devices Act is sponsored by Reps. John R. Curtis (R-UT) and Seth Moulton (D-MA) and was approved by the House of Representatives on September 29, 2022.
Additionally, on September 28, 2022, the Senate approved the Small Business Broadband and Emerging Information Technology Enhancement Act of 2022 (S. 3906). As we noted in our Second Quarterly Legislative and Regulatory Update, this bipartisan bill, sponsored by Senators Jeanne Shaheen (D-NH) and John Kennedy (R-LA), aims to bolster IoT competencies at the Small Business Administration (“SBA”), including through the designation of a coordinator for emerging information technology (which includes IoT technology).
Federal regulatory efforts related to IoT this quarter largely centered on cybersecurity and consumer protections. For instance, the National Institute of Standards and Technology (“NIST”) published the final version of its Profile of the IoT Core Baseline for Consumer IoT Products (NIST IR 8425), building on work undertaken pursuant to E.O. 14028. The publication, which follows a public draft released in June 2022, describes NIST’s cybersecurity expectations for IoT products for home and personal use. As we noted in our previous quarterly update, the NIST guidance is not legally binding, but it signals a best practice that may later be incorporated by lawmakers in legislation.
NIST also published a report summarizing key takeaways from its June 2022 IoT Cybersecurity workshop (NIST IR 8431), and a report with guidance for first responders on minimizing security vulnerabilities when using mobile and wearable devices (NIST IR 8235). Other agency activities impacting IoT technology include the FTC’s publication of a business guidance blog post focused on the marketplace for sensitive consumer location and health information collected by connected devices, and highlighting FTC enforcement against misuse of consumer data and deceptive claims about data anonymization. These developments signal a continued focus by federal regulators on IoT cybersecurity and the protection of consumer data collected by connected devices.
Connected and Autonomous Vehicles
On August 8, 2022, Reps. Debbie Dingell (D-MI) and Bob Latta (R-OH) launched the bipartisan Congressional Autonomous Vehicle Caucus. The first of its kind, the purpose of this caucus is to educate Congressional Members and staff on autonomous vehicle technology that can improve the safety and accessibility of roadways. Rep. Dingell stated that the caucus will help the United States stay at the “forefront of innovation, manufacturing, and safety” while “engaging all stakeholders, making bold investments, and working across the aisle to get the necessary policies right to support the safe deployment of autonomous vehicles.” Industry should watch for developments here, as policy proposals and opportunities for engagement could be on the horizon.
Federal regulators remain active in this space, signaling an interest in funding and advancing the deployment of CAV technologies. A recent stated priority for the Strengthening Mobility and Revolutionizing Transportation (“SMART”) Grants Program is to improve the integration of systems and promote connectivity among infrastructure, connected vehicles, pedestrians, and bicyclists, and $100 million was authorized and appropriated for Department of Transportation (“DOT”) projects in this space for FY2022. Additionally, the Federal Transit Administration (“FTA”) and DOT issued a Notice of Funding Opportunity to apply for funding for projects exploring the use of Advanced Driver Assistance Systems (“ADAS”) for transit buses to demonstrate transit bus automation technologies in real-world settings. Finally, DOT issued a Request for Information seeking comments on the possibility of adapting existing and emerging automation technologies to accelerate the development of real-time roadway intersection safety and warning systems for drivers and vulnerable road users.
This quarter, the National Highway Traffic Safety Administration (“NHTSA”) also released a final version of its Cybersecurity Best Practices for the Safety of Modern Vehicles guidance, an update to the 2016 edition. While the edits were largely cosmetic, a few key changes potentially relevant to CAVs and in-vehicle software are summarized below:
- The final version clarifies that both suppliers and manufacturers should maintain a database of software components so that when vulnerabilities are identified in software, affected systems can be easily identified.
- The final version adds a new best practice stating that manufacturers should employ measures to limit firmware version rollback attacks (i.e., when an attacker uses the software update mechanisms to place older, more vulnerable software on a targeted device).
- The final version adds a new best practice stating that industry should collaborate to address “future risks” as they emerge.
Privacy and Cybersecurity
As described in further detail in our second quarterly update for 2022 and here, the ADPPA continues to be the prevailing data privacy framework in Congress. The bill sets forth broad requirements around data collection and disclosures, though the likelihood of passage this Congress continues to decrease as lawmakers remain stalled over issues around preemption and a private right of action. California’s principal privacy regulator – the California Privacy Protection Agency – convened a special meeting on July 28, 2022 to discuss the ADPPA and to express the Agency’s strong disagreement with the ADPPA’s preemption provision.
The FTC is also exploring privacy regulation, including through its Advance Notice of Proposed Rulemaking, released in August. Specifically, the notice broadly asks whether the agency “should implement new trade regulation rules or other regulatory alternatives concerning the ways in which companies (1) collect, aggregate, protect, use, analyze, and retain consumer data, as well as (2) transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive.” Notably, the FTC recently extended the deadline to receive comments on the notice to November 21, 2022. Additionally, the FTC released its agenda for a workshop on children’s advertising that will be held on October 19, 2022, which will focus on whether children can distinguish ads from entertainment in digital media.
Many employers and employment agencies have turned to artificial intelligence (“AI”) tools to assist them in making better and faster employment decisions, including in the hiring and promotion processes. The use of AI for these purposes has been scrutinized and will now be regulated in New York City. The New York City Department of Consumer and Worker Protection (“DCWP”) recently issued a Notice of Public Hearing and Opportunity to Comment on Proposed Rules relating to the implementation of New York City’s law regulating the use of automated employment decision tools (“AEDT”) by NYC employers and employment agencies. As detailed further below, the comment period is open until October 24, 2022.
NYC’s Local Law 144, which takes effect on January 1, 2023, prohibits employers and employment agencies from using certain AI tools in the hiring or promotion process unless the tool has been subject to a bias audit within one year prior to its use, the results of the audit are publicly available, and notice requirements to employees or job candidates are satisfied. The DCWP, the New York City agency responsible for administering this law, proposed the new rules to clarify the responsibilities of employers and employment agencies once the statute goes into effect.
What tools are impacted? What are “Automated Employment Decision Tools”?
The law governs “AEDTs,” which are defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” The proposed rules outline which tools fall within the scope of the law by defining “to substantially assist or replace discretionary decision making” as:
- relying solely on the tool’s output (score, tag, classification, ranking, etc.) without considering other factors;
- using the tool’s output as one of a set of criteria where the output is weighted more than any other criterion in the set; or
- using the tool’s output to overrule or modify conclusions derived from other factors.
When does the statute apply? Who are “candidates for employment”?
The new law applies when employers and employment agencies use an AEDT to screen either “candidates for employment” or “employees for promotion” within New York City. The proposed rules define “candidates for employment” to mean persons who have applied for a specific employment position by submitting the necessary information and/or items in the format required by the employer or employment agency.
What is the “bias audit”?
Local Law 144 prohibits the use of an AEDT unless it has been the subject of a bias audit within one year prior to its use. Under the proposed rules, the structure and requirements for the bias audit change based on how the AEDT is used.
- Where an AEDT selects individuals to move forward in the hiring process or classifies individuals into groups, the bias audit must: (i) calculate the selection rate for each category/classification and (ii) calculate the impact ratio for each category/classification. Categories are the component 1 categories (race, ethnicity and gender) as designated on the federal EEO-1 report.
- Where the AEDT only scores individuals rather than selecting them, the proposed rules require the bias audit to: (i) calculate the average score for individuals in each category and (ii) calculate the impact ratio for each category.
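The selection-rate and impact-ratio figures described above are simple arithmetic. The sketch below is purely illustrative (not legal guidance) and assumes the common definitions: a category's selection rate is the share of individuals in that category who are selected, and its impact ratio is that rate divided by the highest category rate. The category labels and counts are hypothetical.

```python
def selection_rates(outcomes):
    """outcomes maps each category to (selected, total) counts.

    Returns each category's selection rate: selected / total.
    """
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}


def impact_ratios(rates):
    """Each category's selection rate relative to the most-selected category."""
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}


# Hypothetical audit counts for two categories.
outcomes = {
    "Category A": (40, 100),  # 40 of 100 applicants selected
    "Category B": (24, 80),   # 24 of 80 applicants selected
}
rates = selection_rates(outcomes)    # A: 0.40, B: 0.30
ratios = impact_ratios(rates)        # A: 1.00, B: 0.75
```

For a scoring-only AEDT, the same ratio calculation would apply to average scores per category rather than selection rates.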
An “independent auditor” must perform bias audits. The proposed rules define “independent auditor” as “a person or group that is not involved in using or developing an AEDT that is responsible for conducting a bias audit of such AEDT.”
What happens with the audit results?
The proposed rules clarify Local Law 144’s requirement that the results of a bias audit must be “made publicly available on the website of the employer or employment agency” prior to use by stating that the information must be posted “on the careers or jobs section of their website in a clear and conspicuous manner.” Additionally, the proposed rule would require the information to remain posted for at least six months after the AEDT was last used to make an employment decision.
What about the notice requirements?
Local Law 144 requires any employer or employment agency that uses an AEDT to screen an employee or a candidate who has applied for a position to notify individuals who reside in New York City that the AEDT will be used in connection with their assessment or evaluation, as well as of the job qualifications and characteristics that the AEDT will consider. Notice must be provided at least 10 business days before use of an AEDT and must include instructions for how to request an alternative selection process or accommodation.
The proposed rules provide guidance to employers and employment agencies on how to satisfy the law’s notice requirements.
- For candidates for employment, the rules allow notification to impacted individuals through the following means:
(i) on the careers or jobs section of its website in a clear and conspicuous manner,
(ii) in the job posting, or
(iii) via U.S. mail or e-mail.
- For existing employees, the law’s notice requirements may be satisfied through:
(i) written policies or procedures,
(ii) in the job posting, or
(iii) written notice provided in person, via U.S. mail, or via e-mail.
How do I comment on the proposed rules?
Anyone can comment on the proposed rules by:
- Submitting Written Comments: Written comments on the proposed rules must be submitted on or before Monday, October 24, 2022, and may be submitted via email to Rulecomments@dcwp.nyc.gov or through the city’s rules website at http://rules.cityofnewyork.us.
- Attending the Public Hearing: Interested parties may attend the public hearing on the proposed rules, which is scheduled to take place on Monday, October 24, 2022 at 11:00 AM. Additional details on how to access the public hearing via phone or videoconference are available here.
This morning, the Supreme Court granted certiorari in Gonzalez v. Google LLC, 2 F.4th 871 (9th Cir. 2021) on the following question presented: “Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?” This is the first opportunity the Court has taken to interpret 47 U.S.C. § 230 (“Section 230”) since the law was enacted in 1996.
On September 16, the Fifth Circuit issued its decision in NetChoice L.L.C. v. Paxton, upholding Texas HB 20, a law that limits the ability of large social media platforms to moderate content and imposes various disclosure and appeal requirements on them. The Fifth Circuit vacated the district court’s preliminary injunction, which previously blocked the Texas Attorney General from enforcing the law. NetChoice is likely to ask the U.S. Supreme Court to review the Fifth Circuit’s decision.
HB 20 prohibits “social media platforms” with “more than 50 million active users” from “censor[ing] a user, a user’s expression, or a user’s ability to receive the expression of another person” based on the “viewpoint” of the user or another person, or the user’s location. HB 20 also includes various transparency requirements for covered entities, for example, requiring them to publish information about their algorithms for displaying content, to publish an “acceptable use policy” with information about their content restrictions, and to provide users an explanation for each decision to remove their content, as well as a right to appeal the decision.
On September 12, 2022, the U.S. Cybersecurity and Infrastructure Security Agency (“CISA”) published a Request for Information, seeking public comment on how to structure implementing regulations for reporting requirements under the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”). Written comments are requested on or before November 14, 2022 and may be submitted through the Federal eRulemaking Portal: http://www.regulations.gov.
In its August 5, 2022 affirmance of the district court’s grant of summary judgment, the Federal Circuit in Thaler v. Vidal ruled that the Patent Act unambiguously and directly answers the question of whether an AI software system can be listed as the inventor on a patent application. Since an inventor must be a human being, AI cannot be.
Judge Stark’s first authored precedential opinion since confirmation to the Federal Circuit aligns the U.S. position on whether AI can be listed as an inventor on a patent application with that of other major jurisdictions. Left for another day are questions such as the rights, if any, of AI systems, and whether AI systems can contribute to the conception of an invention.
PTO and Litigation Background of the DABUS Patent Applications
In July 2019, two patent applications were filed in the United States Patent and Trademark Office (PTO) that identified an AI system called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) as the sole inventor and Stephen L. Thaler as the Applicant and Assignee. DABUS, which was characterized as “a particular type of connectionist artificial intelligence” known as a “Creativity Machine” during prosecution and as “a collection of source code or programming and a software program” before the U.S. District Court for the Eastern District of Virginia, allegedly generated the subject matter of the two patent applications.
The filed patent applications specifically stated that the inventions were conceived by DABUS, and that DABUS should accordingly be named as the inventor. The PTO subsequently issued Notices stating that the applications did not identify each inventor by his or her legal name. In response to filed Petitions requesting that the PTO vacate the issued Notices, the PTO issued Petition Decisions refusing to vacate, explaining that a machine does not qualify as an inventor under the patent laws, and providing additional time to identify inventors by their legal name to avoid abandonment of the applications.
Thaler then sought judicial review under the Administrative Procedure Act in the Eastern District of Virginia, requesting an order compelling the PTO to reinstate the DABUS patent applications, and a declaration that a patent application for an AI-generated invention should not be rejected on the basis that no natural person is identified as an inventor. After briefing and oral argument, the district court issued an order denying Thaler’s requested relief and granting the PTO’s motion for summary judgment, recognizing the Federal Circuit’s consistent holdings under current patent law requiring inventors to be natural persons.
On August 25, 2022, President Biden announced a new Executive Order (“EO”) addressing the Implementation of the CHIPS Act of 2022 (“CHIPS Act”). The CHIPS Act was signed by President Biden on August 9, 2022, and, among other things, authorizes $39 billion in funding for new projects to establish semiconductor production facilities within the United States. The new EO identifies the Administration’s implementation priorities for this CHIPS Act funding and creates the CHIPS Implementation Steering Council to aid with the rollout of administrative guidance. In connection with the EO, the Department of Commerce launched CHIPS.gov, which is intended to be a centralized resource for potential applicants for CHIPS funding. The EO and new website reflect the Administration’s intent to swiftly implement the CHIPS Act and increase the domestic production of semiconductors.