On May 25, 2023, the National Telecommunications and Information Administration (NTIA) announced that, on behalf of the U.S. government, it filed responses to the European Commission’s public consultation on The Future of the Electronic Communications Sector and Its Infrastructure.  The consultation explores how best to promote connectivity and ensure reliable broadband access throughout the EU, as well as the kinds of infrastructure and investment needed to support the evolving telecommunications landscape.  Among other things, the consultation seeks feedback on whether content and application providers (also referred to as Over-The-Top (OTT) services in the U.S.) should make mandated “fair share” payments to telecom operators to subsidize current and future connectivity needs.  NTIA’s filing comes amid an ongoing debate over the future of the U.S. Universal Service Fund (USF) and whether and how to expand its contribution base.

Continue Reading Biden Administration Weighs in on European Commission’s “Fair Share” Telecoms Consultation

On May 23, 2023, the White House announced that it took the following steps to further advance responsible Artificial Intelligence (“AI”) practices in the U.S.:

  • the Office of Science and Technology Policy (“OSTP”) released an updated strategic plan that focuses on federal investments in AI research and development (“R&D”);
  • OSTP issued a new request for information (“RFI”) on critical AI issues; and
  • the Department of Education issued a new report on risks and opportunities related to AI in education.
Continue Reading White House Announces New Efforts to Advance Responsible AI Practices

On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations between the European Parliament, the Council (which adopted its own amendments in late 2022; see our blog post here for further details), and the European Commission. European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.

In perhaps the most significant change from the Commission and Council draft, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).

Continue Reading EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI

On May 18, 2023, the Supreme Court issued its opinion in Gonzalez v. Google LLC, a case about whether Section 230 of the Communications Decency Act (47 U.S.C. § 230) protected YouTube’s recommendation algorithms from a claim of secondary liability under the Anti-Terrorism Act (ATA). In a short, three-page per curiam opinion, the Court avoided addressing the Section 230 issue entirely. Instead, the Court held that much of the plaintiffs’ ATA complaint would fail to state a claim for relief under the Court’s separate decision in Twitter v. Taamneh (handed down the same day), given that plaintiffs’ counsel in Gonzalez conceded that the allegations in the Gonzalez complaint were materially identical to those in the Twitter complaint. The Court also relied on the fact that plaintiffs did not seek review of a separate part of the Ninth Circuit’s opinion that addressed ATA claims related to revenue-sharing. Because the Court found that the underlying ATA claim would likely fail on the merits, it found it unnecessary to reach the interpretation of Section 230 immunity. This result was foreshadowed at oral argument, where the Justices appeared concerned with line-drawing and the potential unintended consequences of applying Section 230 to the algorithms at issue. The Court found a way out of deciding the Section 230 question in Gonzalez, but it remains to be seen whether the Court will look for a different vehicle to address the scope of Section 230 immunity in the future.

If you have any questions concerning the material discussed in this client alert, please contact the members of our Technology and Communications Regulation and Appellate and Supreme Court practices.

On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).

Continue Reading UK’s Competition and Markets Authority Launches Review into AI Foundation Models

On May 1, 2023, the White House Office of Science and Technology Policy (“OSTP”) announced that it will release a Request for Information (“RFI”) to learn more about automated tools used by employers to “surveil, monitor, evaluate, and manage workers.”  The White House will use the insights gained from the RFI to create policy and best practices surrounding the use of AI in the workplace.

Continue Reading White House Issues Request for Comment on Use of Automated Tools with the Workforce

Last week, the Federal Communications Commission (“FCC”) released an Order and Notice of Proposed Rulemaking (“NPRM”) that could have significant compliance implications for all holders of international Section 214 authority (i.e., authorization to provide telecommunications services from points in the U.S. to points abroad), as well as all entities holding an ownership interest in these carriers. The item requires all holders of international Section 214 authority to respond to a one-time information request concerning their foreign ownership and proposes sweeping changes to the agency’s licensing rules for such licensees.

Although the FCC’s information request may be more relevant in the near term, it is a limited one-time requirement. By contrast, the rule changes on which the FCC seeks comment are far-reaching and, if adopted as written, could result in significant future compliance burdens, both for entities holding international Section 214 authority, as well as the parties holding ownership interests in these entities.

The FCC’s latest actions underscore the agency’s ongoing desire to closely scrutinize foreign ownership and involvement in telecommunications carriers serving the U.S. market, as well as to play a more active role in cybersecurity policy. These developments should be of interest to any carrier that serves the U.S. market and any financial or strategic investor focused on the telecommunications space, as well as other parties interested in national security developments affecting telecommunications infrastructure.

Continue Reading FCC Steps Up Review of Foreign Ownership in Telecom Carriers; Proposes Cybersecurity Mandates

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here) which it complemented with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit in 2022. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.

Continue Reading UK ICO Updates Guidance on Artificial Intelligence and Data Protection

On April 25, 2023, four federal agencies — the Department of Justice (“DOJ”), Federal Trade Commission (“FTC”), Consumer Financial Protection Bureau (“CFPB”), and Equal Employment Opportunity Commission (“EEOC”) — released a joint statement on the agencies’ efforts to address discrimination and bias in automated systems. 

Continue Reading DOJ, FTC, CFPB, and EEOC Statement on Discrimination and AI

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.  

Continue Reading UK Government Adopts a “Pro-Innovation” Approach to AI Regulation