On December 30, 2020, the Federal Communications Commission (“FCC”) released a Report and Order (“Order”) that imposed certain new restrictions on nonmarketing prerecorded calls to residential lines. The action was in response to Congress’s mandate in the TRACED Act that the FCC reevaluate certain exemptions the agency previously granted regarding the consent requirements for prerecorded calls under the Telephone Consumer Protection Act (“TCPA”).
On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”). The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.
On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”) published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”). The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.
The Briefing summarises the different possible applications of AI in healthcare settings, which raise unique considerations for healthcare providers. It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation. Areas of healthcare in which the Briefing identifies potential for AI to be incorporated include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.
On December 16, 2020, the German Federal Government passed a draft law that substantially amends some of Germany’s information technology laws (“IT laws”). These amendments aim to adapt the current legal framework to the increasing digitalization of products and services, the proliferation of IoT products, and the appearance of new cybersecurity threats. The draft law is expected to be enacted in the German Parliament in the first quarter of 2021.
The Federal Communications Commission (“FCC”) is seeking comment on a Notice of Proposed Rulemaking (“NPRM”) that would modify certain aspects of the FCC’s device authorization rules. Specifically, the FCC is seeking comment on a proposed revision to its device authorization rules to allow the importation of limited quantities of radiofrequency (“RF”) devices prior to authorization for pre-sale activities, including imaging, packaging, and delivery to retail locations. The FCC also is proposing rule revisions that would allow conditional sales, but not delivery, of RF devices to consumers prior to authorization.
On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations are insufficient to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.
The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.
The newly enacted National Defense Authorization Act (“NDAA”) contains important provisions regarding the development and deployment of artificial intelligence (“AI”) and machine learning technologies, many of which build upon previous legislation introduced in the 116th Congress. The most substantial federal U.S. legislation on AI to date, these provisions will have significant implications in the national security sector and beyond. The measures in the NDAA will coordinate a national strategy on research, development, and deployment of AI, guiding investment and aligning priorities for its use.
President Trump vetoed the NDAA after its initial passage in December, but the $740 billion bill became law over his veto with a rare New Year’s Day Senate vote of 81-13. The House had voted to override the veto on December 28, 322-87.
This post highlights some of the key AI provisions included in the NDAA.
Connected and automated vehicle (“CAV”) developments in Washington are likely to pick up speed as 2021 rolls in. Indeed, a new presidential administration, new agency leadership, and a new Congress may drive new CAV regulation while also spurring innovation in an industry that many believe can enhance road safety, mobility, and accessibility. For instance, John Porcari, a Biden-Harris campaign advisor and former U.S. Deputy Secretary of Transportation under President Barack Obama, recently indicated that transportation agencies under President Biden would prioritize innovation and technological change and adopt a federal framework for autonomous vehicles.
Lawmakers and regulators, furthermore, will have the opportunity to build on some of the initiatives that picked up speed during the fall of 2020, such as the Safely Ensuring Lives Future Deployment and Research in Vehicle Evolution Act (H.R. 8350) (“SELF DRIVE Act”), the National Highway Traffic Safety Administration’s (“NHTSA”) AV TEST tool, and NHTSA’s request for comment on its proposed framework for Automated Driving Systems (“ADS”) safety. Additionally, the Federal Communications Commission’s (“FCC”) adoption of rules to modernize the 5.9 GHz Band could spur the deployment of CAV technology, and the new administration may reinvigorate inter-agency efforts to examine consumer data privacy and security issues posed by CAVs, as well as CAV-related developments in infrastructure. This post looks down the road ahead for CAV developments in Washington.
In April 2019, the UK Government published its Online Harms White Paper and launched a Consultation. In February 2020, the Government published its initial response to that Consultation. In its 15 December 2020 full response to the Online Harms White Paper Consultation, the Government outlined its vision for tackling harmful content online through a new regulatory framework, to be set out in a new Online Safety Bill (“OSB”).
This development comes at a time of heightened scrutiny of, and regulatory changes to, digital services and markets. Earlier this month, the UK Competition and Markets Authority published recommendations to the UK Government on the design and implementation of a new regulatory regime for digital markets (see our update here).
The UK Government is keen to ensure that policy initiatives in this sector are coordinated with similar legislative efforts, including those in the US and the EU. The European Commission also published its proposal for a Digital Services Act on 15 December, proposing a somewhat similar system for regulating illegal online content that places greater responsibilities on technology companies.
Key points of the UK Government’s plans for the OSB are set out below.
On December 15, 2020, the European Commission published its proposed Regulation on a Single Market for Digital Services, more commonly known as the Digital Services Act (“DSA Proposal”). In publishing the Proposal, the Commission noted that its goal was to protect consumers and their fundamental rights online, establish an accountability framework for online services, and foster innovation, growth and competitiveness in the single market. On the same day, the Commission also published its proposal for a Digital Markets Act (“DMA”), which would impose new obligations and restrictions on online services that act as “designated gatekeepers” (see our analysis of the DMA Proposal here).