The rapid spread of COVID-19, along with the effectiveness of existing public health response plans and the impacts of social distancing on the economy, has raised the question of how new technology can be used to address and manage the pandemic. On April 1, 2020, the Stanford Institute for Human-Centered Artificial Intelligence hosted “COVID-19 and AI: A Virtual Conference” to explore the potential applications of artificial intelligence (“AI”) in diagnostics and treatment, epidemiological tracking and forecasting of the spread of COVID-19, and the pandemic’s impacts on the economy, culture, and human behavior.
On March 24, 2020, the Dutch Supervisory Authority (“SA”) announced the launch of a broad investigation into automobile manufacturers, to determine whether any violations of data protection laws have occurred in relation to connected cars.
The Dutch SA sent a questionnaire to all Netherlands-based car and truck manufacturers, asking what types of personal data they process, how long they keep it, what measures they take to secure it, and with whom they share it. On the basis of the results, the SA intends to engage in dialogue with the sector and, where it deems necessary, initiate enforcement actions.
The SA mentioned in its announcement that, thus far, it has received few complaints on this topic, but attributes this to a lack of privacy awareness among drivers. The SA also alluded to its current understanding that “much is not properly addressed”.
Finally, the SA acknowledged that many automobile manufacturers do not have headquarters or a “main establishment” in the Netherlands. Therefore, the SA indicated it will share evidence or suspicions of data protection violations with the competent authorities of such manufacturers, for further follow-up action and possible enforcement.
This investigation follows the publication of guidelines for connected car manufacturers by the European Data Protection Board back in February 2020.
The COVID-19 crisis is demonstrating the potential of digital health technology to manage some of our greatest public health challenges. The White House Office of Science and Technology Policy has issued a call to action for technology companies to help the science community answer high-priority scientific questions related to COVID-19. The Centers for Disease Control and Prevention has also recognized that technology and surveillance systems can play an integral role in supporting the public health response to outbreaks.
Yesterday, the Federal Communications Commission (“FCC”) on its own motion released a Declaratory Ruling confirming that the COVID-19 pandemic constitutes an “emergency” under the Telephone Consumer Protection Act (“TCPA”). As a consequence, hospitals, health care providers, state and local health officials, and other government officials may lawfully use automated or prerecorded calls (which include text messages) to communicate information about the coronavirus and mitigation measures to mobile telephone numbers and certain other numbers (such as those of first responders) without “prior express consent.”
On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)), or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).
In this final instalment of our series of blogs on the European Commission’s plans for AI and data, announced on 19 February 2020, we discuss some potential effects on companies in the digital health sector. As discussed in our previous blog posts (here, here and here), the papers published by the European Commission cover broad concepts and apply generally — but, in places, they specifically mention healthcare and medical devices.
The Commission recognizes the important role that AI and big data analysis can play in improving healthcare, but also notes the specific risks that could arise given the effects that such new technologies may have on individuals’ health, safety, and fundamental rights. The Commission also notes that existing EU legislation already affords a high level of protection for individuals, including through medical devices laws and data protection laws. The Commission’s proposals therefore focus on addressing the gap between these existing rules and the residual risks that remain in respect of new technologies. Note that the Commission’s proposals in the White Paper on AI are open for public consultation until 19 May 2020.
The Federal Communications Commission (FCC) has again demonstrated that enabling the 5G ecosystem, which among other things will drive breakthroughs in the Internet of Things (IoT), remains an agency priority.
In a meeting late last week, the FCC adopted multiple items aimed at expanding spectrum availability and access for 5G applications and services, as well as IoT devices. We will report separately on the FCC’s headline-grabbing action to partially reallocate the C-band. In the meantime, the three items addressing television White Spaces, the 3.5 GHz band, and the Rural Digital Opportunity Fund all have relevance for IoT stakeholders.
In November 2019, the Council of Europe’s Committee of Experts on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence (the “Committee”) finalized its draft recommendations on the human rights impacts of algorithmic systems (the “Draft Recommendations”). The Draft Recommendations, which are non-binding, set out guidelines on how the Council of Europe member states should legislate to ensure that public and private sector actors appropriately address human rights issues when designing, developing and deploying algorithmic systems.
On 19 February 2020, the new European Commission published two Communications relating to its five-year digital strategy: one on shaping Europe’s digital future, and one on its European strategy for data (the Commission also published a white paper proposing its strategy on AI; see our previous blogs here and here). In both Communications, the Commission sets out a vision of the EU powered by digital solutions that are strongly rooted in European values and EU fundamental rights. Both Communications also emphasize the intent to strengthen “European technological sovereignty”, which in the Commission’s view will enable the EU to define its own rules and values in the digital age. The Communications set out the Commission’s plans to achieve this vision.
The European Commission, as part of the launch of its digital strategy for the next five years, published on 19 February 2020 a White Paper On Artificial Intelligence – A European approach to excellence and trust (the “White Paper”). (See our previous blog here for a summary of all four of the main papers published by the Commission.) The White Paper recognizes the opportunities AI presents to Europe’s digital economy, and presents the Commission’s vision for a coordinated approach to promoting the uptake of AI in the EU and addressing the risks associated with certain uses of AI. The White Paper is open for public consultation until 19 May 2020.