On January 7, 2020, pursuant to President Donald Trump’s Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House’s Office of Science and Technology Policy (OSTP) released a draft Guidance for Regulation of Artificial Intelligence Applications, including ten principles for agencies to consider when deciding whether and how to regulate AI. The White House announced a 60-day public comment period following the release of the Guidance, after which it will issue a final memorandum and instruct agencies to submit implementation plans. Comments should be submitted to the White House via Regulations.gov Docket ID OMB_FRDOC_0001-0261.

The White House will require agencies to consider ten high-level AI principles when proposing new regulatory or non-regulatory approaches to private sector use of AI technology. Among the top goals of this policy is promoting “innovation and growth” in the United States’ AI industry, which the Guidance designates as a “high priority.” The Guidance instructs agencies, when considering whether to regulate, to evaluate the impact of regulation on AI growth. It also directs agencies to regulate only after determining, in light of these ten principles, that regulation is necessary. Finally, the Guidance highlights some non-regulatory alternatives that may be effective for AI, such as policy guidance and frameworks, pilot programs and experiments, and voluntary standards.

Note that the Guidance does not apply to the federal government’s own use of AI tools, but was developed to direct agencies’ oversight of AI applications in the private sector. The White House noted specific examples, including that the Department of Transportation would follow OSTP’s Guidance as it considers regulations for AI-powered drones and the Food and Drug Administration would follow the Guidance in its review of AI-enabled medical devices.

OSTP’s ten AI principles:

  1. Public Trust in AI — “It is … important that the government’s regulatory and non-regulatory approaches to AI promote reliable, robust and trustworthy AI applications.”
  2. Public Participation — Agencies should provide “opportunities for the public to provide information and participate in all stages of the rulemaking process” and should promulgate and “promote awareness and widespread availability of standards and . . . other informative documents.”
  3. Scientific Integrity and Information Quality — Agencies should apply a scientifically rigorous process to issuing rules and guidance. “Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results.”
  4. Risk Assessment and Management — “It is not necessary to mitigate every foreseeable risk . . . Instead, a risk-based approach should be used to determine which risks are acceptable and which risks present the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits. Agencies should be transparent about their evaluations of risk and re-evaluate their assumptions and conclusions at appropriate intervals so as to foster accountability.”
  5. Benefits and Costs — Where agency action is required, agencies should “carefully consider the full societal costs, benefits, and distributional effects before considering regulations,” including “the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace, whether implementing AI will change the type of errors created by the system, as well as comparison to the degree of risk tolerated in other existing ones. Agencies should also consider critical dependencies when evaluating AI costs and benefits.” The guidance specifically notes that bringing clarity to “questions of responsibility and liability for decisions made by AI” is an area that may require agency action.
  6. Flexibility — “Rigid, design-based regulations that attempt to prescribe the technical specifications of AI applications will in most cases be impractical and ineffective . . . To advance American innovation, agencies should keep in mind international uses of AI, ensuring that American companies are not disadvantaged by the United States’ regulatory regime.”
  7. Fairness and Non-Discrimination — Agencies should consider “issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes.”
  8. Disclosure and Transparency — “In addition to improving the rulemaking process, transparency and disclosure can increase public trust and confidence in AI applications. At times, such disclosures may include identifying when AI is in use, for instance, if appropriate for addressing questions about how the application impacts human end users.”
  9. Safety and Security — In addition to encouraging safety and security throughout an AI system’s lifecycle, agencies should specifically focus on “adversarial use of AI against a regulated entity’s AI technology” and “the risk of possible malicious deployment and use of AI applications.”
  10. Interagency Coordination — “Agencies should coordinate with each other to share experiences and to ensure consistency and predictability of AI-related policies.” The Office of Management and Budget’s Office of Information and Regulatory Affairs will require that stakeholder agencies have an opportunity to provide input on any AI-related draft regulatory action it designates “significant.”

U.S. Policy and Global Leadership

The Guidance calls on the United States to lead the international community on setting the standards governing AI technologies. The White House has encouraged the European Commission to consider using this Guidance as a model as it drafts its own pending AI regulatory document.

This Guidance comes amidst other federal stakeholders’ work over the past year on their own recommendations for AI development and implementation.

National governments will continue to seek to strike a balance they deem appropriate when it comes to AI technology: cultivating innovation and private sector growth to harness AI’s vast potential, while building safeguards and norms to limit its sometimes unforeseeable risks. With this new Guidance, the Trump Administration continues to signal that the United States considers leading the world in developing AI standards a top national priority. The public comment period for this draft document is open through March 13, 2020.