U.S. federal agencies and working groups promulgated a number of issuances in January 2023 related to the development and use of artificial intelligence (“AI”) systems.  These updates join proposals in Congress to pass legislation related to AI.  Specifically, in January 2023, the Department of Defense (“DoD”) updated Department of Defense Directive 3000.09, the National Artificial Intelligence Research Resource (“NAIRR”) Task Force released its final report on AI, and the National Institute of Standards and Technology (“NIST”) released its AI Risk Management Framework, each discussed below.

Department of Defense Directive 3000.09

On January 25, 2023, the DoD updated Directive 3000.09, “Autonomy in Weapon Systems,” which governs the development and fielding of autonomous and semi-autonomous weapon systems, including systems that incorporate AI technologies.  The Directive has three primary purposes:  (1) establishing policy and assigning responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems; (2) establishing guidelines designed to minimize the probability and consequences of failures in such systems; and (3) establishing the “Autonomous Weapon Systems Working Group.”  For example, the Directive provides that autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators “to exercise appropriate levels of human judgment” over the use of force, and that these systems must undergo verification and validation testing to build confidence in their operation.  The Directive also underscores that the design and development of AI capabilities in autonomous and semi-autonomous weapon systems must be consistent with the DoD’s AI Ethical Principles – specifically, that the AI is: (1) responsible; (2) equitable; (3) traceable; (4) reliable; and (5) governable.  The Directive outlines a number of roles and responsibilities for oversight of autonomous and semi-autonomous weapon systems and provides guidance on when senior review and approval are required to use these types of systems.  Directive 3000.09 and the DoD’s AI Ethical Principles will be important for entities working with, and providing AI-enabled tools and services to, the DoD.

NAIRR Task Force Report

In the National AI Initiative Act of 2020, Congress directed the National Science Foundation and the White House Office of Science and Technology Policy to establish a task force to develop options for providing researchers and students with access to resources for AI research and development.  As part of these efforts, Congress directed these organizations to create a roadmap for a National Artificial Intelligence Research Resource (“NAIRR”).  On January 24, 2023, the NAIRR Task Force released its final report, which presents a roadmap and implementation plan for a national cyberinfrastructure aimed at maximizing the development of AI and realizing the benefits of this technology for society.  The report’s key recommendations include:

  • Establishing NAIRR with four measurable goals: (1) to spur innovation, (2) to increase diversity of talent, (3) to improve capacity, and (4) to advance trustworthy AI.
  • Implementing NAIRR over four phases: (1) program launch and operating entity selection, (2) operating entity startup, (3) NAIRR initial operating capability, and (4) NAIRR ongoing operations.  As contemplated, NAIRR would be operational “no later than 21 months” from the launch of the program and fully implemented in year 3 of the program.  The report’s implementation plan proposes a pilot program to make AI research resources available to AI R&D communities while implementation proceeds.
  • Requiring $2.6 billion in funding for NAIRR over a six-year period to meet the national need for resources to fuel AI innovation.
  • Ensuring that NAIRR is “broadly accessible” to a wide range of users, lowering barriers to participation in AI research and increasing the diversity of AI researchers.  Access would be provided via an integrated portal and would include computational resources (both conventional servers and cloud computing), data resources, and testing tools.

NIST AI Risk Management Framework

As covered in our prior blog posts here and here, on January 26, 2023, the U.S. Department of Commerce’s NIST released its Artificial Intelligence Risk Management Framework (“RMF”) guidance document, together with a companion AI RMF Playbook that suggests ways to navigate and use the Framework.  The RMF provides a voluntary set of principles and processes that organizations can follow to identify and minimize risks in the design and use of AI systems.  Governance processes around the use of AI, including policies, procedures, and diverse teams to advise on AI development and use, are of particular importance to the RMF.  Additionally, the RMF suggests that organizations should evaluate the risks presented by an AI system, taking into account the context of use, and consider how best to mitigate those risks.  We will continue to monitor these and other AI-related developments across our blogs.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

Robert Huffman

Bob Huffman represents defense, health care, and other companies in contract matters and in disputes with the federal government and other contractors. He focuses his practice on False Claims Act qui tam investigations and litigation, cybersecurity and supply chain security counseling and compliance, contract claims and disputes, and intellectual property (IP) matters related to U.S. government contracts.

Bob has leading expertise advising companies that are defending against investigations, prosecutions, and civil suits alleging procurement fraud and false claims. He has represented clients in more than a dozen False Claims Act qui tam suits. He also represents clients in connection with parallel criminal proceedings and suspension and debarment.

Bob also regularly counsels clients on government contracting supply chain compliance issues, including cybersecurity, the Buy American Act/Trade Agreements Act (BAA/TAA), and counterfeit parts requirements. He also has extensive experience litigating contract and related issues before the Court of Federal Claims, the Armed Services Board of Contract Appeals, federal district courts, the Federal Circuit, and other federal appellate courts.

In addition, Bob advises government contractors on rules relating to IP, including government patent rights, technical data rights, rights in computer software, and the rules applicable to IP in the acquisition of commercial items and services. He handles IP matters involving government contracts, grants, Cooperative Research and Development Agreements (CRADAs), and Other Transaction Agreements (OTAs).

Stephanie Barna

Stephanie Barna draws on over three decades of U.S. military and government service to provide advisory and advocacy support and counseling to clients facing policy and political challenges in the aerospace and defense sectors.

Prior to joining the firm, Stephanie was a senior leader on Capitol Hill and in the U.S. Department of Defense (DoD). Most recently, she was General Counsel of the Senate Armed Services Committee, where she was responsible for the annual $740 billion National Defense Authorization Act (NDAA). Additionally, she managed the Senate confirmation of three- and four-star military officers and civilians nominated by the President for appointment to senior political positions in DoD and the Department of Energy’s national security nuclear enterprise, and was the Committee’s lead for investigations.

Previously, as a senior executive in the Office of the Army General Counsel, Stephanie served as a legal advisor to three Army Secretaries. In 2014, Secretary of Defense Chuck Hagel appointed her to be the Principal Deputy Assistant Secretary of Defense for Manpower and Reserve Affairs. In that role, she was a principal advisor to the Secretary of Defense on all matters relating to civilian and military personnel, reserve integration, military community and family policy, and Total Force manpower and resources. Stephanie was later appointed by Secretary of Defense Jim Mattis to perform the duties of the Under Secretary of Defense for Personnel and Readiness, responsible for programs and funding of more than $35 billion.

Stephanie was also previously the Deputy General Counsel for Operations and Personnel in the Office of the Army General Counsel. She led a team of senior lawyers in resolving the full spectrum of issues arising from Army wartime operations and the life cycle of Army military and civilian personnel. Stephanie was also a personal advisor to the Army Secretary on his institutional reorganization and business transformation initiatives and acted for the Secretary in investigating irregularities in fielding of the Multiple Launch Rocket System and classified contracts. She also played a key role in a number of high-profile personnel investigations, including the WikiLeaks breach. Prior to her appointment as Deputy, she was Associate Deputy General Counsel (Operations and Personnel) and Acting Deputy General Counsel.

Stephanie is a retired Colonel in the U.S. Army and served in the U.S. Army Judge Advocate General’s Corps as an Assistant to the General Counsel, Office of the Army General Counsel; Deputy Staff Judge Advocate, U.S. Army Special Forces Command (Airborne); Special Assistant to the Assistant Secretary of the Army (Manpower & Reserve Affairs); and General Law Attorney, Administrative Law Division.

Stephanie was selected by the National Academy of Public Administration for inclusion in its 2022 Class of Academy Fellows, in recognition of her years of public administration service and expertise.

Jorge Ortiz

Jorge Ortiz is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and the Technology and Communications Regulation Practice Groups.

Jorge advises clients on a broad range of privacy and cybersecurity issues, including topics related to privacy policies and compliance obligations under U.S. state privacy regulations like the California Consumer Privacy Act.