
On August 21, 2024, the National Institute of Standards and Technology (NIST) released the Second Public Draft of Digital Identity Guidelines (hereinafter, “Draft Guidelines”) for final review. The Draft Guidelines introduce potentially notable requirements for government contractors using artificial intelligence (AI) systems. Among the most significant draft requirements are those related to the disclosure and transparency of AI and machine learning (ML). By doing so, NIST underscores its commitment to fostering secure, trustworthy, and transparent AI, while also addressing broader implications of bias and accountability. For government contractors, the Draft Guidelines are not just a set of recommendations but a blueprint for future AI standards and regulations.

In identifying digital identity risk management concerns, NIST focuses on three areas: identity proofing, authentication, and federation. Each of these “can result in the wrong subject successfully accessing an online service, system, or data.” See Draft Guidelines, Section 3. The Draft Guidelines note that AI and ML are used in identity systems for multiple purposes (from biometrics to chatbots) and that potential applications are extensive, but that AI and ML also introduce distinct risks, such as disparate outcomes, biased outputs, and the exacerbation of existing inequities. See Draft Guidelines, Section 3.8.

As a result, Section 3.8 of the Draft Guidelines has been updated to require that, in any identity system:

  1. All uses of AI and ML must be documented and communicated to organizations relying on these systems; credential service providers (CSPs), identity providers (IdPs), and verifiers using AI and ML must disclose this use to all responsible persons making access decisions based on these systems.
  2. Organizations using AI and ML must provide information to entities using their technology, including methods and techniques for training models, descriptions of training data sets, frequency of model updates, and testing results.
  3. Organizations using AI and ML systems must implement the NIST AI Risk Management Framework to evaluate risks and must consult NIST Special Publication (SP) 1270 on managing bias in AI.

In other words, NIST’s Draft Guidelines call for detailed disclosures that explain how AI systems operate, the data they rely on, and the algorithms that drive their decisions. Clear disclosures will help government clients understand how AI systems work, which can improve decision-making in areas where AI decisions carry significant consequences, such as healthcare, law enforcement, and public policy. At the same time, attention to accountability and ethical considerations helps foster trust in AI solutions.

As AI continues to revolutionize various industries, its integration into government projects brings opportunities and challenges. NIST’s role in developing and promoting standards that ensure security, privacy, transparency, and reliability with new technology will be crucial in shaping how AI systems are designed, implemented, and disclosed. Government contractors who embrace the Draft Guidelines may be better positioned to lead in this evolving landscape, shaping new requirements and delivering AI solutions aligned to the highest standards. 

NIST is seeking public comments on the Draft Guidelines through October 7, 2024. Stakeholders should engage with NIST through public comments now, as well as begin to plan for adherence to these guidelines. Taking steps now to weigh in on the Draft Guidelines, and to prepare for implementation should they go into effect, will be essential for anticipating the final guidelines and ensuring compliance.

Reevaluating contract provisions and developing AI governance programs in line with the Draft Guidelines are crucial preparatory steps. Government contractors need to be positioned to comply seamlessly with requirements already placed on government agencies through President Biden’s Executive Order on AI and OMB guidance, which will necessarily be passed down to them.

By navigating the legal landscape, Crowell & Moring LLP can help clients understand the unique legal implications of NIST, assess legal risks associated with AI disclosures, and identify areas where the client may be vulnerable to potential litigation. Crowell can also advise on where the Draft Guidelines intersect with existing statutes and regulations, such as the Federal Acquisition Regulation (FAR) or False Claims Act (FCA), conduct trainings, and help develop new strategies to mitigate risk from a comprehensive legal perspective.

As NIST begins to collect public comments on the Draft Guidelines, Crowell will continue to monitor legal and policy developments regulating the use of artificial intelligence.  We are prepared to help clients submit comments and engage with regulators, as well as consider their potential next steps.

Neda Shaheen

Neda M. Shaheen is an associate in the Washington, D.C. office of Crowell & Moring, and is a member of the Privacy and Cybersecurity and International Trade Groups. Neda focuses her practice on representing her clients in litigation and strategic counseling involving national security, technology, cybersecurity, trade and international law. Neda joined the firm after working as a consultant at Crowell & Moring International (CMI), where she supported a diverse range of clients on digital trade matters concerning international trade, national security, privacy, and data governance, as well as advancing impactful public-private partnerships.

Michael G. Gruden, CIPP/G

Michael G. Gruden is a counsel in Crowell & Moring’s Washington, D.C. office, where he is a member of the firm’s Government Contracts and Privacy and Cybersecurity groups. He possesses real-world experience in the areas of federal procurement and data security, having worked as a Contracting Officer at both the U.S. Department of Defense (DoD) and the U.S. Department of Homeland Security (DHS) in the Information Technology, Research & Development, and Security sectors for nearly 15 years. Michael is a Certified Information Privacy Professional with a U.S. government concentration (CIPP/G). He is also a Registered Practitioner under the Cybersecurity Maturity Model Certification (CMMC) framework. Michael serves as vice-chair for the ABA Science & Technology Section’s Homeland Security Committee.

Michael’s legal practice covers a wide range of counseling and litigation engagements at the intersection of government contracts and cybersecurity. His government contracts endeavors include supply chain security counseling, contract disputes with federal entities, suspension and debarment proceedings, mandatory disclosures to the government, prime-subcontractor disputes, and False Claims Act investigations. His privacy and cybersecurity practice includes cybersecurity compliance reviews, risk assessments, data breaches, incident response, and regulatory investigations.