By Adelicia R. Cliffe, Kate M. Growley, CIPP/G, CIPP/US, Michelle Coleman, and Laura J. Mitchell Baker

On July 1, 2020, the Department of Defense (DoD) Office of Inspector General (OIG) published its audit report assessing the DoD Joint Artificial Intelligence Center's (JAIC) progress in developing an Artificial Intelligence (AI) governance framework and standards, as well as DoD components' implementation of security controls to protect AI data and technologies from internal and external cyber threats.

The DoD OIG concluded that the JAIC must do more to ensure consistency with DoD's adoption of ethical principles for AI (as we previously reported on here), including the following: (1) include a standard definition of AI and consider updating that definition regularly, at least annually; (2) develop a security classification guide to ensure the consistent protection of AI data; (3) develop a process to accurately account for AI projects; (4) develop capabilities for sharing data; (5) include standards for legal and privacy considerations; and (6) develop a formal strategy for collaboration between the Military Services and DoD Components on similar AI projects.

In addition, the DoD OIG found that four DoD components (the Army, Marine Corps, Navy, and Air Force) and two contractors failed to implement security controls to protect data used in AI projects and technologies from threats. The DoD OIG therefore directed these DoD components and contractors to: (1) configure their systems to enforce the use of strong passwords, generate system activity reports, and lock after periods of inactivity; (2) review networks and systems for malicious or unusual activity; (3) scan networks for viruses and vulnerabilities; and (4) implement physical security controls to protect AI data.

Following this report, contractors should expect to see a biannual review of all DoD components' AI project portfolios, as well as guidance on legal and privacy standard operating procedures.