
Michael K. Atkinson is the former chief watchdog of the U.S. intelligence agencies and served in senior U.S. Department of Justice roles spanning two decades. He has led dozens of high-profile investigations and offers clients a rare combination of experience in criminal defense and corporate compliance. Michael's practice focuses on white collar defense, national security, internal and congressional investigations, and parallel civil and regulatory enforcement proceedings. His work also includes high-stakes compliance advice on strategic issues such as cross-border investigations and the use of artificial intelligence/machine learning (AI/ML) programs. He is a partner in the firm's Washington, D.C. office, working with the White Collar & Regulatory Enforcement and Investigations groups. Michael is also a co-leader of the firm’s National Security Practice and Whistleblower Working Group.

Current political priorities in Congress will continue to push many industries under the microscope of Congressional investigations, including universities, tech companies, entities that receive federal funds, and energy-sector companies. When the chambers of Congress and the executive branch are controlled by the same party, Congressional oversight of the executive branch is less intense, and public- and private-sector, state, and local entities are more likely to find themselves in the crosshairs. If a chamber of Congress changes hands in the midterm elections, the focus of oversight may shift to reflect the policy priorities of the moment and include more executive branch oversight. Even then, the executive branch often contends with requests for information that implicate its dealings with third parties; for example, agency oversight may trigger requests for privileged material belonging to a government contractor or grantee. The topics and industries of highest interest may play musical chairs, but entities across sectors would do well to adopt a few best practices to mitigate their risk should they end up in the hot seat, either directly or through a government partner.

Continue Reading Protecting Information in Congressional Investigations: The Attorney-Client Privilege and Work-Product Privilege

On Monday, September 23, 2024, the Department of Justice (DOJ) released an update to its Evaluation of Corporate Compliance Programs (ECCP) guidance. The ECCP guidance was last revised in March 2023, a revision that brought a number of significant changes, including a focus on compensation and incentive structures (e.g., clawbacks) and third-party messaging applications. The 2024 update, while not as significant in scope as its predecessor, nonetheless highlights the DOJ’s focus on new and emerging technologies, such as artificial intelligence (AI), as part of its evolving assessment of what makes a corporate compliance program truly effective and of how prosecutors should evaluate risk assessments and other management tools at the time of a corporate resolution.

Continue Reading Putting the “AI” in Compliance—DOJ Updates its Corporate Compliance Program Guidance to Address Emerging AI Risks and Leveraging Data

As the Department of Energy’s (“DOE”) Loan Programs Office (“LPO”) continues to finance clean energy manufacturing and deployment in the United States, the recent announcement by the DOE’s Office of Inspector General (“DOE OIG”) that it intends to scrutinize LPO’s due diligence process increases the risk to program applicants. According to a recent notice issued on SAM.gov, the DOE OIG intends to issue a sole-source contract for legal support “assessing the policies and procedures” for the due diligence of loan applications and to evaluate specific LPO loans and guarantees. That evaluation will assess the loans’ compliance with, consistency in application of, and the effectiveness of LPO policies and procedures, as well as related Governmentwide regulations, policies, procedures, and directives, in order to identify specific points of weakness in due diligence practices and to recommend improvements to mitigate risks.

Continue Reading Enhanced Review by the Department of Energy’s Office of Inspector General into the Loan Programs Office Poses Increased Risks to Loan Program Applicants

On July 21, 2023, the Biden administration announced that seven companies leading the development of artificial intelligence (AI) — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — have made voluntary commitments, which the companies agreed to undertake immediately, to help move towards safe, secure, and transparent development of AI technology. The goal of the voluntary commitments, or the “AI Agreement” as it is informally dubbed, is to establish a set of standards that promote the principles of safety, security, and trust deemed fundamental to the future of AI.

Continue Reading Private Sector Helps Lead the Way: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI

On March 17, 2022, the National Institute of Standards and Technology (“NIST”) published an initial draft of its Artificial Intelligence (AI) Risk Management Framework (“AI RMF”) to promote the development and use of responsible AI technologies and systems. When final, the three-part AI RMF is intended for voluntary use and to improve the ability to