By Michael G. Gruden, Evan D. Wolff, Laura J. Mitchell Baker, Kate Beale, Lorraine M. Campos, Alicia Clausen, Michelle Coleman, Aaron Cummings, Jodi G. Daniel, Kate Growley, Jacob Harrison, Garylene “Gage” Javier, Paul Keller, Matthew Moisan, Lidia Niecko-Najjum, Eric Ransom, Anna Z. Saber, Neda Shaheen, Roma Sharma, William Tucker, Jennie Wang VonCannon, Alexis Ward & Tiffany Wynn

On October 30, 2023, President Biden released an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI).  This landmark EO seeks to advance the safe and secure development and deployment of AI by implementing a society-wide effort across government, the private sector, academia, and civil society to harness “AI for good,” while mitigating its substantial risks.

The EO lays out eight guiding principles and priorities that consider the views of “other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations” to advance and govern the use of AI.  These include:

  1. Ensure safe and secure AI technology;
  2. Promote responsible innovation, competition, and collaboration;
  3. Support American workers;
  4. Advance equity and civil rights;
  5. Protect American consumers, patients, passengers, and students;
  6. Protect privacy and civil liberties;
  7. Manage the federal government’s use of AI; and
  8. Strengthen U.S. leadership abroad, safeguarding ways to develop and deploy AI technology responsibly.

These eight principles are detailed in EO Sections 4 through 11.  A summary of each section is provided below.

Section 4. Ensuring the Safety and Security of AI Technology

Section 4 focuses on eight key areas:  (1) developing guidelines, standards, and best practices for AI safety and security; (2) ensuring safe and reliable AI through industry reporting on AI development and datacenters, and reporting on foreign access to AI infrastructure; (3) managing AI in critical infrastructure and cybersecurity; (4) reducing risks at the intersection of AI and Chemical, Biological, Radiological, and Nuclear (CBRN) threats; (5) reducing the risks posed by synthetic content; (6) soliciting input on dual-use foundation models with widely available model weights; (7) promoting safe release and preventing the malicious use of federal data for AI training; and (8) developing a coordinated executive branch approach to managing AI security risks.  The EO directs the Secretary of Commerce and the Department of Homeland Security (DHS) to perform a number of tasks that will further ensure the safe and secure use of AI technology.  Some of those tasks include: 

  • The Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), will coordinate with other relevant agencies to establish guidelines and best practices that promote consensus industry standards on developing and deploying AI. The Secretary of Energy is directed to develop AI evaluation tools to identify security risks, including nuclear and energy-security threats.
  • The Secretary of Commerce is directed to use the authority of the Defense Production Act (DPA) to require U.S. companies to report on any development of “dual use” AI foundation models, the ownership of resulting model weights, and the results of red-team testing; and to report on the development or possession of any “large scale computing cluster.”
  • The Secretary of Commerce is directed to solicit input through a public consultation regarding the risks related to the removal of safeguards with AI models, the risks of actors fine-tuning the foundation models, and the benefits and risks of AI innovation. The EO addresses the security risks of dual-use foundation models, including regulations to curb their use by foreign malicious actors, with a particular focus on models with widely available weights.
  • DHS is directed to establish an AI Safety and Security Board as an advisory committee of AI experts to advise on security improvements and incident response related to AI usage in critical infrastructure. To ensure the protection of critical infrastructure, heads of Sector Risk Management Agencies are also directed to assess AI risks related to critical failures, physical attacks, and cyber-attacks.

Section 5. Promoting Innovation and Competition

Section 5 focuses on three key areas:  (1) attracting AI talent to the U.S.; (2) promoting innovation; and (3) promoting competition, with key actions required that address health care, energy, and intellectual property.  Below are a few notable ways the EO addresses the promotion of innovation and competition: 

  • The Secretary of the Department of Health and Human Services (HHS) is directed to prioritize grantmaking and cooperative agreement awards, including through the National Institutes of Health, to support responsible AI development and use in the healthcare sector, health data quality, and health equity. The EO also requires the Secretary of Veterans Affairs to host two 3-month nationwide AI Tech Sprint competitions to improve the quality of veterans’ health care and support small businesses’ innovative capacity.
  • The EO seeks to promote AI innovation and combat risks to developers. To do so, the Secretary of Commerce and U.S. Patent and Trademark Office are directed to issue guidance on patentability and copyright issues related to AI, and the Secretary of DHS is directed to investigate incidents of intellectual property theft and pursue enforcement.
  • The Director of the National Science Foundation (NSF) is directed to launch a pilot program implementing the National AI Research Resource (NAIRR), which will provide training resources to support AI research and development. At least one NSF Regional Innovation Engine and four new National AI Research Institutes must be funded.

Section 6. Supporting Workers

Section 6 focuses on two key areas:  (1) advancing the government’s understanding of AI’s implications for workers; and (2) ensuring that AI in the workplace advances employees’ well-being.  As highlighted below, the EO makes several requests for information and research regarding the impact of AI on the workforce, expresses concerns about the effects of deploying AI in the workplace on workers, and enumerates steps to prioritize diversity in the AI-ready workforce.

  • The EO requires several actions to be taken to understand AI’s impact on the workforce, including a report regarding the labor-market effects of AI and a report on the abilities of agencies to support workers displaced by the adoption of AI.
  • Focusing on the advancement of employee well-being as AI is deployed in the workplace, the Secretary of Labor is directed to develop and publish principles and best practices that employers can use to mitigate AI’s potential harms to employees’ well-being as well as maximize AI’s potential benefits.
  • The EO requires the Secretary of Labor to issue guidance establishing that employers who deploy AI to monitor or augment employees’ work must still compensate employees appropriately for hours worked, as defined under the Fair Labor Standards Act.
  • The Administration indicates its interest in fostering diversity within AI-related industries by authorizing the prioritization of resources to support AI-related education and workforce development.

Section 7. Advancing Equity and Civil Rights

Section 7 focuses on three key categories where AI can impact civil rights:  (1) the criminal justice system; (2) government benefits and programs; and (3) issues in the broader economy, including hiring, housing, and consumer finance.  Specifically, the EO directs and requires various executive agencies to report on how AI may be used in discriminatory ways and to issue guidance on how such potentially discriminatory practices can be mitigated, as highlighted below.

  • The Attorney General is directed to evaluate and report on how existing laws address civil rights violations and discrimination stemming from AI usage, and to meet with the heads of federal civil rights offices to discuss comprehensive prevention of AI-related discrimination.
  • The Attorney General is also directed to prepare a report by the end of October 2024 that outlines whether and how AI is currently used within the criminal justice system and identifies best practices for the use of AI in that system.
  • The Secretary of HHS and the Secretary of Agriculture are directed to issue guidance concerning the use of AI to maximize program participation by eligible recipients and outlining when access to, and/or decision-making by, human reviewers (e.g., as related to benefits determinations) is warranted.
  • The Secretary of Labor, the Federal Housing Finance Agency, the Consumer Financial Protection Bureau, the Secretary of Housing and Urban Development, and the Architectural and Transportation Barriers Compliance Board will evaluate, publish guidance on, and/or solicit public comment on various AI-related issues. The overarching objective is to ensure nondiscrimination in the substantive areas that fall within their purview, such as hiring, tenant screening, advertising, and the use of biometric data.

Section 8. Protecting Consumers, Patients, Passengers, and Students

Section 8 focuses on three points:  (1) encouraging independent regulatory agencies to consider using the full range of their authorities to protect American consumers from fraud, discrimination, and threats to privacy, and to emphasize or clarify requirements and expectations for transparency in AI models; (2) ensuring the safe and responsible deployment and development of AI in the healthcare, public health, human services, transportation, and education sectors; and (3) considering how AI will affect communications networks.  The EO requires the following actions from agencies to further the protection of consumers, patients, passengers, and students.

  • Independent regulatory agencies will address risks that may arise from the use of AI, including risks to financial stability. The EO directs these agencies to clarify responsibilities, conduct due diligence on and monitor the third-party AI services they use, and clarify expectations and requirements related to the transparency and explainability of AI models.
  • The Secretary of HHS will establish an HHS AI Task Force to create a strategic plan that includes policies and frameworks for the responsible deployment and use of AI in the health and human services sector, as well as establish an AI Safety Program with voluntary federally listed Patient Safety Organizations. Separately, the Secretary of HHS will establish a quality strategy, including an AI assurance policy, advance compliance with federal nondiscrimination laws by providers, and develop a strategy for regulating the use of AI in drug development.
  • The Secretary of Transportation will direct the Nontraditional and Emerging Transportation Technology (NETT) Council to assess the need for information, technical assistance, and guidance regarding the use of AI in transportation. The Secretary will also support existing and future initiatives that pilot transportation-related applications of AI, evaluate the outcomes of such pilot programs, and establish a new Department of Transportation (DOT) Cross-Modal Executive Working Group to coordinate applicable work.  In addition, the Advanced Research Projects Agency-Infrastructure (ARPA-I) will explore and solicit public opinion on transportation-related opportunities and challenges.
  • The Secretary of Education will develop resources, policies, and guidance regarding AI, addressing safe, responsible, and nondiscriminatory uses of AI in education, as well as AI’s impact on vulnerable and underserved communities. The Secretary is also directed to develop an “AI Toolkit” for education leaders and to implement recommendations from the Department of Education’s AI and the Future of Teaching and Learning report.
  • The Federal Communications Commission (FCC) will consider how AI affects communications networks and consumers, including by examining the potential for AI to improve spectrum management, creating opportunities for sharing spectrum between federal and non-federal spectrum operations, supporting improved network security, resiliency, and interoperability, and conducting efforts to combat unwanted robocalls and robotexts that are facilitated or exacerbated by AI.

Section 9. Protecting Privacy

Section 9 mandates federal government action to mitigate potential threats to privacy posed by AI.  In particular, the EO expresses concern regarding “AI’s facilitation of the collection or use of information about individuals, or the making of inferences about individuals.”  Accordingly, the EO obligates the federal government to take the following actions with regard to protecting privacy:

  • The Office of Management and Budget (OMB) must take steps to identify commercially available information (CAI) procured by federal agencies, defined as “information or data about an individual or group of individuals, including their device or location, that is made available or obtainable and sold, leased, or licensed to the general public or to governmental or non-governmental entities.”
  • OMB will evaluate agency standards for the collection, processing, or use of CAI that contains Personally Identifiable Information (PII), and issue a Request for Information to inform potential revisions to such standards by the end of April 2024.
  • By the end of October 2024, NIST must create guidelines for agencies to evaluate “differential-privacy-guarantee protections,” defined as protections that allow information about a group to be shared while limiting the leakage of personal information.
  • The National Science Foundation (NSF) will promote research, development, and implementation of privacy-enhancing technologies (PETs), including by creating a Research Coordination Network dedicated to advancing privacy research and working with federal agencies to identify opportunities to incorporate PETs (e.g., AI-generated synthetic data) into agency operations.

Section 10.  Advancing Federal Government Use of AI

Section 10 provides direction for federal agencies’ efforts to develop and use AI technology, (1) focusing on the provision of government-wide guidance for agency use, management, and procurement of AI; and (2) outlining a series of priorities and initiatives intended to improve and accelerate federal hiring of AI talent and to provide AI training to federal employees.

Providing Guidance for AI Management

  • The Director of OMB will convene and chair an interagency council to coordinate agencies’ development and use of AI. OMB will issue guidance to govern agency AI usage, advance AI innovation, and manage risks posed by the federal government’s use of AI.  Notably, this does not apply to the use of AI in national security systems.
  • OMB must take steps to ensure that agency procurement of AI systems and services aligns with the guidance. The OMB AI guidance must include the designation of a Chief AI Officer and an AI Governance Board at each federal agency; required minimum risk-management practices for government use of AI; recommendations to agencies regarding a variety of AI governance topics, including external AI testing; reporting requirements; and guidelines governing the federal workforce’s use of generative AI.
  • OMB will also develop a framework to prioritize critical and emerging cloud offerings in the Federal Risk and Authorization Management Program (FedRAMP) authorization process. OMB is instructed to begin by prioritizing “generative AI offerings whose primary purpose is to provide large language model-based chat interfaces, code-generation and debugging tools, and associated application programming interfaces, as well as prompt-based image generators.”
  • The General Services Administration (GSA), in coordination with OMB, the Department of Defense (DOD), DHS, NASA, and other federal agencies, is directed to take steps to ease access to government-wide acquisition solutions for AI services and products, potentially including the creation of an AI acquisition resource guide.

Increasing AI Talent in Government

  • Under the EO, federal agencies are broadly directed to hire and retain AI talent. The EO establishes more specific plans for bolstering the federal AI workforce and making use of AI talent:  the Office of Science and Technology Policy (OSTP) and OMB will establish AI hiring priorities and plans; the Assistant to the President and Deputy Chief of Staff for Policy will convene an AI and Technology Talent Task Force; and the Secretary of Defense will prepare a report on gaps in AI talent for national defense.  AI training and familiarization programs will also be made available by the heads of each agency for employees in AI-relevant roles at all levels.  As we reported here, Congress has already made progress addressing the need for AI talent in the federal government, including two pending bills, S.1564 – AI Leadership Training Act and S.2293 – AI LEAD Act.

Section 11. Strengthening American Leadership Abroad

Section 11 of the EO outlines a set of actions to bolster U.S. leadership in the global effort to harness the benefits of AI while responding to its challenges.  The EO charges the Departments of State, Commerce, Homeland Security, and Energy with primary responsibility for engaging with international allies and partners, in collaboration with other relevant U.S. agencies, notably including the U.S. Agency for International Development.  These efforts will seek to foster greater understanding of U.S. policies related to AI, encourage the responsible global development of the technology, and promote international collaboration.

Section 11 directs the aforementioned U.S. agencies to complete the following actions, which collectively aim to fortify U.S. global leadership in the development of AI and promote the safe, responsible, and interoperable deployment of the transformative technology:

  • Establish a comprehensive international framework to manage both the risks and benefits of AI and encourage allies to make voluntary commitments similar to those made by U.S. companies—likely a nod to recent commitments from private companies, including Amazon, Google, Microsoft, and OpenAI, and to last week’s announcement by the G7 of a voluntary AI code of conduct.
  • Develop a plan for global engagement on AI standards, covering topics such as AI terminology, best data practices, trustworthiness of AI systems, and AI risk management. These efforts should adhere to the principles in the AI Risk Management Framework and the National Standards Strategy for Critical and Emerging Technology.
  • Create an AI in Global Development Playbook, incorporating the principles of the AI Risk Management Framework and applying them to various international contexts. Doing so will likely require close coordination with international partners to ensure appropriate “translations” to local contexts that may differ from those in the United States.
  • Establish a Global AI Research Agenda to guide the objectives and execution of AI-related research beyond the United States. The Agenda will address safety, responsibility, benefits, sustainability, and labor-market implications of AI adoption in different international contexts.
  • Enhance cooperation with international allies and partners in preventing, responding to, and recovering from potential disruptions to critical infrastructure as a result of AI incorporation or malicious AI use. The Secretary of DHS has authority over the adoption of AI safety and security guidelines for critical infrastructure.

Conclusion

Overall, this new EO incorporates a sweeping set of provisions tailored to the evolving AI landscape.  It is clear throughout each section that the government is primarily focused on the safe and secure development and deployment of AI.  The EO seeks to maximize the potential benefits of AI while also addressing rising concerns about its potential harms, building on the Administration’s efforts to strike a delicate balance between encouraging innovation and guarding against risk.

Crowell & Moring LLP and Crowell & Moring International continue to monitor congressional and executive branch efforts to regulate AI.  Our lawyers and public policy professionals are available to advise any clients who want to play an active role in the policy debates taking place right now or who are seeking to navigate AI-related concerns in financial services, intellectual property, privacy, health care, government contracts, and other areas.

Michael G. Gruden, CIPP/G

Michael G. Gruden is a counsel in Crowell & Moring’s Washington, D.C. office, where he is a member of the firm’s Government Contracts and Privacy and Cybersecurity groups. He possesses real-world experience in the areas of federal procurement and data security, having worked as a Contracting Officer at both the U.S. Department of Defense (DoD) and the U.S. Department of Homeland Security (DHS) in the Information Technology, Research & Development, and Security sectors for nearly 15 years. Michael is a Certified Information Privacy Professional with a U.S. government concentration (CIPP/G). He is also a Registered Practitioner under the Cybersecurity Maturity Model Certification (CMMC) framework. Michael serves as vice-chair for the ABA Science & Technology Section’s Homeland Security Committee.

Michael’s legal practice covers a wide range of counseling and litigation engagements at the intersection of government contracts and cybersecurity. His government contracts endeavors include supply chain security counseling, contract disputes with federal entities, suspension and debarment proceedings, mandatory disclosures to the government, prime-subcontractor disputes, and False Claims Act investigations. His privacy and cybersecurity practice includes cybersecurity compliance reviews, risk assessments, data breaches, incident response, and regulatory investigations.

Evan D. Wolff

Evan D. Wolff is a partner in Crowell & Moring’s Washington, D.C. office, where he is co-chair of the firm’s Chambers USA-ranked Privacy & Cybersecurity Group and a member of the Government Contracts Group. Evan has a national reputation for his deep technical background and understanding of complex cybersecurity legal and policy issues. Calling upon his experiences as a scientist, program manager, and lawyer, Evan takes an innovative approach to developing blended legal, technical, and governance mechanisms to prepare companies with rapid and comprehensive responses to rapidly evolving cybersecurity risks and threats. Evan has conducted training and incident simulations, developed response plans, led privileged investigations, and advised on hundreds of data breaches where he works closely with forensic investigators. Evan also counsels businesses on both domestic and international privacy compliance matters, including the EU General Data Protection Regulation (GDPR), and the California Consumer Privacy Act (CCPA). He is also a Registered Practitioner under the Cybersecurity Maturity Model Certification (CMMC) framework.

Laura J. Mitchell Baker

Laura J. Mitchell Baker is a counsel with Crowell & Moring’s Government Contracts Group in the firm’s Washington, D.C. office.

Laura represents government contractors in litigation and administrative matters, including contract disputes with state and federal entities, suspension and debarment proceedings, mandatory disclosures to the government, prime-sub disputes, and False Claims Act investigations. Her practice also includes counseling on federal, state, and local government contracts, government contracts due diligence, and regulatory and compliance matters, as well as conducting internal investigations.

Kate Beale

Kate Beale is a senior policy director in Crowell & Moring’s Government Affairs Group and affiliated with C&M International, the global government relations, public policy, and public affairs affiliate of Crowell & Moring in the Washington, D.C. office. She supports clients in their efforts to shape legislative and regulatory policy. Kate brings 20 years of experience leading foreign policy, global health, humanitarian assistance, and economic policy work at the grassroots level, in Congress, in the Obama administration, and in the private sector.

Lorraine M. Campos

Lorraine M. Campos is a partner and member of the Steering Committee of Crowell & Moring’s Government Contracts Group and focuses her practice on assisting clients with a variety of issues related to government contracts, government ethics, campaign finance, and lobbying laws. Lorraine regularly counsels clients on all aspects of the General Services Administration (GSA) and the U.S. Department of Veterans Affairs (VA) Federal Supply Schedule (FSS) programs. She also routinely advises clients on the terms and conditions of these agreements, including the Price Reduction Clause, small business subcontracting requirements, and country of origin restrictions mandated under U.S. trade agreements, such as the Trade Agreements Act and the Buy American Act. Additionally, Lorraine advises life sciences companies, in particular, pharmaceutical and medical device companies, on federal procurement and federal pricing statutes, including the Veterans Health Care Act of 1992.

Lorraine has been ranked by Chambers USA since 2013, and she was recognized by Profiles in Diversity Journal as one of their “Women Worth Watching” for 2015. Additionally, Lorraine is active in the American Bar Association’s Section of Public Contract Law and serves as co-chair of the Health Care Contracting Committee.

Alicia Clausen

Alicia Clausen uses her vast legal, technological and international knowledge to handle discovery challenges for litigation, investigations, and other disputes. Clients with big data challenges turn to Alicia to rapidly identify critical information, provide counseling on the efficient and strategic use of artificial intelligence, and proactively advise on litigation readiness. She has extensive experience managing teams of lawyers and external technology vendors, leveraging technology to efficiently manage all phases of the discovery process.

Alicia represents clients in large-scale civil litigation, antitrust investigations, internal and government investigations, white collar criminal defense matters, and complex commercial disputes. She regularly employs her vast knowledge in discovery to provide defensible strategies while reducing time and costs for clients. Using her technical know-how, Alicia works with clients from a broad range of information-dependent industries, including health care, financial services, government contracting, education, and technology.

Michelle Coleman

Michelle D. Coleman is a counsel in the Government Contracts Group in Crowell & Moring’s Washington, D.C. office. Michelle advises clients from diverse industries in connection with contract disputes and other government contract matters, including Contract Disputes Act (CDA) claims and requests for equitable adjustments, fiscal law questions, prime-sub disputes, and bid protests.

Aaron Cummings

Aaron serves as the co-chair of the Government Affairs Group and provides counsel and advocacy to clients on legislative and policy matters in a range of areas including antitrust, financial services, health care, energy, intellectual property, artificial intelligence, technology, agriculture, and national security. All too often in Washington if you’re not at the table, you’re on the menu. Aaron helps clients make sure their views are represented in policy discussions in Capitol Hill, the White House, and throughout the federal government.

Aaron has years of high-level experience on Capitol Hill. He’s the former Chief of Staff to U.S. Senator Chuck Grassley (R-IA), the longest serving Republican Senator in history, and current President Pro Tempore-emeritus of the Senate. As Senator Grassley’s Chief of Staff, Aaron worked closely with other members of Republican Senate Leadership and their senior staff to advance the priorities of the Republican Caucus and to set the agenda for the Senate. Aaron also advised Senator Grassley during his tenure as the Chairman of the powerful Judiciary and Finance Committees, the top Republican of the Committee on Budget, and as a senior member of the Senate Committee on Agriculture. During his tenure as a Chief Counsel on the Senate Judiciary committee, Aaron advised Senator Grassley on a host of policy and constitutional issues, including Supreme Court nominations, and was the lead Republican negotiator of the First Step Act—the biggest criminal justice reform effort in a generation and a signature bipartisan accomplishment of the Trump Administration. Aaron also played key roles in the passage of the United States-Mexico-Canada Trade Agreement and the Infrastructure Investment and Jobs Act. Earlier in his career, he worked as an Associate Director of Presidential Speechwriting in the George W. Bush White House.

Drawing on his years of experience in litigation, leading congressional investigations, and high-profile hearings on Capitol Hill, Aaron also counsels clients responding to government investigations.

Jodi G. Daniel

Jodi Daniel is a partner in Crowell & Moring’s Health Care Group and a member of the group’s Steering Committee. She is also a director at C&M International (CMI), an international policy and regulatory affairs consulting firm affiliated with Crowell & Moring. She leads the firm’s Digital Health Practice and provides strategic, legal, and policy advice to all types of health care and technology clients navigating the dynamic regulatory environment related to technology in the health care sector to help them achieve their business goals. Jodi is a contributor to the Uniform Law Commission Telehealth Committee, which drafts and proposes uniform state laws related to telehealth services, including the definition of telehealth, formation of the doctor-patient relationship via telehealth, creation of a registry for out-of-state physicians, insurance coverage and payment parity, and administrative barriers to entity formation.

Kate Growley

Kate M. Growley (CIPP/US, CIPP/G) is a director with Crowell & Moring International and based in Hong Kong. Drawing from over a decade of experience as a practicing attorney in the United States, Kate helps her clients understand, navigate, and shape the policy and regulatory environment for some of the most complex data issues facing multinational companies, including cybersecurity, privacy, and digital transformation. Kate has worked with clients across every major sector, with particular experience in technology, health care, manufacturing, and aerospace and defense. Kate is a Certified Information Privacy Professional (CIPP) in both the U.S. private and government sectors by the International Association of Privacy Professionals (IAPP). She is also a Registered Practitioner with the U.S. Cybersecurity Maturity Model Certification (CMMC) Cyber Accreditation Body (AB).

Jacob Harrison

Jacob Harrison helps his clients navigate both domestic and international legal challenges.

Jake advises U.S. government contractors on internal investigations and state and federal regulatory compliance. His compliance practice focuses on counseling clients operating at the intersection of government contracts and cybersecurity, including for cybersecurity compliance reviews, risk assessments, and data breaches.

In his international practice, Jake represents foreign and domestic clients in Foreign Sovereign Immunities Act and Anti-Terrorism Act litigation. He also has experience advising clients involved in cross-border commercial arbitration proceedings.

During law school, Jake served as an associate editor of the Emory Law Journal and interned at the Supreme Court of Georgia and the Georgia House Democratic Caucus. Before attending law school, Jake worked in politics and state government.

Garylene “Gage” Javier

Garylene “Gage” Javier, CIPP/US, is a Privacy & Cybersecurity associate in the firm’s Washington, D.C. office. Gage’s practice focuses on privacy, data security, and consumer protection, assisting financial services clients in overcoming regulatory challenges and achieving their business goals. Gage assists clients with concerns that arise from state and federal laws that apply to data privacy and information security, including the Gramm-Leach-Bliley Act (GLBA); the California Consumer Privacy Act (CCPA); the California Privacy Rights Act (CPRA); the California Financial Information Privacy Act (CFIPA); the Fair Credit Reporting Act (FCRA) and its Affiliate Marketing Rule; the Virginia Consumer Data Protection Act (CDPA); and the EU General Data Protection Regulation (GDPR).

Lidia Niecko-Najjum

Lidia Niecko-Najjum is a counsel in Crowell & Moring’s Health Care Group and is part of the firm’s Digital Health Practice. With over 15 years of clinical, policy, and legal experience, Lidia provides strategic advice on health care regulatory and policy matters, with particular focus on artificial intelligence, machine learning, digital therapeutics, telehealth, interoperability, and privacy and security. Representative clients include health plans, health systems, academic medical centers, digital health companies, and long-term care facilities.

Lidia’s experience includes serving as a senior research and policy analyst at the Association of American Medical Colleges on the Policy, Strategy & Outreach team. Lidia also practiced as a nurse at Georgetown University Hospital in the general medicine with telemetry unit and the GI endoscopy suite, where she assisted with endoscopic procedures and administered conscious sedation.

Neda Shaheen

Neda M. Shaheen is an associate in the Washington, D.C. office of Crowell & Moring, and is a member of the Privacy and Cybersecurity and International Trade Groups. Neda focuses her practice on representing her clients in litigation and strategic counseling involving national security, technology, cybersecurity, trade and international law. Neda joined the firm after working as a consultant at Crowell & Moring International (CMI), where she supported a diverse range of clients on digital trade matters concerning international trade, national security, privacy, and data governance, as well as advancing impactful public-private partnerships.

Roma Sharma

Roma Sharma is an associate in Crowell & Moring’s Washington, D.C. office and a member of the firm’s Health Care Group. Roma primarily works with health care clients seeking to comply with regulations for state and federal health care programs, health care anti-fraud and abuse laws, and licensing laws.

Roma’s work incorporates her Master of Public Health degree in Health Policy as well as her past experiences as an extern at the Office of the General Counsel at the American Medical Association and as an intern at the Illinois Office of the Attorney General, Health Care Bureau.

William Tucker

Will Tucker is an associate in the firm’s Washington, D.C. office, where he practices in the Health Care and Government Contracts groups. Will represents clients in a range of complex litigation and counseling engagements. He helps clients navigate relationships with federal and state regulators, often regarding the use of emerging technologies and implementation of new business models. His counseling practice covers fraud and abuse compliance, state licensure guidance, responding to federal audits, and state procurement procedures, among other issues. His litigation practice spans both plaintiff-side and defense work, including fraud, data rights, and insurance disputes in federal court, as well as bid protests before the Government Accountability Office.

Jennie Wang VonCannon

Jennie VonCannon is a trial lawyer with a proven track record of success in both the courtroom and the boardroom — with extensive experience in white collar defense and cybersecurity matters. Jennie helps clients in crisis with internal investigations, law enforcement and regulatory inquiries and subpoenas, and cybersecurity and privacy incidents. Her impeccable judgment has been honed over 11 years as a federal prosecutor, culminating in her selection to serve with distinction as the deputy chief of the Cyber and Intellectual Property Crimes Section of the National Security Division of the U.S. Attorney’s Office for the Central District of California.

Alexis Ward

Alexis Ward represents clients in a variety of matters at the intersection of government contracts and cybersecurity utilizing her experience in analytics and data architecture to counsel clients with a practical, real-world lens. As a member of Crowell & Moring’s Privacy and Cybersecurity and Government Contracts groups, Alexis has assisted clients in matters including False Claims Act investigations; developing corporate policies, procedures and governance; and in diverse matters involving cybersecurity and data privacy compliance, risk assessment and mitigation, and incident response.

During law school, Alexis founded USC Gould’s Privacy and Cybersecurity Law Society and was on the board of OUTLaw. Alexis also worked as a teaching assistant for the graduate programs’ Information Privacy Law course. Her paper The Oldest Trick in the Facebook: Would the General Data Protection Regulation Have Stopped the Cambridge Analytica Scandal? was published by the Trinity College Law Review.