
On July 23, 2025, the White House released Winning the Race: America’s AI Action Plan (“the Plan”), the Trump Administration’s most significant policy statement on artificial intelligence to date.

The Plan is structured around three pillars—accelerating AI innovation, building U.S. AI infrastructure, and establishing U.S. global leadership in AI diplomacy and security—and it identifies more than 90 federal policy actions the Administration plans to take over the next year. The Plan, which the White House’s Office of Science and Technology Policy (OSTP) wrote after receiving over 10,000 comments from the public, does not have the force of law and acts instead as an exhortation for the government to support AI development through the provision of energy, the adoption of AI by government, and the relaxation of certain regulations.

Also on Wednesday, President Trump signed three executive orders (EOs) on AI: (1) accelerating federal permitting of data center infrastructure (“Infrastructure EO”); (2) promoting the export of the American AI “technology stack” (“Export EO”); and (3) preventing “woke AI” in the federal government (“Ideology EO”).

Taken together, the Plan and EOs express a permissive approach to AI regulation that emphasizes AI innovation, adoption, and competitiveness. How much of the Plan and EOs will translate into tangible action to reduce regulatory burdens on companies and spur “dominance” remains to be seen.

Key highlights of particular concern to clients include:

  • The Plan suggests the White House solicit the public for information on laws or regulations that hinder AI development. Likewise, the Plan recommends the Federal Trade Commission (FTC) review prior investigations or judgments that burden AI innovation. Affected companies should respond to this request and, where applicable, seek review by the FTC.
  • The Plan suggests pressuring states to remove “burdensome AI regulation” through the possible denial of federal funding. The Plan does not define “burdensome AI regulation,” but it explicitly asks the National Institute of Standards and Technology (NIST) to remove from its standards any mention of “diversity, equity, and inclusion,” “misinformation,” and climate change. The prospect of the curtailment of federal funds, however vaguely stipulated, creates a challenging and uncertain compliance environment for companies, given that states are currently implementing the most far-reaching AI laws and regulations.
  • While the Plan and the Export EO support the export of the American AI technology stack, the Plan pushes the Department of Commerce to (i) implement stricter export controls on semiconductor technologies critical to AI; (ii) enhance its efforts to identify illegal and unauthorized exports or diversions of U.S.-origin AI compute capacity to foreign adversaries, and enforce those violations; and (iii) use a series of carrots and sticks with U.S. allies and partners to prompt them to impose similar controls. Impacted companies will want to reassess their compliance programs to address these shifts in enforcement.
  • The Plan suggests NIST and other agencies work with industry to incorporate AI-specific scenarios into cybersecurity incident response plans. Private entities should do the same.
  • The Plan and the Infrastructure EO aim to support the development of energy sources and data centers needed to speed AI development. The Plan recommends streamlining environmental reviews for data centers and related energy infrastructure, and the EO allows qualifying data center projects to receive financial assistance and expedited permitting reviews. It also directs several federal agencies to identify sites for the deployment of AI infrastructure on federal lands.
  • The Plan suggests the Department of Defense (DoD) take several steps to protect AI technologies and adopt them. Defense contractors should stay attuned to evolving DoD requirements.
  • The Plan and the Ideology EO seek to bar the government’s procurement of AI models that the Administration deems “woke”—that is, engaging in speech to which the White House objects. The Office of Management and Budget (OMB) will issue guidelines to implement this direction. Government contractors should track these developments, particularly given the government’s ill-defined prohibitions.
  • The Plan has several policy recommendations to support scientific research and innovation, with a particular emphasis on start-ups and early-stage companies. These include the creation of a national repository of data and encouraging the release of “high-quality datasets,” sandboxes to test new AI tools, efforts to ensure access to costly large-scale computing power through a “healthy financial market,” and other economic incentives. These recommendations will need to navigate various extant state, federal, and international laws and regulations around data privacy, cybersecurity, and AI (such as around bias), complicating efforts to implement these recommendations.
  • The Plan does not weigh in on contentious intellectual property issues and does not assert that the training of AI on copyrighted material constitutes “fair use.” Accordingly, regulatory agencies, courts, and perhaps Congress will have to resolve these questions.
  • While differing in its rhetoric and emphasis, the Plan is similar in substantial respects to efforts of the Biden Administration to address existential AI risks, promote secure-by-design cybersecurity, develop additional energy resources, lead in global AI standard setting, and address the dangers of deepfakes, particularly in evidentiary proceedings.
  • The White House remains only one player among a crowded field of policymakers focused on AI. The Plan and accompanying EOs land amid a swirl of state and international AI lawmaking and scores of lawsuits that are impacting how both technology developers and users act in an increasingly complex environment. As enthusiasm for some forms of regulation recedes in the federal executive, companies should expect increased efforts by regulatory advocates in these other venues.

An annotated overview of the Plan and the EOs follows:

Pillar 1: Accelerate AI Innovation

Regulatory Review. The Plan seeks to reduce AI regulation by asserting that federal funding should not go to states with “burdensome” AI regulations. While averring its support for states’ rights, the Plan invites the Federal Communications Commission (FCC) to investigate whether state AI laws interfere with the FCC’s statutory duties. It also recommends OMB withhold discretionary funding from states whose AI regulatory regimes hinder the effectiveness of the funding or award.

But the Plan calls neither for a moratorium on state lawmaking on AI, a measure recently considered in the U.S. Congress, nor for a federal law preempting state laws. The Plan itself is vague on what constitutes “burdensome” and “onerous” regulations. The government will have to establish clearer guidelines to enforce such broad avowals, and it could well face legal challenges if federal agencies attempt to withhold Congressionally appropriated funds.

In a speech at an AI Summit on the day the White House released the Plan, President Trump went further than his own Plan, calling for a single federal “rule and regulation” that would “supplant” state action. Congress has tried unsuccessfully in recent years to pass a comprehensive AI law, including through a bipartisan task force, and there is no indication that near-term efforts will fare better—likely leaving federal agencies as the mechanism for setting and enforcing AI-related federal rules. States, international bodies, and courts will also enforce their own AI-related rules and remedies.

Of note for companies, the Plan calls for the OSTP to issue a Request For Information (RFI) inviting commenters to identify current Federal regulations that they believe hinder AI innovation for federal agencies to review and revise. If such an RFI issues, companies should plan to submit comments, particularly on rules that do not explicitly involve AI but that nonetheless restrain innovation.

The Plan also recommends the FTC review and modify prior investigations and dispositions that burden AI innovation. Previously investigated entities may wish to advocate with the FTC for a fresh review of their cases.

“Woke” AI. In line with the Ideology EO (discussed below), the Plan directs NIST to revise its AI Risk Management Framework to eliminate references to “misinformation,” “Diversity, Equity, and Inclusion,” and “climate change.”

Broadening Access to Computing Power. The Plan calls for the creation of a supportive environment for the development of open-source and open-weight models, which anyone can download and modify. It also calls on various federal agencies to increase the research community’s access to private-sector computing.

Enabling AI Adoption. The Plan calls for a coordinated federal effort to establish a “try-first” culture for AI across American industry. It suggests establishing regulatory sandboxes at various agencies to allow for the testing of AI tools and for the establishment of domain-specific standards for how to measure AI increases in productivity. Likewise, the Plan calls for the government to adopt AI by ensuring all federal employees, to the extent practicable, have access to frontier language models and coordinating government mechanisms for interagency collaboration on adoption and procurement. AI developers should stay attuned to opportunities to partner with the federal government to meet these procurement needs.

Supporting Workers, Manufacturing, and Science. The Plan calls for expanding AI literacy and skills across the economy, establishing an AI Workforce Research Hub, and studying workforce disruption, in moves strikingly similar to those directed to the Department of Labor by the Biden Administration in October 2023. The Plan also calls for investing in AI, robotics, and drone manufacturing, cloud-based infrastructure, and research. It recommends a multi-agency effort to advance AI interpretability—the ability to understand how large language models (LLMs) work—and test AI systems for transparency and vulnerabilities.

Building Datasets. Broadly speaking, the value of an AI system is determined by its software, computing power, and the data on which it trains. The Plan calls on various federal agencies to make recommendations for minimum data quality standards for various AI training modalities, including biological data, and lower barriers of access to federal data, among other steps.

Building AI Evaluations. Evaluating AI performance has become a perennial challenge. The Plan calls on regulators to explore the use of evaluations in their application of existing law to AI systems and publish guidelines for federal agencies to conduct their own evaluations.

Protecting AI. In a move sought by AI frontier labs, the Plan directs the Department of Defense and other agencies to collaborate with American AI developers to help protect them against malicious cyber actors, insider threats, and other risks.

Combating Courtroom Deepfakes. Commentators have long raised concerns that litigants may introduce AI-manipulated evidence into court. The Plan suggests federal agencies adopt standards for considering such evidence similar to a federal rule of evidence currently under consideration by the Advisory Committee on Evidence Rules (FRE 901(c)), which in some circumstances requires proponents of evidence to show that its probative value outweighs its prejudicial effects. Parties before agencies should prepare to answer such challenges to their evidence and raise similar questions against opponents.

Pillar 2: Build American AI Infrastructure

Streamlining Permitting. To provide the energy AI requires, the Plan recommends the establishment of new “Categorical Exclusions” under the National Environmental Policy Act (NEPA) that would exempt such developments from full environmental assessments. It also recommends streamlining and expediting permitting processes under the Clean Water Act.

Utilizing Federal Lands. Additionally, the Administration recommends making available federal lands for the construction of new data centers. The Plan also recommends stabilizing the electric grid and encouraging compliance with nationwide standards for resource adequacy.

Semiconductor Manufacturing. The Plan calls for a revitalized United States-focused microchip industry and explicitly advocates the use of offices established by the Biden-era CHIPS Act, whose repeal President Trump had previously advocated.

Cybersecurity, Secure-by-Design, and Incident Response. The Plan recommends various measures to bolster cybersecurity. These include ensuring all AI used in homeland security is “secure-by-design”—a Biden Administration initiative to build cybersecurity principles into software systems from the start. The Plan also recommends the creation of an AI Information Sharing and Analysis Center (ISAC) by the U.S. Department of Homeland Security to promote the sharing of AI-security threat information with critical-infrastructure entities. (Anthropic recommended such a forum in its comments to OSTP.) Furthermore, the Plan suggests issuing guidance to private sector entities on remediating and responding to AI-specific threats and coordinating the sharing of known AI vulnerabilities among federal agencies and the private sector. And it encourages federal agencies to collaborate with industry to incorporate AI-specific scenarios into cybersecurity incident response plans. We can expect such best practices to proliferate throughout the private sector.

Training a Skilled Workforce. The Plan calls for investment in the workforce that will build, operate, and maintain AI infrastructure. The Plan recommends a national initiative to identify the necessary occupations and invest in industry training.

Department of Defense Recommendations. The Plan recommends the DoD develop cybersecurity safeguards to account for the increased use of AI systems by the U.S. government, while also proactively expanding the use of AI within the Department itself. It recommends DoD monitor the use of AI by the U.S. and foreign governments to ensure the United States stays abreast of foreign competition in AI adoption and innovation. It also recommends protecting AI technologies through enforcement of U.S. export controls for AI technologies. The Plan requests that DoD drive adoption of AI within the Department itself, including incorporating AI into the military colleges’ curricula and automating Department workstreams.

Pillar 3: Lead in International AI Diplomacy and Security

Exporting AI to U.S. Allies. The Plan and the Export EO (see below) attempt to balance the promotion of the export of U.S. technology with regulations and enforcement to promote U.S. national security and foreign policy. The Plan recommends that the Department of Commerce implement stricter export controls on semiconductor technologies critical to AI, including by identifying potential mechanisms to geolocate chip exports, imposing expanded controls on semiconductor manufacturing subsystems, and using the Foreign Direct Product Rule (FDPR) and secondary tariffs to incentivize international cooperation on U.S. controls. The Export EO pushes executive branch agencies to encourage exports of American AI full-stack technology, as long as those exports meet what the Plan calls “U.S.-approved security requirements and standards,” which are under development.

International Standards Based on American Ideals. The Plan calls for the United States to spearhead diplomacy on international AI standards to prevent Chinese influence over these governance bodies and to ensure that any international regulations or guidelines “promote innovation, reflect American values, and counter authoritarian influence.”

Address Risks. The Plan recommends the government “[e]valuate frontier AI systems for national security risks,” including mass destruction and cyber risks, in partnership with frontier AI developers. The Plan also calls for the evaluation and assessment of potential security vulnerabilities and “malign foreign influence” from adversaries’ use of AI systems in critical infrastructure. Furthermore, the Plan recognizes the dangers AI poses for the synthesis of harmful biological pathogens and suggests a multitiered approach working with allies to use new tools, infrastructure, and mechanisms to screen for malicious actors. These initiatives are redolent of the efforts by the Biden Administration to work with technology firms to address existential risks.

Copyright Left Out

The Plan does not address the contested issue of the scope of copyright protections in the training of AI models. This runs contrary to the comments of some significant technology labs, which had suggested the Plan state that training AI systems on copyrighted materials constitutes “fair use.” The silence is also notable given the Administration’s decision to remove the head of the Copyright Office shortly after the office issued a draft report that questioned whether AI training on copyrighted material qualifies as fair use.

During the AI Summit, President Trump suggested that current copyright laws hinder the development of AI, and that AI models needed to be able to learn from copyrighted materials without compensating copyright holders. However, the Plan leaves this issue unaddressed, suggesting it will be the courts and perhaps Congress that resolves these questions.

Executive Orders

  • “Accelerating Federal Permitting of Data Center Infrastructure”

This Infrastructure EO is intended to facilitate the “rapid and efficient buildout” of data center infrastructure by making certain “Qualifying Projects” eligible for federal financing and expedited permitting reviews. Specifically, it directs the Secretary of Commerce to identify avenues to provide financial support—potentially including loans and loan guarantees, grants, tax incentives, and offtake agreements—for data centers, data center components, and related energy infrastructure that meet certain capital expenditure, load capacity, or other requirements.

In addition, the EO requires the Council on Environmental Quality (“CEQ”) to identify existing categorical exclusions from the National Environmental Policy Act that can be applied to expedite the permitting of Qualifying Projects and directs CEQ to collaborate with relevant agencies to identify new categorical exclusions.

It also attempts to streamline permitting reviews by directing the Environmental Protection Agency to develop or modify regulations under the Clean Air Act; the Clean Water Act; the Comprehensive Environmental Response, Compensation, and Liability Act; and the Toxic Substances Control Act for Qualifying Projects, and by requiring the Secretary of the Army to determine whether to issue a nationwide Section 404 Clean Water Act permit or Section 10 Rivers and Harbors Appropriation Act permit for certain activities by Qualifying Projects. The EO also directs a programmatic consultation approach under the Endangered Species Act, rather than project-by-project reviews.

The EO works to incentivize beneficial reuse of legacy contaminated properties by directing EPA to identify Brownfields and Superfund sites for Qualifying Projects and develop guidance to expedite redevelopment.

Finally, the EO authorizes certain federal agencies to make federal lands available for use by Qualifying Projects.

  • “Promoting the Export of the American AI Technology Stack”

This Export EO establishes a coordinated national effort to support the American AI industry by removing barriers to American companies exporting their AI systems globally. Within 90 days of the Order, the Secretary of Commerce, the Secretary of State, and the Director of OSTP must establish and implement the American AI Exports Program, under which developers of a full-stack AI technology package may submit proposals for inclusion in the program.

Companies that are selected as part of the AI Exports Program will be designated as “priority.” Priority AI systems will have the benefit of technical, financial, and diplomatic resources, Federal government-backed partnerships, and market access to promote export.

While the EO does not say so explicitly, the Plan notes that after the consortia are selected, State and Commerce will facilitate only deals that meet U.S.-approved security requirements and standards. The approved proposals are likely to resemble those approved during President Trump’s recent trip to the Middle East, where, for example, as a condition of these deals, the United Arab Emirates agreed to “further align their national security regulations with the United States, including strong protections to prevent the diversion of U.S.-origin technology.”

  • “Preventing Woke AI in the Federal Government”

This Ideology EO addresses what the Administration refers to as an “existential threat to reliable AI”—that is, AI models that incorporate “ideological biases or social agendas,” including “diversity, equity, and inclusion.” The EO mandates that any AI model procured for use by the federal government be “truth-seeking” and ideologically neutral. The OMB will issue guidance requiring government contractors to adhere to these so-called “Unbiased AI Principles” in the LLMs they furnish to the government.

This EO faces several challenges. First, its terms are undefined, and the meaning of truth and ideological neutrality in the AI context is not self-evident. Second, to the extent the government seeks to abridge speech by private entities, it may face First Amendment challenges. Finally, it is often technically difficult to ascertain why LLMs produce particular outputs for given inputs; preventing outputs that express disfavored ideological opinions may exceed a model builder’s abilities.

Regardless, government contractors will need to review OMB guidance carefully and adhere to Administration requirements as well as possible to avoid enforcement actions or loss of funding.

***

Crowell & Moring LLP and Crowell Global Advisors will continue to monitor U.S. Government efforts to adopt, promote, and regulate AI. Our lawyers and policy professionals are available to advise clients as well as those taking an active role in AI policy development, across government contracts, international trade, privacy and cybersecurity, technology, healthcare, and life sciences, among other areas.

Matthew Ferraro

Matthew F. Ferraro is a partner in Crowell & Moring’s Privacy and Cybersecurity Group, where he helps clients address complex regulatory matters at the intersection of advanced technology, national security, and crisis management. He advises leading organizations on high-impact matters related to artificial intelligence (AI) and other emerging technologies, cyberattacks, domestic and international privacy compliance, internal investigations, foreign direct investment reviews, and high-stakes crises.

Before joining the firm, Matthew served as the Senior Counselor for Cybersecurity and Emerging Technology to the Secretary of Homeland Security. As a principal advisor to the Secretary and a member of the U.S. Department of Homeland Security’s leadership team, he served at the heart of U.S. government policymaking around AI and cybersecurity. He assisted in the development and drafting of key AI, cyber, and technology policies and regulations; advised on the deployment of AI to fulfill the department’s missions; and counseled on cyber-incident responses and investigations. Matthew also helped establish and served as the Executive Director of the Artificial Intelligence Safety and Security Board, a flagship public-private advisory committee focused on AI’s use in critical infrastructure and chaired by the Secretary and composed of industry, nonprofit, and government luminaries.

Anna Z. Saber

Anna Z. Saber is a counsel in the San Francisco office of Crowell & Moring, where she is a member of the Litigation Group.

Anna has experience representing clients in state and federal courts in technology-related litigation involving trade secret and trademark misappropriation, copyright infringement and DMCA, cybersecurity and digital crimes, and complex commercial matters. Anna also has experience representing clients in employment disputes.

Using the insights gained through her litigation practice, Anna also has a counseling practice in which she helps clients establish sound licensing agreements to meet their business goals. She provides counsel on matters related to technology transactions, IP protection, and drafting and negotiating commercial and intellectual property agreements for companies ranging from start-ups to market leaders.

Anna’s pro bono practice includes voting rights and wrongful conviction matters.

While at SCU Law, she competed in trial team and moot court, including a second-place finish at the Clara Barton International Humanitarian Law Moot Court Competition. Anna was also a member of the Santa Clara Law Review board, where she served as senior managing editor. A Bay Area native, she enjoys giving back to her local community. Most recently, she has been a co-coach for a high school mock trial team competing in the Santa Clara County High School Mock Trial Program.

Michael G. Gruden, CIPP/G

Michael G. Gruden is a counsel in Crowell & Moring’s Washington, D.C. office, where he is a member of the firm’s Government Contracts and Privacy and Cybersecurity groups. He possesses real-world experience in the areas of federal procurement and data security, having worked as a Contracting Officer at both the U.S. Department of Defense (DoD) and the U.S. Department of Homeland Security (DHS) in the Information Technology, Research & Development, and Security sectors for nearly 15 years. Michael is a Certified Information Privacy Professional with a U.S. government concentration (CIPP/G). He is also a Registered Practitioner under the Cybersecurity Maturity Model Certification (CMMC) framework. Michael serves as vice-chair for the ABA Science & Technology Section’s Homeland Security Committee.

Michael’s legal practice covers a wide range of counseling and litigation engagements at the intersection of government contracts and cybersecurity. His government contracts endeavors include supply chain security counseling, contract disputes with federal entities, suspension and debarment proceedings, mandatory disclosures to the government, prime-subcontractor disputes, and False Claims Act investigations. His privacy and cybersecurity practice includes cybersecurity compliance reviews, risk assessments, data breaches, incident response, and regulatory investigations.

Alexis Ward

Alexis Ward represents clients in a variety of matters at the intersection of government contracts and cybersecurity utilizing her experience in analytics and data architecture to counsel clients with a practical, real-world lens. As a member of Crowell & Moring’s Privacy and Cybersecurity and Government Contracts groups, Alexis has assisted clients in matters including False Claims Act investigations; developing corporate policies, procedures and governance; and in diverse matters involving cybersecurity and data privacy compliance, risk assessment and mitigation, and incident response.

During law school, Alexis founded USC Gould’s Privacy and Cybersecurity Law Society and was on the board of OUTLaw. Alexis also worked as a teaching assistant for the graduate programs’ Information Privacy Law course. Her paper The Oldest Trick in the Facebook: Would the General Data Protection Regulation Have Stopped the Cambridge Analytica Scandal? was published by the Trinity College Law Review.

Neda Shaheen

Neda M. Shaheen is an associate in the Washington, D.C. office of Crowell & Moring, and is a member of the Privacy and Cybersecurity and International Trade Groups. Neda focuses her practice on representing her clients in litigation and strategic counseling involving national security, technology, cybersecurity, trade and international law. Neda joined the firm after working as a consultant at Crowell & Moring International (CMI), where she supported a diverse range of clients on digital trade matters concerning international trade, national security, privacy, and data governance, as well as advancing impactful public-private partnerships.

Kirsten Nathanson

When companies face environmental regulatory enforcement—whether from the U.S. EPA, the DOJ, other government agencies, citizen activist groups, or other private parties—they turn to Kirsten Nathanson as a strategic and pragmatic legal advisor to navigate the myriad risks and drive toward efficient and positive outcomes. In addition to handling litigation encompassing citizen suit defenses; National Environmental Policy Act challenges; CERCLA remediation cost recoveries and defenses; Federal Insecticide, Fungicide, and Rodenticide Act actions; and EPA enforcement, Kirsten’s practice includes a focus on consensus-based dispute resolution and multilateral negotiations and mediation.

Kirsten also serves as a strategic counselor in emerging risk areas, including PFAS and emerging contaminants, particularly when they intersect with all forms of regulatory enforcement and dispute resolution. She supports clients across a range of industries, including energy, agricultural technology, and transportation.

Kirsten is co-chair of the firm’s Environmental Group and has served on the firm’s Management Board and Executive Committee.

Tyler A. O'Connor

Tyler O’Connor is an energy litigator and public policy leader in Crowell & Moring’s Washington, D.C. office, where he represents clients in the courts, in arbitration forums, and before federal agencies.

Prior to joining Crowell, Tyler served as the Energy Counsel to the House Energy and Commerce Committee, where he played a leading role in drafting the Inflation Reduction Act (IRA) and Infrastructure Investment and Jobs Act (IIJA). He was the lead House lawyer responsible for the Federal Power Act and Natural Gas Act and worked extensively on transmission, energy cybersecurity, and energy supply chain issues. His work brought him into frequent contact with senior administration officials, including at the Department of Energy (DOE) and the Federal Energy Regulatory Commission (FERC), as well as congressional leadership. As the staffer responsible for emerging technologies, including hydrogen and offshore wind, as well as the Loan Programs Office, Tyler has been at the center of energy policy discussions.

Tim Laderach

Tim Laderach is an associate in Crowell & Moring’s Antitrust and International Trade practices. He creatively leverages his extensive military and healthcare background to help clients meet their business and legal objectives. His practice spans transactions, investigations, compliance, and regulatory advice.

Tim also maintains an active pro bono practice representing clients seeking political asylum.

While in law school, Tim was a law clerk on Capitol Hill with the majority staff of the Senate Judiciary Committee. There, he authored a memo on competition issues with artificial intelligence (AI)-based pricing algorithms. He was also engaged in a wide variety of the committee’s portfolio, including agency oversight, hearing preparation, and legislative research for national security-related bills pending before the committee. Additionally, Tim has experience as an intern at the Federal Trade Commission (FTC) Healthcare Division. He was involved with a variety of conduct litigation and conducted initial research on Pharmacy Benefit Managers ahead of the current 6-B study.

Tim served on active duty in the U.S. Navy prior to law school. He completed tours as outpatient pharmacy division officer at Naval Hospital Jacksonville, Florida, and later as pharmacy department head at the Office of Attending Physician, U.S. Capitol. He continues his service now as an officer in the U.S. Navy Reserve.

Jeremy Iloulian

Recognized as a “Rising Star” in International Trade by Super Lawyers, Jeremy Iloulian advises clients globally on complex cross-border regulatory, compliance, investigative, and transactional matters and policy developments that touch U.S. national security, international trade, and foreign investment, including those relating to U.S. export controls (EAR and ITAR), economic sanctions, anti-boycott laws, the Committee on Foreign Investment in the United States (CFIUS), and various national security controls on fundamental research and supply chains.

Jeremy has extensive experience counseling U.S. and non-U.S. clients, including public and private companies, private equity sponsors, and nonprofits spanning a multitude of industries, including aerospace and defense, energy, entertainment, fashion, food and beverage, health care, infrastructure, technology, telecommunications, and transportation. He provides strategic guidance on managing risks for dealings in high-risk jurisdictions such as China, Russia, Venezuela, and the Middle East, among other countries and regions. He regularly advocates on behalf of such clients before the U.S. Bureau of Industry and Security (BIS), Directorate of Defense Trade Controls (DDTC), Office of Foreign Assets Control (OFAC), Bureau of Economic Analysis (BEA), Census Bureau, Department of Energy, and Nuclear Regulatory Commission (NRC).

Additionally, Jeremy has previously counseled on, presented on, and published research related to international environmental law, specifically the United Nations Convention on the Law of the Sea (UNCLOS) and Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).

Prior to and during law school, Jeremy interned at multiple government agencies, including the United Nations, the U.S. State Department, and the Iraqi Embassy in Washington, D.C.

Scott Wise

Scott Wise is a partner in Crowell’s Denver office and a member of the firm’s International Trade Group. His practice focuses on export controls, economic sanctions, and outbound investment issues across industries, with an emphasis on emerging technologies and the technology industry.

Working with established and start-up tech companies, Scott helps clients to develop unique compliance programs that are responsive to the full range of regulations governing the exports of goods and services. He also trains and counsels clients on compliance with relevant export control regulations such as the International Traffic in Arms Regulations (ITAR) and the Export Administration Regulations (EAR). In addition to technology companies, Scott advises companies in the aerospace and aviation, automotive, chemical, defense, electronics, energy, engineering, financial and insurance, manufacturing, professional services, security, and transportation industries, among others.

Prior to joining Crowell, Scott was the Assistant General Counsel for Global Trade at a multinational technology company, where he led a consolidated team focusing on export controls, economic sanctions, and outbound investment. He was the lead export controls and economic sanctions attorney for key business groups ranging from emerging technologies, such as artificial intelligence and quantum computing, to aerospace and defense contracts and commercial software and gaming. In that role, Scott developed the company’s compliance approach to new regulations governing the export of emerging technologies, including AI, integrated circuits and chips, and quantum computing, which involved coordination between senior government officials and business leaders. Scott also has prior law firm experience in the international trade practice area.

Jacob Canter

Jacob Canter is an attorney in the San Francisco office of Crowell & Moring. He is a member of the Litigation and Privacy & Cybersecurity groups. Jacob’s areas of emphasis include technology-related litigation involving competition, cybersecurity and digital crimes, copyright, trademark, and patent, as well as general complex commercial matters.

Jacob graduated from the University of California, Berkeley School of Law in 2018, where he launched Berkeley’s election law outreach program and pro bono project. He joined the firm after a year of practice at an international law firm in Washington, D.C., and a year clerking in the Southern District of New York for the Hon. Lorna G. Schofield. During the clerkship, Jacob was exposed to and provided support on a variety of complex substantive and procedural legal topics, including trade secrets, insurance/reinsurance, contracts, class actions, privacy, intellectual property, and arbitrability.

Linda Malek

Linda Malek is a partner in Crowell’s Health Care and Privacy & Cybersecurity Groups, where she advises a broad array of health care and life sciences clients on compliance with federal, state, and international law governing clinical research, data privacy, cybersecurity, and fraud and abuse. Her clients include national hospital systems and academic medical centers, genetic and biotechnology companies, pharmaceutical companies, medical device companies, financial institutions involved in health care services, research foundations, and international scientific organizations.

In the healthcare context, Linda is particularly focused on regulatory compliance issues related to clinical research and clinical trials. She creates and implements comprehensive policies governing the conduct of research involving human subjects, and advises clients on human subject research compliance issues. Linda also counsels on legal issues related to conducting secondary research on existing data repositories and tissue banks, including on data privacy and informed consent issues related to the ability to conduct future research. She has experience advising clients on a wide variety of research areas, including biologics, pharmacogenomics, translational research, secondary research, tissue banking, and data repositories. Linda also advises clients in general health care matters related to fraud and abuse, including issues under the Stark laws and federal and state anti-kickback statutes. Her work includes structuring complex transactions in compliance with such laws, assisting in the creation of internal compliance programs, and advising on issues related to the False Claims Act.