On October 24, 2019, Crowell & Moring, in conjunction with the American Bar Association, will host the sixth annual Legal Careers in Cybersecurity, Privacy & Information Law networking and discussion event. This event offers law students and new lawyers the opportunity to learn how to meld law, policy, and technology, and better navigate the unprecedented legal challenges emerging from the cyber, homeland, and privacy frontiers. Attendees will hear from an exceptional cast of experienced and diverse lawyers who have built careers in cybersecurity, privacy, and information law at federal agencies, on the Hill, within the private sector, and at law firms. If interested in attending, please RSVP to Victoria Walker at email@example.com.
On August 9, 2019, the National Institute of Standards and Technology (NIST) released “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools” (the Plan) in response to Executive Order 13859 (EO), as reported on here. In accordance with the EO, the Plan outlines the following priorities for federal engagement: (1) ensure technical standards minimize vulnerability to attacks from malicious actors; (2) reflect federal priorities for innovation, public trust, and public confidence in systems that use artificial intelligence (AI) technologies; and (3) develop international standards to promote and protect those priorities. With emphasis from both the public and private sectors, NIST calls for flexible AI standards in regulatory and procurement actions, as well as the prioritization of multidisciplinary research and expansive public-private partnerships. Based on the Plan, companies are likely to see an increased number of opportunities to participate and assist the Federal Government in the standards development process, while simultaneously being put on notice that standards in this burgeoning industry may be forthcoming.
The Importance of U.S. Involvement in Standards and Artificial Intelligence
The United States’ global leadership in AI is dependent upon the Federal Government’s active participation in AI standards development. To maintain this leadership, NIST calls on the active participation of both the private sector and academia. Currently, AI standards are either cross-sector (e.g., applications and industries) or sector-specific (e.g., healthcare and transportation). But NIST emphasizes that the Federal Government should foster collaboration between the two camps, noting that several existing cross-sector or sector-specific technology standards originally developed for other technologies, e.g., cybersecurity and privacy, may be applicable to AI. On this point, NIST urges consideration of these existing standards before the creation of new ones.
In that vein, NIST requests a consistent set of “rules of the road” for AI. NIST believes these will enable market competition, preclude barriers to trade, and allow innovation to flourish. They also will ensure that AI technologies and systems meet critical objectives for functionality, interoperability, and trustworthiness. Yet, in pursuit of these objectives, the standards must include societal and ethical considerations in IT, as well as aspects of trustworthiness (e.g., explainability and security).
NIST also recognizes the importance of establishing aspirational principles and goals to address legal, ethical, and societal issues in developing AI standards. Here, NIST notes that a “first step” toward standardization of these principles was taken when the Organization for Economic Co-operation and Development Council’s (OECD) Recommendation on AI (Recommendation), as reported here, emphasized the need for the same considerations in the promotion of AI. Thus, NIST calls on the Federal Government to ensure the same cooperation and coordination across federal agencies and private sector stakeholders and to continue its engagement in international dialogues on AI standards. However, while the United States endorsed the Recommendation and adopted these principles, and there is “broad agreement” that these considerations must factor into AI standards, NIST cautions that the current path forward to factor in these issues is unclear.
NIST Urges the Federal Government to Engage in the Creation of AI Standard Priorities
To maintain a leadership position in AI, the Federal Government must create a “purpose-driven role in AI standards development.” To that end, NIST addresses priorities for Federal involvement, the different levels of U.S. engagement, and practical first steps. For involvement, the Plan calls for prioritizing AI standards efforts that are “inclusive and accessible, open and transparent, consensus-based, globally relevant and non-discriminatory.”
NIST also provides recommendations for the type of AI standards that deserve priority consideration by the Federal Government. For example, NIST recommends focusing on standards that are innovation-oriented, are applicable across multiple sectors, are effective in monitoring and evaluating AI system performance, and are sensitive to ethical considerations, to name a few.
NIST urges Federal Government involvement in the standard development process to not only protect U.S. dominance in AI, but also to ensure that U.S.-based companies are not excluded or disadvantaged as standards are developed. NIST suggests four different levels of involvement that range from passive to more interactive. The Plan explains that the Government can monitor, participate, influence, and lead. NIST notes that engaging in any of the four levels of involvement requires qualified and competent Federal Government participants, including Federal employees and contractors, to assist with the standards development process.
NIST also provides suggestions on the first practical steps agencies can take to begin engaging in the AI standards process. These include identifying the AI technologies that can be used to further the agency’s mission; conducting a gap analysis to determine if there are already standards in place that can be used or if standards need to be developed; coordinating with agencies with similar needs; and identifying, training, and enabling staff to participate in standards development.
NIST Provides Recommendations for Advancing U.S. Leadership in AI
NIST also calls for the Federal Government to advance its role in AI leadership to help “speed the pace of reliable, robust, and trustworthy AI technology development.” NIST provides the following four recommendations to the Federal Government in the advancement of AI standards:
- Agencies should share information, leadership, and coordination regarding standards development. Growing a cadre of Federal staff with participation and expertise in AI standards and standards development would be particularly useful.
- Agencies should participate in focused research on how to incorporate “trustworthiness” into AI standards and tools.
- Agencies should use public-private partnerships to develop AI standards and tools.
- For U.S. economic and national security purposes, the Federal Government should engage strategic international partners in the development of AI standards.
Significance of NIST’s Plan
The Plan from NIST is significant in that it provides meaningful guidance, recommendations, and concrete steps for the Federal Government to use in developing a set of AI standards. But, as NIST emphasizes throughout the Plan, the Federal Government must engage and involve private industry to formulate these AI standards. For example, NIST urges agencies to study and understand the approaches technology companies are taking to steer their own AI development efforts. Likewise, NIST calls for increased investment in research that focuses on understanding AI trustworthiness and incorporating those metrics into future standards, where private industries will likely have a key role to play. The expansion of public-private partnerships is integral to help inform federal AI standards. Therefore, companies should expect to see an increased number of opportunities to participate and assist the Federal Government in the standards development process. At the same time, companies active in or considering entering the AI industry should consider engaging in – or at least monitoring – the standard-setting process that the Plan signals is about to get underway.
On August 15, 2019, the Federal Aviation Administration (“FAA”) posted a Request for Information (“RFI”) seeking responses from third party entities to administer a new aeronautical knowledge and safety test for recreational drone operators. The RFI marks the beginning of FAA’s implementation of Section 349 of the FAA Reauthorization Act of 2018, which requires FAA to develop an electronic test to be administered by FAA, community-based organizations, or FAA designees. Anyone who wishes to fly a drone recreationally will need to pass the test and retain proof of passage.
According to the RFI, FAA will develop testing and training content, which will cover the safety and operational rules for recreational drones. Third party designees will be responsible for administering the test on an electronic, web-based platform. Designees may also be responsible for maintaining test data on behalf of the FAA, including personally identifiable information (“PII”) for adults and minor children, and for issuing certificates showing proof of passage. The FAA is open to responses from all industry stakeholders, but interested companies should ensure that they can demonstrate an ability to store data in accordance with a number of Federal privacy regulations, including the Federal Records Act, the Privacy Act of 1974, and the E-Government Act. The best-positioned respondents will likely have experience with Shareable Content Object Reference Model (“SCORM”) compliant Learning Management Systems (“LMS”), developing mobile platforms, and administering standardized tests.
After evaluating the responses in accordance with the RFI, FAA may initially invite certain companies to participate in further discussions, some of which may ultimately be selected as designees. Each designee will enter into a Memorandum of Understanding (“MOU”)—a type of Other Transaction Agreement—with the FAA. These types of vehicles are generally not subject to protest. Although the FAA clearly states that it will not provide funding under the MOUs, there do not appear to be restrictions on a designee incurring costs associated with administration of the test, storage of test-taker data, and production of certificates. Responses to the RFI are due on September 12, 2019; interested parties may view the RFI here.
In a prior alert, we highlighted the unusual remedy ordered in Caddell Construction Co. v. U.S., in which the Court of Federal Claims nullified the award of a construction contract and ordered the agency to reopen discussions with only one firm. The court explained that the unusual remedy was appropriate because misleading discussions had impacted only that firm, and “a broad reopening of discussions is unnecessary to cure the procurement error and would cause more harm than good.”
Recently, for the first time, GAO also departed from the rule that discussions must include all offerors in the competitive range, and approved one-sided discussions as corrective action in response to a protest. In Sevenson Envtl. Servs., Inc., B-416166.5, Apr. 1, 2019, at 1 (unpublished decision) and Environmental Chem. Corp., B-416166.3 et al., June 12, 2019, each protester challenged the Army’s decision not to award it a contract for environmental remediation support services. In response to Sevenson’s protest, the Army discovered that it had not advised that firm during discussions of a weakness concerning Sevenson’s key personnel, and elected to take corrective action consisting of “limited discussions with Sevenson to provide it an opportunity to address the weakness.”
Environmental Chemical Corporation (ECC) protested the corrective action on the basis that the Army was not permitted to engage in discussions with only Sevenson, and was required to reopen discussions with all offerors in the competitive range, citing Rockwell Electronic Commerce Corp., B‑286201.6, Aug. 30, 2001 (if discussions are reopened with one offeror after receipt of final revised proposals, they must be reopened with all offerors whose proposals are in the competitive range, even where the discussions are corrective action on improper awards). GAO disagreed. Mirroring the reasoning in Caddell, GAO concluded that under the unique circumstances, only Sevenson suffered from the procurement error, and so one-sided discussions were appropriate to place Sevenson in the same competitive position that the other offerors, including ECC, were in following their receipt of meaningful discussions.
Accordingly, under Environmental Chem. Corp., an Agency’s discretion to limit the scope of proposal revisions during corrective action extends to the discretion to engage in discussions with only one firm, so long as that action is reasonable under the circumstances and remedies the procurement impropriety. In light of this decision, potential protesters take note: if your firm was prejudiced by inadequate discussions, one-sided discussions may be a desirable result. However, in a multi-protester scenario, it is not guaranteed that corrective action on another protester’s discussions challenge will result in new discussions with all firms in the competitive range.
The Defense Contract Audit Agency (“DCAA”) recently made public its Fiscal Year (FY) 2018 Report to Congress (“Report”), which, among other things, provides an update on its incurred cost audits and highlights DCAA’s industry outreach activities. Although the Report touts DCAA’s elimination of the incurred cost audit backlog, DCAA acknowledges that a backlog of 152 incurred cost years remains, the majority of which is due to reasons purportedly beyond DCAA’s control, and that it is not yet in compliance with the NDAA 2018 requirements to complete incurred cost audits within 12 months of receiving a contractor’s adequate proposal. To “eliminate” the backlog, DCAA “closed 8,482 incurred cost years with a total dollar value of $392.2 billion” using a variety of methods, including reports and memos – the latter of which account for more than half of the years closed. Other reported methods for closing out audits included that the contractor went out of business or did not have any flexibly-priced contracts.
Additionally, according to DCAA, it:
- Sustained audit exceptions for incurred costs 24.1% of the time, which is down from 28.6% in FY 2017 (and is calculated only “based on contracting officer negotiation decisions,” i.e., it does not include successful contractor appeals or settlements following a Contracting Officer’s Final Decision);
- Calculated the time to complete an incurred cost audit at 125 days, which is down from 143 days in FY 2017 (although this calculation is “measured from the date of the entrance conference to report issuance” and, thus, does not account for when the contractor actually submitted its incurred cost proposal to DCAA);
- Will continue to “dedicate the audit resources necessary to meeting the NDAA requirements in FY 2019”; and
- “With the backlog behind [it], will be returning to a more balanced mix of audits across [its] whole portfolio, including business systems, Truth in Negotiations, Cost Accounting standards, pre-award surveys, claims, and terminations.”
The Report also summarizes DCAA’s outreach to industry, including its engagement with the Section 809 Panel. In this respect, the Report references the Professional Practice Guide (PPG), which was included in Part III of the Panel’s Report, as previously discussed here. According to DCAA, the PPG “will provide consistency in the way DCAA and Independent Professional Accounting Firms [(“IPA”)] consider risk and materiality.” Indeed, the Report indicates that DCAA plans “to use the PPG to meet Congressional requirements to establish, codify, and implement these new materiality thresholds” and that the PPG also “will be important to IPAs when they perform select incurred cost audits for contractors previously audited by DCAA.”
Last month, in National Government Services, Inc. (“NGS”) v. United States—a pre-award bid protest handled by Crowell & Moring—the Federal Circuit ruled that “workload caps” imposed by the Centers for Medicare & Medicaid Services (“CMS”) in its administration of the Medicare Program violated the Competition in Contracting Act’s (“CICA”) “full-and-open competition” requirement. In so doing, the Federal Circuit reversed a Court of Federal Claims (“COFC”) decision that upheld the caps (a prior GAO decision had done the same), clarified the meaning of “full and open,” and clarified the scope of agency authority pursuant to the Federal Acquisition Regulation (“FAR”) to address concerns about competitive balance in the marketplace.
In 2003, as part of the Medicare Modernization Act, Congress established the Medicare Administrative Contractor (“MAC”) program, through which CMS contracts with third-parties to administer Medicare claims and benefits. Under the MAC program, the United States is divided into twelve regions representing different percentages of the total MAC workload depending on the region’s size; CMS awards individual contracts for each.
In 2010, pursuant to its authority to administer the MAC program, CMS implemented the workload caps at issue. Pursuant to the caps, an individual MAC contractor could not hold more than 26% of the national Medicare workload. CMS identified two overarching concerns animating the caps: (1) business continuity issues for the Medicare program should a single entity holding too much of the workload suffer a “disaster event” such that it was unable to continue performance; and (2) the need to maintain a dynamic, competitive marketplace of available MAC contractors. Although CMS placed no limitation on the number of contracts for which a contractor could bid, the caps precluded a contractor from winning an award that would result in it exceeding the 26% threshold, even where CMS deemed its proposal to represent the best value in a particular procurement.
In 2017, CMS issued an RFP for the MAC contract in Jurisdiction 8, the award of which would put NGS, a current MAC contractor, over the 26% threshold. In November 2017, NGS filed a pre-award protest at GAO challenging the caps as incorporated into the Jurisdiction 8 procurement, arguing that they violated CICA’s and the FAR’s full-and-open competition requirements, and that CMS lacked authority to implement them. GAO denied the protest. So too did the COFC after NGS filed a follow-on protest in February 2018. NGS ultimately found success when the Federal Circuit reversed the COFC decision, and accepted NGS’ arguments nearly in their entirety.
The Federal Circuit’s Decision
In reversing the COFC’s decision, the Federal Circuit considered two questions. First, did CMS’ workload caps violate CICA and the FAR’s full-and-open competition requirements? Second, if yes, was CMS nonetheless authorized to implement them?
The Federal Circuit answered the first question affirmatively, rejecting the Government’s argument that because the caps did not prevent NGS from submitting a proposal, the MAC procurements were full and open (the COFC accepted this argument). The Court explained that simply being able to submit a bid was insufficient where “a responsible offeror that would exceed the workload caps is not given the same opportunity to win an award as other offerors that submitted awardable proposals.” The Court also rejected the Government’s efforts to characterize the caps as evaluation criteria, explaining that they were “not requirements tailored to meet CMS’ needs for a particular procurement” or “based on some capability or experience requirement.” Instead, the caps were CMS’ “attempt to divvy up the MAC contracts in a way that ensures business continuity and helps maintain a competitive MAC market.”
In answering the second question—whether CMS was authorized to implement the caps—the Court noted that CMS’ concerns about “business continuity” and maintaining a competitive marketplace were neither improper nor lacking a rational basis. But absent express authority allowing CMS to limit competition—which the Court concluded the Medicare statute does not grant—CMS was required to utilize specific mechanisms included in CICA and FAR Part 6 to address such concerns. Those mechanisms do not include the broad, program-encompassing caps CMS attempted to impose.
Instead, CICA (at 41 U.S.C. § 3303) and the FAR (at Subpart 6.2) allow agencies, on a procurement-specific basis, to exclude a particular offeror from a procurement in order to promote business continuity and market competition. But CICA and the FAR require that any such exclusion be accompanied by a written Determination and Findings signed by the head of an agency or its designee detailing the justification for the exclusion. While this is, on its face, an onerous requirement, the Court rejected the Government’s argument that CMS need not utilize FAR Part 6 because doing so would be too difficult:
As the Government’s brief tellingly notes, during the time period when CMS used a case-by-case approach to analyze business continuity and competition concerns . . . “CMS had been unable to identify factors that would ‘tip the scales’ for an offeror to lose an award and found it difficult to justify a decision to deny an award based upon business continuity and competition concerns under those circumstances.” But regardless of how difficult it may or may not be to justify excluding a source from competition, this justification is what the FAR requires.
Because it held that CMS had failed to implement the caps properly, the Court did not consider the rationality of the caps themselves, explaining that it would “leave those issues to be addressed in a case in which CMS has followed the proper procedures to address its overarching market concerns.”
While agencies are afforded substantial discretion to administer their procurements in the manner they best see fit, the Federal Circuit’s decision in NGS is a reminder that such discretion is not unfettered. Absent express authorities stating otherwise, bidders are entitled to full and open competition in federal procurements, subject only to the constraints specifically delineated in CICA and the FAR.
At the Court of Federal Claims, NGS amended its complaint to include a pre-award challenge to CMS’ inclusion of the caps in the procurement of new MAC services in Jurisdiction H.
Following the announcement of the White House’s Executive Order on Maintaining American Leadership in Artificial Intelligence (EO) and the Department of Defense’s (DOD) Artificial Intelligence Strategy (AI Strategy) in February, as reported on here, the United States recently endorsed the Organization for Economic Co-operation and Development Council’s (OECD) Recommendation on Artificial Intelligence (Recommendation) – the world’s first intergovernmental policy guidelines for Artificial Intelligence (AI). In the Recommendation, the OECD sets forth the “Principles on Artificial Intelligence” to promote innovative and trustworthy AI in harmonization with human rights and democratic values. More than 40 countries have adopted these principles – including all 36 OECD member countries and 6 non-member countries – signaling global cooperation, coordination, and commitment to human ethical and social considerations in promotion of AI. Companies are likely to see more efforts and progress from the White House and around the federal government in support of sustainable, responsible AI.
OECD’s Recommendation – Principles on Artificial Intelligence
With the support of six non-member countries, the OECD hopes that the Recommendation and its five value-based principles will be embraced by any democratic nation, or a nation that shares democratic values, to facilitate an open dialogue on AI.
The Recommendation identifies five values-based principles for countries to implement in their promotion of reliable AI:
- Inclusive growth, sustainable development, and well-being to benefit people and the planet.
- Human-centered values and fairness that respects the rule of law, human rights, democratic values and diversity, including appropriate safeguards to ensure a fair and just society.
- Transparency and responsible disclosure in AI systems to ensure people can understand and challenge AI-based outcomes.
- Robustness, security, and safety in AI systems throughout their life cycles.
- Accountability among organizations and individuals developing, deploying, and operating AI systems.
With these guiding principles, the OECD asks countries to consider the following five recommendations:
- Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
- Foster accessible AI systems with digital infrastructure, technologies, and mechanisms that allow for collaboration with data and knowledge.
- Create an environment to foster the deployment of trustworthy AI systems.
- Empower people with AI skills and support workers in jobs that will employ AI.
- Cooperate across borders and public sectors to ensure responsible control of AI.
Recognizing that countries need assistance in carrying out these principles, the OECD will launch the OECD AI Policy Observatory (Observatory) later this year. The Observatory will be an online live database containing AI resources, from policies and strategies to general information on AI. In addition, countries and other stakeholders will be able to share and update their own AI policies, which will provide an interactive comparison of their respective AI strategies and initiatives. Likewise, the Observatory will provide a platform for the international community and other stakeholders to discuss and debate AI issues.
Significance of the United States’ Support of OECD’s Recommendation
The OECD’s Recommendation is a historic step for the United States and the other member states, and is significant for the United States as it joins the international community in its pledge for responsible AI. This should come as no surprise, as the Recommendation echoes the White House’s and DOD’s recent announcements on AI. Prior to these announcements, the United States did not have a public position with regard to the ethical and social considerations of AI. But the United States’ public support of the Recommendation – along with the EO and AI Strategy – demonstrates the United States’ unequivocal commitment to values-based AI. As a result, companies are likely to see more opportunities to partner with the federal government in developing AI. The United States Department of Commerce’s National Institute of Standards and Technology (NIST) has already taken steps in this direction, and issued a Request for Information for help to create technical standards and tools in consideration of AI technologies. Companies should expect other federal agencies to follow suit in the search for and promotion of responsible AI.
The Digital Revolution is here. Contractors are reinventing their products, customer experiences, and business models — and transforming the public sector marketplace as a result. Meanwhile, government agencies are increasingly using emerging technologies and developing plans to promote and incentivize their use. Join us on May 8, 2018, at 10:30 AM Eastern, as Crowell & Moring attorneys Gail Zirkelbach, John Gibson, and Mana Lombardo lead a discussion highlighting regulatory and contractual compliance considerations that are pivotal to successful planning and implementation of transformative technology in government contracting. Specific topics include:
- 3D Printing
- Artificial Intelligence
OOPS begins tomorrow! For more information and to register, please click here.
Crowell & Moring’s 35th annual Ounce of Prevention Seminar (OOPS) is just around the corner, taking place on May 7 and 8 at the Renaissance Hotel in Washington. At this year’s seminar, “The Challenging Climb to Reach New Heights,” the Government Contracts Group will provide updates and insight in a variety of areas, including ethics and compliance, bid protests, False Claims Act enforcement, cybersecurity, international issues affecting government contractors, and more.
Check back here for updates from our panelists, who will preview sessions on international considerations, #MeToo, and emerging technologies.
For more information and to register for OOPS, please click here.
On March 21, 2019, the Department of Defense (DoD) Defense Innovation Board (“DIB”) released a report, Software is Never Done: Refactoring the Acquisition Code for Competitive Advantage (“the Report”), summarizing DIB’s Software Acquisition and Practices (SWAP) study, which was mandated by the National Defense Authorization Act for Fiscal Year (FY) 2018. The two-year study involved conversations with Congress, the DoD, federally-funded research and development centers, contractors, and the public focused on ways in which DoD can take advantage of the strength of the U.S. commercial software ecosystem. In addition, the Board solicited feedback on concept papers and draft versions of the Report leading up to its publication.
DIB describes the ideal approach to software development as one of “iterative development that deploys secure applications and software into operations in a continuing (and continuous) fashion.” The Report is critical of current DoD software projects where the DoD “spends years on developing requirements, taking and selecting bids from contractors, and then executing programs that must meet the listed requirements before they are ‘done.’” DIB concluded that, as a result, software is obsolete before it reaches the field, is ill-matched to the needs of users, and risks positioning the DoD behind adversaries like China, which leverages private industry to develop national security software.
The Report makes 26 specific recommendations that flow from three fundamental themes: (i) “speed and cycle time” are the critical metrics for managing the DoD’s procurement, deployment, and updating of software; (ii) the DoD must do more to educate, retain, and support the best internal software developers; and (iii) software development can no longer be managed as if it were hardware.
Among other things, the Report urges the DoD to immediately:
- Require suppliers to provide access to “source code, software frameworks, and development toolchains, with appropriate intellectual property (IP) rights, for all DoD-specific code,” enabling the DoD to perform full security testing and rebuilding of binaries from the source. The Report notes that contractors should have licensing agreements to protect any IP developed with their own resources.
- Shift away from the use of “rigid requirements for software programs to a list of desired features” with minimum standards for operation, security, and interoperability.
- Make security a “first-order consideration” for all software intensive systems and acquisition programs, and prioritize “regular and automated penetration testing” to expose vulnerabilities and breach DoD systems before adversaries do.
DIB proposes that the DoD secure high-level support for the Report’s vision during FY 2019, and begin initial deployment of its recommendations in FY 2020.