WPF comments to OMB regarding its Draft Memorandum on establishing new Federal Agency requirements for uses of AI

In December 2023, WPF submitted detailed comments to the U.S. Office of Management and Budget in response to its Request for Comments on the draft memorandum, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. OMB published the request in the Federal Register on November 3, 2023. This Memorandum is of historic importance: it establishes new agency requirements in the areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public. It is the first memo to address AI with such breadth across the U.S. Federal government.

In our comments, we discuss three key areas.

First, our comments provide a detailed analysis regarding how the Privacy Act can be implemented in a modern AI context. To our knowledge, these suggestions are among the first of their kind.

Second, our comments discuss the need for trustworthy quality assessments of AI systems throughout the AI lifecycle and ecosystem. Most AI systems today are evaluated with some combination of AI governance tools to assess their trustworthiness. This is important, but the tools themselves also need to be evaluated for soundness. We discuss the SHAP use case in particular, and recommend that the government address these and other quality assurance challenges in the Memorandum.

Third, our comments discuss the importance of transparency and documentation for AI systems in use by the Federal government, and the importance of making this documentation available to the public. Documentation can include information about the developer, date of release, results of any validation or quality assurance testing, and instructions on the contexts in which the AI methods or systems should or should not be used. We noted that a privacy and data policy is also important and should be included in the documentation of AI systems and governance tools. End users should be made aware of the evaluations in a prominent manner, and the evaluations should be readily understandable by non-expert users.

We additionally requested the following:

    • Documentation should provide the suggested context for the use of an AI system or governance tool. Context is central to AI systems, shaping their applicable uses, operating environment, and user interactions. A concern is that tools originally designed for one use case or context may be applied in an inappropriate or “off-label” manner due to a lack of guidance for the end user. 
    • Documentation should give end users an idea of how simple or complex it would be to utilize a given AI system or governance tool. 
    • Cost analysis for utilizing the system or method: How much would it cost to use the system or tool and validate the results? 
    • A data policy: A detailed data policy should be posted in conjunction with each AI system and governance tool. For example, if applicable, this information could include the kind of data used to create the system or tool, if data is collected or used in the operation of the tool, and if that information is used for further AI model training, analysis, or other purposes.
    • Complaint and feedback mechanism: AI governance tools should provide a mechanism to collect feedback from users. 
    • Cycle of continuous improvement: Developers of AI governance tools should maintain and update the tools at a reasonable pace. 

Related Documents: