New Report: Risky Analysis: Assessing and Improving AI Governance Tools

We are pleased to announce the publication of a new WPF report, “Risky Analysis: Assessing and Improving AI Governance Tools.” The report sets out a definition of AI governance tools, documents why and how these tools are critically important for trustworthy AI, and maps where they are in use around the world. It also documents problems in some AI governance tools themselves and suggests pathways to improve the tools and to create an evaluative environment for measuring their effectiveness.

AI systems should not be deployed without simultaneously evaluating their potential adverse impacts and mitigating their risks, and most of the world agrees on the need to take precautions against the threats these systems pose. The specific tools and techniques that exist to evaluate and measure AI systems for inclusiveness, fairness, explainability, privacy, safety, and other trustworthiness issues (collectively called AI governance tools in the report) can help address these concerns. But while some AI governance tools provide reassurance to the public and to regulators, the tools too often lack meaningful oversight and quality assessments. Incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems.

The report contains rich background details, use cases, potential solutions to the problems it discusses, and a global index of AI Governance Tools.