WPF Executive Director to teach CMU course on Digital Identity in Kigali, Rwanda

WPF’s Pam Dixon will be in Kigali, Rwanda, to serve as lead instructor for a Carnegie Mellon University course on digital identity ecosystems, offered as part of CMU’s CEE-TP courses on IT and policy in the developing world. The course is titled Identity Ecosystems and Digital Transformation: The Key Technical, Legal, and Policy Considerations.

WPF comments to NIST regarding its differential privacy guidance

WPF submitted comments to the National Institute of Standards and Technology regarding its Draft Guidelines for Evaluating Differential Privacy Guarantees. The comments approached the NIST Draft Guidance from a policy perspective and urged changes to some of the definitional language in the Draft Guidance. Key areas of the comments include a discussion of the

WPF comments to CFPB regarding notice of proposed rulemaking on Personal Financial Data Rights

WPF submitted comments to the Consumer Financial Protection Bureau on its notice of proposed rulemaking (NPRM) on Personal Financial Data Rights. This NPRM is particularly important because it addresses multiple aspects of financial data in the modern era, touching on privacy, identity, poverty, and digital rights in the financial sector. WPF

New Report: Risky Analysis: Assessing and Improving AI Governance Tools

We are pleased to announce the publication of a new WPF report, “Risky Analysis: Assessing and Improving AI Governance Tools.” The report sets out a definition of AI governance tools, documents why and how these tools are critically important for trustworthy AI, and surveys where these tools are in use around the world. It also documents problems in some AI governance tools themselves and suggests pathways to improve these tools and to create an evaluative environment for measuring their effectiveness.

AI systems should not be deployed without simultaneously evaluating their potential adverse impacts and mitigating their risks, and most of the world agrees on the need to take precautions against the threats they pose. The specific tools and techniques that exist to evaluate and measure AI systems for inclusiveness, fairness, explainability, privacy, safety, and other trustworthiness issues (collectively called AI governance tools in the report) can improve these outcomes. While some AI governance tools provide reassurance to the public and to regulators, the tools too often lack meaningful oversight and quality assessments. Incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems.

The report contains rich background details, use cases, potential solutions to the problems discussed, and a global index of AI governance tools.