
WPF announces participation in the National Institute of Standards and Technology (NIST) AI Safety Institute Consortium (AISIC)

The World Privacy Forum is pleased to announce that it has joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST) in February 2024, the U.S. AI Safety Institute Consortium (AISIC) brings together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

WPF to speak before the State House of Mongolia for its National Consultation on e-Health, and before the Human Rights Commission of Mongolia

5 April 2024, Paris, France — World Privacy Forum Executive Director Pam Dixon has been invited to speak at the State House of Mongolia for the Government of Mongolia’s National Consultation on e-Health. She will speak twice at this event: first on Artificial Intelligence in Healthcare, and second on Big Data in e-Health. Later in the week she will present on AI Governance and Privacy before the Ministry of Digital Development and Communications, and on AI Governance Tools before the National Human Rights Commission of Mongolia. All speeches will take place in Ulaanbaatar, Mongolia.

Initial Analysis of the new U.S. governance for Federal Agency use of Artificial Intelligence, including biometrics

Today the Biden-Harris Administration published a Memorandum that sets forth how U.S. Federal Agencies and Executive Departments will govern their use of Artificial Intelligence. The OMB memorandum provides an extensive and in some ways surprising articulation of emergent guardrails around modern AI. There are many points of interest to discuss, but the most striking is the thread of biometrics systems guidance running throughout the memorandum and continuing in the White House Fact Sheet and associated materials. Additionally, the articulation of minimum practices for safety-impacting and rights-impacting AI will likely become an important touchpoint in regulatory discussions in the U.S. and elsewhere. The guidance represents a significant policy shift for the U.S. Federal government, particularly around biometrics.

WPF comments to OMB regarding its Draft Memorandum on establishing new Federal Agency requirements for uses of AI

In December 2023, WPF submitted detailed comments to the U.S. Office of Management and Budget regarding its Request for Comments on the Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Memorandum. OMB published the request in the Federal Register on November 3, 2023. This Memorandum is of historic importance: it establishes new agency requirements in the areas of AI governance, innovation, and risk management, and would direct agencies to adopt specific minimum risk management practices for uses of AI that impact the rights and safety of the public.

New Report: Risky Analysis: Assessing and Improving AI Governance Tools

We are pleased to announce the publication of a new WPF report, “Risky Analysis: Assessing and Improving AI Governance Tools.” This report sets out a definition of AI governance tools, documents why and how these tools are critically important for trustworthy AI, and surveys where these tools are in use around the world. The report also documents problems in some AI governance tools themselves, and suggests pathways to improve these tools and to create an evaluative environment for measuring their effectiveness. AI systems should not be deployed without simultaneously evaluating the potential adverse impacts of such systems and mitigating their risks, and most of the world agrees on the need to take precautions against the threats posed. The specific tools and techniques that exist to evaluate and measure AI systems for inclusiveness, fairness, explainability, privacy, safety, and other trustworthiness issues — collectively called AI governance tools in the report — can address these issues. But while some AI governance tools provide reassurance to the public and to regulators, the tools too often lack meaningful oversight and quality assessments. Incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems. The report contains rich background details, use cases, potential solutions to the problems discussed, and a global index of AI governance tools.