AI Governance on the Ground: Canada’s Algorithmic Impact Assessment Process and Algorithm Have Evolved

WPF’s “AI Governance on the Ground” series highlights and expands on topics and issues from WPF’s Risky Analysis report and its survey of AI tools. In this first publication of the series, we highlight how the Government of Canada is implementing AI governance and algorithmic transparency mechanisms across its agencies, including its employment and transportation agencies, its Department of Veterans Affairs, and the Royal Canadian Mounted Police, among others. These agencies have evaluated the automated systems they use according to the country’s Algorithmic Impact Assessment process, or AIA, and the assessment results are public. Designers of this assessment framework, which has been required since the country’s Directive on Automated Decision-Making went into effect in April 2019, have now re-evaluated the AIA, updating its criteria, requirements, and risk-level scoring algorithm along the way. WPF interviewed government officials as well as key Canadian end-users of the assessments to capture the full spectrum of how the AIA is working at the ground level.

Deputy director Kate Kaye attending ACM FAccT conference in Rio de Janeiro, Brazil

Deputy Director Kate Kaye is in Rio de Janeiro, Brazil, from 3–6 June to attend the leading conference on trustworthy AI in socio-technical systems, the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). While at the conference, Kaye will interview paper authors and leading AI experts for forthcoming WPF podcasts and to inform additional work.

WPF advises NIST regarding synthetic content and data governance

WPF filed comments with the US National Institute of Standards and Technology (NIST) on its draft governance plan for synthetic content. WPF’s comments focused on seven recommendations ranging from technical to policy issues. One overarching recommendation was that NIST ensure human rights are attended to in all of its plans. Additional recommendations include requesting that NIST address the risks of digital exhaust in metadata and ensure that biometric data is included in the guidance, among others.

WPF announces participation in the National Institute of Standards and Technology (NIST) AI Safety Institute Consortium (AISIC)

The World Privacy Forum is pleased to announce that it has joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST) in February 2024, the U.S. AI Safety Institute Consortium (AISIC) brings together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

WPF to speak before the State House of Mongolia for its National Consultation on e-Health, and before the Human Rights Commission of Mongolia

5 April 2024, Paris, France — World Privacy Forum Executive Director Pam Dixon has been invited to speak at the State House of Mongolia for the Government of Mongolia’s National Consultation on e-Health. She will speak twice at this event: first on the topic of artificial intelligence in healthcare, and second on big data in e-Health. Later in the week she will present on AI governance and privacy before the Ministry of Digital Development and Communications, and on AI governance tools before the National Human Rights Commission of Mongolia. All speeches will take place in Ulaanbaatar, Mongolia.