National Institute of Standards and Technology

WPF announces participation in the National Institute of Standards and Technology (NIST) AI Safety Institute Consortium (AISIC)

The World Privacy Forum is pleased to announce that it has joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST) in February 2024, the U.S. AI Safety Institute Consortium (AISIC) brings together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.

WPF comments to NIST regarding its differential privacy guidance

WPF submitted comments to the National Institute of Standards and Technology regarding its Draft Guidelines for Evaluating Differential Privacy Guarantees. The comments approach the NIST Draft Guidance from a policy perspective and urge changes to some parts of the definitional language in the Draft Guidance. Key areas of the comments include: A discussion of the

NIST releases milestone AI Risk Management Framework to foster trustworthy AI ecosystems

This week has been an important one for U.S. policy regarding rights-preserving artificial intelligence and how to manage, define, and improve AI in practical implementations. There are two significant news items. First, the National Institute of Standards and Technology (NIST) has released its milestone AI Risk Management Framework (1.0) for voluntary use. The AI Risk

Face Recognition and Face Masks: Accuracy of face recognition plummets when applied to mask-wearers; NIST report

NIST has published its first report regarding face recognition algorithms and the wearing of face masks. The report quantifies how one-to-one face recognition systems perform when applied to images of diverse people wearing a variety of mask types and colors. The study found that pre-COVID-19 face recognition algorithms have substantial error rates, with false non-match rates reaching as high as 50 percent for some algorithms.