Deputy Director Kate Kaye attending ACM FAccT conference in Rio de Janeiro, Brazil
Deputy Director Kate Kaye is in Rio de Janeiro, Brazil from 3-6 June to attend the leading conference on trustworthy AI in socio-technical systems, the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT).
While at the conference, Kaye will be interviewing paper authors and leading AI experts for forthcoming WPF podcasts and to inform additional work. She will also be attending in-depth paper sessions and tutorials.
The 2024 roster of accepted papers is impressive and reflects new trends in AI research. There is more work this year on measuring AI systems; one paper that stood out in this category, A Decision Theoretic Framework for Measuring AI Reliance, analyzes how AI systems interact with human decision making. There is also more work on climate change and AI. Another category of considerable importance to AI governance is work on AI fairness, which has become highly nuanced; this can be seen in particular in the new work that Min-Hsuan Yeh et al. (University of Massachusetts) are presenting, Analyzing the Relationship Between Difference and Ratio-Based Fairness Metrics.
While these are technical papers, it is crucial for policymakers working on governance and privacy to grasp these and other emerging AI concepts: the AI revolution will be automated, and how that happens and is measured will have meaningful impacts on people. This is why WPF recently published a deeply researched report on AI governance tools, Risky Analysis. The report discusses how privacy interfaces with AI measurement and includes an index of global AI governance tools. A 2024 FAccT paper by Elizabeth Anne Watkins and Jiahao Chen, The four-fifths rule is not disparate impact: A woeful tale of epistemic trespassing in algorithmic fairness, illuminates additional aspects of one of the key case studies in the Risky Analysis report.
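For readers who want a concrete sense of the distinction these papers examine, the short sketch below contrasts a difference-based fairness metric with a ratio-based one, including the ratio that the four-fifths rule compares against 0.8. The groups, outcome data, and function names are hypothetical illustrations, not drawn from either paper or from the Risky Analysis report.

```python
# Illustrative sketch: difference-based vs. ratio-based fairness metrics.
# All data here is hypothetical and for explanation only.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (e.g., 'approved') in a group."""
    return sum(outcomes) / len(outcomes)

# Hypothetical outcomes (1 = selected, 0 = not selected) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate 0.8
group_b = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # selection rate 0.4

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Difference-based metric: demographic parity difference.
parity_difference = rate_a - rate_b   # 0.4

# Ratio-based metric: the ratio of selection rates, which the
# four-fifths rule compares against a 0.8 threshold.
impact_ratio = rate_b / rate_a        # 0.5

print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"Difference-based metric (parity difference): {parity_difference:.2f}")
print(f"Ratio-based metric (impact ratio): {impact_ratio:.2f}")
print(f"Meets the four-fifths (0.8) threshold: {impact_ratio >= 0.8}")
```

In this hypothetical example the two metrics summarize the same selection rates quite differently, which is one reason the relationship between them, and the regulatory weight placed on the 0.8 ratio threshold, is worth the careful analysis these papers give it.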