Robot Interviews: AI and the world of work


6 May 2019
Pam Dixon

AI will alter the world of work, both in workplaces and in workplace information ecosystems. Labor experts have a term for negotiating the near and long-term effects of AI: “A fair and just transition.”[1] These and other experts have already begun documenting the emerging changes in the workplace resulting from AI deployments.[2] Some consequences are positive, some are deleterious.

For example, there is a risk that automation could intensify income inequalities and displace a meaningful percentage of the global workforce.[3] There is also a possibility that certain types of work, such as high-skill medical work, could increase in value.

No one has a crystal ball with a perfect view of what the future may hold. That’s why a key focus now for labor stakeholders and those interested in AI and the world of work is to research and benchmark what is happening. The goal is to provide ongoing, fact-based documentation and analysis that will inform all stakeholders regarding effective mitigations and policies that will ensure a just transition into a new world, one in which AI is ubiquitous.

Recently, a Swedish company made headlines by announcing a physical job-interview AI robot designed to avoid human biases.[4] This is not the sole example of AI activity in the hiring process; companies have already been using AI software to call candidates and interview them for at least a year.[5] The idea of a robot interview sounds positive at a surface level: no bias, great! Robot interviews may indeed, on paper, reduce certain theoretical types of bias.[6] But the view from the trenches is what matters here. If a robot interview does not provide a platform for a holistic interview that captures the full range of an applicant's skills, from blunt to subtle, then a different kind of bias could be put in place. The original bias the robot was intended to solve could simply be transferred to another area.

For example, an AI robot interview could create other risks, such as an assessment that is preferentially biased toward candidates with strong language proficiency, and potentially biased against candidates whose disabilities affect speech. Candidates with many other positive qualities that are too subtle for an AI robot interview to detect, such as creative skills or nuanced interpersonal skills, could find that their best qualities are simply not taken into account.

Simply solving for one piece of the hiring puzzle (the initial hiring dialogue) without solving for the larger picture (holistic candidate assessment) could introduce more problems than it solves. Recent research and thinking point toward assessing candidates more holistically, not less.[7] AI in the hiring process is just one of many issue areas that AI in the world of work brings forward.

In looking at the influences of AI specifically on privacy and the world of work, it is clear that much more work needs to be done to gather facts and create benchmarks specific to the privacy, human autonomy, and dignity impacts of AI in the workplace, and to document relevant case studies.[8] This is a significant gap in research. It is important to catch up to the work being done on fair and just transition regarding employability, gender disparity, salary, and related issues, and to create a complementary body of knowledge that supports accurate policy responses, so that a fair and just transition also enhances privacy, human autonomy, and dignity.

It’s not that automated hiring systems are inherently bad, or bad for privacy. We simply do not have nearly enough independent documentation of their effects on privacy and many other potential issues. Simply put, we do not yet know enough about what the privacy impact is or is not, and seeing the privacy issues clearly requires looking into many adjacencies.

Here are some immediate questions:

  • Are certain jobs more likely to use AI in hiring or other human resource activities or decisions? If so, which?
  • What types of AI systems are being used for workplace purposes? What are the uses?
  • What is the role of automated AI in hiring processes, and do these processes influence or change privacy or other fairness considerations? If so, how? (Data collection, analysis, uses, retention?)
  • What are the standards for privacy impact analysis on AI in the workplace and in hiring?
  • In the US, is the Department of Labor monitoring developments and conducting studies? Is the Equal Employment Opportunity Commission monitoring developments?
  • In other jurisdictions, are the relevant government agencies monitoring developments and conducting studies?

Additional questions specific to AI in the hiring process:

  • To what extent are AI systems determining or influencing eligibility decisions in employment?
  • What specific AI systems are in use now in the hiring process, either in-house or outsourced?[9]
  • Are job applicants aware of AI systems in the hiring process?
  • What is the business sector’s responsibility to disclose AI systems used in the hiring process and to ensure that impacts are understood and mitigated?
  • Have privacy impact assessments been conducted for the AI systems already in place?
  • In what contexts have hiring robots already been deployed, and for how long?
  • Is there a disparity in how AI hiring technologies are applied? That is, do lower-wage applicants get AI interviews while higher-wage applicants do not?
  • Have any of the AI robot users or developers instituted longitudinal studies regarding impact assessment?
  • Are AI robot interviews, or other AI-influenced activities in the hiring lifecycle, compatible with fair and equal treatment of candidates who are blind, deaf, or hard of hearing, or who have other disabilities?

We are just at the beginning of experimentation, implementation, and use of AI in hiring and the workplace. As these uses expand, knowledge of how to navigate this new world needs to keep pace. Let the arrival of robot interviewers in hiring situations be a clarion call to catch up, because there will be multiple consequences, and we do not yet have a clear picture of what they are. Neither hype about how great the systems are nor rhetoric about how damaging they are to privacy will be helpful. We need meaningful, factual documentation of actual effects and changes in the trenches; then all stakeholders can begin understanding more about what is happening and make appropriate decisions.[10]

There’s a lot at stake here, and it’s important to get effective policies in place early. To do that, it is essential to do the hard work of ongoing documentation of the facts and learning what is happening as it is happening.

Pam Dixon, Executive Director

World Privacy Forum


1 There is significant policy momentum on digital transformations and the future of work at the EU level. An excellent study has been authored by the OECD representative of the Trade Union Advisory Committee (TUAC), who was also involved with the OECD AI Guidelines. The study analyzes the impacts of digitization and AI on work, and includes case studies from seven EU jurisdictions as well as recommendations. The bibliography of the report cites additional helpful case studies. See: Byhovskaya, A. (2018) Overview of the national strategies on work 4.0: a coherent analysis of the role of the social partners. Brussels: European Economic and Social Committee. Available at:

2 Justine Brown et al, Workforce of the Future: The competing forces shaping 2030, PwC. Available at:

3 James Manyika and Kevin Sneader, AI, automation, and the future of work: Ten things to solve for, McKinsey Global Institute, Executive Briefing, June 2018. Available at:

4 Maddy Savage, Meet Tengai, the job interview robot who won’t judge you, BBC News, March 12, 2019. Available at:

5 Bill Goodwin, PepsiCo hires robots to interview job candidates, Computer Weekly, April 12, 2018. Available at:

6 Some scholars have made cases for the use of AI tools in hiring to reduce discrimination. See Kimberly Houser, Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making, February 28, 2019. 22 Stanford Technology Law Review (Forthcoming). Available at SSRN:

7 Interview with Srikanth Karra, Chief Human Resource Officer of Mphasis, The future of jobs in the world of AI and robotics, Knowledge@Wharton, University of Pennsylvania, March 1, 2018. Available at: See also the discussion of algorithms and classification bias in hiring: Pauline T. Kim, Data-Driven Discrimination at Work, William & Mary Law Review, Vol. 58, Issue 3, Article 4. Available at:

8 The workplace of the future: As artificial intelligence pushes beyond the tech industry, work could become fairer—or more oppressive, Special Edition, The Economist, March 28, 2018. Available at:

9 Many such systems already exist. See, for example, the Pymetrics AI hiring solution. Some of these systems could potentially boost privacy if implemented correctly, but more work is needed to fully understand and assess their quality and impacts.

10 Pauline T. Kim, Data-Driven Discrimination at Work, William & Mary Law Review, Vol. 58, Issue 3, Article 4. Available at:
