The Ethics of Artificial Intelligence in the Workplace

with John Verdi of the Future of Privacy Forum

Nearly 25 percent of companies report using artificial intelligence for HR-related tasks, including recruiting and hiring employees.

John Verdi, Senior Vice President of Policy with Future of Privacy Forum, joins host Tetiana Anderson to discuss the benefits — and ethical challenges — of using AI tools in the workplace.

Posted on:

November 30, 2023

Hosted by: Tetiana Anderson
Produced by: National Newsmakers Team

Anderson: US companies have recently started incorporating artificial intelligence tools into their recruitment and hiring practices. Utilizing these tools can result in increased access to a broader pool of applicants and faster hiring of employees. Hello, and welcome to "Comcast Newsmakers." I'm Tetiana Anderson. About 1 in 4 organizations report using artificial intelligence to support HR-related activities, including recruitment and hiring. And while AI can offer several benefits, it can also present challenges related to the ethical use of these systems. And joining me to talk all about this is John Verdi. He is the Senior Vice President of Policy at the Future of Privacy Forum. And John, thank you so much for being here.

Verdi: Thank you for having me.

Anderson: So I want to start out with these ethical challenges. What are some of them, and how did you stumble across them?

Verdi: Well, everybody is excited about hiring tools that can match great candidates with great jobs. But at the same time, some AI hiring tools can present real ethical challenges for individuals, for companies, and for policymakers who are seeking to regulate in this space. When we looked into this area, we saw concrete examples of AI tools that weren't effective at what they purported to do. We saw AI tools that, even if they were effective to some extent, were undermining the candidacy and job prospects of traditionally marginalized communities. And that isn't good for individuals. It's not good for organizations. And I think lawmakers and regulators are increasingly coming to the conclusion that it's not good for the community as a whole and for, you know, America's political and industrial future.

Anderson: So, even though a company might be interested in having a more diverse pool of candidates, AI can affect that. And I'm wondering if there are communities of job seekers who are more negatively impacted by that than others.

Verdi: Absolutely. You know, some organizations come to AI tools because they want to diversify the candidate pool, because they want to diversify their workforce, because they want to identify rock star candidates who can perform really well within an organization. But at the same time, there are some communities -- communities of folks with disabilities, communities of folks who have traditionally been marginalized, like women or people of particular races -- for whom, when their applications are put through an AI process, the biases that we all have as humans subconsciously end up getting reinforced by the technology. That's a real problem.

Anderson: So, in 2023, your organization developed some best practices for using AI so it doesn't cause that bias and discrimination in hiring that we were talking about. And I want to read just a few of them. Developers and deployers should have clearly defined responsibilities regarding AI hiring tools' operation and oversight. Organizations should not secretly use AI tools to hire, terminate, and take other actions that have consequential impacts. And AI hiring tools should be tested to ensure that they are fit for their intended purpose, and assessed for bias. So, that's a lot. How did you decide that these were the best practices that really needed to be focused on, and should be part of this list?

Verdi: So we talked to the leading companies who build these tools. We talked to leading companies who use the tools, and we talked to enforcers and lawmakers who are exercising oversight over these tools and promoting accountability. So, we landed on a set of principles that we think can help mitigate some of these risks while maintaining the benefits. There's no way to completely eliminate some of the risks here. But when you look at the situation as a whole, I think what folks really want to see is progress made, and they want to see situations where technology is lifting folks up, reducing bias and discrimination, increasing fairness, rather than undermining fairness and increasing bias.

Anderson: And there are some states, like Maryland and New York, that have already recognized that using AI in the hiring capacity can be a little bit problematic. So, tell us what these two states are doing. And, you know, what are you doing to spread the word to the rest?

Verdi: Well, the number one thing that these folks are doing is making sure that there's no secret AI used in hiring, because when you have secret AI used to make these sorts of consequential decisions that have really important impacts on individuals, it's much harder to enforce fairness requirements, it's much harder to enforce anti-discrimination and other sorts of legal requirements. And it's, frankly, unfair to individuals to be subject to this sort of assessment without their knowledge and without their consent. So that's where some of the leading jurisdictions are going. What we also see are requirements for accountability mechanisms internally within organizations, to make sure that certain requirements are adhered to and certain protections are in place. And as we sit at the Future of Privacy Forum, we're trying to make sure that folks within the US, from the federal level down to the states, down to the cities, are aware of some of the concerns, are aware of the opportunities that are here, and are smart about how they regulate and legislate around these issues.

Anderson: People are gonna want to know more about this. Is there a place they can look? What's your website?

Verdi: We're the Future of Privacy Forum on the web and on the socials. Thank you so much.

Anderson: John Verdi of the Future of Privacy Forum. Thank you for being here.

Verdi: Thanks for having me.

Anderson: And thanks to our viewers, as well, for watching. As always, for more great conversations with leaders in your own community and across the nation, visit our website. I'm Tetiana Anderson.
