Avoiding Replication of Bias in AI Outputs

AI often replicates the biases of the humans whose data it was trained on in the first place (as well as systemic biases reinforced through discriminatory hiring practices, redlining in real estate, etc.). Is there a point you see as particularly effective for intervening in those tendencies of AI, so that applying it to data analytics for HR avoids unethical impacts?


Great question, and honestly I think we have greater risks in HR than in many other domains.

Specifically, AI is trained on data from the past, so if we walk through the employee journey and the likely GenAI impact at each step, we'll see challenges such as:

  • Talent Acquisition: There have been lots of horror stories here about using past hiring practices to predict future best-fit candidates. Shocker that these models recommend white men disproportionately. There has been some work to improve these models by obfuscating the candidate's name (so as to remove gender/ethnicity bias; see the masking sketch after this list), but the market is still pretty immature here. Workday is being sued for precisely this issue: https://www.reuters.com/legal/transactional/workday-accused-facilitating-widespread-bias-novel-ai-lawsuit-2024-02-21/
  • Performance management: Not unlike TA, GenAI-augmented performance management suffers from the same challenge: who received promotions in the past? Largely white men. I had a great chat with an executive from one of our global partners, and they shared that they've been doing more analytics than AI on their workforce for performance management. Two analyses they're diving into are reviewing high-performing ICs who report to new or low-performing managers, and correcting employee isolation (e.g., the only woman on a team, the only LGBTQ person on a team, etc.). That got me thinking we should introduce an HR benchmark not unlike Net Dollar Retention for CX: Net Talent Retention for HR (sketched below). What we want is to always be increasing our talent density (No Rules Rules by Erin Meyer and Reed Hastings is a great book on this, if you haven't read it). But when I discuss this with prospects and customers, while there is tremendous desire to increase talent density, and curiosity about applying AI to help measure and predict it, there is also skepticism about how to measure it, and whether the measurement itself would be biased.
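
To make the name-obfuscation point above concrete, here's a minimal sketch of the idea. The record fields and redaction rules are my own illustrative assumptions, not any vendor's actual pipeline; in practice you'd cover far more identifiers (schools, addresses, affinity groups) than this.

```python
import re

# Minimal sketch of the "obfuscate the name" idea from the Talent Acquisition
# bullet above. Assumes each candidate record is a plain dict; the field names
# ("name", "resume_text") are hypothetical, not any vendor's actual schema.

GENDERED_TERMS = re.compile(
    r"\b(he|she|him|her|his|hers|mr\.?|mrs\.?|ms\.?)\b", re.IGNORECASE
)

def mask_candidate(record: dict) -> dict:
    """Strip direct identifiers before the record reaches a screening model."""
    text = record["resume_text"]
    # Remove the candidate's own name wherever it appears in the resume body.
    for token in record["name"].split():
        text = re.sub(re.escape(token), "[REDACTED]", text, flags=re.IGNORECASE)
    # Blank out gendered pronouns/honorifics that can leak gender to the model.
    text = GENDERED_TERMS.sub("[REDACTED]", text)
    return {"candidate_id": record["candidate_id"], "resume_text": text}

if __name__ == "__main__":
    sample = {
        "candidate_id": "c-001",
        "name": "Jane Doe",
        "resume_text": "Jane Doe led a data team. She shipped three ML models.",
    }
    print(mask_candidate(sample)["resume_text"])
```

Even with masking, proxy features (alma mater, zip code, employment gaps) can still encode the same biases, which is a big part of why the market remains immature here.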

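On the Net Talent Retention idea above: it isn't an established benchmark, so treat the following as one hedged way it could be computed, by direct analogy to Net Dollar Retention. The performance scores, the 1-5 scale, and the treatment of leavers are all assumptions for illustration.

```python
from dataclasses import dataclass

# One hedged way a "Net Talent Retention" number could be computed, by analogy
# to Net Dollar Retention: take the cohort employed at the start of the period,
# weight each person by a performance score, and compare the cohort's weighted
# value at period end (leavers count as zero, improvers count for more). The
# metric definition and fields here are illustrative, not an agreed standard.

@dataclass
class Employee:
    employee_id: str
    perf_start: float   # performance score at period start (e.g., 1-5 scale)
    perf_end: float     # performance score at period end; 0.0 if they left

def net_talent_retention(cohort: list[Employee]) -> float:
    """Performance-weighted retention of the starting cohort over one period."""
    start_value = sum(e.perf_start for e in cohort)
    end_value = sum(e.perf_end for e in cohort)
    return end_value / start_value if start_value else 0.0

if __name__ == "__main__":
    cohort = [
        Employee("e1", perf_start=4.0, perf_end=4.5),  # retained and improved
        Employee("e2", perf_start=3.0, perf_end=0.0),  # left during the period
        Employee("e3", perf_start=5.0, perf_end=5.0),  # retained, steady
    ]
    print(f"NTR: {net_talent_retention(cohort):.0%}")  # 79% in this toy example
```

A value above 100% would mean the surviving cohort's performance grew enough to offset attrition, which is the same intuition NDR captures for revenue. And the skepticism above still applies: if the performance scores feeding this are biased, the benchmark inherits the bias.
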
In Visier’s case, we believe it is imperative that we provide transparency into how our models work and give our customers the ability to either change our weightings or replace our models with their own. And if your firm doesn’t like our model and wants to swap in your own, we’ve seen a lot of interest this past year in people using both our Python and SageMaker integrations to write their own models: Use Visier Data in Amazon SageMaker.
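
For a sense of what "write their own models" can look like once the data is out: the sketch below assumes the workforce data has already been exported to a CSV through an integration like the ones linked above (the extraction call itself isn't shown, since that API is documented separately), and the column names and attrition label are invented for illustration.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative only: assumes workforce data was already exported to a CSV via
# an integration like the ones mentioned above. The file name, column names,
# and the "exited" target label are assumptions, not a real schema.

df = pd.read_csv("workforce_export.csv")
features = ["tenure_months", "comp_ratio", "manager_span", "last_perf_score"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["exited"], test_size=0.2, random_state=42
)

# A deliberately simple, inspectable model: the coefficients can be reviewed
# for proxy effects before anyone trusts the predictions.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```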

In the case of Vee, Vee uses GPT-4 Turbo to convert your question into a query string for our analytic engine, and then our own non-GenAI interpretation engine writes the narrative. So Vee is less susceptible to the hallucinations and biases that we see in most GenAI products.
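
To illustrate the general pattern (this is a generic sketch of the architecture described, not Vee's actual code; the query schema, metric store, and wording template are invented), the LLM's only job is to produce a structured query, and both the numbers and the narrative come from deterministic code:

```python
# Generic sketch of the pattern described above: an LLM translates the question
# into a structured query; everything downstream is deterministic. Not Vee's
# actual code; the schema, metric store, and template are purely illustrative.

METRICS = {  # stand-in for a real analytic engine
    ("resignation_rate", "2024-Q1"): 0.031,
    ("resignation_rate", "2024-Q2"): 0.027,
}

def llm_to_query(question: str) -> dict:
    # In the real pattern this call goes to an LLM constrained to return JSON
    # matching a fixed schema. Hard-coded here so the sketch runs offline.
    return {"metric": "resignation_rate", "periods": ["2024-Q1", "2024-Q2"]}

def run_query(query: dict) -> list[float]:
    # Deterministic lookup against governed data; nothing is generated here.
    return [METRICS[(query["metric"], p)] for p in query["periods"]]

def write_narrative(query: dict, values: list[float]) -> str:
    # Rule-based narrative: the wording is templated, so it cannot hallucinate
    # numbers that are not in the query result.
    direction = "fell" if values[-1] < values[0] else "rose"
    return (f"{query['metric'].replace('_', ' ').title()} {direction} from "
            f"{values[0]:.1%} in {query['periods'][0]} to {values[-1]:.1%} "
            f"in {query['periods'][-1]}.")

if __name__ == "__main__":
    q = llm_to_query("How did resignations trend in the first half of 2024?")
    print(write_narrative(q, run_query(q)))
```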

Net? Stay skeptical!
