By Mandy Long, Manager, Culture & People Experience, Ensono

The use of AI within HR can aid in DEI initiatives—but it can’t create an inclusive workplace on its own. To achieve meaningful progress, HR organizations must partner AI solutions with diverse human experiences and expertise.
In February 2023, Tesla recalled over 300,000 vehicles due to bugs in its driver assistance software, which could result in crashes.1 The buzz around artificial intelligence (AI) and its potential for every industry and business function has never been more palpable. But, as the Tesla example illustrates, AI is not perfect—even when it’s trying to help us.
When we completely hand over the keys (literally and figuratively) to intelligent machines, the repercussions can be problematic. Much like self-driving cars, corporate diversity, equity and inclusion (DEI) efforts are still evolving. There is a lot of potential for AI to help, to compensate for human flaws and to make manual or minor tasks easier. But used incorrectly, or prematurely, it can do material damage.
In a year marked by mass layoffs, constrained hiring budgets and faltering employee engagement, it’s inevitable that HR departments will look to technology to win back time and focus. It’s even more likely that the tools they evaluate will have some degree of AI functionality. As both realities come to a head, the onus falls on businesses to ensure these tools are thoughtfully selected, implemented and managed.
Employees and the technology they use must be held to the same ethical standards to ensure that innovation doesn’t come at the expense of progress toward your DEI goals.
AI’s potential to positively transform HR
HR and people departments manage an enormous amount of data, more than any human team alone can feasibly parse, analyze and act on. Generative AI and predictive analytics platforms can take on the cumbersome work of filtering signals from noise, giving employees the context (and time) they need to build more effective programs. These platforms might also be the key to creating a true performance mindset around DEI efforts. For some businesses, progress against diversity commitments is measured by the availability of specific programs rather than their results, according to Deloitte research.2 Few respondents reported that their organizations align DEI progress with business outcomes like revenue.
More accessible AI solutions can help HR teams both expand the scope of their DEI work and connect it to overarching recruitment, engagement and retention goals. For example:
Recruiting: AI tools can draw on reams of internal and third-party data to create more objective job descriptions and interview questions.
Coaching: Similarly, generative AI tools can be a font of inspiration for team meeting prompts or check-in questions to guide employee/manager one-on-ones.
Pay and benefits: AI could analyze compensation sources inside and outside an organization, highlighting opportunities to recalibrate salaries against market benchmarks or flagging pay inequities across employees of a particular level or department.
Performance management: AI can aggregate review data, documented feedback, corporate training and test scores to identify the most pressing learning and development needs across an organization.
Employee engagement: It’s not unreasonable to imagine AI scraping a variety of internal data sources (e.g., PTO requests, all-hands Q&As, documented feedback) to keep a real-time pulse on employee morale.
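To make the pay-and-benefits use case above concrete, the pay-inequity check could be as simple as grouping employees into comparable cohorts and comparing median pay across demographic groups. The sketch below is illustrative only: the records, field names and five percent threshold are assumptions, not a vendor's actual method.

```python
from statistics import median
from collections import defaultdict

# Hypothetical employee records; field names and values are illustrative.
employees = [
    {"dept": "Engineering", "level": "L3", "group": "A", "salary": 98000},
    {"dept": "Engineering", "level": "L3", "group": "B", "salary": 88000},
    {"dept": "Engineering", "level": "L3", "group": "A", "salary": 101000},
    {"dept": "Engineering", "level": "L3", "group": "B", "salary": 90000},
    {"dept": "Sales", "level": "L3", "group": "A", "salary": 70000},
    {"dept": "Sales", "level": "L3", "group": "B", "salary": 69000},
]

def pay_equity_flags(records, threshold=0.05):
    """Flag (dept, level) cohorts where median pay differs across
    demographic groups by more than `threshold` (a fraction of the
    highest group median). The threshold is an assumed policy choice."""
    cohorts = defaultdict(lambda: defaultdict(list))
    for r in records:
        cohorts[(r["dept"], r["level"])][r["group"]].append(r["salary"])
    flags = []
    for cohort, groups in cohorts.items():
        medians = {g: median(salaries) for g, salaries in groups.items()}
        low, high = min(medians.values()), max(medians.values())
        if high and (high - low) / high > threshold:
            flags.append((cohort, medians))
    return flags

print(pay_equity_flags(employees))
# Flags the Engineering L3 cohort, where medians differ by roughly 10%.
```

A real analysis would control for tenure, performance and location before comparing groups; the point here is only that the comparison itself is straightforward once the data is structured.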
The possibilities are promising, but true innovation doesn’t come without a few speed bumps.
1 “Tesla recalls 362,758 vehicles, says Full Self-Driving Beta Software may cause crashes,” CNBC, February 2023.
2 “Taking bold action for equitable outcomes,” Deloitte, January 2023.
Use with caution: Confronting the reality and repercussions of AI bias
Humans and machines both bring biases to their day-to-day—the difference is that AI can reinforce bias at scale. The algorithms that underpin AI technology learn from massive datasets. When AI is trained on historical information, it runs the risk of perpetuating patterns of ethnic, racial and gender discrimination (as has already been the case with online mortgage lending and facial recognition software). And despite advances in natural language processing and sentiment analysis, much of the information AI learns from and acts on is quantitative. But any HR expert would admit that the work of people operations requires qualitative inputs as well. How many strong candidates or high performers might be passed over because their application data didn’t match an AI-generated “ideal” profile?
3“EEOC Sues iTutor Group for Age Discrimination,” U.S. Equal Employment Opportunity Commission, May 2022.
Even more problematically, AI can be manipulated to make decisions that reflect the prejudices of its human managers. In 2022, the U.S. Equal Employment Opportunity Commission (EEOC) sued the China-based iTutorGroup for allegedly programming its employee recruitment software to automatically reject applicants over a certain age.3 Examples like this show how quickly hastily implemented technology can undo an organization’s DEI progress. On top of reputation and brand risk, businesses that adopt AI without robust vetting and oversight may increasingly be subject to regulatory penalties.
The EEOC released a draft enforcement plan in January 2023, detailing its intentions to hold automated recruiting systems accountable to federal nondiscrimination laws.4 Beginning in July 2023, the New York City Department of Consumer and Worker Protection will enforce a new rule making it illegal for city employers to rely on automated employment decision tools unless they are regularly audited for bias.5 The UK government has also signaled forthcoming regulations to ensure the fair, transparent use of AI.6
What a responsible approach to AI use looks like
The potential for bias exists at every phase of the AI lifecycle, from product conception and design to implementation and ongoing management. How technology vendors build their AI capabilities is beyond any HR team’s control. But HR teams are accountable for ensuring the tools they invest in don’t expose their organizations to injustice or inequity. HR leaders must consider these four steps as they navigate the AI-driven workplace:
4 “EEOC Targets AI-based Hiring Bias in Draft Enforcement Plan,” Bloomberg Law, January 2023.
5 “NYC Finalizes Regulations on AI Employment Tools and Will Begin Enforcement on July 5, 2023,” JD Supra, April 2023.
6 “UK unveils world-leading approach to innovation in first AI white paper to turbocharge growth,” Gov.uk, March 2023.
Acknowledge that human expertise and AI are not an “either/or” dynamic. Chief diversity and inclusion officer was the fastest-growing C-suite title between 2020 and 2021, yet hiring for the role declined more than four percent in 2022.7 Amid this trend, AI should not be viewed as a replacement for capable leaders. Software can quickly analyze massive datasets, but it can’t necessarily communicate recommendations to busy executives or develop a cohesive strategy for tying DEI programs to business objectives.
Develop an airtight due diligence process. Of organizations that currently use automation or AI to support HR functions, only two in five say the vendors they partner with are very transparent about what they do to prevent bias in their tools.8 HR teams will need to work even more closely with their IT and legal colleagues to develop clear requirements and evaluation criteria for assessing any new solution. Be prepared with specific questions that dig into how vendors develop and train their AI, and what checks and balances are built into their systems to mitigate bias.
7 “What’s vaulting into the C-suite? Trends changed fast in 2022,” LinkedIn Workforce Insights, February 2023.
8 “Fresh SHRM Explores Use of Automation and AI in HR,” SHRM, April 2022.
Prioritize small experiments. With many enterprise technologies, early adoption can be a competitive advantage. But with AI in particular, pressure-testing tools through controlled pilot programs can help mitigate risks and other unintended consequences. Today, more than 70 percent of U.S. adults oppose businesses using AI to make final employment decisions—a strong indicator not to go all-in on outsourcing the hiring process to machines.9 HR leaders should look for tedious steps within existing workflows that can be safe candidates for experimentation. Whether these tests impact internal operations or external applicants, organizations must be transparent and specific about how they’re using AI.
Audit your AI, regularly. Especially with instances of self-learning AI, the technology you implement today may not operate the same tomorrow. HR teams need to institute a routine cadence for reviewing AI systems to ensure they’re not making inequitable recommendations or inferences. As more government authorities establish frameworks for regulating AI, audits are likely to become a requirement (as documented in New York City’s recent rule) rather than simply an extra layer of protection.
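One common starting point for the audits described above is an adverse-impact check: comparing each group’s selection rate against the highest-rate group, with ratios below 0.8 (the EEOC’s “four-fifths” guideline) signaling that a tool warrants closer review. The sketch below uses hypothetical screening numbers; a production audit would cover far more dimensions and use statistically rigorous tests.

```python
def adverse_impact_ratios(outcomes):
    """Compute each group's selection rate relative to the
    highest-rate group. `outcomes` maps group -> (selected, total).
    Ratios below 0.8 fall short of the EEOC four-fifths guideline."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical results from an automated resume-screening tool.
results = {"group_a": (50, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(results)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's rate is 60% of group_a's
print(flagged)  # group_b falls below the four-fifths threshold
```

Because a self-learning system’s behavior drifts, a check like this is only useful if it runs on a schedule, with results reviewed by people empowered to pause or retire the tool.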
Diversity, equity and inclusion are human issues—in other words, nuanced. No technology alone, even AI that can pick up on sentiment or mimic natural language, is capable of “fixing” these challenges. But left unchecked or unattended, it may exacerbate them. As leaders look for ways to innovate and unload administrative excess with AI, they can’t lose sight of who’s ultimately in charge. Technology can help teams accelerate their efforts and shift gears as needed, but navigating the workplace of the future still requires capable, compassionate humans in the driver’s seat.
9 “AI in Hiring and Evaluating Workers: What Americans Think,” Pew Research Center, April 2023.