Current Situation
Traditional hiring models are vulnerable to presumptive criteria because they source candidates to match, as closely as possible, an ideal candidate image set by the organization. Presumptive criteria carry a high risk of introducing bias, which harms diversity and diminishes its organizational benefits. Sourcing, screening, and vetting based on anything other than skills and ability risk excluding qualified, exceptional talent from less conventional backgrounds.
Goals and Objectives
Goals:
Mitigate systemic, digital, and unconscious biases in talent acquisition (TA).
Tap into wider talent pools and prevent irrelevant characteristics from influencing selection criteria.
Strategies:
Design and deploy skills taxonomies to identify skills gaps and translate them into hiring criteria.
Adopt skills assessments in the hiring process to evaluate capabilities over credentials.
Provide end-to-end bias mitigation training for all internal and contracted stakeholders across the entire TA and candidate life cycle.
Engage in workforce planning to identify where higher-skilled, lower-competition talent pools exist.
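The first strategy above, translating a skills gap into hiring criteria, can be sketched as a simple set operation. This is an illustrative sketch only; the role, skill names, and function are hypothetical examples, not part of any named product:

```python
# Minimal sketch: derive hiring criteria from a skills gap.
# All role and skill names below are hypothetical examples.

def skills_gap(required_skills: set[str], workforce_skills: set[str]) -> set[str]:
    """Return the skills a role requires that the current workforce lacks."""
    return required_skills - workforce_skills

role_requirements = {"python", "data modeling", "stakeholder communication"}
current_workforce = {"python", "project management"}

# The gap becomes the skills-based hiring criteria for the open role.
hiring_criteria = sorted(skills_gap(role_requirements, current_workforce))
print(hiring_criteria)  # ['data modeling', 'stakeholder communication']
```

In practice the two sets would be populated from a skills taxonomy rather than hard-coded, but the translation step itself stays this simple.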
Technology Deployed
Dynamic skills architectures
AI and generative AI (GenAI) for skills gap analyses and talent matching
Skills maps, taxonomies, and ontologies
Internal mobility assessments
Predictive and performance analytics across internal TA teams
Applicant tracking systems (ATS)
Team and peer feedback and performance evaluations of contributions to candidate commentary
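The talent-matching technology listed above reduces, at its core, to scoring candidates purely on demonstrated skills. A minimal sketch, assuming hypothetical candidate names and skills (real systems would weight skills and draw them from an ontology):

```python
# Minimal sketch of skills-based talent matching: candidates are ranked
# only by demonstrated skills, never by credentials or demographics.
# Candidate names and skill sets are hypothetical examples.

def match_score(candidate_skills: set[str], required_skills: set[str]) -> float:
    """Fraction of the required skills the candidate demonstrates."""
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

required = {"python", "etl", "data modeling"}
candidates = {
    "candidate_a": {"python", "etl"},
    "candidate_b": {"python", "etl", "data modeling", "mlops"},
}

# Rank candidates by skills overlap alone.
ranked = sorted(candidates, key=lambda c: match_score(candidates[c], required),
                reverse=True)
print(ranked)  # candidate_b ranks first
```

Because the score depends only on the skill sets, anything outside the taxonomy (school, employer, geography) cannot influence the ranking, which is the point of the approach.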
Use Case Summary
Organizations develop and deploy best practices to source qualified candidates from analytically viable geographies, regardless of their baseline comfort with those regions' demographics.
Organizations then implement sourcing and interviewing practices that lean on data-driven insights to inform and guide human hiring decisions.
Elevating data in the evaluation process guides collective practices and methodologies that expose nonstandard criteria that do not reflect what matters to the organization.
Over time, behavioral frameworks emerge that fuse data with human decision making and isolate indicators of bias, regardless of their origin.