Bias & Fairness

TechWolf goes above and beyond to ensure our models are as bias-free as possible. Unlike many other solutions, the Skill Engine API doesn't just compensate for bias at the output - instead, fairness is prioritised from the very beginning. A fair model starts with a clean data chain. While the standard practice is to use pretrained models that are freely available online, these models are often trained on large amounts of poorly documented or indiscriminately scraped data, including forums full of racism, sexism and other toxic content. In contrast, TechWolf manages all model training for the Skill Engine API from the start, using only our proprietary dataset of over 300 million vacancies. No resumes or other sensitive personal information is used to train our models, eliminating the possibility that such information influences the model or causes privacy issues.

In addition to this focus on training high-quality, clean models, the Skill Engine API is based on a unique skills framework. This framework builds an abstraction of each individual, stripping away potentially sensitive information such as gender and race. In our research with Umbrella Analytics, we found that even without any targeted optimisation, the skills framework already removes most sources of bias out of the box. Furthermore, explicit bias evaluation and control tools will be available soon, allowing you not only to measure bias in your results, but also to adjust them to meet a fairness criterion of your choice (equal opportunity, equal outcome, equity, ...).
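To make the fairness criteria above concrete, here is a minimal sketch of how two of them are commonly measured on binary outcomes grouped by a sensitive attribute. This is an illustrative example of the standard definitions, not the Skill Engine API's actual tooling; the function names and the toy data are our own.

```python
def equal_outcome_gap(preds_a, preds_b):
    """Equal outcome (demographic parity): absolute difference in the
    positive-outcome rate between two groups. 0.0 means parity."""
    rate_a = sum(preds_a) / len(preds_a)
    rate_b = sum(preds_b) / len(preds_b)
    return abs(rate_a - rate_b)


def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
    """Equal opportunity: absolute difference in the true-positive rate
    (positive outcomes among truly qualified candidates) between groups."""
    def tpr(preds, labels):
        hits = [p for p, y in zip(preds, labels) if y == 1]
        return sum(hits) / len(hits)
    return abs(tpr(preds_a, labels_a) - tpr(preds_b, labels_b))


# Hypothetical screening decisions (1 = selected) for two groups.
group_a_preds, group_a_labels = [1, 1, 0, 0], [1, 1, 0, 1]
group_b_preds, group_b_labels = [1, 0, 0, 0], [1, 1, 0, 0]

print(equal_outcome_gap(group_a_preds, group_b_preds))          # 0.25
print(equal_opportunity_gap(group_a_preds, group_a_labels,
                            group_b_preds, group_b_labels))     # ~0.167
```

A system can satisfy one criterion while violating another (here the selection-rate gap and the true-positive-rate gap differ), which is why the choice of fairness criterion is left to you.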