Bias & Fairness
TechWolf goes above and beyond to ensure our AI models are as bias-free as possible. Unlike many other solutions, the Skill Engine doesn't just compensate for bias at the output stage; instead, fairness is built in from the start.
A fair model starts with a clean data chain. The standard industry practice is to build on pre-trained models that are freely available online. Those models are often trained on large, poorly documented, or uncurated datasets, including sources containing racism, sexism, and other undesirable content.
In contrast, TechWolf manages the entire model generation process for the Skill Engine, using only our proprietary dataset of over 500 million vacancies. Our models are trained without resumes or any other personal information, ruling out the possibility that such information influences the model or creates privacy issues.
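A minimal sketch of what such a clean data chain implies: the training corpus is built exclusively from vacancy texts, so personal fields never enter the pipeline in the first place. The names here (VacancyRecord, build_training_corpus) are illustrative, not TechWolf's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VacancyRecord:
    title: str
    description: str
    # Note: no applicant names, resumes, or other personal attributes
    # exist in this schema, so they cannot leak into training.

def build_training_corpus(vacancies: list[VacancyRecord]) -> list[str]:
    """Return the raw vacancy texts used for model training."""
    return [f"{v.title}\n{v.description}" for v in vacancies]

corpus = build_training_corpus([
    VacancyRecord("Data Engineer", "Build pipelines with Python and SQL."),
    VacancyRecord("UX Designer", "Run user research and prototype flows."),
])
print(len(corpus))  # 2 documents, none containing personal information
```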
In addition to this focus on training clean, high-quality models, the Skill Engine builds on a unique skills framework. This framework creates an abstraction of each individual, stripping away potentially sensitive information such as gender, race, or age. In our research with Umbrella Analytics, we found that even without any targeted optimization, the skills framework already removes most sources of bias out of the box.

Furthermore, explicit bias evaluation and control tools will be available soon. These will allow you not only to measure bias in your results but also to act on it to meet a fairness criterion of your choice (equal opportunity, equal outcome, or equity).
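To make the abstraction idea concrete, here is an illustrative sketch, assuming a simple record layout: an individual is projected onto a set of skills before any downstream processing, so attributes like gender, race, or age are simply absent from the representation. Field and function names are assumptions for this example.

```python
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    name: str
    age: int
    gender: str
    skills: frozenset[str]

def to_skill_profile(record: EmployeeRecord) -> frozenset[str]:
    """Project a full record onto its skills; sensitive fields are dropped."""
    return record.skills

profile = to_skill_profile(
    EmployeeRecord("Alex", 41, "female", frozenset({"python", "sql", "etl"}))
)
print(profile)  # frozenset({'etl', 'python', 'sql'}) -- nothing else survives
```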
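As a hedged sketch of the kind of measurement such bias evaluation tools could support, the snippet below computes the equal opportunity gap: the difference in true-positive rates between two groups, where zero means both groups of qualified candidates are selected at the same rate. The data is invented for illustration; the Skill Engine's actual API may differ.

```python
def true_positive_rate(predictions, labels):
    """Share of actual positives that were also predicted positive."""
    positives = [p for p, y in zip(predictions, labels) if y == 1]
    return sum(positives) / len(positives) if positives else 0.0

def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
    """Absolute TPR difference between group A and group B (0 = fair)."""
    return abs(true_positive_rate(preds_a, labels_a)
               - true_positive_rate(preds_b, labels_b))

# Toy example: group A is selected for 3 of 4 qualified cases,
# group B for 2 of 4, giving a gap of 0.25.
gap = equal_opportunity_gap(
    [1, 1, 1, 0], [1, 1, 1, 1],
    [1, 1, 0, 0], [1, 1, 1, 1],
)
print(f"equal opportunity gap: {gap:.2f}")
```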