The public often thinks of artificial-intelligence software as magic. That is problematic: how can we understand, make the best use of, and communicate about a system that revolves around mystery? How can we comprehend this black box?
That is why we designed the Skill Engine to be explainable and understandable. Its white-box architecture rests on three design principles: a decoupled architecture, explainable results, and human-on-the-loop control.
The Skill Engine is not a single end-to-end artificial-intelligence model, but rather a collection of models working in unison. Each of these models has a predefined task with interpretable inputs and outputs, and each output can be evaluated and controlled separately. This decoupling allows us to prevent bias and control fairness in each step while maximizing performance.
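To make the idea of decoupling concrete, here is a minimal sketch of such a pipeline. The stage names, the toy skill vocabulary, and the scoring logic are all hypothetical placeholders for illustration, not the Skill Engine's actual models; the point is only that each stage produces an interpretable output that can be logged and audited on its own.

```python
from dataclasses import dataclass

# Hypothetical toy vocabulary standing in for a trained skill-extraction model.
KNOWN_SKILLS = {"python", "sql", "docker"}

@dataclass
class StageResult:
    name: str     # which pipeline stage produced this
    output: dict  # interpretable output of that stage

def extract_skills(profile_text: str) -> StageResult:
    # Placeholder for a dedicated skill-extraction model.
    tokens = {t.strip(".,").lower() for t in profile_text.split()}
    return StageResult("extract_skills", {"skills": sorted(tokens & KNOWN_SKILLS)})

def score_match(skills: list[str], required: list[str]) -> StageResult:
    # Placeholder for a dedicated matching model.
    overlap = sorted(set(skills) & set(required))
    score = len(overlap) / len(required) if required else 0.0
    return StageResult("score_match", {"overlap": overlap, "score": score})

# Each stage can be run, inspected, and bias-checked independently:
stage1 = extract_skills("Experienced in Python and SQL reporting")
stage2 = score_match(stage1.output["skills"], ["python", "sql", "docker"])
print(stage1.output)  # intermediate result is readable, not a hidden embedding
print(stage2.output)
```

Because the boundary between stages is plain data rather than an internal tensor, a reviewer can test or replace any single stage without retraining the rest.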
We can trace each match result from the Skill Engine back to the original input. This way, everything is explainable not only from the system's point of view but from your perspective as well: the Skill Engine builds its reasoning on skills, experience, and education, and it's always ready to explain how. An accurate explanation of suggestions not only reinforces trust but also provides you with insight into the inner workings of the Skill Engine.
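As a sketch of what such traceability can look like, the snippet below assumes a match score that is simply the sum of named contributions from skills, experience, and education. The function name, the additive scoring scheme, and the example numbers are illustrative assumptions, not the Skill Engine's real scoring model.

```python
# Hypothetical sketch: when a score is a sum of named contributions,
# every suggestion can be explained factor by factor instead of being
# an opaque number.

def explain_match(contributions: dict[str, float]) -> str:
    total = sum(contributions.values())
    lines = [f"match score: {total:.2f}"]
    # List the largest contributing factors first.
    for factor, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {factor}: {value:+.2f}")
    return "\n".join(lines)

print(explain_match({"skills": 0.50, "experience": 0.25, "education": 0.10}))
```

A report in this shape answers "why this match?" directly: the reader sees which factor drove the suggestion and by how much.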
While explainability goes a long way, responsible AI also gives its users the option to get involved when they want or need to. The Skill Engine does not require you to drive the system yourself; it operates fully autonomously. But with the help of our skill profile feedback system, we can take your input into account, tailoring its suggestions to your needs.
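The feedback loop described above can be sketched as follows. The function and the confirm/reject interaction are hypothetical simplifications of a skill-profile feedback system, assumed here only to show the principle: the engine proposes autonomously, and user input overrides the proposal in both directions.

```python
# Hypothetical sketch of human-on-the-loop feedback: the engine runs
# autonomously, but a user may correct the proposed skill profile.

def apply_feedback(proposed: set[str], confirmed: set[str], rejected: set[str]) -> set[str]:
    # User input overrides the autonomous proposal in both directions:
    # rejected skills are dropped, confirmed skills are added.
    return (proposed - rejected) | confirmed

profile = apply_feedback(
    proposed={"python", "sql", "management"},  # engine's autonomous suggestion
    confirmed={"docker"},                      # user adds a missed skill
    rejected={"management"},                   # user removes a wrong suggestion
)
print(sorted(profile))
```

With no feedback at all, the proposal passes through unchanged, which is exactly the "fully autonomous" default; feedback is optional, not required.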