
Inception
TechWolf defines the scope and intended use of an AI model during the inception and design phase. This process ensures that both internal and external requirements are explicitly listed and that all foreseeable risks are identified. TechWolf specifies the requirements for the system design at inception to ensure the right level of transparency. The success criteria and testing requirements are also defined during this process.

Development
We place high emphasis on data quality, representativeness and validity in the development of AI models. We adhere to standard data-cleaning practices and log all data artefacts for each model version. Data annotation is performed in-house, using carefully designed tooling and guidelines to ensure high quality.

Verification
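As a minimal sketch of what logging a data artefact per model version can look like (the function and registry format here are illustrative assumptions, not TechWolf's actual tooling), a content hash of the dataset can be recorded alongside each model version so that every release is traceable to the exact data it was built from:

```python
import hashlib
import json
from pathlib import Path

def log_data_artefact(dataset_path: str, model_version: str, registry: Path) -> dict:
    """Append a record linking a model version to a content hash of its
    training data, so the release can be traced back to the exact dataset.
    NOTE: hypothetical helper for illustration only."""
    digest = hashlib.sha256(Path(dataset_path).read_bytes()).hexdigest()
    entry = {
        "model_version": model_version,
        "dataset": dataset_path,
        "sha256": digest,
    }
    # One JSON object per line keeps the registry append-only and diff-friendly.
    with registry.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry
```

An append-only, line-delimited registry like this makes it straightforward to audit which dataset produced which model version.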
Before models are released, they are tested in several ways during the verification phase. This step includes technical testing (unit, integration and end-to-end tests) as well as evaluation against different datasets. This way, we can measure the overall accuracy (precision, recall and level of detail) of the model, probe for harmful bias, and assess the model's performance in specific test cases. TechWolf considers two types of updates:
- Minor updates - these are small changes, such as adding a small number of skills to the model or tweaking local properties, which keep the rest of the model stable. As the difference between versions is only noticeable in the targeted areas of improvement, these updates are verified by the TechWolf team using automated tests, evaluation datasets and manual verification.
- Major updates - these are larger upgrades that can cause substantial shifts in outputs and results, such as an entirely new model architecture or algorithm. These bigger updates are grouped into milestone releases, which are communicated to the customer in advance. The customer then has the chance to test the changes in development and acceptance environments before the actual deployment.
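To illustrate the precision and recall measurements mentioned above (a generic sketch, not TechWolf's evaluation code), the standard definitions applied to a skill-extraction task compare the set of skills a model predicts against a hand-annotated reference set:

```python
def precision_recall(predicted: set[str], expected: set[str]) -> tuple[float, float]:
    """Standard precision/recall over sets of extracted skills.
    Precision: fraction of predicted skills that are correct.
    Recall: fraction of reference skills the model found."""
    true_positives = len(predicted & expected)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(expected) if expected else 0.0
    return precision, recall

# Example: two of three predictions are correct, and two of three
# reference skills were found, so precision and recall are both 2/3.
p, r = precision_recall({"python", "sql", "docker"}, {"python", "sql", "kubernetes"})
```

Running such a comparison over a full evaluation dataset, rather than a single example, is what yields the aggregate accuracy figures used to gate a release.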