How we do unbiased and fair candidate evaluations
When a candidate responds to a questionnaire, each answer is compared with the example answer from the interview script. The answer is broken down into its semantic meaning, and the distance in meaning is measured in a high-dimensional space built from a large language model provided through our partnership with OpenAI, whose models capture human language understanding.
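As a rough sketch of the idea (not our production pipeline), answers and example answers can each be turned into an embedding vector, and closeness in meaning measured as the cosine similarity between those vectors. The vectors below are made up for illustration; real embeddings come from a language model and have hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity: near 1.0 means same meaning, near 0 means unrelated."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings, invented for this example.
reference = [0.9, 0.1, 0.3]   # embedding of the example answer
candidate = [0.8, 0.2, 0.35]  # embedding of the candidate's answer
off_topic = [0.1, 0.9, 0.0]   # embedding of an unrelated answer

print(cosine_similarity(reference, candidate))  # high: close in meaning
print(cosine_similarity(reference, off_topic))  # low: off topic
```

The candidate answer scores far higher than the off-topic one even though neither shares exact wording with the reference, which is the point of comparing meanings rather than keywords.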
The answers are not evaluated by humans, and the candidate's background plays no part in the evaluation. This is what keeps our evaluations unbiased and fair, and it lets us produce scores and rankings immediately.
Synonyms, different phrasings, and different approaches to the same answer all score well. The model understands the meaning of a phrase and reasons about that meaning rather than the literal text used to express it. Because the evaluation does not depend on keyword matching, it is a much better fit for candidates who express themselves differently but are equally competent.
The score is calibrated to a 0-5 star rating, where 0 means the answer is off topic, 3 is a competent answer, and 5 is an expert answer.
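One illustrative way to calibrate a similarity score into such a star scale is to treat everything below a cutoff as off topic and scale the rest linearly. The cutoff and mapping below are invented for the example, not our actual calibration.

```python
def stars(similarity: float) -> float:
    """Map a similarity in [0, 1] to a 0-5 star rating.

    The thresholds are illustrative assumptions: below `off_topic_cutoff`
    the answer is treated as off topic (0 stars); above it, the remaining
    range is scaled linearly so a mid-range answer lands near 3 stars and
    a near-perfect match approaches 5.
    """
    off_topic_cutoff = 0.3  # assumed value, not a real parameter
    if similarity <= off_topic_cutoff:
        return 0.0
    scaled = (similarity - off_topic_cutoff) / (1.0 - off_topic_cutoff)
    return round(5 * scaled, 1)

print(stars(0.2))   # off-topic answer -> 0.0
print(stars(0.72))  # competent answer -> 3.0
print(stars(1.0))   # perfect match -> 5.0
```

In practice a calibration like this would be fitted against scored example answers rather than fixed by hand.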