AI makes Envoy’s online language tests accurate, reliable and valid, but it’s the added layer of human scrutiny that gives educators and users real confidence.
AI may be the future of technology, but humans are still at the heart of it.
The same is true at Envoy where people have played a crucial role in the development of the online English language testing program.
Not only have humans been instrumental in building the AI that powers Envoy; they continue to play a large part in its ongoing evolution.
Envoy is powered by a bespoke AI solution created by experts to evaluate students’ language skills with validity and reliability.
The human element enters through the data fed into the algorithms: expert human linguists assess and rate each piece of data before it is used for training.
The upshot is an AI-powered English language test driven by humans, says Reza Tasviri, IDP Education’s Head of Assessment.
“Envoy’s design was driven by what is important to assess in language testing as opposed to what technologies were available or made possible,” he said. “This theme flows throughout our entire test and is why humans still play a crucial role in our test. While Envoy is powered by AI, it is driven by humans.
“AI is only as good as the data it’s trained on. At Envoy we ensure all data going into our AI technology has been reviewed and checked by multiple human raters. This creates a golden set of data and ensures accuracy, fairness, and reliability in our testing process.”
Human involvement in Envoy doesn’t end at data development. It continues with ongoing analysis of data, which is tagged and rated by people proficient in the Common European Framework of Reference for Languages (CEFR) levels.
These experts rate each piece of data to detect inconsistencies, which are moderated or discarded from the ‘golden set of data’, with the remaining data used to train Envoy’s algorithms.
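The article doesn’t describe Envoy’s internal tooling, but a rater-moderation step like the one above can be sketched roughly as follows. Every name, the majority-style consensus rule, and the `max_spread` threshold here are illustrative assumptions, not Envoy’s actual process:

```python
from collections import defaultdict

# CEFR levels, ordered so agreement can be measured as a distance.
CEFR = ["A1", "A2", "B1", "B2", "C1", "C2"]

def build_golden_set(ratings, max_spread=1):
    """Keep items whose human ratings agree closely; flag the rest.

    ratings: list of (item_id, rater_id, cefr_level) tuples.
    max_spread: maximum allowed gap between the highest and lowest
    rating for an item (an illustrative threshold, not Envoy's).
    """
    by_item = defaultdict(list)
    for item_id, _rater, level in ratings:
        by_item[item_id].append(CEFR.index(level))

    golden, needs_moderation = {}, []
    for item_id, levels in by_item.items():
        if max(levels) - min(levels) <= max_spread:
            # Consensus label: round the mean rating to a CEFR level.
            golden[item_id] = CEFR[round(sum(levels) / len(levels))]
        else:
            # Raters disagree too much: send for moderation or discard.
            needs_moderation.append(item_id)
    return golden, needs_moderation

golden, flagged = build_golden_set([
    ("essay-1", "r1", "B2"), ("essay-1", "r2", "B2"), ("essay-1", "r3", "C1"),
    ("essay-2", "r1", "A2"), ("essay-2", "r2", "C1"),
])
# essay-1's raters agree within one level, so it enters the golden set;
# essay-2's raters are three levels apart, so it is routed to moderation.
```

Only items that survive this agreement check would feed the training data, which is the sense in which “AI is only as good as the data it’s trained on.”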
In a similar process, any completed tests flagged by Envoy’s AI algorithms for suspicious behaviour will be sent to a human rater for review. This person will weigh up the evidence to have a final say on whether the test results should be released or not.
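That human-in-the-loop release gate could look something like the sketch below. The class, function, and `flagged` field are hypothetical names chosen for illustration; the source only states that a human rater has the final say on flagged tests:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    test_id: str
    score: str     # CEFR level, e.g. "B2"
    flagged: bool  # set True by the (hypothetical) AI anomaly check

def release_decision(result, human_review):
    """Release unflagged results automatically; route flagged ones to
    a human rater, whose verdict is final.

    human_review: callable taking a TestResult and returning True to
    release or False to withhold (an illustrative interface).
    """
    if not result.flagged:
        return "released"
    return "released" if human_review(result) else "withheld"

# Usage: a flagged test only goes out if the human reviewer approves it.
clean = TestResult("t-1", "B2", flagged=False)
suspect = TestResult("t-2", "C1", flagged=True)
release_decision(clean, human_review=lambda r: False)    # "released"
release_decision(suspect, human_review=lambda r: False)  # "withheld"
```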
Envoy's report card is delivered within two hours of completing the test, with scores aligned to the CEFR.
Envoy’s strength lies in its ability to harness the best of cutting-edge AI technology in combination with the expertise and experience of human linguists.
It uses open-ended questions to simulate how people actually write and speak. It assesses four skills, which teachers can test separately or together. Tests can be easily set up online through a simple-to-use dashboard, and can be taken anywhere at any time.
A single test can be completed in as little as 20 minutes, while the test for all four skills can be done within 90 minutes.
Results are produced by AI with human oversight when required and are available within two hours.
Bringing the best of human and technology resources together allows Envoy to offer an English language proficiency test educators can rely on for accuracy and reliability.
Get in touch to learn more about Envoy.