[NEW!] TETRA AVA TALK:
If this project sparks your imagination, in this sector or another one entirely, be sure to get in touch with us! We are currently planning the continuation of this project in the shape of a TETRA project. Please take a look at our flyer, or answer a short survey to let us know you’re interested in using this technology and might want to join the new project’s steering group!
SPACE 2.0 is a collaboration between the research teams of Digital Arts and Entertainment and Applied Psychology.
During the Applied Psychology degree at Howest, students are taught various conversational strategies. To practice this skill, students take part in role-playing exercises. These exercises require a lot of teacher commitment, are difficult to evaluate, give each student only limited feedback, offer limited flexibility in the learning process, and are heavily dependent on peer effort.
The focus of the project is to replace these role-playing exercises with an AI agent that takes on the client role, allowing the students to talk to a virtual character instead of one of the other students.
To accomplish this, the team makes use of Large Language Models, a recent advancement in AI technology known widely from OpenAI’s ChatGPT. Leveraging this new tech allows the virtual persona to be creative and answer questions correctly, while still sticking to preset backstory information and personal characteristics. Some personas might be open, willing to share, and looking for a solution; others might be more closed-off, requiring specific questions and strategies before they disclose more information.
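To illustrate the general idea (not the project’s actual implementation), a persona can be kept in character by placing its backstory and characteristics in the system prompt of a chat-style LLM call. The persona description, model name and helper function below are purely illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical persona definition: backstory and traits the model must stick to.
PERSONA_PROMPT = """You are Els, a 42-year-old client visiting a psychologist for the first time.
Backstory: you have been struggling with sleep problems since a recent job change.
Characteristics: you are closed-off and only share details when asked specific,
open-ended questions. Never break character or mention that you are an AI."""

def client_reply(conversation: list[dict]) -> str:
    """Generate the virtual client's next reply, staying within the preset persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": PERSONA_PROMPT}] + conversation,
    )
    return response.choices[0].message.content

# Example turn: the student (user) asks a question, the persona answers in character.
history = [{"role": "user", "content": "How have you been sleeping lately?"}]
print(client_reply(history))
```

How open or closed-off a persona behaves can then be varied simply by editing the prompt, rather than by hand-scripting every possible answer.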
SPACE 2.0 is a continuation of the SPACE PWO project, which attempted to solve this same issue with the conversational chatbots of the time. That technology was insufficient to achieve a practically useful result: conversations were very rigid and repetitive, and creating a new persona required an extreme amount of manual labour.
Aside from the Large Language Model, we also use speech-to-text and text-to-speech so the user can speak with the virtual client, as well as Metahumans and speech-to-animation to create a virtual character that embodies the client. This gives users the more natural feeling of speaking to a real person, instead of feeling like they are talking to a computer.
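A minimal sketch of one spoken turn in such a pipeline is shown below, assuming OpenAI’s Whisper and TTS endpoints stand in for the speech-to-text and text-to-speech components (the project may well use different ones); the speech-to-animation step that drives the Metahuman is left as a placeholder, and client_reply is the hypothetical helper from the previous sketch:

```python
def handle_turn(student_audio_path: str, conversation: list[dict]) -> str:
    """One turn of the spoken conversation: student audio in, client audio out."""
    # 1. Speech-to-text: transcribe what the student said.
    with open(student_audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        ).text

    # 2. LLM: generate the virtual client's in-character reply.
    conversation.append({"role": "user", "content": transcript})
    reply = client_reply(conversation)
    conversation.append({"role": "assistant", "content": reply})

    # 3. Text-to-speech: voice the reply so the virtual character can be animated to it.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    with open("client_reply.mp3", "wb") as out:
        out.write(speech.content)

    # 4. Speech-to-animation: hand the audio to the Metahuman animation system (placeholder).
    return "client_reply.mp3"
```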
In addition to the persona, the goal is also to build an observer that evaluates the student’s conversation with the virtual client based on how well they apply the strategies they are taught. This might give the students some initial feedback they can use to improve, or at the very least give the lecturer a quick overview of which elements the students should focus on more.
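One way such an observer could work, sketched under the same assumptions as above, is a separate LLM call that scores the finished transcript against the strategies taught in the course; the strategy list and prompt here are purely illustrative:

```python
STRATEGIES = ["open-ended questions", "reflective listening", "summarising"]  # illustrative list

def evaluate_conversation(transcript: str) -> str:
    """Ask an LLM to rate how well the student applied each conversational strategy."""
    prompt = (
        "You are an observer grading a counselling role-play.\n"
        f"Strategies to check: {', '.join(STRATEGIES)}.\n"
        "For each strategy, give a score from 1 to 5 and one sentence of feedback.\n\n"
        f"Transcript:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```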
Virtual training conversations allow for efficient hands-on experience without the need for the physical presence of a real client. This saves time and costs compared to traditional training involving live actors or real patients. Virtual training is also available anywhere, anytime, allowing students to improve their conversational skills whenever best suits them. This increases accessibility to training, especially for those who are geographically limited or have busy schedules.
Training with a virtual client also provides a safe environment in which students can experiment, make mistakes and learn without risk to real patients. We expect that this will contribute to the students’ self-assurance and improve their performance in real clinical situations. Programs can also be tailored to individual needs and skill levels, allowing for more focused and effective training, as students can concentrate on specific aspects of their conversational skills.