Humans are not very good at distinguishing between human voices and voices generated by artificial intelligence (AI), but our brains do respond differently to human and AI voices, according to research presented today (Tuesday) at the Federation of European Neuroscience Societies (FENS) Forum 2024.
The study was presented by PhD student Christine Skjegstad and conducted by Skjegstad and Professor Sascha Frühholz, both from the Department of Psychology at the University of Oslo (UiO), Norway.
Skjegstad said: “We already know that AI-generated voices have become so sophisticated that they are almost indistinguishable from real human voices. It is now possible to clone a person’s voice from just a few seconds of recording, and scammers have used this technology to do exactly that. While machine learning experts have developed technological solutions to detect AI voices, much less is known about the human brain’s response to these voices.”
The study involved 43 people listening to human and AI-generated voices expressing five different emotional states: neutral, anger, fear, happiness and pleasure. They were asked to identify the voices as synthetic or natural while their brain activity was recorded using functional magnetic resonance imaging (fMRI), which detects changes in blood flow in the brain, indicating which areas are active. The participants were also asked to rate the characteristics of the voices they heard in terms of naturalness, trustworthiness and authenticity.
Participants correctly identified human voices only 56% of the time and AI voices 50.5% of the time, meaning they were equally bad at identifying both types of voices.
People were more likely to correctly identify a ‘neutral’ AI voice as AI (75%, compared with 23% who correctly identified a neutral human voice as human), suggesting that people assume neutral voices sound more AI-like. Female AI neutral voices were correctly identified more often than male AI neutral voices. For happy human voices, the correct identification rate was 78%, compared with just 32% for happy AI voices, suggesting that people associate happiness with being human.
Both AI and human neutral voices were perceived as the least natural, trustworthy and authentic, while human happy voices were perceived as the most natural, trustworthy and authentic.
However, when the researchers examined the brain imaging data, they found that human voices elicited stronger responses in areas of the brain related to memory (right hippocampus) and empathy (right inferior frontal gyrus), while AI voices elicited stronger responses in areas related to error detection (right anterior mid-cingulate cortex) and attention regulation (right dorsolateral prefrontal cortex).
Skjegstad said: “My research indicates that we are not very accurate at identifying whether a voice is human or AI-generated. The participants also often commented on how difficult it was for them to tell the difference between the voices. This suggests that current AI voice technology can mimic human voices so closely that it is difficult for people to reliably tell them apart.
“The results also indicate a perception bias whereby neutral voices were more likely to be identified as AI-generated and happy voices were more likely to be identified as human, regardless of whether they actually were. This was especially the case for neutral female AI voices, perhaps because we are familiar with female voice assistants such as Siri and Alexa.
“Although we are not very good at identifying human and AI voices, there does appear to be a difference in the brain’s response. AI voices can induce increased alertness, while human voices can evoke a sense of connection.”
The researchers now plan to investigate whether personality traits, such as extroversion or empathy, make people more or less sensitive to the differences between human and AI voices.
Professor Richard Roche is Chair of the Communications Committee of the FENS Forum and Deputy Head of the Department of Psychology at Maynooth University, Maynooth, County Kildare, Ireland, and was not involved in the research. He said: “Examining the brain’s responses to AI voices is crucial as this technology continues to develop. This research will help us understand the potential cognitive and social implications of AI voice technology, which can inform policy and ethical guidelines.
“The risks of this technology being used to defraud and deceive people are obvious. However, there are also potential benefits, such as providing voice replacements for people who have lost their natural voice. AI voices could also be used in therapy for some mental health conditions.”
More information:
PS07-29AM-602, “Neural dynamics of processing natural and digital emotional vocalization”, by Christine Skjegstad, Poster Session 07 – Late-Breaking Abstracts, Saturday June 29, 09:30-13:00, Poster Room, fens2024.abstractserver.com/pr…s/presentations/4774
Provided by Federation of European Neuroscience Societies
Citation: Our brains respond differently to human and AI-generated speech, but we still struggle to tell them apart (2024, June 24) retrieved June 24, 2024 from https://medicalxpress.com/news/2024-06-brains-differently-human-ai-generated.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.