TAMPA, Fla. — A recent doctoral graduate from the University of South Florida studied the emerging technology of hyper-realistic avatars for her dissertation.
Jill Schiefelbein wanted to know whether those lifelike avatars could effectively communicate with people, and whether they would affect how much a person trusted and remembered a message.
Schiefelbein is also the Chief Experience Officer for Render, a company that specializes in digital likeness solutions. Using Render, she created a hyper-realistic avatar, a video and vocal clone of herself, in a studio. She could then tell that avatar what to say, and it would deliver the message in a way that looks strikingly human.
“My hyper-realistic avatar, as I like to call her, is my digital counterpart,” Schiefelbein said.
For her research, Schiefelbein utilized the USF Behavioral AI lab to test the impact of the avatars.
She used eye-tracking technology to see whether viewers paid attention to the avatar’s message, and facial expression sensors to gauge how viewers genuinely reacted to the avatars.
The biggest takeaway: Schiefelbein’s research revealed that hyper-realistic avatars could be effective communicators, provided companies made clear they were using an avatar rather than a real person.
“When you don’t disclose and someone finds out, they feel fooled. They feel deceived. They feel swindled,” Schiefelbein said, adding that nearly 85% of viewers who weren’t told about the avatar were unhappy to learn about it later. Those who were told did not share that anger.
Schiefelbein acknowledges there could be negative uses of hyper-realistic avatars. Others have raised concerns about avatars delivering misinformation or propaganda, or displacing people whose jobs involve delivering messages on behalf of others.
There are significant worries about transparency, too. In fact, President Biden issued an Executive Order last year that included requirements for more transparency when companies use artificial intelligence.
Schiefelbein prefers to focus on the positive uses of hyper-realistic avatars. She sees them as an upgrade from impersonal email, text, or chatbot communication, giving companies a way to be slightly more personal when they can’t communicate face-to-face.
“So now, instead of just having words that are ‘from’ (in an email), and let’s be honest, we don’t know if people are sending that email or not, we have words from people we’ve already started to form a relationship with,” Schiefelbein said.
USF’s Behavioral AI Lab is part of the school’s Center for Marketing and Sales Innovation Customer Experience Lab.