What career will you get after graduation?
There is no single beaten path when choosing a career after graduation. While some degree programs prepare students for specific careers, there is no rule saying a sports science major has to work in a sports facility, a computer science major has to write code, or a legal studies major has to work in a law office.
For Chidozie Urom, a graduate student in the University of Providence’s Master of Science in Clinical Mental Health Counseling program, a career with the Utah-based business intelligence company VoicesSignals offered a unique opportunity to explore the intersection of human psychology and advanced technology. The position also gave Urom a chance to push himself out of his comfort zone, engage with a different side of psychology, and prepare for his future career aspirations.
Based in Cottonwood Heights, a small city in Salt Lake County, Utah, VoicesSignals is a growing business intelligence software company creating machine learning software for businesses of all sizes. The company’s products combine the human insights of psychology with the technical machinery of machine learning algorithms, producing adaptive, intelligent software designed to help businesses understand the habits of their customers.
To do this, VoicesSignals uses a rapidly growing technology, artificial intelligence (A.I.), to assess the customer experience and report actionable data for companies to use.
Artificial intelligence is becoming more commonplace as businesses explore ways to learn about customers throughout the sales funnel. What was once a labor-intensive job requiring teams of people, countless hours, and millions of data points is now completed in minutes or seconds using artificial intelligence – improving customer recruitment, retention, and experience.
“[VoicesSignals] combines world-class psychological science with advanced voice analytics to offer rich, real-time insight into individuals’ emotions and personality traits in order to predict their future behaviors. All based on conversations that are already happening,” the company’s website reads.
Urom, approaching one year with the company, plays a valuable role in the machine-learning process. Once computer scientists, software engineers, and programmers build the A.I., they pass it over to Urom, who works directly with the software to guide its training on its assigned tasks.
“My job is to teach the A.I. software how to identify core personality characteristics and understand prosody in speech and language,” Urom said of his responsibilities at the company. “I am a type of personal psychologist for the company, and I work with other psychologists across different levels.”
Although he doesn’t work directly on software development, Urom’s role at the company is a critical step in ensuring the A.I. accurately understands customers on a psychological level.
“[The software engineers] create a platform that is easy to understand so we can ‘teach’ the A.I. to learn these patterns and engender intelligence,” Urom said of the process.
To train the A.I. to recognize personality characteristics, Urom listens to voice recordings previously collected by the company. The recordings are uploaded to the mainframe for Urom and the A.I. to listen to – the first step in the machine learning process.
“While listening to the recordings, I pause at moments of significant or minuscule emotional expression. I then label the appropriate emotional criteria for the A.I., which then learns the emotion and can apply the same sentiments to similar emotional expressions,” Urom said.
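In machine learning terms, the workflow Urom describes is supervised annotation: a human marks timestamped segments of a recording with emotion labels, and the software learns to assign the same labels to similar examples. The sketch below illustrates that idea with a toy nearest-centroid classifier – the segment structure, the prosody features, and all the numbers are invented for illustration and are not VoicesSignals’ actual system.

```python
from dataclasses import dataclass
import math

@dataclass
class LabeledSegment:
    start_s: float    # timestamp where the annotator paused the recording
    end_s: float
    features: tuple   # hypothetical prosody features (e.g. pitch variance, energy)
    emotion: str      # emotion label assigned by the human annotator

def train_centroids(segments):
    """Average the feature vectors of all segments sharing an emotion label."""
    sums, counts = {}, {}
    for seg in segments:
        acc = sums.setdefault(seg.emotion, [0.0] * len(seg.features))
        for i, value in enumerate(seg.features):
            acc[i] += value
        counts[seg.emotion] = counts.get(seg.emotion, 0) + 1
    return {e: tuple(v / counts[e] for v in acc) for e, acc in sums.items()}

def classify(features, centroids):
    """Assign the emotion whose centroid is nearest in feature space."""
    return min(centroids, key=lambda e: math.dist(features, centroids[e]))

# Toy annotated segments: (pitch variance, energy) pairs, made up for illustration.
labeled = [
    LabeledSegment(1.2, 2.0, (0.9, 0.8), "joy"),
    LabeledSegment(5.4, 6.1, (0.8, 0.9), "joy"),
    LabeledSegment(9.0, 9.7, (0.1, 0.2), "calm"),
    LabeledSegment(12.3, 13.0, (0.2, 0.1), "calm"),
]
centroids = train_centroids(labeled)
print(classify((0.85, 0.85), centroids))  # a new segment resembling the "joy" examples
```

The point of the sketch is the shape of the loop Urom describes: human labels accumulate into a model, and the model then generalizes those labels to expressions it has not seen before.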
Urom must carefully assign the correct emotion at each point in the recording, selecting from the more than 27 distinct emotion categories researchers have identified. After assessing the recording and identifying its emotional expressions, Urom moves to the final step of the process: giving the A.I. one last piece of personality data, this time through a specialized assessment.
“I use the NEO PI-R, an emotional and personality assessment inventory that psychologists often use in assessing personality,” Urom said. “I use this to create an overall assessment of the individual who is speaking based on their conversation and voice recording.”
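The NEO PI-R measures the five broad domains of the Big Five personality model (Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness), each through six narrower facets. As a rough illustration of how facet-level observations roll up into an overall profile, here is a simplified sketch: the domain and facet names reflect the inventory’s actual structure, but the 0–1 scoring scheme and the sample values are invented (the real instrument uses standardized item scoring and population norms).

```python
# The five NEO PI-R domains and their six facets each (the names are the
# inventory's real structure; the scores used below are invented).
NEO_PI_R = {
    "Neuroticism": ("anxiety", "angry hostility", "depression",
                    "self-consciousness", "impulsiveness", "vulnerability"),
    "Extraversion": ("warmth", "gregariousness", "assertiveness",
                     "activity", "excitement-seeking", "positive emotions"),
    "Openness": ("fantasy", "aesthetics", "feelings",
                 "actions", "ideas", "values"),
    "Agreeableness": ("trust", "straightforwardness", "altruism",
                      "compliance", "modesty", "tender-mindedness"),
    "Conscientiousness": ("competence", "order", "dutifulness",
                          "achievement striving", "self-discipline", "deliberation"),
}

def domain_profile(facet_scores):
    """Average each domain's six facet scores into one domain score."""
    return {
        domain: round(sum(facet_scores[f] for f in facets) / len(facets), 2)
        for domain, facets in NEO_PI_R.items()
    }

# Hypothetical facet ratings for one speaker, on a made-up 0-1 scale.
scores = {f: 0.5 for facets in NEO_PI_R.values() for f in facets}
scores["warmth"] = 0.9            # the annotator heard a notably warm speaker
scores["positive emotions"] = 0.8
profile = domain_profile(scores)
print(profile["Extraversion"])
```

The takeaway is the aggregation step: many small judgments about a conversation are condensed into a compact personality profile the software can attach to a speaker.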
Teaching an A.I. can be slow and, in many cases, very repetitive. Just like teaching someone to ride a bike, it takes time for the A.I. to recognize and learn the different patterns, behaviors, and tasks it must complete. While A.I. has improved over the years, fully autonomous systems remain few and often limited in scope. A.I. is most visible today in everyday tools such as text-to-speech, predictive text, and adaptive cruise control, all of which still require human correction.
With the help of many psychological perspectives across different academic levels, VoicesSignals’ A.I. can inventory and assess thousands of different emotional expressions and NEO PI-R assessments – helping it provide better insights from millions of data points. Thanks to Urom and other psychological assessors, the A.I. can recognize emotion and translate it into actionable data.
“I see this merger between technology and psychology to be a force for good,” Urom said when addressing A.I. and its role in psychology. “While A.I. can be appropriated to achieve certain aims, I believe A.I. will never replace authentic human relationships.”