Emotion recognition is on the rise

Emotion-detecting technology is both fascinating and alarming. Cameras that can capture micro-expressions on people’s faces, and voice recognition systems sophisticated enough to catch tonal variations, already exist and can be combined with algorithms to identify someone’s state of mind. Various industries have pounced on ‘affect recognition’, which is now being described as an industry in itself.
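The pipeline described above, a facial-expression signal and a voice-tone signal fused by an algorithm into a single judgement about someone’s state of mind, can be sketched as a simple late-fusion step. Everything below (the emotion labels, the weights, the scores) is invented for illustration and does not reflect any vendor’s actual system.

```python
# Hypothetical sketch of "affect recognition": fusing a facial-expression
# score and a voice-tone score into one emotion label. All labels, weights
# and scores are illustrative assumptions, not a real product's values.

EMOTIONS = ["neutral", "happy", "angry", "sad"]

def fuse(face_scores, voice_scores, face_weight=0.6):
    """Late fusion: weighted average of per-emotion probabilities."""
    combined = {
        e: face_weight * face_scores[e] + (1 - face_weight) * voice_scores[e]
        for e in EMOTIONS
    }
    return max(combined, key=combined.get)

# A frown dominates the face signal even though the voice sounds sad:
face = {"neutral": 0.1, "happy": 0.1, "angry": 0.7, "sad": 0.1}
voice = {"neutral": 0.3, "happy": 0.1, "angry": 0.2, "sad": 0.4}
print(fuse(face, voice))  # prints "angry"
```

Note how the fused label carries no context at all: the same frown score is produced whether the person is angry or simply in pain, which is exactly the out-of-context problem critics raise.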

According to a BBC report, the AI Now Institute, an interdisciplinary research centre at New York University dedicated to understanding the social implications of artificial intelligence, has raised concerns about the use of this technology. The institute says that affect recognition is being rapidly commercialised: “The affect-recognition industry is undergoing a period of significant growth: some reports indicate that the emotion-detection and -recognition market was worth $12 billion in 2018, and by one enthusiastic estimate, the industry is projected to grow to over $90 billion by 2024. These technologies are often layered on top of facial-recognition systems as a ‘value add’,” says the institute’s report.

The problem is that the technology is still on shaky ground and detects expressions out of context. A person could frown out of emotion but also in reaction to something like pain, and yet the result could be used either way, possibly discriminating against that person, such as in a hiring decision. Clearly, these technologies are growing too fast for our own good.

AI for suicide prevention

According to the US Centers for Disease Control and Prevention, the suicide rate among individuals aged 10 to 24 increased by 56 per cent between 2007 and 2017. Phebe Vayanos, assistant professor of Industrial and Systems Engineering and Computer Science at the USC Viterbi School of Engineering, has been enlisting a powerful ally, artificial intelligence, to help mitigate the risk of suicide. This is not the first time AI has been used to address this widespread problem, and Vayanos and her team have been working in this area for a few years now.

“Our idea was to leverage real-life social network information to build a support network of strategically positioned individuals that can ‘watch out’ for their friends and refer them to help as needed,” Vayanos said in a media communication. Her team has been designing an algorithm capable of identifying who in a given real-life social group would be the best people to train as “gatekeepers”: individuals able to recognise warning signs of suicide and respond appropriately.
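The core selection problem can be sketched as maximum coverage on a friendship graph: with a limited number of training slots, pick the people whose circles collectively watch over the most youth. The greedy heuristic and toy network below are illustrative assumptions only; the USC team’s actual algorithm is more sophisticated, handling the resource constraints and uncertainties the researchers describe.

```python
# Simplified sketch of gatekeeper selection: given a friendship network and
# a budget of k training slots, greedily choose the people who newly cover
# the most as-yet-uncovered peers. The network and the greedy heuristic are
# illustrative assumptions, not the published USC algorithm.

def pick_gatekeepers(friends, k):
    """friends: dict mapping each person to the set of peers they can watch out for."""
    covered, chosen = set(), []
    for _ in range(k):
        # Pick whoever adds the most people not yet covered by a gatekeeper.
        best = max(friends, key=lambda p: len(friends[p] - covered))
        chosen.append(best)
        covered |= friends[best]
    return chosen, covered

network = {
    "Ana": {"Ben", "Cal", "Dee"},
    "Ben": {"Ana", "Cal"},
    "Eve": {"Dee", "Fay", "Gus"},
    "Gus": {"Eve"},
}
gatekeepers, covered = pick_gatekeepers(network, k=2)
print(gatekeepers)  # prints ['Ana', 'Eve']
```

Training only Ana and Eve already puts every other youth in the toy network within one hop of a trained gatekeeper, which is the intuition behind positioning a small number of trained individuals strategically.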

“We often work in environments that have limited resources, and this tends to disproportionately affect historically marginalised and vulnerable populations,” said study co-author Anthony Fulginiti, an assistant professor of social work at the University of Denver. “This algorithm can help us find a subset of people in a social network that gives us the best chance that youth will be connected to someone who has been trained, when dealing with resource constraints and other uncertainties,” said Fulginiti. The work is particularly important for vulnerable populations, the researchers say, especially youth experiencing homelessness, and the approach could be feasible in other countries.
