Ethics of AI

Jiayi Li

Transcription

The podcast discusses whether sentiment analysis is helpful in understanding people's true emotions. The guests, Jonathan and Emma, discuss the technical and linguistic aspects of sentiment analysis. They mention that bias can be present in algorithms and models developed by humans. To mitigate this, proper training and ethics researchers are necessary. They also discuss the challenge of improving accuracy and the dynamic nature of language. Emma explains how humans understand emotions through speech cues and body language. She mentions the challenges machines face in understanding emotions in the same way. Ethical considerations include minimizing biases in the data used to train AI. Overall, sentiment analysis is not currently able to fully understand human emotions, but the importance of contextual information is emphasized for future development.

Hello everyone, welcome to my podcast channel. The topic of this podcast is: is sentiment analysis helpful in understanding people's true emotions? I have invited two friends to join, Jonathan and Emma. Jonathan will focus on the technical aspects, and Emma will concentrate on the linguistic perspective. Both of them will discuss the ethical issues involved.

Hi Jonathan, thank you for doing this interview with me. Should we start from the definitions of NLP and sentiment analysis?

Broadly speaking, NLP helps computers understand human language. That's a simple definition of NLP, and sentiment analysis is one of its applications. Let me elaborate a bit more: my understanding of sentiment analysis is that you extract subjective information, such as opinions and emotions, from text.

Do you feel that could introduce bias at the level of theory? Different models would probably generate different conclusions from the same piece of text.

Yeah, for sure. Algorithms and models are developed by human developers, so they are inherently prone to bias. When you construct a model, you have to think about it as a human, and your biases can explicitly or implicitly affect how you construct it.

Are there any tactics to mitigate this issue?

It should start with the human developers. You need to give them proper training so they recognise the biases they potentially carry. You also have to bring ethics researchers into the development team, so they know what's going on and can fix potential problems if any arise, right?

Probably they need someone who has actually taken an ethics course.

I agree. Currently, people are mostly trying to give the model or algorithm more predictive power. But I can also see the future trend: development teams should make models more explainable, in the sense that you can explain what is going on inside. It shouldn't be a black box.

Is that possible?

It's highly possible. I think it was last year that I read some papers from Stanford. You know neural networks, right? Researchers are trying to make them more explainable, including CNNs and RNNs. I don't want to make it too complex, but that's the current situation.
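To make Jonathan's point concrete that different models can generate different conclusions from the same piece of text, here is a minimal, self-contained Python sketch. The two lexicons and their weights are invented for illustration; real systems learn far larger resources from data, and inherit whatever biases that data carries.

# Two toy lexicon-based sentiment "models" scoring the same sentence.
LEXICON_A = {"nice": 1.0, "sick": -1.0}   # literal reading of "sick"
LEXICON_B = {"nice": 1.0, "sick": 1.5}    # slang reading: "sick" = great

def score(text, lexicon):
    # Sum the lexicon weight of every known word in the text.
    return sum(lexicon.get(word, 0.0) for word in text.lower().split())

sentence = "that trick was sick"
print(score(sentence, LEXICON_A))   # -1.0 -> model A calls it negative
print(score(sentence, LEXICON_B))   #  1.5 -> model B calls it positive

Lexicon-based scoring is only one family of approaches, but it shows how an assumption baked into a model, here whether the slang sense of "sick" reads as positive, directly changes the conclusion drawn from identical text.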
Back to the topic: do you think sentiment analysis can really understand human emotions?

To be honest, I don't think so. Sentiment analysis is still challenging even for human beings. How can computers beat us when they don't have enough contextual information?

How can we improve the accuracy of the algorithms applied to sentiment analysis?

You want to improve accuracy, right? But what if there's nothing you can benchmark against? Even on the benchmark itself, people may not agree; it isn't clear what accuracy even means here. That's a philosophical question, but you see what I mean. Even we as human beings may not agree on whether a given sentence is positive or negative. Another thing is that language evolves. There is a paper in PNAS, the Proceedings of the National Academy of Sciences, one of the top journals in general science. It studies the trend of people using adjectives differently at different points in time. For example, in the last century, terrific meant horrible, terrible. But in recent years, people use it in a positive way: it's wonderful, it's amazing. So the meaning is completely the opposite, and you can see the benchmark changing as well. So I would say I don't know how we can ultimately settle on accuracy, because it's dynamic.

So you think the barrier for sentiment analysis is linguistic?

Exactly, and that's a really great point. Go back to the definition of NLP: letting computers understand human language. If human language keeps changing, the computer of course cannot keep up with us, right? It doesn't know the trend of using adjectives differently. Before, people used terrific in a bad way; now people use it in a good way. It could never predict that.
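Jonathan's "terrific" example can be sketched the same way: a sentiment lexicon frozen at one point in time misreads later usage, so any benchmark labelled decades ago drifts for the same reason. The weights below are invented for illustration.

# A static lexicon frozen in time misreads later usage of "terrific".
LEXICON_1900 = {"terrific": -1.0}   # older sense: "causing terror"
LEXICON_2020 = {"terrific": 1.0}    # current sense: "wonderful"

def polarity(text, lexicon):
    # Classify by the sign of the summed word weights.
    total = sum(lexicon.get(word, 0.0) for word in text.lower().split())
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"

review = "the show was terrific"
print(polarity(review, LEXICON_1900))   # negative -- wrong for today's usage
print(polarity(review, LEXICON_2020))   # positive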
For the next segment of the podcast, I have invited Emma to provide a linguistic perspective on the possibility of sentiment analysis. Thank you for talking with me, Emma.

Thank you for inviting me to the podcast today. I'm really excited to talk about this topic.

So Emma, how is it possible for humans to understand each other's emotions when communicating?

Yeah, so when we talk about human communication, and in particular communication via speech, because obviously sign languages are also natural languages, I would say there are two parts to how humans understand each other's emotions. Firstly, there are the cues within the speech signal itself, the actual acoustic cues. But a really big part of understanding emotion also comes from being able to see body language, for example, and from picking up on how people are reacting to your speech. There are multiple ways we can manipulate our intonation and prosodic features, which signal to the listener the different meanings we might have behind a sentence. Take sarcasm: a speaker might say something really sarcastic like, oh, you look really nice today. As humans, we can understand that the speaker is being sarcastic, and possibly quite rude about how the person they are addressing looks. But the only way we can recover this meaning is through the prosodic features of the sentence, since the actual content of the words, you look really nice today, would be taken as a compliment in any other circumstances, right? So from a language perspective, acoustic cues, intonation and prosodic features, and how we say certain words differently, maybe stressing some words over others, play a really big part in how we understand each other's emotions.

So would it be possible for machines to use sentiment analysis in the same way?

I think this will definitely come with a lot of challenges, because from my understanding of sentiment analysis so far, a lot of the analysis comes from collecting written corpus data for a language and using that written data to train AI. In that case, you can see how it would be really difficult for machines to understand human emotions the way we can, since, as we've already discussed, the words themselves often don't hold much emotional meaning; there's another layer in the speech signal in which emotions are encoded. When it comes to written language, we could potentially train AI for sentiment analysis to pick up on human emotions by using punctuation. For example, a question mark would indicate a question, and an exclamation mark could indicate excitement, or perhaps even anger in certain situations. However, even with punctuation, the ability to analyse the really broad range of emotions that humans can feel is very limited.

Then from a language point of view, what ethical considerations would we need to take into account when training AI?

Yeah, I think in terms of ethics, the biggest consideration is the data we give the AI. Since men and women, let's say, are socialised in very different ways, there are actually patterns of language that men might use more and women less, and vice versa. A well-researched example of this in the linguistics literature is politeness markers. When a woman asks for a favour, she might use more hedging and more indirect markers of politeness, and be perhaps less direct with the question than a man might be. Nuances like this definitely need to be taken into account in the data we provide to AI; otherwise we could end up perpetuating lots of systemic biases, or even exacerbating them. And AI can often pick up on patterns which we humans would sometimes not even be able to notice and account for.

Thank you so much for your time, Emma.

Thank you for having me, Jiayi. It was really great talking to you today.

So, from an ethical standpoint, both the technical and the linguistic guests think we have to be really careful about the quality of the data we give the AI and use to train it. We need to ensure that we minimise biases in the data as much as possible, so that the AI doesn't learn those biases and push systemic discrimination further.

Back to the question, then: is sentiment analysis really helpful for understanding people's real emotions? The technology side is looking to linguistics for improvement, but linguistics does not yet have a perfect model to build the algorithm on either. So at the current stage, the answer would probably be no: sentiment analysis cannot really understand human emotions. But both guests stressed the importance of contextual information, and that would be the best direction for future development. Thank you all for listening, and see you next time.
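As a closing illustration of Emma's points about punctuation cues and politeness markers, here is a minimal Python sketch. The hedge list and the punctuation-to-emotion mapping are invented for illustration and are far cruder than anything a production system would use.

# Crude text-only cues: sentence-final punctuation as an emotion signal,
# and hedging words as politeness markers.
HEDGES = {"maybe", "perhaps", "possibly", "could", "might", "sorry"}

def punctuation_cue(text):
    # Map final punctuation to a coarse emotional reading.
    stripped = text.rstrip()
    if stripped.endswith("!"):
        return "heightened emotion (excitement or anger)"
    if stripped.endswith("?"):
        return "question"
    return "no strong punctuation cue"

def hedge_count(text):
    # Count hedging/politeness markers after stripping commas.
    words = text.lower().replace(",", "").split()
    return sum(1 for word in words if word in HEDGES)

print(punctuation_cue("You look really nice today!"))          # heightened emotion
print(hedge_count("Sorry, could you maybe open the window?"))  # 3 hedges

Even this toy shows the limit Emma describes: an exclamation mark flags heightened emotion but cannot separate excitement from anger, and it says nothing about the sarcastic reading of "you look really nice today" that prosody would reveal.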
