Title: AI Learns Gender and Racial Biases From Language
Author: Jeremy Hsu
Text Type: Article
In this article, Jeremy Hsu discusses how AI systems learn gender and racial biases from existing English texts, and how these systems could go on to exhibit the biases found in those texts.
Part of the text says that the AI is more likely to associate words such as "flower" and "music" with being pleasant, and words such as "insect" and "weapon" with being unpleasant. But the AI was also more likely to associate European American names, rather than African American names, with being pleasant, and to associate women with jobs in the arts rather than in mathematics. I think this is a problem because, when we eventually introduce these AI systems into our world, they will already hold preconceived prejudices. This will lead to even more gender and racial problems that future generations will have to endure. If these AIs can pick up the different prejudices that the human race holds against each other, then it is possible that the racial and gender problems we have right now will only worsen. Since these AIs effectively think for themselves, the preconceived ideas they have gained by analysing us as a race and our language may actually become the ideas that they follow and try to implement. If AIs become a race of their own, then they could also feed off each other's information like we do, and if they did, they would begin to form their own prejudices amongst their race against ours.
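The associations described above come from word embeddings, where each word is a vector and "closeness" is measured by cosine similarity. The following is a minimal sketch of that kind of association test, using tiny made-up toy vectors (not values from any real embedding model) purely to illustrate the idea:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 2-D word vectors, invented for illustration only.
# Real embeddings (e.g. word2vec, GloVe) have hundreds of dimensions.
vectors = {
    "flower":     np.array([0.9, 0.1]),
    "insect":     np.array([0.1, 0.9]),
    "pleasant":   np.array([1.0, 0.0]),
    "unpleasant": np.array([0.0, 1.0]),
}

def association(word):
    """Positive score: the word sits closer to 'pleasant' than 'unpleasant'."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

print(association("flower"))  # positive: "flower" leans pleasant
print(association("insect"))  # negative: "insect" leans unpleasant
```

The study the article reports on applies the same idea at scale: if names or gendered words sit measurably closer to "pleasant" or to career-related words, the embedding has absorbed that association from its training text.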
Another part of the text I found interesting was this quote: "The worry is that if machine learning is replicating human biases, it's also reflecting that back at us. Perhaps it would further create a feedback loop in perpetuating those biases." I found this to be an extremely true statement, and I made a similar point in my first paragraph: if these AIs are recreating our biases, then they will feed off each other's information just like we do. This world is run by opinions and power; if a leader thinks something needs changing, that is their opinion, and they have the power to implement that change. Prejudices are built upon opinions about other people because of their religion, cultural background, gender, and so on. Over the years, these opinions have become preconceived thoughts that people now treat as normal because they are so common. Since AI is picking up on all of the prejudices we hold against each other, these prejudices will become even more common and harder to control, because not only are we reinforcing them, but the AI is also picking up our ideas and making them its own, reinforcing these prejudices and only making them worse.
Link for Article
https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-learns-gender-and-racial-biases-from-language