THE ETHICS OF ARTIFICIAL INTELLIGENCE
- jananijanakiraman03
- Jul 1, 2024
- 3 min read
It’s no surprise that Artificial Intelligence (AI) has made rapid advances into our society. Major tech companies like Apple, Google, and Microsoft all have their own AI systems open to public use, and recently ChatGPT has become a worldwide sensation. However, problems have started to arise. Our society has always had flaws, but we follow rules, consciences, and morals. AI does not work that way; it has no such emotional thought process. It simply sifts through data without any understanding of why things are the way they are. As a result, AI has developed biases of many kinds, including biases of race and gender. The question is whether AI is ready to be used in society the way it is being used today, or whether we need to take a step back.
Let’s look at some examples of failed uses of AI that led to nothing but catastrophe. The most well-known AI malfunction involved Tay, a chatbot that Microsoft released on Twitter. Microsoft claimed it had technology in place to filter negative messages, yet in less than 24 hours Tay was sending racist, antisemitic, and prejudicial tweets, and the bot had to be taken down immediately. Tay did not override its own code; it learned from the abusive messages users deliberately fed it, and the filters Microsoft had put in place failed to stop it. Despite the powerful software meant to keep Tay in check, the chatbot slipped past every safeguard that humans had built.
Another example is Amazon and its AI hiring system. Amazon built an Artificial Intelligence tool to screen each candidate’s qualifications and determine which applicants should move on to the next stage of hiring. It is well known that women have traditionally faced discriminatory practices in STEM fields, so they naturally had fewer STEM-related roles to put on their résumés. Trained on this historical hiring data, the AI began to favor male applicants over female applicants by a large margin: once it could infer that an applicant was male, it automatically preferred him, and the result was a sexist hiring system. Humans, with proper training and basic decency, can avoid mass-hiring one gender over another; the AI simply absorbed the pattern in its data. This is an example of gender bias in AI.
The final example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-assessment algorithm used in the criminal justice system, which developed a dangerous pattern of its own. Because its historical data showed black defendants recidivating at higher rates than white defendants, the algorithm began to assign black defendants higher risk scores regardless of their criminal history or the type of crime committed, which translated into harsher treatment. This is an example of racial bias in AI.
The common problem here is that Artificial Intelligence does not have the same kind of intelligence that humans possess. AI may be smarter in some respects, but humans are far more emotionally intelligent. Humans are capable of setting strict boundaries for themselves; AI has no conscience to set boundaries, so humans must set them for it. There would be no problem if software engineers could build a reliable boundary system, but as the Microsoft incident showed, our boundaries can fail. Is it ethical to hand power to a system that has no boundaries? Despite all of AI’s past failures, we have made little effort to keep AI in check until we learn how to create effective boundaries. AI keeps getting stronger, while the boundaries only get weaker.
If we do not work harder to create stronger boundaries and put a temporary pause on AI, we will harm many groups of people, especially minorities. Women, racial minorities, and people in general have already been harmed by AI. We have already begun to build AI-powered robots with powerful bodies and capable minds; once they are released to the public, what is to stop them, too, from being corrupted? A consequentialist approach must weigh the long-term effects. One might say that with AI we get great medical resources, advanced equipment, and so on. But in the long run, once AI takes over our occupations, are our lives really in our own hands?