AI Concerns
AI 2027
AI 2027 is a forecast of our future with AI written by AI researchers. It describes two possible futures: one that ends in the extinction of the human race, and one in which AI takes over by manipulating humans. Both the accuracy of this prediction and the timeline of its events are debated, but many AI researchers agree that something like AI 2027 will happen at some point.
Accountability and Liability
Imagine this scenario: John works for a sales company, Sporkies, which encourages the use of AI. If John asks an AI how much a spoon can hold, it gives him an incorrect answer, and he passes that incorrect information to a customer, who is at fault?
- The person using the AI? Maybe John should have known, on his own, that AI is not correct all the time. It is important to use AI in collaboration with knowledge you already have, not in place of it.
- The developer of the AI? Maybe the developer should have used better training data, filtering out incorrect statistics and factual claims.
- The company, for not informing the person? Or maybe Sporkies should have warned its employees about the dangers of using AI. It could hold AI information sessions to educate everyone on where exactly AI can be most useful.
When AI makes a mistake, it is legally unclear who bears responsibility for the damages the mistake causes. Many people believe that the person who used the AI should be held accountable, while others think that the companies that made the AI should be responsible for the damages. There have been cases where doctors used an AI that misdiagnosed a patient; such cases draw attention to a legal system that has not caught up with the age of AI. However, progress is being made. Senator Cynthia Lummis of Wyoming recently proposed the Responsible Innovation and Safe Expertise Act of 2025, a bill that defines who is accountable when AI makes an error in a professional setting. Under the proposal, AI companies would have to increase transparency by publicly disclosing model data; in return, professionals, now informed about the model, would be held accountable for its errors. This frees AI developers from the worry of being held accountable and directs legal liability clearly.
Bad Actors
People may be tempted to use AI for a multitude of malicious purposes, such as creating bioweapons, committing fraud or running scams, impersonating others to steal information, generating nonconsensual images and videos, and creating disinformation meant to mislead unsuspecting individuals.
Bias
AI replicates human bias because the data it’s trained on is made by humans. EX: If a facial recognition system’s training input is mostly focused on the facial features of middle-aged white men, the AI will work best for a middle-aged white man and worst for a young girl of color. AI also finds patterns even when they are not intentional, which can lead to bias. EX: If a company started as a family business in Michigan, the majority of its employees could be white people from Michigan. If it used AI to screen job candidates, the AI would prioritize white candidates from Michigan because they resemble the employees already at the company, as the sketch below illustrates.
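To make the second example concrete, here is a minimal Python sketch of how a model can absorb bias from historical hiring data. The dataset, names, and scoring function are all invented for illustration; this is not how any real hiring system works, only a toy demonstration of the pattern-copying mechanism.

    # A toy "model" built from historical hires at a hypothetical Michigan
    # family business. All names and data are invented for illustration.
    from collections import Counter

    # Every past hire happens to be from Michigan.
    past_hires = [
        {"state": "MI", "hired": True},
        {"state": "MI", "hired": True},
        {"state": "MI", "hired": True},
        {"state": "OH", "hired": False},
        {"state": "CA", "hired": False},
    ]

    def hire_rate_by_state(records):
        # Estimate P(hired | state) from past decisions, a stand-in for the
        # pattern a statistical model extracts from skewed training data.
        totals, hires = Counter(), Counter()
        for r in records:
            totals[r["state"]] += 1
            hires[r["state"]] += int(r["hired"])
        return {s: hires[s] / totals[s] for s in totals}

    def score_candidate(candidate, rates):
        # The "model" scores a candidate by how much they resemble past
        # hires, so equally qualified out-of-state candidates score zero.
        return rates.get(candidate["state"], 0.0)

    rates = hire_rate_by_state(past_hires)
    print(score_candidate({"state": "MI"}, rates))  # 1.0, favored
    print(score_candidate({"state": "CA"}, rates))  # 0.0, penalized

Note that nothing in the code mentions where a candidate is from as a goal; the bias enters entirely through the historical data the model imitates.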
Copyright
If AI assists in creating a painting, whose art is it? Is it the person who prompted the AI? Is it the AI itself? Is it the company that built the AI? Or is it the person whose work was used as the AI’s training data? Does it depend on how much of the art was shaped by the AI? All of these questions are being debated to determine how copyright law should be applied. There are many court cases of creative companies suing AI companies for using their work in ways they did not consent to, for example, as training data.
Environment
Constantly storing and pulling information from data centers takes massive amounts of energy. To prevent the data centers from overheating, energy-intensive cooling systems are put in place. Large data centers often cool their servers with water. This not only requires additional energy but also pollutes the water, which is then either released outside, damaging the ecosystem, or cleaned, requiring significantly more energy.
Human Rights Violations
Before an AI model can use training data, the information must be sorted, or tagged, into the categories where it is relevant. This process is currently done manually, largely by low-paid workers abroad, most of whom speak English as a second language (another reason data can be inaccurate).
https://gdpr.eu/what-is-gdpr/
Privacy
Five privacy issues that AI brings up:
- Data collection: AI scrapes the web for data it can be trained on, which often means data is taken without the consent of its owners. Many artists and writers have seen their work taken by AI this way.
- Repurposing of data: Data collected for one purpose is often used for another. AI creators frequently repurpose personal information to train their models.
- Problems with facial recognition: AI facial recognition is commonly used to identify criminals, but it often falsely identifies minorities, especially Black men, as criminals. AI reflects the data it is trained on, so a system trained on biased data will itself be biased.
- Lack of regulation: There is currently no national legislation concerning AI privacy. This unnerving lack of legislation enables AI to take personal data without consent.
- Susceptible data: AI systems hold a great deal of sensitive information, which can be hacked and stolen by attackers.