There are many forms of AI. Most apps you use probably have AI integrated into them in some way. For example, social media apps curate your individual feed with machine-learning algorithms that recognize patterns in your behavior and recommend content accordingly. The same is true for targeted ads, music recommendations on Spotify, and more. Other forms of AI power everyday tools like GPS navigation. However, these forms of AI aren't usually what comes to mind when someone says the word "AI." What you're thinking of is most likely generative AI, which can create text, images, or videos by analyzing patterns in enormous amounts of data drawn from across the internet.
It can, especially if you rely on it too much. Think about it this way: if you go to the gym and have a robot lift all your weights, you won't get any stronger; your muscles may even weaken from disuse. When you use AI to do something for you, you miss out on an opportunity to strengthen your own skills. However, when used correctly, AI can help you without taking away from your overall learning experience.
In short, the black box is the mechanism an AI uses to produce its answers. It's called a black box because no one, not even the model's creators, can fully explain how it arrives at any particular output; the way AI makes decisions is far more complicated than a chain of if/then statements. One metaphor for the black box is a Plinko or disc-drop board. For a small board with only a few layers of pegs, it's possible to predict where the ball will go. A model like ChatGPT, however, whose neural network holds billions of parameters learned from enormous datasets, is like a board with layer upon layer of pegs stacked thousands deep, which makes it impossible to predict accurately where the ball will land, or to trace why it landed there.
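To make the Plinko metaphor a little more concrete, here is a minimal sketch in Python (all sizes and weight values are made up for illustration) of a signal passing through a few neural-network-style layers. Even at this toy scale, the output is a tangle of weighted sums that is hard to explain peg by peg; real models repeat this kind of mixing across billions of learned parameters.

```python
import random

random.seed(0)

def make_layer(n_in, n_out):
    # One layer of "pegs": a grid of weights (random here, standing in
    # for the billions of learned parameters in a real model)
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(x, layer):
    # Each output mixes *every* input, then zeroes out negative values
    # (a standard neural-network activation known as ReLU)
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in layer]

# A three-layer "board" with four values flowing through it
layers = [make_layer(4, 4) for _ in range(3)]
signal = [1.0, 0.5, -0.3, 0.8]  # the "ball" we drop in

for layer in layers:
    signal = forward(signal, layer)

print(signal)  # an answer comes out, but the path it took is already murky
```

Tracing why the final numbers came out the way they did means unwinding every weighted sum in every layer, and that job grows hopeless as the network scales up; that is the black box.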
AI can reduce the need for human workers, particularly in non-manual jobs, because it can boost productivity and efficiency. Instead of hiring ten researchers, a company might hire five and have them use AI. The same capability cuts both ways: it can make people more efficient, or it can take their jobs.
First, we can lobby and write letters to our state politicians asking them to pass laws and regulations against AI misuse. These rules could include data-protection laws that ensure online privacy and safety. Second, we think AI companies should spend more time ensuring that their AI is aligned; a well-trained model won't fulfill dangerous requests or requests that go against its policies. Third, educating people about AI is also important for responsible use, because it can keep people from falling for scams or becoming too dependent on the technology.
No. When you ask ChatGPT a question, it doesn't know the answer the way a human does; instead, it uses probability. If you asked a human what time it was, they would check a clock. ChatGPT, by contrast, predicts an answer based on patterns it learned during training. For example, if there are several possible answers, and A1 is 80% likely, A2 10% likely, and A3 5% likely, the LLM will respond with A1 most of the time, A2 some of the time, and so on.
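Here is a minimal sketch in Python of what that selection process looks like. The answer labels and percentages are the made-up ones from the example above; a real LLM assigns probabilities to tens of thousands of possible next words rather than a handful of whole answers.

```python
import random

# A toy version of the example above: the model doesn't "know" the answer,
# it samples from the probabilities it assigned to each candidate.
# The leftover 5% is lumped into an "other" bucket so the weights total 100.
answers = ["A1", "A2", "A3", "other"]
weights = [80, 10, 5, 5]

counts = {a: 0 for a in answers}
for _ in range(1000):
    pick = random.choices(answers, weights=weights)[0]
    counts[pick] += 1

print(counts)  # roughly {'A1': 800, 'A2': 100, 'A3': 50, 'other': 50}
```

Run it a few times: A1 wins most samples but not all of them, which is also part of why the same prompt can produce different answers on different tries.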
Maybe. AI's future is very uncertain, and a lot of events would have to occur for a takeover to happen. Those who believe it will happen mostly point to the race between the USA and China to develop the most advanced AI. You can read more about this in the "AI 2027" section of this website.
It depends on what we classify as thinking. AI certainly doesn't think the way humans do; it uses probability to determine how to respond to a prompt. On the other hand, some people argue that AI's way of drawing on probability and its learned parameters (a kind of stored past experience) could itself be seen as a form of thinking.
Alignment in AI refers to how well an AI's goals and behavior match human values and intentions. If an AI is aligned, it has human interests at heart. If it is misaligned, it could hurt humans because its own agenda takes priority.
Artificial general intelligence, or AGI, refers to AI that has become as capable as an average human at all cognitive tasks. People currently have mixed views on the topic: some see AGI as an impossible dream or a marketing gimmick, while others believe it will be achieved within the next year. It's hard to say, partly because researchers don't agree on how to measure when an AI has reached human-level intelligence. Tech companies are also highly incentivized to claim they have created AGI, because other companies would then be tripping over themselves to hire the "robot" to replace some or even all intellectual human labor.