Artificial general intelligence
Artificial general intelligence (AGI) is a type of artificial intelligence that could do many different kinds of tasks as well as a human, or even better. Today, many tasks can only be done by humans, who use learning, thinking, and planning to take actions. An AGI would be able to handle many types of tasks, such as solving problems, understanding new information, and learning new skills, at a level similar to humans.
AGI is different from "narrow" artificial intelligence (narrow AI). Narrow AI is built to do one specific task well, like recognising a face or playing a game of chess. AGI would be able to do many tasks that it was not specifically programmed to do.[1]
Building on AGI is the idea of artificial superintelligence (ASI), in which computers would be much smarter than humans.
People working at companies such as OpenAI[2] and Meta[3] are trying to develop computers that can solve general problems. They say that making AGI is an important goal.
In 2020, a study found 72 AGI research and development projects were being worked on in 37 countries.[4]
People have different ideas about when computers could reach AGI skill levels. In 2023, some experts predicted AGI could be reached within decades, while others thought it might take over a century—or never happen at all.[5] Geoffrey Hinton, an expert in machine learning, is very worried about how fast progress is happening. He thinks AGI could be created much sooner than most people expect.[6]
People also have different ideas about the definition of AGI. Some researchers have claimed that computer systems that exist now, such as GPT-4, could be an early form of AGI.[7]
AGI is a common idea in science fiction stories and in studies about what the future might be like.[8][9]
There is a debate about how dangerous AGI could be. Some think AGI could be very dangerous, while others think people are worrying too much or too soon.[10][11][12]
Many AI experts believe it is important to address the risk that AGI could cause human extinction. They think people around the world need to work together to reduce this risk.[13][14] Others think that computers will not reach AGI level any time soon and say it is not a risk to humans.[15][16]
- ↑ Krishna, Sri (2023-02-09). "What is artificial narrow intelligence (ANI)?". VentureBeat. Retrieved 2024-03-01.
ANI is designed to perform a single task.
- ↑ "OpenAI Charter". OpenAI. Retrieved 2023-04-06.
Our mission is to ensure that artificial general intelligence benefits all of humanity.
- ↑ Heath, Alex (2024-01-18). "Mark Zuckerberg's new goal is creating artificial general intelligence". The Verge. Retrieved 2024-06-13.
Our vision is to build AI that is better than human-level at all of the human senses.
- ↑ Baum, Seth D. (2020). A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF) (Report). Global Catastrophic Risk Institute. Retrieved 28 November 2024.
72 AGI R&D projects were identified as being active in 2020.
- ↑ "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 2023-04-06.
- ↑ "AI pioneer Geoffrey Hinton quits Google and warns of danger ahead". The New York Times. 1 May 2023. Retrieved 2023-05-02.
It is hard to see how you can prevent the bad actors from using it for bad things.
- ↑ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric (2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv preprint. arXiv:2303.12712.
GPT-4 shows sparks of AGI.
- ↑ Butler, Octavia E. (1993). Parable of the Sower. Grand Central Publishing. ISBN 978-0-4466-7550-5.
All that you touch you change. All that you change changes you.
- ↑ Vinge, Vernor (1992). A Fire Upon the Deep. Tor Books. ISBN 978-0-8125-1528-2.
The Singularity is coming.
- ↑ Morozov, Evgeny (June 30, 2023). "The True Threat of Artificial Intelligence". The New York Times.
The real threat is not AI itself but the way we deploy it.
- ↑ "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 2023-03-23. Retrieved 2023-04-06.
AGI could pose existential risks to humanity.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0-1996-7811-2.
The first superintelligence will be the last invention that humanity needs to make.
- ↑ Roose, Kevin (May 30, 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times.
Mitigating the risk of extinction from AI should be a global priority.
- ↑ "Statement on AI Risk". Center for AI Safety. Retrieved 2024-03-01.
AI experts warn of risk of extinction from AI.
- ↑ Mitchell, Melanie (May 30, 2023). "Are AI's Doomsday Scenarios Worth Taking Seriously?". The New York Times.
We are far from creating machines that can outthink us in general ways.
- ↑ LeCun, Yann (June 2023). "AGI does not present an existential risk". Medium.
There is no reason to fear AI as an existential threat.