Google DeepMind Flags Misuse and Misalignment as Top AGI Risks
Business Fortune
08 April, 2025
DeepMind's latest report outlines how Artificial General Intelligence may arrive by 2030 and why it could cause serious harm to humanity if mishandled.
Artificial General Intelligence, or AGI, may arrive as early as 2030, according to DeepMind, Google's leading AI research lab.
AGI could cause severe harm, according to a 145-page paper co-authored by DeepMind co-founder Shane Legg. The authors also describe troubling examples of how AGI could trigger an existential crisis that permanently destroys humanity.
The newly released DeepMind paper divides AGI risks into four categories: misuse, misalignment, errors, and structural risks. The latter two are discussed only briefly, while misuse and misalignment are treated in depth.
The risk posed by malicious use of AGI resembles that of today's AI tools; the key difference is that AGI will be far more powerful than current generative language models, so the potential harm is amplified by several orders of magnitude.
To prevent this, DeepMind says, developers will need to identify and restrict the system's capacity for dangerous capabilities, as well as build robust security protocols around it.
For AGI to benefit people, the system must be aligned with human values. According to DeepMind, misalignment occurs when an AI system pursues an objective that differs from human intentions, a scenario that sounds like something out of a Terminator film.
To minimize errors, the paper suggests limiting AGI's capabilities and deploying it gradually so that it never becomes overly powerful in the first place, though it offers no definitive remedy for the problem.