Super-advanced artificial intelligence, left unchecked, has a “serious chance” of surpassing humans to become the planet's next “apex species,” according to Ethereum co-founder Vitalik Buterin.
But whether that happens will come down to how humans intervene in AI development, he said.
Buterin announced the post on X: “New monster post: my own current perspective on the recent debates around techno-optimism, AI risks, and ways to avoid extreme centralization in the 21st century.” https://t.co/6lN2fLBUUL
In a Nov. 27 blog post, Buterin, seen by some as a thought leader in the cryptocurrency space, argued that AI is “fundamentally different” from other recent inventions, such as social media, contraception, airplanes, guns, the wheel, and the printing press, because AI can create a new type of “mind” that can turn against human interests.
Buterin argued that unlike climate change, a man-made pandemic, or nuclear war, superintelligent AI could potentially end humanity and leave no survivors, particularly if it ends up viewing humans as a threat to its own survival.
“Even Mars may not be safe,” Buterin added.
Buterin cited an August 2022 survey of more than 4,270 machine learning researchers, who estimated a 5-10% chance that AI could kill humanity.
While Buterin stressed that claims of this nature are “extreme,” he argued there are also ways for humans to prevail.
Buterin suggested integrating brain-computer interfaces (BCI) to offer humans more control over powerful forms of AI-based computation and cognition.
A BCI is a communication pathway between the brain's electrical activity and an external device, such as a computer or robotic limb.
This would reduce the two-way communication loop between man and machine from seconds to milliseconds.
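To make the latency point concrete, here is a purely illustrative Python sketch, not taken from Buterin's post. The latency figures are assumptions standing in for the article's “seconds” and “milliseconds,” and the “correction cycles” framing is a simplification of a human supervising an AI system.

```python
# Illustrative sketch only: toy comparison of feedback-loop latency,
# not from Buterin's post. The latency values below are assumptions
# chosen to reflect the "seconds vs. milliseconds" claim above.

HUMAN_LOOP_LATENCY_S = 1.0   # assumed: read output, think, type a response
BCI_LOOP_LATENCY_S = 0.005   # assumed: direct neural read/write round trip


def corrections_per_minute(loop_latency_s: float) -> float:
    """How many oversight/correction cycles fit into one minute of AI activity."""
    return 60.0 / loop_latency_s


for label, latency in [
    ("keyboard-and-screen", HUMAN_LOOP_LATENCY_S),
    ("brain-computer interface", BCI_LOOP_LATENCY_S),
]:
    print(f"{label:>25}: {corrections_per_minute(latency):>9.0f} cycles/minute")
```

Under these assumed numbers, the direct interface fits roughly 200 times more oversight cycles into the same window, which is the intuition behind Buterin's suggestion that BCIs could help humans keep up with machine-speed cognition.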