Zoe Kleinman
Technology editor, BBC

Mark Zuckerberg is said to have begun work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014.
It is set to include a shelter, complete with its own energy and food supplies, though the carpenters and electricians working on the site were banned from talking about it by non-disclosure agreements, according to a report by Wired magazine.
A six-foot wall blocked the project from view of a nearby road.
Asked last year if he was creating a doomsday bunker, the Facebook founder gave a flat "no". The underground space, spanning some 5,000 square feet, is, he explained, "just like a little shelter, it's like a basement".
That hasn't stopped the speculation – likewise about his decision to buy 11 properties in the Crescent Park neighbourhood of Palo Alto in California, apparently adding a 7,000 square foot underground space beneath them.
Although his building permits refer to basements, according to the New York Times, some of his neighbours call it a bunker. Or a billionaire's bat cave.
Then there is the speculation around other tech leaders, some of whom appear to have been busy buying up chunks of land with underground spaces, ripe for conversion into multi-million pound luxury bunkers.
Reid Hoffman, the co-founder of LinkedIn, has talked about "apocalypse insurance".
This is something about half of the super-wealthy have, he has previously claimed, with New Zealand a popular destination for such properties.
So, could they really be preparing for war, the effects of climate change, or some other catastrophic event the rest of us have yet to know about?
In the last few years, the advance of artificial intelligence (AI) has only added to that list of potential existential woes.
Many are deeply worried about the sheer speed of its development.
Ilya Sutskever, chief scientist and a co-founder of OpenAI, is reported to be one of them.
By mid-2023, the San Francisco-based firm had released ChatGPT – the chatbot now used by hundreds of millions of people around the world – and it was working fast on updates.
But by that summer, Mr Sutskever was becoming increasingly convinced that computer scientists were on the brink of developing artificial general intelligence (AGI) – the point at which machines match human intelligence – according to a book by journalist Karen Hao.
In one meeting, Mr Sutskever suggested to colleagues that they should dig an underground shelter for the company's top scientists before such a powerful technology was released on the world, Ms Hao reports.
"We're definitely going to build a bunker before we release AGI," he is widely reported to have said, though it is unclear who he meant by "we".
It sheds light on a strange fact: many leading computer scientists and tech leaders, some of whom are working hard to develop a hugely intelligent form of AI, also seem deeply afraid of what it could one day do.
So when exactly – if ever – will AGI arrive? And could it really prove transformational enough to make ordinary people afraid?
An arrival 'sooner than we expect'
Tech leaders have claimed that AGI is imminent. OpenAI boss Sam Altman said in December 2024 that it will come "sooner than most people in the world think".
Sir Demis Hassabis, the co-founder of DeepMind, has predicted it will arrive within the next five to 10 years, while Anthropic founder Dario Amodei wrote last year that his preferred term – "powerful AI" – could be with us as early as 2026.
Others are doubtful. "They move the goalposts all the time," says Dame Wendy Hall, professor of computer science at Southampton University. "It depends who you talk to." We are on the phone, but I can almost hear the eye-roll.
"The scientific community says AI technology is amazing," she adds, "but it's nowhere near human intelligence."
There would need to be a number of "fundamental breakthroughs" first, agrees Babak Hodjat, chief technology officer of the tech firm Cognizant.
What's more, it is unlikely to arrive as a single moment. Rather, AI is a rapidly advancing technology on a journey, with many companies around the world racing to develop their own versions of it.
But one reason the idea excites some in Silicon Valley is that it is thought to be a precursor to something even more advanced: ASI, or artificial super intelligence – technology that surpasses human intelligence.
It was back in 1958 that the concept of "the singularity" was attributed posthumously to the Hungarian-born mathematician John von Neumann. It refers to the moment when computer intelligence advances beyond human understanding.
More recently, the 2024 book Genesis, written by Eric Schmidt, Craig Mundie and the late Henry Kissinger, explores the idea of a super-powerful technology that becomes so efficient at decision-making and leadership that we end up handing control to it completely.
It is a question of when, not if, they argue.
Money for all, without the need for a job?
Those in favour of AGI and ASI are almost evangelical about its benefits. It will find new cures for deadly diseases, solve climate change and invent an inexhaustible supply of clean energy, they argue.
Elon Musk has even claimed that super-intelligent AI could usher in an era of "universal high income".
He recently endorsed the idea that AI will become so cheap and widespread that almost anyone will want their "own personal R2-D2 and C-3PO" (referencing the droids from Star Wars).
"Everyone will have the best medical care, food, home transport and everything else. Sustainable abundance," he enthused.
There is a scary side, of course. Could the technology be hijacked by terrorists and used as a vast weapon, or what if it decides for itself that humanity is the cause of the world's problems and destroys us?
"If it's smarter than you, then we have to keep it contained," warned Sir Tim Berners-Lee, creator of the World Wide Web, talking to the BBC earlier this month.
"We have to be able to switch it off."
Governments are taking some protective steps. In the US, where many leading AI companies are based, President Biden passed an executive order in 2023 that required some firms to share safety test results with the federal government – though President Trump has since revoked parts of the order, calling it a "barrier" to innovation.
Meanwhile in the UK, the AI Safety Institute – a government-funded research body – was set up two years ago to better understand the risks posed by advanced AI.
And then there are those super-rich with their own apocalypse insurance plans.
"Saying you're 'buying a house in New Zealand' is kind of a wink, wink, say no more," Reid Hoffman has previously said. The same presumably goes for bunkers.
But there is a distinctly human flaw.
I once met a former bodyguard of one billionaire with his own "bunker", who told me his security team's first priority, if this really did happen, would be to eliminate said boss and get in the bunker themselves. And he did not seem to be joking.
Is it all alarmist nonsense?
Neil Lawrence is a professor of machine learning at Cambridge University. To him, the whole debate is itself nonsense.
"The notion of Artificial General Intelligence is as absurd as the notion of an 'Artificial General Vehicle'," he argues.
"The right vehicle depends on the context. I used an Airbus A350 to fly to Kenya, I use a car to get to the university each day, I walk to the cafeteria… There is no vehicle that could ever do all of this."
For him, talk of AGI is a distraction.
"The technology we have [already] built allows, for the first time, normal people to directly talk to a machine and potentially have it do what they intend. That is absolutely extraordinary… and utterly transformational.
"The big worry is that we are so drawn in to big tech's narratives about AGI that we are missing the ways in which we need to make things better for people."
Current AI tools are trained on mountains of data and are good at spotting patterns: whether signs of a tumour in scans or the word most likely to come after another in a particular sequence. But they do not "feel", however convincing their responses may seem.
"There are some 'cheaty' ways of making a Large Language Model (the foundation of AI chatbots) act as if it has memory and learns, but these are unsatisfying and quite inferior to humans," says Mr Hodjat.
Vince Lynch, CEO of the California-based IV.AI, is also wary of overblown declarations about AGI.
"It's great marketing," he says. "If you are the company that's building the smartest thing that has ever existed, people are going to want to give you money."
He adds: "It's not a two-years-away thing. It requires so much compute, so much human creativity, so much trial and error."
Asked whether he believes AGI will ever materialise, there is a long pause.
"I really don't know."
Intelligence without consciousness
In some ways, AI already has the edge over human brains. A generative AI tool can be an expert in medieval history one minute and solve complex mathematical equations the next.
Some tech companies say they do not always know why their products respond the way they do. Meta says there are some signs of its AI systems improving themselves.
Ultimately, though, however intelligent machines become, biologically the human brain still wins.
It has about 86 billion neurons and 600 trillion synapses, far more than their artificial equivalents. The brain also does not need to pause between interactions, and it is constantly adapting to new information.
"If you tell a human that life has been found on an exoplanet, they will immediately learn that, and it will affect their world view going forward. An LLM [Large Language Model] will only know that as long as you keep repeating it as a fact," says Mr Hodjat.
"LLMs also do not have meta-cognition, which means they do not quite know what they know. Humans seem to have an introspective capability, sometimes referred to as consciousness, which allows them to know what they know."
It is a fundamental part of human intelligence – and one that is yet to be replicated in a lab.
Top image credit: The Washington Post via Getty Images/Getty Images. The lead image shows Mark Zuckerberg (below) and a stock image of an unidentified bunker in an unknown location (above).

