…

…
…

…
…
Reader, may your 2025 begin well, continue better and make you resolute.
May you be especially determined and resilient if you can see this as an all-hands-on-deck time, unless you don’t mind watching what we dread most about the AI revolution come true. A time of orcs and goblins, when simply saying ‘our’ is less apt to make minds switch off than talking about ‘humanity,’ particularly when the h-word is joined to ‘extinction,’ because we are also in an age of acute, enfeebling threat fatigue.
The two words were notably paired in a conversation on BBC Radio 4 last Friday, in which Sajid Javid, a former British government minister acting as guest editor, did his best to tilt the AI research pioneer Geoffrey Hinton’s vision of our AI-shaped future from dystopian to cheerily utopian — one bringing us all ‘longer, happier and healthier lives’.
‘Just like a politician, a slab of beef in every crockpot and a Tesla in every garage!’ you might have snapped irritably. But this one’s unusual. Born a bus-driver’s son in a provincial English city, he is estimated to have taken a ninety-eight per cent pay cut when he gave up a banking career for politics. He has remained likably bloke-y and unaffected despite having held, at different times, at least five ministerial posts in the UK cabinet, including chancellor of the exchequer, and despite being entitled to put a ‘sir’ before his name for audiences that understand Britain’s public honours system.
As investors’ cash pours into AI like a flash flood with no celestial shutoff valve, governments worrying about slow economic growth, including the UK’s, are under ferocious — all but brutal — pressure to allow Big AI to do what it likes.
This was already happening last January, when the right-of-centre party to which ‘Sir Saj’ belongs was in power. In a short news item, Private Eye — the satirical magazine and the only high-profile print publication still distinguishing itself with serious, difficult, complex investigative journalism — reported, with its usual breezy insouciance, on lobbying in Britain’s parliament by Silicon Valley’s A16Z, referred to there as Andreessen-Horowitz. Notoriously the most aggressive AI advocate in top-tier venture capital, the firm had the ear of a Tory prime minister a year ago and must now be reckoned with by today’s Labour leaders, who are just as liable to turn puce when reading some updated version of this:
Following founder Marc Andreessen’s ‘Techno-Optimist Manifesto’, the company says in its submission that ‘big AI companies should be allowed to build AI as fast and aggressively as they can’, ‘development of open-source code should continue to be unregulated’, and that any potential risks posed by AI should be mitigated by, er, ‘using AI to maximise society’s defensive capabilities’.
So AI is going to regulate AI …
It is over-confidence — or arrogance — on that scale that Geoffrey Hinton has been opposing in recent speeches and interviews, as when he spoke for about three minutes on 10 December at the Nobel banquet in Stockholm in honour of his physics prize. Remarkably, no one seems to have noticed that he is the only Silicon Valley scientist — he led Google’s research into deep learning for ten years — to have been so ennobled in Sweden.
The transcript below could contain minor errors, though the transcriber paid special attention to nuance, hesitations and emphasis. The BBC’s own record of the exchange is set to be removed from its website soon, along with the rest of the three-hour-plus episode of the 27 December Radio 4 Today programme of which it is part.
The thought-provoking reference to Charles Dickens in it is presumably about extreme social inequality: the man in the street’s limited defences against pitiless exploitation and near-absolute control by members of the ruling class in mid-1800s England. Not in the least Dickensian in the other sense — the one referring to humour or caricature.
…
SAJID JAVID : I wonder whether you thought when you started this work that this is where we would be now.
GEOFFREY HINTON : I didn’t think that it would be where we are now. I thought at some point in the future we would get here.
Because the situation we’re in now is that most of the experts in the field think that some time within probably the next twenty years we’re gonna develop AIs that are smarter than people. And that’s a very scary thought.
SAJID JAVID : Well, I’d read somewhere that — [ laughs ] I know this is a really silly way to put it, but — humans were something like 10,000 times-plus smarter than the goldfish. And an ASI — artificial super-intelligence — could be 10,000 times more intelligent than a human.
Is that the kind of thing we’re talking about?
GEOFFREY HINTON : It’s not clear what ‘times’ means in that context. I like to think of it as imagine yourself and a three-year-old. We’ll be the three-year-old, they’ll be the grownup.
SAJID JAVID : Do you think people and sort of society generally realise the profound change that is coming? You know, I’ve referred to the change AI will bring on par with the — sort of creation of the wheel and fire. Do you think it could go that far?
GEOFFREY HINTON : Oh yeah. Yes. I think it’s like the Industrial Revolution. In the Industrial Revolution human strength ceased to be that relevant because machines were just stronger. If you wanted to dig a ditch you dug it with a machine.
What we’ve got now is something that’s replacing human intelligence. And just ordinary human intelligence will not be at the cutting edge any more, it will be machines.
SAJID JAVID : What do you think life will be like ten to twenty years from now?
GEOFFREY HINTON : It will depend on what our political systems do with this technology. So my big worry at present is that we’re in a situation where we need to be very careful, very thoughtful, about developing a potentially very dangerous technology.
It’s gonna have lots of wonderful effects in health care. And in almost every industry, it’s going to make things more efficient. But we need to be very careful in the development of it. We need regulations to stop people from doing bad things with it.
And we don’t appear to have those kinds of political systems in place at present.
SAJID JAVID : Speaking as myself, as a former government minister, as a former Chancellor of the Exchequer, I’m interested to know also how you think this might change existing structures. For example, you have talked about many people losing their jobs as obviously happened in the Industrial Revolution. What that means for society and the types of jobs that might be lost, and that’s what I might call — when I’m talking about the ‘bad’ [ dimensions and effects of AI ] — that … it’s a sort of necessary outcome of technological change. But how profound do you think that will be?
GEOFFREY HINTON : Well, if you want to know what happened in the Industrial Revolution, to ordinary people, I think reading Dickens is good.
… I think there will be similar amounts of change caused by AI. And my worry is that even though it will cause huge increases in productivity — which should be very good for society — it could be very bad for society if all the benefit goes to the rich, and a lot of people lose their jobs and become poorer.
If you have a big gap between rich and poor it’s very bad for society.
SAJID JAVID : What’s different this time?
GEOFFREY HINTON : So these things are more intelligent than we are.
So there’s never any chance in the Industrial Revolution that machines would take over from people just because they were stronger. We were still in control because we had the intelligence.
Now, there’s a threat that these things can take control.
So that’s one big difference.
SAJID JAVID : And I think it’s the pace of change as well — how quickly this is all happening.
GEOFFREY HINTON : Yes, it’s very, very fast. Much faster than I expected. Because it’s so fast, we haven’t had the time to do the research needed on how to keep it under control.
SAJID JAVID : And let’s just talk about … some of the good things that are already emerging from AI. For example, as a former health secretary I think a lot about the advances that can be made in medical research, in life sciences. Is that a sector you can pick and say that actually is something where you can really extend life years for people? We can all live longer and have happier and healthier lives.
GEOFFREY HINTON : Yes. So I think it’s going to do tremendous good in areas like medicine. And that’s why it’s unrealistic to talk about stopping the progress.
I didn’t sign the petition that asked for that a few years ago because it just seemed completely unrealistic to me. In health care, for example, we’ll be able to have family doctors who in effect have seen a hundred million patients. And have all the tests that have ever been done on you and on your relatives.
Two hundred thousand people — about — die every year from bad diagnoses. Most of that’s gonna go away.
Already, an AI system is better than a doctor at doing diagnosis. And the combination of an AI system and the doctor is much better than the doctor at dealing with difficult cases, and the AI system’s only gonna get better.
SAJID JAVID : In the past, you previously predicted — I think you said there was a ten per cent chance that AI will lead to human extinction within the next three decades. Has anything changed your analysis of that?
GEOFFREY HINTON : Erm. Not really. I think, ten to twenty [per cent].
SAJID JAVID : Oh, you’re going up!
GEOFFREY HINTON : If anything. You see, we’ve never had to deal with anything like this before.
And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing?
There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother. But that’s about the only example I know of.
SAJID JAVID : Perhaps then just to end on — despite what you’ve said — I remain an optimist about AI and what it means. Am I right to feel that way?
GEOFFREY HINTON : I hope you’re right to feel that way. My friend Yann LeCun, who’s also very knowledgeable about AI, feels that way. … My worry is that the invisible hand [ capitalism ] is not gonna keep us safe.
So just leaving it to the profit motive of large companies is not gonna be sufficient to make sure that they develop it safely.
You can see that if you look at the history of OpenAI. Initially, they were very concerned with safety, and as time went by and the potential profits got bigger, they’ve got less and less concerned with safety.
The only thing that can force these big companies to do more research on safety is government regulation.

