Tech power has gone bully-boy, part 3: Gen Z knows that apps are feeding into early command-and-control AI. It must stop feeling powerless and act



In the montage above, R: one of many British child-soldiers who lied about being old enough to join the fighting in World War I. L: the face of today’s enemy, for anyone young and protective, could be a seemingly harmless, data-siphoning app

[ Part 1 and Part 2 ]

[ This post — delayed by unwanted adventures beginning in February that scrambled all plans and routines — is being published outside post-Gutenberg’s paywall with the hope that ‘influencers,’ especially those in Generation Z, will read and discuss it. 

For any subject in dire need of public understanding, there may never have been as wide a gap between what technologists and scientists know and what even highly educated non-experts understand as there is in prospectively all-transformative artificial intelligence. 

The effects of public incomprehension here will be grave. Where we need vigorous opposition, all we have is inaction: a deadly paralysis.

The finest tech and science brains struggling yet failing to enlighten us fear that it will take a calamity to shock us out of our resistance to paying attention. As tricky as the public education task was for them in explaining viruses and vaccines during the COVID pandemic, this is so much harder. 

How, for instance, do you demonstrate to someone uninterested in technology the implications of AI already revealing a capacity for complex practical judgment in answering a question about precisely how — and why — it would go about stacking a set of objects including nine eggs, a book, a laptop, a bottle and a nail ‘in a stable manner’? The significance of a report about this (‘Proof of AI coming alive? Microsoft says its GPT-4 software is already “showing signs of human reasoning”’) is not easily absorbed by the likes of a commenter on one London-based news site last month, for whom this spring’s AI uproar is a silly fuss about progress in auto-complete word processing.

Instead of thinking about the meaning of Microsoft’s alert about GPT-4’s upskilling, too many of us would prefer to zone out and into a video clip that is undeniably delicious — recording a 1970 meeting of the Monty Python Royal Society for Putting Things on Top of Things. Enjoy the laugh, o reader, if you watch or re-watch it, but then you must return to trying to understand what we must all keep trying to understand better, and assist with preventative measures. ]

The rule is the same for wildland firefighting as for a kitchen blaze. First, cut the fire off from its source of fuel. Smother it if you can to deprive it of oxygen. Whether in chaparral or woodland, you’ll want to clear a fire break, deploying muscle power or bulldozers to rip out parched greenery all the way down to bare earth. You could be a battalion chief in CAL Fire, California’s stellar firefighting army, and the rule for the order of business would not change — even if your range of experience and excellent judgment mean that you are routinely invited to Sacramento for consultations on state-wide planning for cataclysmic firestorms.

However small and insignificant it might look, the fire burning this minute is where you have got to focus. 

From this angle of approach — nip horror in the bud — there’s a curious back-to-frontness about the sudden explosion in debating and bell-ringing about our AI-enslaved future without any mention of, or pointers to, developments in the present leading us to it in plain sight. 

Worse, some of the people you might expect to be concentrating on stamping out the digital equivalent of feeder-wildfires are actually igniting them — even as they issue one public caution after another about, metaphorically speaking, the whole globe burning unless ‘regulators’ step in like superheroes to save us.

What would digital equivalents of feeder-conflagrations be?

Systems for collecting personal data about you 24 hours a day, starting with those apps you unthinkingly install on your phone, you will answer, if you have read the first two parts of this Tech power has gone bully-boy series. It could be the innocent-seeming app you know as the icon you click to get straight into your bank account; or the one from the idealistic literary magazine, capable of retrieving an archived essay in a flash; or another downloaded when you failed to resist the siren smile of the double-tailed Starbucks mermaid. 

Here is Stuart Russell — an Atlantic-hopping British computer scientist and founder of Berkeley’s Center for Human-Compatible Artificial Intelligence — pronouncing on apps in the BBC’s most prestigious lecture series at the end of 2021:

If you take your phone out and look at it, there are 50 or a hundred corporate representatives sitting in your pocket busily sucking out as much money and knowledge and data as they can. None of the things on your phone really represent your interests at all. 

What should happen is that there’s one app on your phone that represents you that negotiates with the information suppliers, and the travel agencies and whatever else, on your behalf, only giving the information that’s absolutely necessary and even insisting that the information be given back, that transactions be completely oblivious, that the other party retains no record whatsoever of the transaction, whether it’s a search engine query or a purchase or anything else.

You might think it strange that, as far as I can tell, no newspaper has so far quoted Professor Russell’s silver-bullet suggestion from a high-profile public lecture until … oh, wait … most newspapers are also in the business of collecting data about you nonstop, aren’t they? You could also wonder: should news media that decline to publicise alerts by eminent scientists about data-gathering systems being feeds for early AI recuse themselves from covering the subject of machine intelligence? 

Should they hand the task over to honest scholars with a gift for communication, given that old, venerated media brands are clearly profiting from peddling data, ‘the oil of the 21st century,’ and almost certainly ‘sharing’ with the search-engine giants information they gather about their readers from website cookies and apps?

But there are no regulations to stop them from doing this yet; and where rules do exist, there is no effective enforcement, barring the odd high-profile case in which EU authorities slap some Big Tech colossus with a billion-dollar fine. Here is a Financial Times reader’s depressingly accurate statistical perspective on why regulation is too far behind technological advances to protect us from AI harms, including intrusive data collection for training its algorithms: 

@Draco Malfoy The technology behind Artificial Intelligence is understood by such a tiny number of humans globally, probably a few tens of thousands are genuine experts in the field, out of what, 7 billion, or is it 8 now? And yet a few thousand non-technical bureaucrats representing hundreds of millions or billions of people are tasking themselves with regulating it. … AI … [is] … already ridiculously good and only getting better.

An unnerving obstacle to finding a way out of the dilemma is this: the demographic segment best informed and most anxious about the link between data collection and dark visions of future AI feels powerless. Its members may be more suspicious of apps, and more likely to resist installing them, than their elders, but they remain largely passive — so far.

On the Stanford University campus in early March I fell into a lively debate with two inquisitive, endearing Gen Z-ers who were adamant that any defence against command-and-control tech power would be taken apart by governments and corporations. Gesturing at his companion as he summed up their joint conclusion, one of them sighed, ‘Like he said, capitalism is the problem. Too much money is at stake — from exploiting this technology.’ Their gentle dejection lent them an air indistinguishable from anti-materialistic Hobbits reminding me that even their revered ancestors Bilbo and Frodo had experienced the corrupting pull of the ring of Sauron.

‘But you’re not going to give up without a fight, are you? After all, this technology is shaping what will be your world for far longer than mine.’

Their smiles as they gazed back at me were as sceptical as they were indulgent. They said that they would keep thinking about what might be done. In the months since then, I have had virtually identical exchanges with their contemporaries elsewhere. 

I have also stumbled on a confirmation of my impressions of them in Gen Z, Explained: The Art of Living in a Digital Age (2021), a collaboration between four scholars working in the UK and US. 

‘[A]dvances in technology scare me because they’re tied to corruption, power, like inappropriate uses of dominance,’ said one interviewee for the book. Technology is described as ‘a monster god on the horizon running towards us’ on the website of Tim Urban, a writer popular with this group, the authors note. They report that Gen Z-ers ‘are often … deeply pessimistic about the problems they have inherited’ and ‘have a sense of diminished agency,’ because ‘institutions and political and economic systems seem locked, inaccessible to them.’ 

I have been puzzling over the contrast across time between their defeatism and the gung-ho sang-froid of boy soldiers in Britain in World War I. In our information-soaked century, perhaps our teenagers and youngest adults are too well-educated about the obstacles to making a difference to bear any resemblance to the twelve- and thirteen-year-old enlistees who lied about being over eighteen to serve, answering feverish recruitment appeals by government leaders. But the under-age signups slowed noticeably once news of the hellish deprivations and misery of fighting from muddy, sodden trenches in the bloodbath got home, despite the national mood of ‘almost hysterical patriotism.’

A combination of conflicts of interest related to investment opportunities in AI; international competition for tech dominance; and plain ignorance means that there are no calls to action from Gen Z’s elders. Nor are there any from their own tribe, as far as I can tell, because they are apt to be leaderless, preferring engagement ‘in a distinctively non-hierarchical, collaborative manner’ (another independent observation of mine supported by Gen Z, Explained).

Nor is there the blanket media coverage you might hope for of the insults to any notion of privacy or safety in the data collection grinding on and on. Hardly any other media organs followed the online Daily Mail in picking up the scoop that grew out of a fine investigation by the UK’s Observer: a report on 28 May that twenty regional National Health Service (NHS) websites had for years been using a ‘covert tracking tool’ from Meta/Facebook, through which they passed on to the social media giant ‘private details of web browsing’ by citizens. The data sucked up by what is arguably the least trusted bully-boy corporation — from the databases of the one organisation that has repeatedly ranked first or second in opinion polls about trusted British institutions — included …

granular details of pages viewed, buttons clicked and keywords searched. [These are] matched to the user’s IP address — an identifier linked to an individual or household — and in many cases details of their Facebook account.
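For readers curious about the mechanics: a tracking ‘pixel’ is typically just a tiny script or image request that smuggles page details out in its URL. The sketch below is illustrative only — the endpoint, field names and function are hypothetical, not Meta’s actual Pixel API — but the shape of the payload matches what the Observer described: page viewed, event type and custom detail, with the visitor’s IP address riding along on the HTTP request itself.

```python
# Illustrative sketch only: field names and the tracker endpoint are
# invented for this example, not Meta's real Pixel API.
from urllib.parse import urlencode

def pixel_request(page_url: str, event: str, detail: str) -> str:
    """Build the query string a 1x1 'pixel' image request might carry."""
    payload = {
        "dl": page_url,  # document location: the page being viewed
        "ev": event,     # event type, e.g. 'PageView' or 'Click'
        "cd": detail,    # custom data: button label, search keyword, etc.
    }
    # The user's IP address and any platform cookies travel with the HTTP
    # request itself, letting the recipient tie this data to an identity.
    return "https://tracker.example/tr?" + urlencode(payload)

url = pixel_request("https://nhs-trust.example/depression-support",
                    "PageView", "self-referral")
```

Note that nothing here requires the visitor’s consent or awareness: the request fires as part of loading the page, and the matching to an individual happens on the recipient’s servers.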

Nor, considering its implications, has there been anything like enough attention paid to the US Federal Trade Commission’s announcement on 31 May that it had punished Amazon with a $25 million fine, the amount agreed in a settlement of the FTC’s complaint alleging that ‘Amazon prevented parents from exercising their [data] deletion rights … kept sensitive voice and geolocation data for years, and used it for its own purposes, while putting data at risk of harm from unnecessary access.’

Another FTC settlement announcement on the same day imposed an effective penalty of $5.8 million on the online retailer for spying on women — by employees of its Ring subsidiary — in bedrooms and bathrooms, and ‘for failing to prevent hackers from commandeering the devices.’

Such, such, are the joys of misused tech power.

Celebrated brainboxes, the likes of the ‘AI godfather’ Geoffrey Hinton and Elon Musk, can surely do a bit more than make speeches you could easily mistake for techies doing their best to out-gloom biblical end-times pronouncements (‘… an angel flying through the midst of heaven, saying with a loud voice, Woe, woe, woe, to the inhabiters of the earth …’). 

Even as they call for urgent AI regulation, they know that we do not have a fraction of the rule-makers and enforcers we need to make AI development safe.

Why is no one asking them what they propose for alternative technological brakes, or sleeping policemen?  

Leave a comment