Tech power has gone bully-boy, part 3: Gen Z knows that apps are feeding into early command-and-control AI. It must stop feeling powerless and act



In the montage above, R: one of many British child-soldiers who lied about being old enough to join the fighting in World War I. L: the face of today’s enemy, for anyone young and protective, could be a seemingly harmless, data-siphoning app

[ Part 1 and Part 2 ]

[ This post — delayed by unwanted adventures beginning in February that scrambled all plans and routines — is being published outside post-Gutenberg’s paywall with the hope that ‘influencers,’ especially those in Generation Z, will read and discuss it. 

On no subject in dire need of public understanding has the gap between what technologists and scientists know and what even highly educated non-experts understand ever been as wide as it is on prospectively all-transformative artificial intelligence. 

The effects of public incomprehension, here, will be grave. Where we need vigorous opposition all we have is inaction. A deadly paralysis.

The finest tech and science brains struggling yet failing to enlighten us fear that it will take a calamity to shock us out of our resistance to paying attention. As tricky as the public education task was for them in explaining viruses and vaccines during the COVID pandemic, this is so much harder. 

How, for instance, do you demonstrate to someone uninterested in technology the implications of AI already revealing a capacity for complex practical judgment in answering a question about precisely how — and why — it would go about stacking a set of objects including nine eggs, a book, a laptop, a bottle and a nail ‘in a stable manner’? The significance of a report about this — ‘Proof of AI coming alive? Microsoft says its GPT-4 software is already “showing signs of human reasoning”’ — is not easily absorbed by the likes of a commenter on one London-based news site last month for whom this spring’s AI uproar is a silly fuss about progress in auto-complete word processing.

Instead of thinking about the meaning of Microsoft’s alert about GPT-4’s upskilling, too many of us would prefer to zone out and into a video clip that is undeniably delicious — recording a 1970 meeting of the Monty Python Royal Society for Putting Things on Top of Things. Enjoy the laugh, o reader, if you watch or re-watch it, but then you must return to trying to understand what we must all keep trying to understand better, and assist with preventative measures. ]

The rule is the same for wildland firefighting as for a kitchen blaze. First, cut the fire off from its source of fuel. Smother it if you can to deprive it of oxygen. Whether in chaparral or woodland, you’ll want to clear a fire break, deploying muscle power or bulldozers to rip out parched greenery all the way down to bare earth. You could be a battalion chief in CAL Fire, California’s stellar firefighting army, and the rule for the order of business would not change — even if your range of experience and excellent judgment mean that you are routinely invited to Sacramento for consultations on state-wide planning for cataclysmic firestorms.

However small and insignificant it might look, the fire burning this minute is where you have got to focus. 

From this angle of approach — nip horror in the bud — there’s a curious back-to-frontness about the sudden explosion in debating and bell-ringing about our AI-enslaved future without any mention of, or pointers to, developments in the present leading us to it in plain sight. 

Worse, some of the people you might expect to be concentrating on stamping out the digital equivalent of feeder-wildfires are actually igniting them — even as they issue one public caution after another about, metaphorically speaking, the whole globe burning unless ‘regulators’ step in like superheroes to save us.

What would digital equivalents of feeder-conflagrations be?

Systems for collecting personal data about you 24 hours a day, starting with those apps you unthinkingly install on your phone, you will answer — if you have read the first two parts of this Tech power has gone bully-boy series. Any of them will do: the innocent-seeming app you know as the icon you click on to get straight into your bank account; the one from the idealistic literary magazine capable of retrieving an archived essay in a flash; or the one downloaded when you failed to resist the siren smile of the double-tailed Starbucks mermaid. 

Here is Stuart Russell — an Atlantic-hopping British computer scientist and founder of Berkeley’s Center for Human-Compatible Artificial Intelligence — pronouncing on apps in the BBC’s most prestigious lecture series at the end of 2021:

If you take your phone out and look at it, there are 50 or a hundred corporate representatives sitting in your pocket busily sucking out as much money and knowledge and data as they can. None of the things on your phone really represent your interests at all. 

What should happen is that there’s one app on your phone that represents you that negotiates with the information suppliers, and the travel agencies and whatever else, on your behalf, only giving the information that’s absolutely necessary and even insisting that the information be given back, that transactions be completely oblivious, that the other party retains no record whatsoever of the transaction, whether it’s a search engine query or a purchase or anything else.
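Professor Russell’s one-representative-app idea can be sketched in code. Everything below — the class name, the field names, the deletion-obligation log — is a hypothetical illustration of the principle of disclosing only what a transaction strictly requires, not a real protocol or product:

```python
# A minimal sketch of a single agent app that negotiates on the user's
# behalf, as Russell describes. All names here are invented for
# illustration.

class PersonalDataAgent:
    """Releases only the fields a transaction strictly requires,
    and records an obligation for the other party to delete them."""

    def __init__(self, profile):
        self.profile = profile           # everything the user holds about themselves
        self.deletion_obligations = []   # records of data the counterparty must erase

    def negotiate(self, service, requested_fields, required_fields):
        # Refuse anything the transaction does not strictly need.
        granted = {f: self.profile[f] for f in requested_fields
                   if f in required_fields and f in self.profile}
        # Russell's 'oblivious' transaction: the counterparty must retain
        # no record, so the agent logs a deletion obligation to enforce.
        self.deletion_obligations.append((service, sorted(granted)))
        return granted

agent = PersonalDataAgent({"name": "A. Reader", "email": "a@example.org",
                           "browsing_history": [], "location": "London"})
# A travel agency asks for four fields, but the booking needs only one.
granted = agent.negotiate("travel-agency",
                          requested_fields=["name", "email", "location",
                                            "browsing_history"],
                          required_fields=["email"])
```

In this sketch the hypothetical travel agency asks for four fields, but only the email address is released — and the agent records that the agency is obliged to keep no copy of even that.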

You might think it strange that, as far as I can tell, no newspaper has yet quoted Professor Russell’s silver-bullet suggestion from so high-profile a public lecture until … oh, wait … most newspapers are also in the business of collecting data about you nonstop, aren’t they? You could also wonder: should news media that decline to publicise alerts by eminent scientists about data-gathering systems being feeds for early AI recuse themselves from covering the subject of machine intelligence? 

Should they hand the task over to honest scholars with a gift for communication? — because old, venerated media brands are clearly profiting from peddling data, ’the oil of the 21st century,’ and almost certainly ‘sharing’ with the search engine giants information they gather about their readers from website cookies and apps?

But there are no regulations to stop them from doing this yet — or, where regulations do exist, no effective enforcement, barring the odd high-profile case in which EU authorities slap some Big Tech colossus with a billion-dollar fine. Here is a Financial Times reader’s depressingly accurate statistical perspective on why regulation is too far behind technological advances to protect us from AI harms, including intrusive data collection for training its algorithms: 

@Draco Malfoy The technology behind Artificial Intelligence is understood by such a tiny number of humans globally, probably a few tens of thousands are genuine experts in the field, out of what 7 billion, or is it 8 now? And yet a few thousand non-technical bureaucrats representing hundreds of millions or billions of people are tasking themselves with regulating it. … AI … [is] … already ridiculously good and only getting better.

An unnerving obstacle to finding a way out of the dilemma is this: the demographic segment best informed and most anxious about the link between data collection and dark visions of future AI feels powerless. Members of it may be more suspicious of apps and more likely to resist installing them than their elders, but are largely passive — so far.  

On the Stanford University campus in early March I fell into a lively debate with two inquisitive, endearing Gen Z-ers who were adamant that any defence against command-and-control tech power would be taken apart by governments and corporations. Gesturing at his companion as he summed up their joint conclusion, one of them sighed, ‘Like he said, capitalism is the problem. Too much money is at stake — from exploiting this technology.’ Their gentle dejection lent them an air indistinguishable from anti-materialistic Hobbits reminding me that even their revered ancestors Bilbo and Frodo had experienced the corrupting pull of the ring of Sauron.

‘But you’re not going to give up without a fight, are you? After all, this technology is shaping what will be your world for far longer than mine.’

Their smiles as they gazed back at me were as sceptical as they were indulgent. They said that they would keep thinking about what might be done. In the months since then, I have had virtually identical exchanges with their contemporaries elsewhere. 

I have also stumbled on a confirmation of my impressions of them in Gen Z, Explained: The Art of Living in a Digital Age (2021), a collaboration between four scholars working in the UK and US. 

‘[A]dvances in technology scare me because they’re tied to corruption, power, like inappropriate uses of dominance,’ said one interviewee for the book. Technology is described as ‘a monster god on the horizon running towards us’ on the web site of Tim Urban, a writer popular with this group, the authors note. They report that Gen Z-ers ‘are often … deeply pessimistic about the problems they have inherited’ and ‘have a sense of diminished agency,’ because ‘institutions and political and economic systems seem locked, inaccessible to them.’ 

I have been puzzling over the contrast across time between their defeatism and the gung ho sang-froid of boy soldiers in Britain in World War I. In our information-soaked century, perhaps our teenagers and youngest adults are too well-educated about obstacles to making a difference to bear any resemblance to the twelve- and thirteen-year-old enlistees who lied about being over eighteen to serve, answering feverish recruitment appeals by government leaders. But the under-age signups slowed noticeably once news of the hellish deprivations and misery of fighting from muddy, sodden trenches in the bloodbath got home, despite the national mood of ‘almost hysterical patriotism.’

A combination of conflicts of interest related to investment opportunities in AI; international competition for tech dominance; and plain ignorance means that there are no calls to action from Gen Z’s elders. Nor are there any from their own tribe, as far as I can tell, because they are apt to be leaderless, preferring engagement ‘in a distinctively non-hierarchical, collaborative manner’ — another independent observation of mine supported by Gen Z, Explained.

Nor is there the blanket media coverage you might hope for of the insults to any notion of privacy or safety in the data collection grinding on and on. Hardly any other media organs followed the online Daily Mail’s example in picking up the scoop that grew out of a fine investigation by the UK’s Observer — a report on 28 May that twenty regional National Health Service (NHS) websites had for years been using a ‘covert tracking tool’ of Meta/Facebook, through which they passed on to the social media giant ‘private details of web browsing’ by citizens. Data sucked up by what is arguably the least trusted bully-boy corporation — from the databases of the one organisation that has repeatedly ranked first or second in opinion polls about trusted British institutions — included …

granular details of pages viewed, buttons clicked and keywords searched. [ These are ] matched to the user’s IP address — an identifier linked to an individual or household — and in many cases details of their Facebook account.
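To make the mechanics concrete, here is a rough sketch of the kind of event payload such a ‘covert tracking tool’ assembles on each page view. The function and field names are invented for illustration — they mirror the Observer’s description of what was collected, not Meta’s actual wire format:

```python
# A sketch of the event record a third-party tracking pixel assembles
# on each page view. Field names are hypothetical; the categories of
# data follow the report quoted above.

def build_tracking_event(page_url, button_clicked, search_keywords,
                         ip_address, social_account_id=None):
    event = {
        "page": page_url,             # every page viewed
        "click": button_clicked,      # every button clicked
        "keywords": search_keywords,  # every keyword searched
        "ip": ip_address,             # an identifier linked to a household
    }
    if social_account_id:             # and, in many cases, to a named person
        event["account"] = social_account_id
    return event

# A hypothetical visit to a health-service page:
event = build_tracking_event(
    page_url="https://nhs.example/conditions/depression",
    button_clicked="book-appointment",
    search_keywords="antidepressant side effects",
    ip_address="203.0.113.7",
)
```

The point the sketch makes is that each record is already intimate on its own; matched to an IP address or a social media account, a stream of them becomes a browsable medical biography.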

Nor, considering its implications, has anything like enough attention been paid to the US Federal Trade Commission’s announcement on 31 May that it had punished Amazon with a $25 million fine, the amount agreed in a settlement of the FTC’s complaint alleging that ‘Amazon prevented parents from exercising their [data] deletion rights … kept sensitive voice and geolocation data for years, and used it for its own purposes, while putting data at risk of harm from unnecessary access.’

Another FTC settlement announcement on the same day imposed an effective penalty of $5.8 million on the online retailer for spying on women — by employees of its Ring subsidiary — in bedrooms and bathrooms, and ‘for failing to prevent hackers from commandeering the devices.’

Such, such, are the joys of misused tech power.

Celebrated brainboxes, the likes of the ‘AI godfather’ Geoffrey Hinton and Elon Musk, can surely do a bit more than make speeches that could easily be mistaken for techies doing their best to out-gloom biblical end-times pronouncements (‘… an angel flying through the midst of heaven, saying with a loud voice, Woe, woe, woe, to the inhabiters of the earth …’). 

Even as they call for urgent AI regulation, they know that we do not have a fraction of the rule-makers and enforcers we need to make AI development safe.

Why is no one asking them what they propose for alternative technological brakes, or sleeping policemen?  

Notes on a U.S. congressional hearing: turning antitrust guns on Big Tech will not shield us from Orwellian puppeteering. Why did the politician-legislators choose the wrong focus?


‘… Slowly the poison the whole blood stream fills …’: William Empson

Notes scribbled after the second day of grilling this week for the chief executives of Amazon, Apple, Facebook and Google by the U.S. Congress’s antitrust judiciary committee: 

Can protecting citizen-consumers really be the point of telling Big Tech chiefs that they have too much power, when this is news to no one?

If yes: horse, barn door; 

problem has gone viral — the uncontrolled proliferation of harm to citizen-consumers (not Covid-19; the commercial surveillance virus);

hardly any citizen-consumers understand this or its implications.

Conclusion: too late to save us so we’re doomed — barring lucky accident of stupendous dimensions.

1. In the frightening background to the hearing, unenlightened citizens: 

A disturbingly high proportion of consumers in six countries surveyed by the San Francisco technology security firm Okta this year have no idea of the degree to which they are being tracked by companies. They are equally oblivious to being milked for their personal data. Though ‘people don’t want to be tracked, and they place a high value on privacy … 42% of Americans do not think online retailers collect data about their purchase history, and 49% do not think their social media posts are being tracked by social media companies. … Nearly 4 out of 5 American respondents (78%) don’t think a consumer hardware provider such as Apple, Fitbit, or Amazon is tracking their biometric data, and 56% say the same about their location data.’

With those findings, the reason why rich Big Tech is only getting richer in a pandemic-battered US economy is obvious. It is just as clear that the average citizen cannot be expected to grasp that the execrable business practices of the technology leaders — including deceptive ‘privacy settings’ in devices sold by the most successful brands or guaranteed by popular platforms — are being copied by every type and size of business. 

2. Shouldn’t Congress’s focus be on, e.g., the unfair risks in installing apps — used to turn citizens into pawns of corporate surveillance?

Businesses once never thought of in connection with digital technology are forcing surveillance and tracking tools on us, mostly in the form of apps — but also when we think we are just popping in and out of their web sites. 

You can, for instance, log on to the site of a credit card company you trust and, for the fifth month in a row, have to complain to the IT support desk about error messages obstructing you from completing your task. Finally — with an embarrassed acknowledgment of your loyalty to the brand — an unusually honest tech support supervisor confesses that the site’s glitches are not accidental but part of an effort to push customers towards installing the company’s app and conducting their transactions on their smartphones. You say exasperatedly, ‘Oh, to track what I do all day long?’ The techie does not answer directly, only laughs and says that although most customers seem to love the app, he would not install it on his phone. He promises to notify colleagues responsible for the manipulation that you will never install the app. The site goes back to working perfectly for you. (Note: that was an actual, not an imagined, experience.)

3. The companies will not stop at tracking, data-gathering, and individually targeted advertisements

As in this site’s account two years ago of another low-tech company, the esteemed media organ we called ACN.com — ‘Big Brother takes an alarming step past watching us …’ — businesses are proceeding from spying on us and selling or sharing their discoveries with third parties to using them to limit or redirect our choices, and even scolding us for legal and reasonable behaviour that does not suit them. The ACN manager we argued with in that incident said that his organisation had ‘special software tools’ that monitored every click and keystroke by visitors to its web site. In fact, the newspaper had graduated from unremitting surveillance to: 

demanding that we make personal contact with our monitors; insisting that we submit to interrogation by these monitors, and account for our actions; cross-questioning us about our answers, and about why we say that the obtuse interpretations by monitors — inadvertently or tactically — of what we are doing are mistaken.

Imagine what that would mean in even more intrusive and unscrupulous hands.

4. Politicians in both parties campaigning in the U.S. presidential election are copying the methods of commercial surveillance: is this why antitrust rather than tracking and data-gathering was the focus of the Congressional hearing?

On 14 July, the U.S. president’s digital campaigning strategist Brad Parscale boasted on Twitter about a ‘biggest data haul’ on supporters and prospective voters. That was done with the same nasty spying technology, software apps. The Republicans are not alone, here. The campaign of the Democratic front-runner has its own equivalent. In fact, an article published by the MIT Technology Review on 21 June said that across the globe, politicians are using apps to organize support, manipulate supporters and attract new voters. Many are using the particular app developed for the Indian prime minister in his last campaign — which ‘was pushed through official government channels and collected large amounts of data for years through opaque phone access requests.’ To be perfectly clear, electioneering software used ‘“just like a one-way tool of propaganda”’ is also being used to govern India.

The Trump campaign app seeks permission from those who install it for — among other startling invasions of privacy — confirming identity and searching for user accounts on devices; reading, writing or deleting data on devices; getting into USB storage; preventing the device from sleeping.
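For readers who want to see what auditing such a permission list might look like, here is a minimal sketch. The permission names paraphrase the list above rather than quoting any platform’s official identifiers, and the judgement of what a campaign app ‘plausibly needs’ is purely illustrative:

```python
# A sketch of flagging an app's permission requests that exceed its
# stated purpose. Permission names paraphrase the list in the text;
# the 'plausibly needed' set is a hypothetical judgement, not a standard.

CAMPAIGN_APP_REQUESTS = {
    "confirm_identity", "find_accounts_on_device",
    "read_device_storage", "write_device_storage",
    "delete_device_storage", "access_usb_storage",
    "prevent_device_sleep",
}

# What a news-and-events app might defensibly need to function:
PLAUSIBLY_NEEDED = {"prevent_device_sleep"}

def excess_permissions(requested, needed):
    """Return, sorted, the permissions that go beyond the app's stated purpose."""
    return sorted(requested - needed)

flagged = excess_permissions(CAMPAIGN_APP_REQUESTS, PLAUSIBLY_NEEDED)
```

Run against the paraphrased list, six of the seven requests would be flagged — which is the asymmetry the researchers quoted below are describing.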

The authors of the piece, Jacob Gursky and Samuel Woolley, say: ‘As researchers studying the intersection of technology and propaganda, we understand that political groups tend to lag behind the commercial ad industry. But when they catch up, the consequences to truth and civil discourse can be devastating.’

How strange that there has not apparently been the smallest whisper about any of this in connection with the politicians’ heroic interrogations of Big Tech leaders this week … or is it, really?

5. Is poetry all we will have left for comfort?

Society is being hurt by these technologies and practices — damage that goes deep and acquires subtle dimensions, inexpressible except in poetry — as in these lines from the 20th-century poet William Empson:

Slowly the poison the whole blood stream fills …

.

… It is not your system or clear sight that mills

Down small to the consequence a life requires;

Slowly the poison the whole blood stream fills.

‘Missing Dates’

Or there are the 1992 predictions of the late Leonard Cohen, in a song last quoted here a few months ago in a different context just as apt:

… There’ll be the breaking of the ancient

Western code

Your private life will suddenly explode …

.

… Give me absolute control
Over every living soul …

‘The Future’

 

How do you discover the actual origin of a bug — such as ‘surveillance capitalism’ — when its history as a feature is all but lost? Could a better Wikipedia help?

 


Bug or feature? (at the edge of the flower’s dark centre) The shadowy face of advertising aimed at us as individuals — ‘micro-targeting’ —  makes it hard to learn about its idealistic beginnings. Photograph: Jacki Holland

If Google did not invent the phenomenon now being referred to as ‘surveillance capitalism,’ who did? part 2 (part 1 is here)

Is the digital revolution moving too fast for academics to keep up? You could call the question mission-critical because the (possibly) inadvertent errors of some scholars are influencing regulators and law-makers drawing up rules for the digital economy. It follows naturally from the last post here on pG, which pointed out that Shoshana Zuboff is wrong to declare that Google pioneered the milking of unsuspecting internet users for our data — the routine extraction of intimate information about us and our lives in a system that she and various others have for some time been calling surveillance capitalism. 

In a piece for Fast Company a year ago, Professor Zuboff said that Google invented it … 

… more than a decade ago when it discovered that the “data exhaust” clogging its servers could be combined with analytics to produce predictions of user behavior. At that time, the key action of interest was whether a user might click on an ad.

But the Pepsi market research project using electronic beepers described here last month had the identical, advertising-oriented aims and contained almost all the components of today’s commercial surveillance, even if its technological tools were less sophisticated and intrusive.

It was completed in 1996, two years before Google was even incorporated in September 1998. Pepsi deployed the beepers to track, survey and assemble detailed taste and preference profiles of 50,000 young customers, stretching far beyond their soft drink consumption, and traded this information with twenty other companies — which also used the data to design more powerful, less resistible advertisements for their products, through what eventually came to be known as micro-targeting. The project was attacked by outside observers — sounding exactly like today’s critics of commercial surveillance — for intruding on the privacy of its participants.

The secretiveness about tools and data-milking methods of Google and other search technology giants  — as well as virtually every other company doing business on the internet — has warranted  their deeply negative portrayal in the media and scholarship. But most of the critics condemning them either failed to explain — or simply did not know — that the unwanted bug that they constitute, collectively, was lauded almost a quarter-century ago as a benign, intensely desirable prospective feature of the internet as it began to take off.

In a 1997 interview published in Wired, Tim Berners-Lee actually made such a prediction after a question from his interviewer, Evan Schwartz, about whether the advertising already starting to saturate the web was one of the undesirable, ‘unexpected turns’ that his creation had taken:

… Marketing on the Web is going to be a lot more humane than marketing in traditional mass media because it’s possible to treat people individually. If I’m interested in buying a canoe, I can say, “Hey guys, I want a canoe.” I can float that onto the Web. Then other people can satisfy their own interests by selling me a canoe, not to mention inviting me to a newsgroup about good places to go canoeing.

Doesn’t that raise privacy issues?

My gut feeling is that one should be able to negotiate how one’s information is used …

Of course there is no such negotiation — an innovation we must hope can soon be regulated into existence — but you will not find those early thoughts of TB-L on the subject by typing ‘Tim Berners-Lee advertising’ into a search box. Search results reflect the marked shift in his opinion on the subject, encapsulated in a Google listing of a 2019 article in Fast Company in which he spoke out against ‘advertising-based revenue models that commercially reward clickbait,’ and characterised these as one of ‘the web’s 3 biggest cancers’. 

This pG site’s reminder of that chat with TBL is a printout sitting in a cardboard box in a garage. Its neighbours in its file include notes from unpublished conversations with Silicon Valley executives the following year, in which they described rapidly evolving marketing methods closely coupled to product design and improvement tailored, like Pepsi’s, to swift feedback from customers — only far more frequent, and well on the way to becoming today’s nonstop monitoring. As senior marketing managers at a small software startup — selling a system used by employees of other companies — said:

Part of our beta process that we’re doing right now is we have customers actually giving us feedback on the product as we develop the product […] and the engineering is responding to it and we go back to the customers [… who are … ] essentially involved in our design with us. […] These people and what do they want is really what the issue is […] and we’re just monitoring it all the time. All the feedback goes into a web form and then, boom! gets screened like two or three times a day by product marketing and engineering to figure out […] major product changes or directions … 

Hunting for such information about Silicon Valley marketing in the Wikipedia entry titled ‘Surveillance Capitalism’ would do no good, even for those readers who make it past the excruciating, jargon-laden first sentence on its background — ‘the intensification of connection and monitoring online with spaces of social life becoming open to saturation by corporate actors, directed at the making of profit and/or the regulation of action.’ 

Neither is there any allusion to it, except in the vaguest terms, in the online encyclopedia’s pages devoted to ‘Digital Marketing’ or ‘Interactive Marketing.’ Under ‘Surveillance Capitalism,’** there is no trace of optimistic early expectations for it, such as TB-L’s enthusiasm for ‘humane marketing’ — although the entry does make a passing reference to ‘self-optimization (Quantified Self)’, whose own page describes ‘a community of users and makers of self-tracking tools who share an interest in “self-knowledge through numbers,”’ as an instance of the ‘various advantages for individuals and society’ of ‘increased data collection.’ 

How could Professor Zuboff have missed a prototype as large and substantial as the Pepsi project, also unmentioned in any of those Wiki pages dedicated to high-tech marketing? She would have had to do field research in Silicon Valley to avoid her error of crediting Apple and Apple alone for capitalism tailored to the needs and predilections of individuals — passing over that swiftly in a strictly abstract, generalised passage of The Age of Surveillance Capitalism (2019) about an evolutionary trend, beginning with Henry Ford, for companies to serve ‘the actual mentalities and demands of people.’ 

At one juncture in her book, she seems to be saying that she could not do any immersive research on the topic because Google, all too predictably, would not permit this: ‘[O]ne is hard-pressed to imagine a Drucker equivalent [Peter Drucker, the still unsurpassed Austrian-born theorist on business management] freely roaming the scene and scribbling in the hallways.’ But Professor Zuboff plainly did not know enough to realise that Google was not the place to look for answers about the origins of the relentless commercial surveillance loop, or that there were rich sources of information about its practices elsewhere in Silicon Valley. 

How can scholars — and all the reviewers of her book who failed to correct her misattribution of its invention to Google — avoid this sort of mistake in future? How can we repair such defects in our collective treasure-house of knowledge?

Could an even better version of the collaborative, still indispensable, still miraculously non-commercial Wikipedia be the answer? Larry Sanger, its co-founder, who long ago left that institution, has been hatching plans for an improvement he calls the Encyclosphere, which he outlined in a lecture at a conference in Amsterdam last autumn. He has generously promised to answer questions about it from almost any competent writer, and perhaps will tackle the pair in the header for this post.

** in a download on 3 March 2020

Social media critics who do not separate their objections are cooking up an anti-Big Tech jambalaya confusing regulators about the ‘surveillance capitalism’ that Google did not pioneer

 


We have to discriminate carefully between light and dark elements of social media platforms

Here is an indirect reply to a tweet from @nikluac to @postgutenbergB a few days ago — which contained a link to a New York Times opinion piece by Shoshana Zuboff, a professor emerita of the Harvard Business School. Flashing red lights set off by a single paragraph in her essay led to post-Gutenberg.com [pG]’s first investigation of Professor Zuboff’s hugely influential, best-selling book published a year ago, The Age of Surveillance Capitalism. 

That work, which offers ‘little by way of a concrete agenda’ for internet-centred reform according to Evgeny Morozov, and other reviewers, is on a very different mission from this pG site — which argues for a specific scheme. The professor has succeeded uniquely and brilliantly at her task of so-called ‘consciousness-raising’. In seven hundred pages, her book explains and condemns the extent and precise mechanisms of what she and other analysts have named surveillance capitalism. 

It is the same phenomenon to which pG has been drawing attention since August of 2013 — with no claims of pioneering insight — in the course of campaigning for a proposal for the democratisation of publishing. This involved — in part — pointing out that like the Big Tech social media platforms, powerful newspapers were also spying on their readers without notification or consent. In posts here, digital invasions of privacy have been referred to variously as commercial surveillance or the surveillance business model — or, for anorexic attention spans incapable of absorbing more than a long header, as the ‘“free” surveillance/advertising-centred/data-cow business model’, or the ‘pay-to-be-spied-on contract for e-commerce.’

Why did the following paragraph in Professor Zuboff’s NYT essay in late January — in the context of its headline and theme — set alarm bells jangling?

You Are Now Remotely Controlled

Surveillance capitalists control the science and the scientists, the secrets and the truth.

Only repeated crises have taught us that these platforms are not bulletin boards but hyper-velocity global bloodstreams into which anyone may introduce a dangerous virus without a vaccine. This is how Facebook’s chief executive, Mark Zuckerberg, could legally refuse to remove a faked video of Speaker of the House Nancy Pelosi and later double down on this decision, announcing that political advertising would not be subject to fact-checking. 

That is an intensely emotive jambalaya, and not a logical argument. It is a fact that the platforms do indeed serve as ‘bulletin boards’ for useful, unobjectionable and frequently important messages from millions of users, every day. The article unreasonably conflates the ‘hate speech’ debate — about the platforms as carriers of social viruses — with the discussion of what needs to be done about regulating commercial surveillance and the theft of our personal data. Professor Zuboff somehow blurs the refusal of social media platforms such as Facebook to control what some individual users post there with not one but two unrelated questions — first, about whether paid political advertising on those sites should be curbed or forbidden; secondly, about what limits should be placed on information-gathering about platform users.

In her book she mashes all those together on the grounds that refusing to censor their users means that the social media platforms attract more users; can keep them on their sites for longer to gather more information about them; and, by growing their audiences in this way, earn more advertising dollars. 

While that is all undoubtedly true, it does not add up to an argument for treating the platforms like the owners of newspapers, who are responsible for the work of their employees. Besides, there is something far more critical at stake here.

Professor Zuboff mostly ignores or pays only cursory attention to the indispensable role that the platforms have assumed for most of us as cyberspace equivalents of town halls, libraries, coffee houses, debating clubs, pubs and soapboxes, and of pamphleteering and other printed means of disseminating facts and opinions — among other institutions and media. 

In an interview with the editor in chief of Wired, published in the magazine’s latest issue, the United Nations secretary-general, António Guterres, endorses the idea of access to the internet as a basic human right. He explains:

People are saying all the voices must be heard. The idea of a very small group of people can decide for everything is now being put into question very seriously. … [I]n each country, the trigger is different. In some cases it’s an economic-driven occasion, in others it’s pressure on the political system, in others corruption, and people react. But I see more and more people wanting to assume responsibility, wanting their voices to be heard. And that is the best guarantee we have that political systems will not be corrupted.

Here, pG — which has so far been among Facebook’s most relentless critics, most recently for its new practice of selectively handing out gigantic pots of cash to famous newspapers and magazines — must concede that Mark Zuckerberg is right to say that ‘People of varied political beliefs are trying to define expansive speech as dangerous because it could bring results they don’t accept,’ and that he believes that ‘this is more dangerous to democracy in the long term than almost any speech.’ His idea of trying out ‘a court-style board to rule on site content’ — staffed not by Facebook managers but by independent outsiders — is also a good one, as long as the arbiters are genuinely independent, and expensive professional lawyers from the rickety U.S. legal system do not get involved in sorting out complaints.

Also in this month’s issue of Wired, Gideon Lewis-Kraus argues in an excellent meditation on the Big Tech controversy that … 

The opportunity to vent on social media, and occasionally to join an outraged online mob, might relieve us of our latent desire to hurt people in real life. It’s easy to dismiss a lot of very online rhetoric that equates social media disagreement with violence, but […] the conflation might reflect an accurate perception of the symbolic stakes: On this view, our tendency to experience online hostility as “real” violence is an evolutionary step to be cheered.

[…] 

To worry about whether a particular statement is true or not, as public fact-checkers and media-literacy projects do, is to miss the point. It makes about as much sense as asking whether somebody’s tattoo is true.

By all means let’s urgently make rules or draft laws for curtailing user surveillance and data-gathering by Big Tech. Devious impersonations — sophisticated, digitally manipulated misrepresentations of people, such as the fake Nancy Pelosi video mentioned by Professor Zuboff — should be prosecuted like any other form of identity theft. If anything is making people angry enough to ensure all that, it is The Age of Surveillance Capitalism — succeeding where earlier books drawing attention to the same or similar problems have had no remotely comparable impact.

Among them is one published in 1997 by the Harvard Business School Press — Real Time: Preparing for the Age of the Never-Satisfied Customer.** In it, the Silicon Valley marketing innovator and investor Regis McKenna shows Professor Zuboff to be mistaken in one of her central assertions, which is that surveillance capitalism was ‘pioneered and elaborated through trial and error’ by Google in 2001.

While search engine technology allowed for a massive refinement of commercial surveillance and made it immeasurably more insidious when misused, at least one other company actually hacked out the path to it. Real Time drew attention to ‘an excellent illustration of the shades of interactivity to come.’ This was a six-month interlude in 1996, in which PepsiCo offered teenage and Generation X consumers of its Mountain Dew fizzy drinks radically discounted electronic beepers to use with no communication charges.

They were also given access to a toll-free telephone hookup over which they could listen to interviews with sports heroes — and the chance to get discounts from twenty other companies keen to sell this demographic group things ranging from tortilla chips to snowboards. PepsiCo paged the 50,000 participants in its scheme once a week to ask them questions in a ‘real-time dialogue with them,’ and anticipated eventually creating ‘an enormous, nonstop, electronic focus group at a remarkably low cost.’ Unfortunately, as Real Time noted, this soon led to ‘a firestorm of unanticipated criticism’ of the soft drink producer for exploitation:

The company had assumed that this, of all communications technologies, would be irresistible to parents — helping two-career couples worried about their children’s whereabouts to keep in touch with them. Instead, the promotion was denounced as disturbingly manipulative by parents and children’s advocates — like the Center for Media Advocacy in Washington, D.C., a watchdog group, and Action for Children’s Television.

The New York Times report on the project said that ‘soliciting information from youths through the Internet and pagers also raises privacy questions.’

A quarter-century later we know that the anxiety was prescient — but now we also have the protection of free speech to worry about, separately.

( A later post on the same topic is here )

** Real Time was a short-order project, a book researched, written and edited on a brutal schedule, in less than six months, in 1996 — with the assistance of pG’s writer, who thanks @nikluac for the tweet that led to this excursion into the past.