Big Tech dangers we are not talking about — especially how the theft of our personal data is opening the way to future subjugation and control at the scale of masses, not just individuals

 

Dark cloud looming

This week marks the first anniversary of an attempt by the Wikipedia co-founder Larry Sanger to organise a social media strike. It did not attract the support it deserved. That was largely because mainstream media — including nearly all the best-known newspaper sites in the UK and US — declined to publicise it. Indeed they did not mention it at all, even though the BBC and the online version of The Daily Mail — two of the most-frequented news sites in the English-speaking world — ran reports about the plan and call to action. This site outlined the probable reason why: ‘Mystery solved? Famous newspapers that ignored the Social Media Strike of 2019 have agreed to accept regular payments of millions of dollars from Facebook.’

Grassroots tweeting and similar advertisements by the general public could — conceivably — have made up for the media silence. They did not. One reason why — probably outweighing all the others — is that in this ironic Information Age, we seem increasingly less able to absorb information and assess the reliability of its sources, especially when it is about risks and threats to our safety. 

We have to find new ways of establishing credibility. What could be better than handing out tools to let people run their own tests of any assertion? Read side by side, the two public-interest comments below show how helpful this can be — in the context of Big Tech’s siphoning of our personal data, the subject of innumerable posts here (this one, for example). The first is a statement about a trend to which this site has been trying to draw attention since 2011. The second offers a way to assess its substance. They are recent, actual comments by readers (whose real-life identities pG does not know) on the Financial Times site, made a few days apart on different Big Tech-related articles there.

The highlights are pG’s:

PiotrG

Big/Bug Tech relies on an ever-expanding expropriation of personal data to make money. Its endgame is to turn people into trained monkeys whose behaviour can be predicted and ultimately directed towards specific objectives. For now the objectives are commercial, but they could become social or political. That is the problem, and it won’t be solved by antitrust laws alone. 

However, concentration and excessive market power make the problem worse. A world where 10 people own information on 3-6 billion “customers” and manage to kill market competition, avoid supervision and remove internal (stakeholder) control is a perfect Orwellian nightmare.

Frederick E.

Anyone who thinks that it is easy to escape surveillance should install a pfsense router, or some equivalent. Set up firewall logging, or even better, deep packet inspection (including https, via certificate installation). Then set up the privacy settings on your devices the way you think gives you the maximum you need. Use them for a week as you would normally. Then check the firewall logs on your router. You will be surprised to see how much surveillance facebook, google, microsoft and apple carry out, from simple DNS, or DNS via https, to much more detailed tracking.

An average home with a computer, three phones and a tablet, plus roku (boy does that thing spy) and smart speakers leaks an inordinate amount of data even when privacy settings are set to max. 

Privacy settings give a false sense of security. Smart devices as well as computers are now designed to spy at the core OS level; no firewall or app/plugin is going to stop it – these are higher-level processes that cannot override core-level ones.

The only way to block stuff at home is at the router level, but when you do so, many things simply stop working. The deal is be spied on, or don’t use it. This goes for free stuff or paid.
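For anyone who wants to try the experiment Frederick E. describes, the last step (working out which companies your household’s gadgets keep calling home to) can be reduced to a few lines of code. What follows is a minimal, illustrative Python sketch, assuming you have already exported a plain-text DNS query log from a pfSense box or an equivalent router with resolver query logging switched on; the log file name, the hostname pattern and the short list of telemetry-related domains are assumptions made for illustration, not a definitive recipe.

# Hypothetical sketch: tally the domains in a DNS query log exported from a
# home router (for example, a pfSense box with resolver query logging enabled).
# The log path, line format and "telemetry" suffixes below are assumptions
# for illustration only -- adjust them to whatever your own router produces.

import re
from collections import Counter

LOG_FILE = "dns-query.log"  # hypothetical export from the router

# A few advertising/telemetry domain suffixes often seen in such logs.
# This list is illustrative, not authoritative or complete.
TELEMETRY_SUFFIXES = (
    "graph.facebook.com",
    "app-measurement.com",
    "doubleclick.net",
    "telemetry.microsoft.com",
    "metrics.icloud.com",
)

# Pull anything that looks like a hostname out of each log line (rough match).
HOSTNAME = re.compile(r"\b([a-z0-9-]+(?:\.[a-z0-9-]+)+)\b", re.IGNORECASE)

def tally_queries(path: str) -> Counter:
    """Count how often each queried domain appears in the log."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for match in HOSTNAME.findall(line):
                counts[match.lower().rstrip(".")] += 1
    return counts

def main() -> None:
    counts = tally_queries(LOG_FILE)
    print("Most-queried domains:")
    for domain, n in counts.most_common(20):
        flag = " <-- telemetry?" if domain.endswith(TELEMETRY_SUFFIXES) else ""
        print(f"{n:6d}  {domain}{flag}")

if __name__ == "__main__":
    main()

Pointed at a week of logs, the top of that tally is a direct way to test his claim against your own household’s traffic, with no need to take anyone’s privacy dashboard on trust.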

Unfortunately, where there should be public discussion of the specific practices that @PiotrG and @Frederick E. are trying to protect us from, there is precisely none.

How do you discover the actual origin of a bug — such as ‘surveillance capitalism’ — when its history as a feature is all but lost? Could a better Wikipedia help?

 


Bug or feature? (at the edge of the flower’s dark centre) The shadowy face of advertising aimed at us as individuals — ‘micro-targeting’ —  makes it hard to learn about its idealistic beginnings. Photograph: Jacki Holland

If Google did not invent the phenomenon now being referred to as ‘surveillance capitalism,’ who did? Part 2 (part 1 is here)

Is the digital revolution moving too fast for academics to keep up? You could call the question mission-critical, because the (possibly) inadvertent errors of some scholars are influencing regulators and law-makers drawing up rules for the digital economy. It follows naturally from the last post here on pG, which pointed out that Shoshana Zuboff is wrong to declare that Google pioneered the milking of unsuspecting internet users for our data: the routine extraction of intimate information about us and our lives in a system that she and various others have for some time been calling surveillance capitalism.

In a piece for Fast Company a year ago, Professor Zuboff said that Google invented it … 

… more than a decade ago when it discovered that the “data exhaust” clogging its servers could be combined with analytics to produce predictions of user behavior. At that time, the key action of interest was whether a user might click on an ad.

But the Pepsi market research project using electronic beepers, described here last month, had identical, advertising-oriented aims and contained almost all the components of today’s commercial surveillance, even if its technological tools were less sophisticated and intrusive.

It was completed in 1996, two years before Google was even incorporated in September 1998. Pepsi deployed the beepers to track, survey and assemble detailed taste and preference profiles of 50,000 young customers, stretching far beyond their soft drink consumption, and traded this information with twenty other companies — which also used the data to design more powerful, less resistible advertisements for their products through what eventually came to be known as micro-targeting. The project was attacked for intruding on its participants’ privacy by outside observers who sounded exactly like today’s critics of commercial surveillance.

The secretiveness of Google and other search technology giants — as well as virtually every other company doing business on the internet — about their tools and data-milking methods has warranted their deeply negative portrayal in the media and scholarship. But most of the critics condemning them either failed to explain — or simply did not know — that the practices now seen, collectively, as an unwanted bug were lauded almost a quarter-century ago as a benign, intensely desirable prospective feature of the internet as it began to take off.

In a 1997 interview published in Wired, Tim Berners-Lee actually made such a prediction after a question from his interviewer, Evan Schwartz, about whether the advertising already starting to saturate the web was one of the undesirable, ‘unexpected turns’ that his creation had taken:

… Marketing on the Web is going to be a lot more humane than marketing in traditional mass media because it’s possible to treat people individually. If I’m interested in buying a canoe, I can say, “Hey guys, I want a canoe.” I can float that onto the Web. Then other people can satisfy their own interests by selling me a canoe, not to mention inviting me to a newsgroup about good places to go canoeing.

Doesn’t that raise privacy issues?

My gut feeling is that one should be able to negotiate how one’s information is used …

Of course there is no such negotiation — an innovation we must hope can soon be regulated into existence — but you will not find those early thoughts of TB-L on the subject by typing ‘Tim Berners-Lee advertising’ into a search box. Search results reflect the marked shift in his opinion on the subject, encapsulated in a Google listing of a 2019 article in Fast Company in which he spoke out against ‘advertising-based revenue models that commercially reward clickbait,’ and characterised these as one of ‘the web’s 3 biggest cancers’. 

This pG site’s record of that chat with TBL is a printout sitting in a cardboard box in a garage. Its neighbours in the same file include notes from unpublished conversations with Silicon Valley executives the following year, in which they described rapidly evolving marketing methods closely coupled to product design and improvement, tailored, like Pepsi’s, to swift feedback from customers — only far more frequent, and well on the way to becoming today’s nonstop monitoring. As senior marketing managers at a small software startup — selling a system used by employees of other companies — said:

Part of our beta process that we’re doing right now is we have customers actually giving us feedback on the product as we develop the product […] and the engineering is responding to it and we go back to the customers [… who are … ] essentially involved in our design with us. […] These people and what do they want is really what the issue is […] and we’re just monitoring it all the time. All the feedback goes into a web form and then, boom! gets screened like two or three times a day by product marketing and engineering to figure out […] major product changes or directions … 

Hunting for such information about Silicon Valley marketing in the Wikipedia entry titled ‘Surveillance Capitalism’ would do no good, even for those readers who make it past the excruciating, jargon-laden first sentence on its background — ‘the intensification of connection and monitoring online with spaces of social life becoming open to saturation by corporate actors, directed at the making of profit and/or the regulation of action.’ 

Neither is there any allusion to it, except in the vaguest terms, in the online encyclopedia’s pages devoted to ‘Digital Marketing’ or ‘Interactive Marketing.’ Under ‘Surveillance Capitalism,’** there is no trace of optimistic early expectations for it, such as TB-L’s enthusiasm for ‘humane marketing’ — although the entry does make a passing reference to ‘self-optimization (Quantified Self)’ as an instance of the ‘various advantages for individuals and society’ of ‘increased data collection’; the Quantified Self page itself describes ‘a community of users and makers of self-tracking tools who share an interest in “self-knowledge through numbers.”’

How could Professor Zuboff have missed a prototype as large and substantial as the Pepsi project, also unmentioned in any of those Wiki pages dedicated to high-tech marketing? She would have had to do field research in Silicon Valley to avoid her error of crediting Apple and Apple alone for capitalism tailored to the needs and predilections of individuals — passing over that swiftly in a strictly abstract, generalised passage of The Age of Surveillance Capitalism (2019) about an evolutionary trend, beginning with Henry Ford, for companies to serve ‘the actual mentalities and demands of people.’ 

At one juncture in her book, she seems to be saying that she could not do any immersive research on the topic because Google, all too predictably, would not permit this: ‘[O]ne is hard-pressed to imagine a Drucker equivalent [Peter Drucker, the still unsurpassed Austrian-born theorist on business management] freely roaming the scene and scribbling in the hallways.’ But Professor Zuboff plainly did not know enough to realise that Google was not the place to look for answers about the origins of the relentless commercial surveillance loop, or that there were rich sources of information about its practices elsewhere in Silicon Valley.

How can scholars — and all the reviewers of her book who failed to correct her misattribution of its invention to Google — avoid this sort of mistake in future? Are such mistakes the result of defects in our collective treasure-house of knowledge?

Could an even better version of the collaborative, still indispensable, still miraculously non-commercial Wikipedia be the answer? Larry Sanger, its co-founder, who long ago left that institution, has been hatching plans for an improvement he is calling the Encyclosphere, which he outlined in a lecture at a conference in Amsterdam last autumn. He has generously promised to answer questions about it from almost any competent writer, and perhaps he will tackle the pair of questions in the header for this post.

** in a download on 3 March 2020

Social media critics who do not separate their objections are cooking up an anti-Big Tech jambalaya confusing regulators about the ‘surveillance capitalism’ that Google did not pioneer

 


We have to discriminate carefully between light and dark elements of social media platforms

Here is an indirect reply to a tweet from @nikluac to @postgutenbergB a few days ago, which contained a link to a New York Times opinion piece by Shoshana Zuboff, a professor emerita of the Harvard Business School. Flashing red lights set off by a single paragraph in her essay led to post-Gutenberg.com [pG]’s first investigation of Professor Zuboff’s hugely influential, best-selling book published a year ago, The Age of Surveillance Capitalism.

That work, which according to Evgeny Morozov and other reviewers offers ‘little by way of a concrete agenda’ for internet-centred reform, is on a very different mission from this pG site — which argues for a specific scheme. The professor has succeeded uniquely and brilliantly at her task of so-called ‘consciousness-raising’. In seven hundred pages, her book explains and condemns the extent and precise mechanisms of what she and other analysts have named surveillance capitalism.

It is the same phenomenon to which pG has been drawing attention since August of 2013 — with no claims of pioneering insight — in the course of campaigning for a proposal for the democratisation of publishing. This involved — in part — pointing out that, like the Big Tech social media platforms, powerful newspapers were also spying on their readers without notification or consent. In posts here, digital invasions of privacy have been referred to variously as commercial surveillance or the surveillance business model — or, for anorexic attention spans incapable of absorbing more than a long header, as the ‘“free” surveillance/advertising-centred/data-cow business model’, or the ‘pay-to-be-spied-on’ contract for e-commerce.

Why did the following paragraph in Professor Zuboff’s NYT essay in late January — in the context of its headline and theme — set alarm bells jangling?

You Are Now Remotely Controlled

Surveillance capitalists control the science and the scientists, the secrets and the truth.

Only repeated crises have taught us that these platforms are not bulletin boards but hyper-velocity global bloodstreams into which anyone may introduce a dangerous virus without a vaccine. This is how Facebook’s chief executive, Mark Zuckerberg, could legally refuse to remove a faked video of Speaker of the House Nancy Pelosi and later double down on this decision, announcing that political advertising would not be subject to fact-checking. 

That is an intensely emotive jambalaya, and not a logical argument. It is a fact that the platforms do indeed serve as ‘bulletin boards’ for useful, unobjectionable and frequently important messages from millions of users, every day. The article unreasonably conflates the ‘hate speech’ debate — about the platforms as carriers of social viruses — with the discussion of what needs to be done about regulating commercial surveillance and the theft of our personal data. Professor Zuboff somehow blurs the refusal of social media platforms such as Facebook to control what some individual users post there with not one but two unrelated questions — first, about whether paid political advertising on those sites should be curbed or forbidden; secondly, about what limits should be placed on information-gathering about platform users.

In her book she mashes all those together on the grounds that refusing to censor their users means that the social media platforms attract more users; can keep them on their sites for longer to gather more information about them; and, by growing their audiences in this way, earn more advertising dollars. 

While that is all undoubtedly true, it does not add up to an argument for treating the platforms like the owners of newspapers that are responsible for the work of their employees. Besides, there is something far more critical at stake, here.

Professor Zuboff mostly ignores or pays only cursory attention to the indispensable role that the platforms have assumed for most of us as cyberspace equivalents of town halls, libraries, coffee houses, debating clubs, pubs and soapboxes, and of pamphleteering and other printed means of disseminating facts and opinions — among other institutions and media. 

In an interview with the editor in chief in the latest issue of Wired, the United Nations secretary-general, António Guterres, endorses the idea of access to the internet as a basic human right. He explains:

People are saying all the voices must be heard. The idea of a very small group of people can decide for everything is now being put into question very seriously. … [I]n each country, the trigger is different. In some cases it’s an economic-driven occasion, in others it’s pressure on the political system, in others corruption, and people react. But I see more and more people wanting to assume responsibility, wanting their voices to be heard. And that is the best guarantee we have that political systems will not be corrupted.

Here, pG — which has so far been among Facebook’s most relentless critics, most recently for its new practice of selectively handing out gigantic pots of cash to famous newspapers and magazines — must concede that Mark Zuckerberg is right to say that ‘People of varied political beliefs are trying to define expansive speech as dangerous because it could bring results they don’t accept,’ and that he believes that ‘this is more dangerous to democracy in the long term than almost any speech.’ His idea of trying out ‘a court-style board to rule on site content’ — staffed not by Facebook managers but by independent outsiders — is also a good one, as long as the arbiters are genuinely independent, and expensive professional lawyers from the rickety U.S. legal system do not get involved in sorting out complaints.

Also in this month’s issue of Wired, Gideon Lewis-Kraus argues in an excellent meditation on the Big Tech controversy that … 

The opportunity to vent on social media, and occasionally to join an outraged online mob, might relieve us of our latent desire to hurt people in real life. It’s easy to dismiss a lot of very online rhetoric that equates social media disagreement with violence, but […] the conflation might reflect an accurate perception of the symbolic stakes: On this view, our tendency to experience online hostility as “real” violence is an evolutionary step to be cheered.

[…] 

To worry about whether a particular statement is true or not, as public fact-checkers and media-literacy projects do, is to miss the point. It makes about as much sense as asking whether somebody’s tattoo is true.

By all means let’s urgently make rules or draft laws for curtailing user surveillance and data-gathering by Big Tech. Devious impersonations — sophisticated, digitally manipulated misrepresentations of people, such as the fake Nancy Pelosi video mentioned by Professor Zuboff — should be prosecuted like any other form of identity theft. If anything is making people angry enough to ensure all that, it is The Age of Surveillance Capitalism — succeeding where earlier books drawing attention to the same or similar problems have had no remotely comparable impact.

Among them is one published in 1997 by the Harvard Business School Press — Real Time: Preparing for the Age of the Never-Satisfied Customer.** In it, the Silicon Valley marketing innovator and investor Regis McKenna shows Professor Zuboff to be mistaken in one of her central assertions, which is that surveillance capitalism was ‘pioneered and elaborated through trial and error’ by Google in 2001.

While search engine technology allowed for a massive refinement of commercial surveillance and made it immeasurably more insidious when misused, at least one other company actually hacked out the path to it. Real Time drew attention to ‘an excellent illustration of the shades of interactivity to come.’ This came in a six-month interlude in 1996, in which PepsiCo offered teenage and Generation X consumers of Mountain Dew fizzy drinks radically discounted electronic beepers to use with no communication charges.

They were also given access to a toll-free telephone hookup over which they could listen to interviews with sports heroes — and the chance to get discounts from twenty other companies keen to sell this demographic group things ranging from tortilla chips to snowboards. PepsiCo paged the 50,000 participants in its scheme once a week to ask them questions in a ‘real-time dialogue with them,’ and anticipated eventually creating ‘an enormous, nonstop, electronic focus group at a remarkably low cost.’ Unfortunately, as Real Time noted, this soon led to ‘a firestorm of unanticipated criticism’ of the soft drink producer for exploitation:

The company had assumed that this, of all communications technologies, would be irresistible to parents — helping two-career couples worried about their children’s whereabouts to keep in touch with them. Instead, the promotion was denounced as disturbingly manipulative by parents and children’s advocates — like the Center for Media Advocacy in Washington, D.C., a watchdog group, and Action for Children’s Television.

The New York Times report on the project said that ‘soliciting information from youths through the Internet and pagers also raises privacy questions.’

A quarter-century later we know that the anxiety was prescient — but now we also have free speech protection to worry about, separately.

(A later post on the same topic is here.)

** Real Time was a short-order project, a book researched, written and edited on a brutal schedule, in less than six months, in 1996 — with the assistance of pG’s writer, who thanks @nikluac for the tweet that led to this excursion into the past.

Do drugs explain George Orwell’s ability to ‘communicate with the future’ from 1949 — and if so, have micro-dosing technologists or other intellectuals shown any sign of matching it?

 


Through a glass, darkly: dystopian anxiety casts a pall of dread over the most innocent scenes, these days

A question for everyone ready to scream from the tedium of seeing George Orwell’s name coupled yet again with dystopia: yes, yes, but have you tried re-reading Nineteen Eighty-Four lately?  If for instance you, like the writer of this pG entry, last immersed yourself in it decades ago, aged about fourteen, shouting with laughter as you read out to your mother passages that struck you as fiendishly funny, which nearly always mentioned Big Brother, an outlandish caricature you couldn’t conceive of as connected in any way to your own rather boring life? 

At the start of 2020, there is not much to laugh about in Nineteen Eighty-Four. It has become too alarming and depressing to re-read voluntarily. Why? Because of its underestimations of the nastiest possibilities of intimate Big Brother surveillance, for one thing; and because we have no believable protection from its deployment by either governments or oversized corporations.

In the novel’s opening pages, when its protagonist Winston Smith starts a diary in a blank notebook — an out-of-date and semi-illicit ‘compromising possession’ — he can carefully seat himself in his living room outside the field of the spying telescreen that is capable of receiving and transmitting simultaneously: ‘Any sound that Winston made, above the level of a very low whisper, would be picked up by it.’ Extrapolating from today’s ‘internet of things,’ there will soon be nowhere for a Winston Smith or any of us to hide, as any number of networked ordinary household objects could be doing the telescreen’s job. On the page before that scene, he turns back from the window where he has been reflecting on the malign, barbed wire-clad Ministry of Love and, on his way to his kitchen, ‘set his features into the expression of quiet optimism which it was advisable to wear when facing the telescreen,’ on which Big Brother could be watching him. 

Only last month, a prominent UK newspaper reported behind its paywall that ‘emotion recognition is the latest thing in surveillance,’ and that systems designed for this form of monitoring have been installed in the Chinese province of Xinjiang to ‘identify signs of aggression and nervousness as well as stress levels …’.

Orwell, writing in the late 1940s, has Winston worrying, as he begins his diary and considers its prospective readers far off in time, ‘How could you communicate with the future? It was of its nature impossible.’ He fears that it could be so different from the present as to make his dystopian predicament ‘meaningless’. Very much to the contrary, as we nearly all realise by now, it could hardly be more significant. Winston’s creator has no equal for writerly prescience about our moment, almost anywhere on the globe, even if one participant in an online discussion last January, @WMD, remarked that in the West, ‘we do seem to be much closer to the drug-induced, zonked out, sheep-like mentality’ of Aldous Huxley’s Brave New World, which was published in 1932, nearly two decades before Nineteen Eighty-Four.

Narcotics, the indispensable element in Huxley’s nightmare future — his imaginary drug called soma, used by World Controllers to lull the population into blissful, hazy, submissive detachment from the consequences of their manipulations — came to mind recently in an untidy clump of wondering about how reminders of Nineteen Eighty-Four, and only that work of futuristic literary fiction, become more unavoidable each day. This led naturally to the question of what explains the steel-tipped accuracy of so much of its envisioning. Recalling Orwell’s four years as an officer with the Indian Imperial Police in Burma (Myanmar) in the 1920s was part of the associative clump, and trailing in its wake came thoughts of opium, of which Burma was then and is to this day a dominant producer. Ah! But then, what was the generally accepted understanding among Orwell experts about any connection — or lack thereof — between George and this narcotic, or any other mind-altering substance stronger than nicotine, caffeine or alcohol? 

The specific trigger for the meditation was probably a casual, intermittent discussion over several weeks about a friend who made a first pilgrimage to Burning Man last summer, and reportedly came away impressed by the high-wattage brainiacs from Silicon Valley, investment banking, and academia with whom he shared an ultra-exclusive tent for the duration of the celebration in the Nevada desert — with some of those minds seemingly amplified by full doses of psychedelics, not the micro-dosing said to be part of many of their ordinary work weeks at the office.

Imagine the surprise of finding no consideration by Orwell scholars of any role that drugs might have played in shaping Orwell’s flow of ideas about Nineteen Eighty-Four — unless any such discussion lies beyond the easy reach of search engines. Nothing, that is, except for a diligently researched, persuasive argument on the website of Darcy Moore — an Australian school administrator, Orwell-admirer and memorabilia collector — that the novelist almost certainly had more than theoretical and imaginative experience of opium use. It reminds us that because Sonia Orwell, his widow, ensured at his request that no one was able to write his biography for over thirty years after he died, attempts to sift through his personal habits were obstructed while the information about them was still fresh.

Among the facts Moore has assembled are these:

• Orwell’s father spent his working life as a supervisor of opium production, quality control and trade in India, when it was part of the British Empire.

• Nineteen Eighty-Four ‘has the protagonist agreeing “to distribute habit-forming drugs, to encourage prostitution, to disseminate venereal diseases…” before being given a political manifesto which mentions “the truth-producing effects of drugs”.’

• In reviewing the memoir of a well-known opium addict of his time, Orwell said that the bliss of using this substance was ‘indescribable,’ and Moore asks — reasonably — whether a man ‘who was prepared to quit his career against his father’s wishes to become a writer, steel himself to go down a coal mine with working men, get purposefully arrested, associate with a criminal underclass in Paris and London, spend time with the poor and homeless as well as risk his life in a time of civil war in Spain …’ would have hesitated to sample the drug himself. 

But none of the known facts about the writing of Nineteen Eighty-Four or Aldous Huxley’s Brave New World establish any direct connection between drug use and sublime artistic inspiration. Those of us who are abysmally ignorant about risky, habit-forming drugs — and who have been shocked by observing their worst effects directly — find it easier to relate to Aldous Huxley’s depictions of their deeply negative consequences in his famous futurama, apparently written before he had any personal experience of ingesting them. He did, however, become a radical convert to, and advocate of, the joys of psychedelics after he lost his virginity as an experimenter with controlled substances. Sadly for him, his novel Island, published in 1962, exactly three decades after Brave New World and a radical contradiction of it — since it depicts a drug-enhanced utopia — appears to have had few readers (this writer not among them). The novelist and literary scholar Margaret Drabble has summed up the justifiable criticism by detractors of Huxley’s works — other than BNW — that they are ‘smart and superficial, a symptom rather than an interpretation of a hollow age.’

If only that were not so. If only experimenters with mind-expanding chemicals among today’s policy-shapers and influencers had more to offer us than testimonials virtually identical to Huxley’s about glorious, life-changing alterations in perspectives on the world and fellow human beings, yet — also like him — with no specific great work in any field to point to as an illustration of such benefits. If only the British Psychological Society’s Research Digest, last August, had not concluded, about the most recent scientific investigations into this trend, that ‘no placebo-controlled study has found statistically significant effects of microdosing on creativity.’

If only the opposite were true, and someone were capable of writing, now, a counter-imagining of Nineteen Eighty-Four powerful enough and influential enough to accomplish what Orwell hoped to when he wrote it — which was to head off the possibility of privacy-smashing, totalitarian mind control that, instead, looks well set to condition our everyday existence in the not so distant future.