Pssst! It’s almost a secret, but Britain’s House of Lords wants your opinion about protecting your freedom to speak your mind online

 

Do not let the freedoms of the net wither and die




Choosing to call yourself Baroness Bull, Buscombe or Quin, or Lord Storey or Viscount Colville of Culross makes it seem unlikely that you would be of the slightest use in helping your government to draft policy for the digital revolution. Yet those names belong to actual peers serving on the Communications and Digital Committee of the UK’s House of Lords, which last month launched an inquiry into ‘how the right to freedom of expression should be protected online and how it should be balanced with other rights.’ You could think of their eminences as real-life counterparts of characters in steampunk science fiction, with a dusting of retrofuturism.


Even more surprising is the committee’s incisive understanding of what is at stake in the debate about the unprecedented powers the internet has given ordinary people to broadcast facts and opinions across the planet. This is obvious in the record of its first hearing on 24 November, and in the list of questions in its call for evidence.


Strange to say, the request for direct participation by the public — with a 15 January 2021 deadline — has not been reported by any well-known newspaper or mainstream media website, as far as pG can tell from searching on ‘uk parliamentary committee call for evidence freedom of expression online’. Why do traditional media assume that we the people have no interest in preserving our free speech rights on the internet?


Those questions were set out by the committee’s chair, Lord Gilbert of Panteg:



In recent years there have been a growing number of controversies relating to the use of the right to freedom of expression online. We hope to hear views from all sides of the debate on the roles that platforms and the state should play in protecting or curtailing what users can say online.



Other questions posed by the committee include:
How should good digital citizenship be promoted?
Should online platforms be under a legal duty to protect freedom of expression?
To what extent should users be allowed anonymity online?
How can content moderation systems be improved?
Would strengthening competition regulation of dominant online platforms help to make them more responsive to users’ views about content and moderation?




The last question on that list is the most impressive. At the first hearing, Jeffrey Howard — who teaches ethics and political philosophy at University College London — had a particularly thoughtful and practical answer:




‘Part of the public debate on this topic has involved the proposal that these companies be broken up by appealing to various kinds of anti-trust arguments. It is, of course, worth pointing out that if the companies are broken up, the resources that they have at their disposal to engage in extremely expensive content moderation is reduced. It is not immediately obvious that that would be an effective solution.


‘It occurs to me that we lambast the social media companies whenever they fail to take down enough content, but the moment they take down too much content there is a scandal on the other side. I do not say that to let the companies off the hook by any stretch of the imagination, but simply to remind ourselves that this stuff is pretty difficult, and engaging in content moderation at scale across countless cultures and countless languages is a difficult business. We need to have patience, given that we are only at the inception of this process in the history of social media platforms.’




To Dr Howard’s answer, pG would add that the problem governments should rank far above anti-competitive behaviour by Big Tech platforms is the modus operandi that has made them so rich and powerful. That ‘business model’ depends on round-the-clock surveillance and privacy invasion — ‘data-gathering’ — used to assemble individual profiles that are deployed to manipulate us, not just by the platforms themselves but also by the companies that buy those profiles from them.


This is the source of the trouble. Using regulatory powers to root out or sharply curtail it should be the first order of business. 


Some version of another unavoidably complex question could be added to Lord Gilbert’s list, in two parts: 


Would curbing the dominant online platforms’ surveillance and profiling of users diminish the strong incentive that this data-gathering gives them today to preserve their users’ freedom of expression? If so, what can be done to avoid any such undesirable consequence?


Of course that assumes that Britain’s government has not fallen as deeply in love with the potential of tracking and data-gathering as governments elsewhere. That may be too much to hope, much as pG wishes that it has avoided the contagion, and will continue to do so. See:
‘Notes on a U.S. congressional hearing: turning antitrust guns on Big Tech will not shield us from Orwellian puppeteering. Why did the politician-legislators choose the wrong focus?’

Notes on a U.S. congressional hearing: turning antitrust guns on Big Tech will not shield us from Orwellian puppeteering. Why did the politician-legislators choose the wrong focus?


‘… Slowly the poison the whole blood stream fills …’: William Empson

Notes scribbled after the second day of this week’s grilling of the chief executives of Amazon, Apple, Facebook and Google by the antitrust subcommittee of the U.S. House Judiciary Committee:

Can protecting citizen-consumers really be the point of telling Big Tech chiefs that they have too much power, when this is news to no one?

If yes: horse, barn door; 

problem has gone viral — the uncontrolled proliferation of harm to citizen-consumers (not Covid-19; the commercial surveillance virus);

hardly any citizen-consumers understand this or its implications.

Conclusion: too late to save us so we’re doomed — barring lucky accident of stupendous dimensions.

1. In the frightening background to the hearing, unenlightened citizens: 

A disturbingly high proportion of consumers in six countries surveyed by the San Francisco technology security firm Okta this year have no idea of the degree to which they are being tracked by companies. They are equally oblivious to being milked for their personal data. Though ‘people don’t want to be tracked, and they place a high value on privacy’, 42% of Americans do not think online retailers collect data about their purchase history, and 49% do not think their social media posts are being tracked by social media companies. ‘… Nearly 4 out of 5 American respondents (78%) don’t think a consumer hardware provider such as Apple, Fitbit, or Amazon is tracking their biometric data, and 56% say the same about their location data.’

With those findings, the reason why rich Big Tech is only getting richer in a pandemic-battered US economy is obvious. It is just as clear that the average citizen cannot be expected to grasp that the execrable business practices of the technology leaders — including deceptive ‘privacy settings’ in devices sold by the most successful brands, or guaranteed by popular platforms — are being copied by every type and size of business.

2. Shouldn’t Congress’s focus be on, e.g., the unfair risks of installing apps — the tools used to turn citizens into pawns of corporate surveillance?

Businesses once never thought of in connection with digital technology are forcing surveillance and tracking tools on us, mostly in the form of apps — but also when we think we are just popping in and out of their web sites. 

You can, for instance, log on to the site of a credit card company you trust and, for the fifth month in a row, have to complain to the IT support desk about error messages obstructing you from completing your task. Finally — with an embarrassed acknowledgment of your loyalty to the brand — an unusually honest tech support supervisor confesses that the site’s glitches are not accidental but part of an effort to push customers towards installing the company’s app and conducting their transactions on their smartphones. You say exasperatedly, ‘Oh, to track what I do all day long?’ The techie does not answer directly, only laughs and says that although most customers seem to love the app, he would not install it on his phone. He promises to notify the colleagues responsible for the manipulation that you will never install the app. The site goes back to working perfectly for you. (Note: that was an actual, not an imagined, experience.)

3. The companies will not stop at tracking, data-gathering, and individually targeted advertisements

As in this site’s account two years ago of another low-tech company, the esteemed media organ we called ACN.com — ‘Big Brother takes an alarming step past watching us …’ — businesses are proceeding from spying on us, and selling or sharing their discoveries with third parties, to using those discoveries to limit or redirect our choices, and even to scold us for legal and reasonable behaviour that does not suit them. The ACN manager we argued with in that incident said that his organisation had ‘special software tools’ that monitored every click and keystroke by visitors to its web site. In fact, the newspaper had graduated from unremitting surveillance to:

demanding that we make personal contact with our monitors; insisting that we submit to interrogation by these monitors and account for our actions; cross-questioning us about our answers, and about why we say that the monitors’ interpretations of what we are doing — obtuse, whether inadvertently or tactically — are mistaken.

Imagine what that would mean in even more intrusive and unscrupulous hands.

4. Politicians in both parties campaigning in the U.S. presidential election are copying the methods of commercial surveillance: is this why antitrust rather than tracking and data-gathering was the focus of the Congressional hearing?

On 14 July, the U.S. president’s digital campaign strategist Brad Parscale boasted on Twitter about the campaign’s ‘biggest data haul’ on supporters and prospective voters. That was done with the same nasty spying technology, software apps. The Republicans are not alone here. The campaign of the Democratic front-runner has its own equivalent. In fact, an article published by the MIT Technology Review on 21 June said that across the globe, politicians are using apps to organise support, manipulate supporters and attract new voters. Many are using the particular app developed for the Indian prime minister in his last campaign — which ‘was pushed through official government channels and collected large amounts of data for years through opaque phone access requests.’ To be perfectly clear, electioneering software used ‘just like a one-way tool of propaganda’ is also being used to govern India.

The Trump campaign app seeks permission from those who install it for — among other startling invasions of privacy — confirming identity and searching for user accounts on devices; reading, writing or deleting data on devices; getting into USB storage; preventing the device from sleeping.
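Such claims are checkable. As a rough do-it-yourself illustration (not a method described by the MIT Technology Review authors), a reader with a copy of an app’s installation file and the aapt tool from the Android SDK build tools could list every permission the app asks for with a few lines of Python; the file name below is a placeholder.

```python
# Rough sketch: list the permissions an Android app requests, assuming the
# Android SDK build tools (which provide the `aapt` command) are installed
# and you have the app's .apk file to hand. The file name is a placeholder.
import subprocess

APK_PATH = "campaign-app.apk"  # hypothetical; substitute any .apk you want to inspect

# `aapt dump permissions <apk>` prints the permissions declared in the app's manifest.
output = subprocess.run(
    ["aapt", "dump", "permissions", APK_PATH],
    capture_output=True, text=True, check=True,
).stdout

# Keep only the permission lines: account access, storage, wake lock and so on.
for line in output.splitlines():
    if "permission" in line:
        print(line.strip())
```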

The authors of the piece, Jacob Gursky and Samuel Woolley, say: ‘As researchers studying the intersection of technology and propaganda, we understand that political groups tend to lag behind the commercial ad industry. But when they catch up, the consequences to truth and civil discourse can be devastating.’

How strange that there has apparently not been the smallest whisper about any of this in connection with the politicians’ heroic interrogations of Big Tech leaders this week … or is it, really?

5. Is poetry all we will have left for comfort?

Society is being hurt by these technologies and practices in ways that go deep and acquire subtle dimensions, inexpressible except in poetry — as in these lines from the 20th-century poet William Empson:

Slowly the poison the whole blood stream fills …

.

… It is not your system or clear sight that mills

Down small to the consequence a life requires;

Slowly the poison the whole blood stream fills.

‘Missing Dates’

Or there are the 1992 predictions of the late Leonard Cohen, in a song last quoted here a few months ago, in a different context just as apt:

… There’ll be the breaking of the ancient

Western code

Your private life will suddenly explode …

.

… Give me absolute control
Over every living soul …

‘The Future’

 

Big Tech dangers we are not talking about — especially how the theft of our personal data is opening the way to future subjugation and control at the scale of masses, not just individuals

 


This week marks the first anniversary of an attempt by the Wikipedia co-founder Larry Sanger to organise a social media strike. It did not attract the support it deserved. That was largely because mainstream media — including nearly all the best-known newspaper sites in the UK and US — declined to publicise it, indeed did not mention it at all, even though the BBC and the online version of The Daily Mail — two of the most-frequented news sites in the English-speaking world — ran reports about the plan and call to action. This site outlined the probable reason why: ‘Mystery solved? Famous newspapers that ignored the Social Media Strike of 2019 have agreed to accept regular payments of millions of dollars from Facebook.’

Grassroots tweeting and similar advertisements by the general public could — conceivably — have made up for the media silence. They did not. One reason why — probably outweighing all the others — is that in this ironic Information Age, we seem increasingly less able to absorb information and assess the reliability of its sources, especially when it is about risks and threats to our safety. 

We have to find new ways of establishing credibility. What could be better than handing out tools that let people run their own tests of any assertion? Read side by side, the two public-interest comments below show how helpful this can be — in the context of Big Tech’s siphoning of our personal data, the subject of innumerable posts here (this one, for example). The first is a statement about a trend to which this site has been trying to draw attention since 2011. The second offers a way to assess its substance. They are recent, actual comments made a few days apart by readers on the Financial Times site (readers whose real-life identities pG does not know), on different Big Tech-related articles there.

The highlights are pG’s:

PiotrG

Big/Bug Tech relies on an ever-expanding expropriation of personal data to make money. Its endgame is to turn people into trained monkeys whose behaviour can be predicted and ultimately directed towards specific objectives. For now the objectives are commercial, but they could become social or political. That is the problem, and it won’t be solved by antitrust laws alone. 

However, concentration and excessive market power make the problem worse. A world where 10 people own information on 3-6 billion “customers” and manage to kill market competition, avoid supervision and remove internal (stakeholder) control is a perfect Orwellian nightmare.

Frederick E.

Anyone who thinks that it is easy to escape surveillance should install a pfSense router, or some equivalent. Set up firewall logging, even better deep packet inspection (including https via certificate installation). Then set up the privacy settings on your devices the way you think is the maximum you need. Use them for a week as you would normally. Then check the firewall logs on your router. You will be surprised to see how much info facebook, google, microsoft and apple collect, from simple DNS, or DNS via https, to much more detailed surveillance.

An average home with a computer, three phones and a tablet, plus roku (boy does that thing spy) and smart speakers leaks an inordinate amount of data even when privacy settings are set to max. 

Privacy settings are a false sense of security. Smart devices as well as computers are now designed to spy at the core OS level, no firewall, or app/plugin is going to stop it – these are higher level process that cannot override core level ones. 

The only way to block stuff at home is at the router level, but when you do so, many things simply stop working. The deal is be spied on, or don’t use it. This goes for free stuff or paid.

Unfortunately, where there should be discussion of the specifics that @PiotrG and @Frederick E. are trying to protect us from, there is precisely none.
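In the spirit of handing out tools, here is a minimal sketch, in Python, of the sort of check Frederick E. describes, for readers whose router can export its DNS query log to a text file. The file name, the assumed log format (one or more hostnames somewhere in each line) and the short list of tracker-like substrings are illustrative assumptions only, not a procedure pG can vouch for; adjust them to whatever your own equipment produces.

```python
# Minimal sketch: tally the hostnames your household's devices looked up,
# from a DNS query log exported from your router. The log file name, the
# assumption that each entry contains the queried hostname in plain text,
# and the sample list of tracker-like substrings are all illustrative.
import re
from collections import Counter

LOG_FILE = "dns-queries.log"   # hypothetical export; substitute your router's log
TRACKER_HINTS = (              # illustrative substrings only, far from complete
    "doubleclick", "graph.facebook", "app-measurement",
    "scorecardresearch", "adsystem", "telemetry",
)

# Crude pattern for dotted names; it will also sweep up stray IP addresses.
hostname = re.compile(r"\b([a-z0-9-]+(?:\.[a-z0-9-]+)+)\b", re.IGNORECASE)
counts = Counter()

with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        for name in hostname.findall(line):
            counts[name.lower()] += 1

# Most-queried names first, flagging anything that looks like a tracking endpoint.
for domain, n in counts.most_common(40):
    flag = "  <- possible tracker" if any(h in domain for h in TRACKER_HINTS) else ""
    print(f"{n:6d}  {domain}{flag}")
```

If Frederick E. is right, even this crude tally will show a startling share of a household’s queries going to measurement and advertising endpoints.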

How do you discover the actual origin of a bug — such as ‘surveillance capitalism’ — when its history as a feature is all but lost? Could a better Wikipedia help?

 


Bug or feature? (at the edge of the flower’s dark centre) The shadowy face of advertising aimed at us as individuals — ‘micro-targeting’ —  makes it hard to learn about its idealistic beginnings. Photograph: Jacki Holland

If Google did not invent the phenomenon now being referred to as ‘surveillance capitalism,’ who did? Part 2 (part 1 is here)

Is the digital revolution moving too fast for academics to keep up? You could call the question mission-critical, because the (possibly) inadvertent errors of some scholars are influencing regulators and law-makers drawing up rules for the digital economy. It follows naturally from the last post here on pG, which pointed out that Shoshana Zuboff is wrong to declare that Google pioneered the milking of unsuspecting internet users for our data: the routine extraction of intimate information about us and our lives, in a system that she and various others have for some time been calling surveillance capitalism.

In a piece for Fast Company a year ago, Professor Zuboff said that Google invented it … 

… more than a decade ago when it discovered that the “data exhaust” clogging its servers could be combined with analytics to produce predictions of user behavior. At that time, the key action of interest was whether a user might click on an ad.

But the Pepsi market research project using electronic beepers, described here last month, had identical, advertising-oriented aims and contained almost all the components of today’s commercial surveillance, even if its technological tools were less sophisticated and intrusive.

It was completed in 1996, two years before Google was even incorporated, in September 1998. Pepsi deployed the beepers to track, survey and assemble detailed taste and preference profiles of 50,000 young customers, stretching far beyond their soft drink consumption, and traded this information with twenty other companies — which also used the data to design more powerful, less resistible advertisements for their products, through what eventually came to be known as micro-targeting. The project was attacked, by outside observers sounding exactly like today’s critics of commercial surveillance, for intruding on the privacy of its participants.

The secretiveness of Google and the other search technology giants — as well as of virtually every other company doing business on the internet — about their tools and data-milking methods has warranted their deeply negative portrayal in the media and in scholarship. But most of the critics condemning them either failed to explain — or simply did not know — that the unwanted bug these practices collectively constitute was lauded almost a quarter-century ago as a benign, intensely desirable prospective feature of the internet as it began to take off.

In a 1997 interview published in Wired, Tim Berners-Lee actually made such a prediction after a question from his interviewer, Evan Schwartz, about whether the advertising already starting to saturate the web was one of the undesirable, ‘unexpected turns’ that his creation had taken:

… Marketing on the Web is going to be a lot more humane than marketing in traditional mass media because it’s possible to treat people individually. If I’m interested in buying a canoe, I can say, “Hey guys, I want a canoe.” I can float that onto the Web. Then other people can satisfy their own interests by selling me a canoe, not to mention inviting me to a newsgroup about good places to go canoeing.

Doesn’t that raise privacy issues?

My gut feeling is that one should be able to negotiate how one’s information is used …

Of course there is no such negotiation — an innovation we must hope can soon be regulated into existence — but you will not find those early thoughts of TB-L by typing ‘Tim Berners-Lee advertising’ into a search box. Search results reflect the marked shift in his opinion on the subject, encapsulated in a Google listing of a 2019 article in Fast Company in which he spoke out against ‘advertising-based revenue models that commercially reward clickbait’, characterising these as one of ‘the web’s 3 biggest cancers’.
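The nearest the web has come to that negotiation is arguably the ‘Do Not Track’ preference, a single request header that a browser or a script can send and that sites remain free to ignore. A minimal Python sketch, with a placeholder URL, shows how one-sided the ‘conversation’ is:

```python
# Minimal sketch of the web's nearest gesture at negotiation over data use:
# the Do Not Track header, which servers are under no obligation to honour.
# Requires the third-party `requests` package; the URL is a placeholder.
import requests

response = requests.get("https://example.com/", headers={"DNT": "1"})
print(response.status_code)
# Nothing in the reply tells you whether the preference was respected,
# which is precisely the absence of negotiation described above.
```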

This pG site’s record of that chat with TB-L is a printout sitting in a cardboard box in a garage. Its neighbours in the same file include notes from unpublished conversations with Silicon Valley executives the following year, in which they described rapidly evolving marketing methods closely coupled to product design and improvement, tailored, like Pepsi’s, to swift feedback from customers — only far more frequent, and well on the way to becoming today’s nonstop monitoring. As senior marketing managers at a small software startup — selling a system used by employees of other companies — said:

Part of our beta process that we’re doing right now is we have customers actually giving us feedback on the product as we develop the product […] and the engineering is responding to it and we go back to the customers [… who are … ] essentially involved in our design with us. […] These people and what do they want is really what the issue is […] and we’re just monitoring it all the time. All the feedback goes into a web form and then, boom! gets screened like two or three times a day by product marketing and engineering to figure out […] major product changes or directions … 

Hunting for such information about Silicon Valley marketing in the Wikipedia entry titled ‘Surveillance Capitalism’ would do no good, even for those readers who make it past the excruciating, jargon-laden first sentence on its background — ‘the intensification of connection and monitoring online with spaces of social life becoming open to saturation by corporate actors, directed at the making of profit and/or the regulation of action.’ 

Neither is there any allusion to it, except in the vaguest terms, in the online encyclopedia’s pages devoted to ‘Digital Marketing’ or ‘Interactive Marketing.’ Under ‘Surveillance Capitalism,’** there is no trace of optimistic early expectations for it, such as TB-L’s enthusiasm for ‘humane marketing’ — although the entry does make a passing reference to ‘self-optimization (Quantified Self)’ as an instance of the ‘various advantages for individuals and society’ of ‘increased data collection’; the Quantified Self page itself describes ‘a community of users and makers of self-tracking tools who share an interest in “self-knowledge through numbers.”’

How could Professor Zuboff have missed a prototype as large and substantial as the Pepsi project, also unmentioned in any of those Wiki pages dedicated to high-tech marketing? She would have had to do field research in Silicon Valley to avoid her error of crediting Apple and Apple alone for capitalism tailored to the needs and predilections of individuals — passing over that swiftly in a strictly abstract, generalised passage of The Age of Surveillance Capitalism (2019) about an evolutionary trend, beginning with Henry Ford, for companies to serve ‘the actual mentalities and demands of people.’ 

At one juncture in her book, she seems to be saying that she could not do any immersive research on the topic because Google, all too predictably, would not permit this: ‘[O]ne is hard-pressed to imagine a Drucker equivalent [Peter Drucker, the still unsurpassed Austrian-born theorist on business management] freely roaming the scene and scribbling in the hallways.’ But Professor Zuboff plainly did not know enough to realise that Google was not the place to look for answers about the origins of the relentless commercial surveillance loop, or that there were rich sources of information about its practices elsewhere in Silicon Valley.

How can scholars — and all the reviewers of her book who failed to correct her misattribution of its invention to Google — avoid this sort of mistake in future? How can such defects in our collective treasure-house of knowledge be repaired?

Could an even better version of the collaborative, still indispensable, still miraculously non-commercial Wikipedia be the answer? Larry Sanger, its co-founder, who long ago left that institution, has been hatching plans for an improvement he calls the Encyclosphere, which he outlined in a lecture at a conference in Amsterdam last autumn. He has generously promised to answer questions about it from almost any competent writer, and perhaps will tackle the pair of questions in the header of this post.

** in a download on 3 March 2020