Issue 52

Privacy in a digital world

Digital privacy is important, but involves potentially costly trade-offs: unless you're willing to cede some privacy, you'll miss out on a seemingly ever-growing number of services which depend on your data to function. The Snowden revelations and subsequent concerns about the amount of data we freely give away to both corporations and governments may have shifted the landscape slightly toward more privacy, but we are still some way from knowing what the right level of privacy actually is.

At least consumers are now aware of the privacy trade-off, even if most just don't care: abuses of user data are regularly reported in the media, yet judging by their actions, the vast majority of users seem to place a low value on privacy. For a quick and dirty proof, just take a look at Facebook's user count and revenue, both of which continue to grow despite the numerous, well-publicised exploitations of its own users' privacy.

But at least with Facebook people are free to leave and sign up to a more privacy-friendly alternative if they so choose, or even eschew social media altogether. When governments abuse people's privacy it's not so simple, with the United States a major offender, having built a database containing every American's face:

It also emerged in June that the FBI has a larger database of over 640 million faces, compiled from states' driver licenses, that it also allows its agents to search against. That database is seemingly run by an internal FBI unit called Facial Analysis, Comparison and Evaluation (FACE), though a GAO report appeared to indicate that the database is routinely accessed by cops as well as the Feds, with the database being questioned more than 390,000 times since 2011.

The policies covering use of that database are still unknown, as are the systems the federal government uses to carry out facial recognition. What is known is that several companies have contracts with the government for facial recognition, including Amazon. Again, the details of those contracts are unknown.

As the article states, technology in this area has outpaced civil rights laws, and it's not as though a driver's licence or state-certified ID, used to compile the original database, is easy to do without. Worse, local authorities have already sold the data, meaning it's now a part of both public and private databases.

If the United States is doing it, you can bet your bottom dollar that other, less privacy-friendly nations are doing the same. As the FT reported last week, China is using "emotion recognition... to identify criminal suspects":

“Using video footage, emotion recognition technology can rapidly identify criminal suspects by analysing their mental state . . . to prevent illegal acts including terrorism and smuggling,” said Li Xiaoyu, a policing expert and party cadre from the public security bureau in Altay city in Xinjiang. “We’ve already started using it.”

The problem is, it's not very good at what it purports to achieve (like most AI):

Companies around the world, including Amazon, Microsoft and Google, are all developing emotion recognition, but scientists say the technology does not work very well. “This technology is still a bit of a gimmick and is unlikely to be rolled out on a large scale in the next 3-5 years,” said Ge Jia, an influential Beijing-based technology blogger.

Something not working well has never stopped a government from persisting to the bitter end, especially when it just might help shore up control of its people, for example by oppressing minority groups (this tech has been rolled out in Xinjiang, where most of China's Muslim Uyghurs live). In the meantime, governments will keep collecting data that will be stored forever, and it's not clear how, even with strong new privacy laws, this can ever be unwound.

Except, perhaps, with obfuscation, which, if done on a large enough scale, will render "big data" irrelevant:

At its most abstract, obfuscation is the production of noise modeled on an existing signal in order to make a collection of data more ambiguous, confusing, harder to exploit, more difficult to act on, and therefore less valuable. Obfuscation assumes that the signal can be spotted in some way and adds a plethora of related, similar, and pertinent signals — a crowd which an individual can mix, mingle, and, if only for a short time, hide.

There is real utility in an obfuscation approach, whether that utility lies in bolstering an existing strong privacy system, in covering up some specific action, in making things marginally harder for an adversary, or even in the “mere gesture” of registering our discontent and refusal.
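The "noise modeled on an existing signal" idea can be sketched in a few lines of Python. This is a toy illustration in the spirit of decoy-query tools such as TrackMeNot; the decoy list and function names are invented for the example, not taken from any actual tool:

```python
import random

# Toy illustration of obfuscation: bury one genuine search query in a
# crowd of plausible decoys, so an observer logging the batch cannot
# tell which query reflects the user's real interest.
DECOYS = [
    "weather tomorrow", "cheap flights to bali", "pasta recipe",
    "local news", "movie session times", "premier league scores",
]

def obfuscate(real_query, n_decoys=5):
    """Return the real query shuffled in among n_decoys decoy queries."""
    batch = random.sample(DECOYS, n_decoys) + [real_query]
    random.shuffle(batch)
    return batch

batch = obfuscate("flu symptoms")
```

Each extra decoy lowers an observer's confidence that any given query is genuine: with five decoys per real query, a naive guess picks the real one only one time in six.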

There are numerous products that help with obfuscation, for example by instructing Firefox to block browser fingerprinting, or by installing browser add-ons such as Go Rando, which "randomly chooses your emotional 'reactions' on Facebook, interfering with their [sic] emotional profiling and analysis".
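Go Rando's approach, replacing the signal entirely rather than drowning it in decoys, is even simpler. Here's a minimal Python sketch of that behaviour, based only on the quoted description (it is not the add-on's actual code):

```python
import random
from collections import Counter

# Facebook's reaction types (as of 2019).
REACTIONS = ["like", "love", "haha", "wow", "sad", "angry"]

def random_reaction(intended):
    """Ignore the user's intended reaction and return one chosen
    uniformly at random, so the recorded reaction carries no
    information about the user's actual emotional response."""
    return random.choice(REACTIONS)

# Over many interactions the recorded reactions approach a uniform
# distribution, regardless of what the user actually clicked.
counts = Counter(random_reaction("like") for _ in range(6000))
```

Any profiling model trained on these recorded reactions learns only the uniform distribution, which is exactly the point.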

Compared to opting out of services altogether, which, as mentioned, often comes with significant costs, obfuscating your digital activity is relatively cheap. If enough people do it - whether purposefully or passively through changing industry standards such as DNS over HTTPS - large datasets will be full of so much misleading data that they will effectively become worthless.

Enjoy the rest of this week's issue. Cheers,

— Justin


The bits

The Musk family are something else

In the "image of the week" below I have provided snippets of testimony from the SolarCity lawsuit starring Elon Musk's brother, Kimbal. The whole fiasco is quite incredible and investors, if they aren't already, should exercise extreme caution with any Musk-led business.

Learn more:

China is still winning the 5G race

Despite the bans imposed on Huawei and the ongoing trade dispute, China remains the undisputed leader in the 5G space. It's also diversifying its supply chains to become less dependent on US components in the future. As I wrote last week, if the openly anti-innovation House Financial Services Committee is any indication of the country as a whole, the United States' reign as disruptor in chief could be over.

Learn more:

Political advertising

Facebook and Twitter have taken contrasting positions here, with Facebook opting to 'police' political adverts and Twitter banning them outright. Neither policy will be easy to enforce but I much prefer Twitter's stance. The two CEOs' comments are telling. First the Zuck:

"Would we really block ads for important political issues like climate change or women's empowerment?" he asked. "Instead, I believe the better approach is to work to increase transparency. Ads on Facebook are already more transparent than anywhere else."

Now here's Dorsey's take:

"It's not credible for us to say: 'We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well...they can say whatever they want!'"

While neither platform has any obligation to allow freedom of speech, Facebook's selective enforcement of political adverts - effectively playing politics - bothers me far more than Twitter's "go elsewhere" stance.

Learn more:


Other bits of interest


Image of the week

The Musks borrow heavily against equity in their businesses and regularly receive margin calls. In that context, Elon Musk's unusual Twitter statements make a lot of sense, as they're designed to pump the stock price of one of his companies long enough for him, and his family, to restructure their loans.


This week's data breaches

I don't know what's worse: the fact that Gaggle exists, or that schools around the world pay sixty grand a year for the privilege of using it.

The breaches:

That's all for now. If you enjoyed this issue, feel free to share it via email.


Issue 52: Privacy in a digital world was compiled by Justin Pyvis and delivered on 05 November 2019. Join the conversation on the fediverse at Detrended.net.