The True Power Of Algorithms And Social Media

Well-known big data expert Martin Hilbert has closely followed the digital effects of the coronavirus.

“The growth of digitization was always exponential, but the pandemic put it on steroids,” says Martin Hilbert, a German researcher at the University of California-Davis and author of the first study to calculate how much information there is in the world.

Recognized for having warned of Cambridge Analytica’s intervention in Donald Trump’s campaign a year before the scandal broke, Hilbert has closely followed the digital effects of the coronavirus, and his conclusions are not very optimistic: people do not know how to deal with the power of algorithms, governments do not know how to use them in favor of the population, and companies are reluctant to adopt effective ethical guidelines.

This should be of particular concern to Latin America, “because Latin Americans are world leaders in the use of social networks,” warns Hilbert, who spent a decade in Chile as a UN official and today lives 40 minutes from Silicon Valley.

In conversation with BBC Mundo, he shared his view that new technologies pose challenges of such scope that they could require an evolution of human consciousness.


What did the pandemic change in our relationship with digital networks?

It had two simultaneous effects: it made us more sensitive to the toxic side effects of digitization, but it also accelerated our dependence on it.

And it also confirmed that the second effect is stronger than the first: being aware that this addiction hurts us does not change our behavior at all.

Why do you think that happens?

You have to understand how this digital economy works, where the scarce resource to be exploited is human attention.

The business of the technology giants (Google, Apple, Facebook, Amazon) is not offering you commercial ads: it is modifying your behavior to optimize the performance of those ads.

And they can do it because algorithms, by processing millions of data points about your behavior, learn to predict it much better than you can.

But to get to know you and influence you they need to keep you connected. Therefore, the so-called persuasive technologies fulfill their mission when you are addicted and cannot divert your attention from them.
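As a rough illustration of that mechanism, here is a minimal sketch of the kind of click-prediction model the attention economy rests on. Everything in it (the features, the data, the model size) is hypothetical; real systems learn from billions of data points.

```python
# Hypothetical sketch of engagement prediction, not any platform's real code.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one moment a post was shown to a user:
# [hour_of_day, seconds_on_previous_post, posts_scrolled_so_far]
X = np.array([
    [22, 45, 120],
    [ 9,  5,  10],
    [23, 60, 200],
    [14,  8,  30],
    [21, 50, 150],
    [10,  3,  15],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the user clicked, 0 = scrolled past

model = LogisticRegression().fit(X, y)

# A platform would rank candidate posts by predicted click probability
# and serve whichever one is most likely to hold your attention.
p = model.predict_proba([[22, 40, 100]])[0, 1]
print(f"predicted click probability: {p:.2f}")
```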

From what the documentary The Social Dilemma shows, many in Silicon Valley regret creating such technologies.

Here in Silicon Valley the buzzword is human downgrading, which sums up the following idea: we spent so long debating when technology would surpass our capabilities that we lost sight of the fact that machines were focusing on learning our weaknesses.

Winning a game against the chess champion was the least of it. Their true source of power has been to lead us into our narcissism, our anger, anxiety, envy, gullibility and, of course, our lust.

In other words, persuasive technologies work to keep you in the weakest version of yourself so that you spend your time on the networks.

Some critics have said that the documentary is alarmist, or that it lacks the historical perspective to understand that these phenomena are not so new.

Like all documentaries, it leaves important aspects out, such as the intersection between technology and inequality. But I do not perceive exaggerated alarmism in it.

Critics of these arguments have a stock phrase: “these things have always existed.” And it’s true. In fact, Facebook did a study to show that the social network influences political polarization less than our innate attachment to like-minded friends.

But the same study showed that Facebook’s recommendation algorithms double that effect, and therein lies the problem. Eggs and meat have always raised cholesterol, but in recent decades we’ve boosted that effect with an avalanche of ice cream and potato chips. Do you see what I mean?

The thing is, it is difficult for us to admit the effect on ourselves.

We are very concerned to see our children glued to a digital pacifier all day, unable to concentrate, or absorbing unrealistic expectations about their bodies. But we tell ourselves we are different: we use the networks for fun, nobody puts a pacifier in our mouths.

But it is a fact that digital technology also provides us with essential services. The pandemic has made it quite clear.

Without a doubt, and there is no turning back from that.

The growth of digitization has always been exponential. Twenty-five years ago we did not have cell phones, and today that is impossible to imagine. But the pandemic put it on steroids.

Although it also showed its limitations, didn’t it?

I have been teaching online for years and I know its disadvantages very well, but now every primary school teacher has discovered that with 7-year-olds it doesn’t work at all.

It also accelerated the privacy debate, which used to be more theoretical: what does Siri listen to, what does Alexa listen to? You no longer even need Siri: every house is connected and the whole family is at home.

The other day some innocent dad put on his pants while my 6-year-old daughter was in class, and of course, some 30 families were watching a half-naked man in the background. Or suddenly you hear a couple fighting in the next room. Even if you don’t mean to, you are inside other people’s houses all the time now.

Digital surveillance tools have been another difficult problem to deal with. We used to resist them, but this year we took great interest, for example, in South Korea’s tracking app, which was the most invasive of all.

Sure, people are almost angry that the tracking apps still don’t work. And the problem is not technological, it is political. Two serious problems became evident here.

First, people still don’t quite understand what big companies do with their data. In March, when Apple and Google announced their app, everyone said “oh no, now Apple and Google want to collect that data from us.” Apple and Google have always collected that data!

And second, governments were unable to react to the simplest technological challenge.

The private companies told them: “we’ll provide the data, you develop the app.” And in half a year the governments did not manage to coordinate or push a political dialogue, because they do not have the language for this; they cannot sell a message.

In the United States, they could not even reach an agreement within each state. And a little over a month ago, Apple and Google said “enough, governments are so incapable that we are going to take this matter into our own hands.”

Since the law prevents them from installing the app without state permission, they will integrate the function into the phone’s operating system and each user will decide whether to enable it. This shows that the private sector’s advantage in this matter is insurmountable today.

At least in the West.

Exactly. This did work in Asian countries that had learned from SARS (although the South Korean app, as you said, publishes more data than necessary) and in authoritarian countries where there is simply no discussion.

In China, they even check your credit card details to monitor your quarantine. For the government, the emergency justifies it, period. But Western governments don’t know what to justify because they don’t even know how to raise the discussion. It is worrying.

To be fair, the dispute between privacy and security has never been easy to raise in democratic countries.

I grew up in a divided Germany where a surveillance state controlled half the country, so I care a lot about my privacy. But I worry more about my 70-year-old mother who still lives in Germany, right?

The real problem, as Yuval Harari has warned, is preventing emergency measures from persisting once normalcy returns.

The pandemic also allowed us to verify that fake news multiplies even when there are no political interests behind it.

Yes, the problem here is the economy of attention itself.

The algorithm does not care where the fake news takes you; it serves it simply to trap you, because fake news fits our cognitive biases better than the truth does. Two of them in particular.

Which ones?

One is confirmation bias: if a piece of information reinforces your opinion, it has been found to be 90% less likely to be identified as false. And even if you are told it was false, you are 70% more likely to remember it later as true.

The other is the novelty bias.

We evolved to pay disproportionate attention to whatever is new. Whoever didn’t got eaten by the tiger. And the truth is not usually new; you’ve heard it before.

Thus, fake news gets 20 times more retweets on the networks than real news.

And the advantage of algorithms is that these behaviors are predictable: we are irrational, but predictably irrational.

So if you were an algorithm programmed to attract clicks, what would you do to excel at your job during a pandemic? Prioritize alarming messages that blame religious minorities for spreading the virus, or the gringo army for bringing it to Wuhan.

You would do very well on the famous “neutral metrics”, which supposedly privilege “what we like” but actually maximize profits at the expense of polarization.
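As a toy illustration of such “neutral metrics”: the ranking rule below never inspects content at all, yet whatever provokes the strongest reaction rises to the top. The posts and engagement counts are invented.

```python
# Toy model of a content-blind engagement metric. All numbers invented.
posts = [
    {"text": "Calm, factual update on the outbreak", "clicks": 40,  "shares": 5,  "comments": 10},
    {"text": "ALARMING: minority blamed for virus",  "clicks": 300, "shares": 90, "comments": 250},
    {"text": "Conspiracy theory about the outbreak", "clicks": 220, "shares": 70, "comments": 180},
]

def engagement(post):
    # The metric never reads the text; it only counts reactions,
    # weighting the signals that keep people on the platform longest.
    return post["clicks"] + 3 * post["shares"] + 2 * post["comments"]

# The feed is simply sorted by engagement, highest first: alarming
# content wins without anyone ever deciding to promote it.
for post in sorted(posts, key=engagement, reverse=True):
    print(engagement(post), "-", post["text"])
```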

And at the expense of our emotional well-being, as many psychologists believe.

Last year, an experimental study concluded that deactivating Facebook for a month increases your subjective well-being as much as earning an additional $30,000 a year.

The explosion of the networks has coincided with measurable increases in anxiety, in the perception of loneliness, in adolescent suicide, especially among girls…

Let us understand that these algorithms do not affect everyone equally: they look for the weakest among us and hit them below the belt.

If a 14-year-old girl searches YouTube for a video about how to eat better, the algorithm will soon recommend a video about anorexia, because experience tells it that this will catch her attention. And if she is vulnerable, she will go down that path.

YouTube’s two billion users watch an average of 40 minutes of video a day, 70% of which is recommended by the algorithms. About 5% of those recommendations are absurd conspiracy theories: that the Earth is flat, that vaccines are dangerous, and so on.

In numbers, two out of every seven people in the world see an average of 1.5 minutes of conspiracy theories a day. It is almost a global religion! I don’t think so many Christians pray daily.
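Those figures follow directly from the numbers he just cited; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the figures cited above.
minutes_per_day   = 40    # average YouTube viewing time per user
recommended_share = 0.70  # fraction of viewing driven by recommendations
conspiracy_share  = 0.05  # fraction of recommendations that are conspiracies

print(minutes_per_day * recommended_share * conspiracy_share)
# 1.4 minutes per user per day, roughly the 1.5 quoted; and two billion
# users out of some seven billion people is the "two out of every seven".
```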

If you watch those kinds of videos, you start to doubt everything. And if the truth of the facts no longer counts, neither do the rules. That is why creating confusion is so important to populist or authoritarian leaders.

There are also absurd theories about digital manipulation, or about the hidden intentions Mark Zuckerberg supposedly has.

Sure, some believe that Zuckerberg studies our personalities and then goes down to a dark basement with the Joker and Darth Vader to plan how to take over the world.

But it doesn’t work like that. There aren’t even many psychologists in Silicon Valley.

Persuasive technologies find our weaknesses by trial and error, with blind A/B testing: they put out two versions of a message and see which one produces more clicks.

This is how they discovered that posts expressing outrage get twice as many likes and almost three times as many shares.

This blind method, in fact, rediscovered strategies that appeared years ago in casino design manuals, designed to make you addicted.
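A minimal sketch of the blind A/B test he describes: serve two variants at random, count clicks, promote whichever measures higher. The variant names and click rates below are hypothetical.

```python
import random

# Blind A/B test sketch: the system never models *why* a variant works,
# it only measures which one gets more clicks. Rates are hypothetical.
variants = {
    "A_neutral_wording":  {"true_rate": 0.05, "shown": 0, "clicks": 0},
    "B_outraged_wording": {"true_rate": 0.12, "shown": 0, "clicks": 0},
}

random.seed(0)
for _ in range(10_000):
    name = random.choice(list(variants))   # pick a variant blindly
    v = variants[name]
    v["shown"] += 1
    if random.random() < v["true_rate"]:   # simulate the user's reaction
        v["clicks"] += 1

# Promote whichever variant produced the higher click-through rate,
# without ever needing to understand the psychology it exploited.
for name, v in variants.items():
    print(name, f"{v['clicks'] / v['shown']:.3f}")
```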

Another very effective emotion is fear, because reacting to the fear of the tribe is also an evolutionary lesson.

When a buffalo senses the fear of another member of the herd, it runs without knowing why.

And you didn’t check your stock of toilet paper in February because you had seen news about the supply chain, but because of collective fear. Not for nothing were #toiletpapergate and #toiletpapercrisis the top trends on Twitter at the end of February.

To say something in their favor, some social networks filtered out a lot of fake news about the pandemic, in an unprecedented effort on their part.

Yes. Amazon removed many products that lied about the virus, and Facebook posted warnings on millions of posts that did the same.

When people saw those warning labels, 95% of the time they did not click through to the story. But how much is that worth, if the vast majority only read headlines? People don’t bother to read the content of 70% of the links they retweet.

And that 5% who were not deterred by the warning are already two million people.

Avaaz, a nonprofit organization, reported that 104 false claims about the virus were viewed more than 117 million times on Facebook during March, and that the company took up to 22 days to issue the warnings.

And that is content in English; in other languages they filter much less.

This should concern Latin Americans, because they are world leaders in the use of social networks: 3.5 hours a day on average.

Are you in favor of states regulating the use of these technologies more strongly?

Of course! It is true that effective regulations often arrive only once an industry has reached a certain scale, because risks are hard to anticipate.

When the automobile appeared, one of the arguments in its favor was that it would make cities healthier by reducing horse droppings.

But we cannot leave the rules of society in the hands of a few engineers. Where should the data be stored? What kind of data? For what purpose can they be used?

We have to get these nerdy questions out of the programmers’ garage, because we are breaking various social agreements with the power of this deregulated economy.

In a recent article you propose that, just as we have modified behaviors to take care of the virus, we should adopt “digital disinfection” measures.

Of course. People know that after eight hours of screen work they have had enough. But they go into their bedroom, take two breaths and pull out their cell phone anyway; they can’t help it anymore.

And as much as Apple and Google add features to help you monitor your digital consumption, their technologies are still designed for addiction.

You say “no, I’m just going to check a notification.” And 40 minutes later: “oh, what happened to me!” It turns out that your paleolithic brain is no match for supercomputers running machine learning on your will.

Hence the more existential questions about what the human will is in this context.

Schopenhauer said it long ago: “Man can do what he wills, but he cannot will what he wills.” So that is not new either.

What is new is that artificial minds, discovering the biases of that will, have begun to compete with it for our conscious perception of reality.

This may sound crazy, but I think we are putting a new evolutionary pressure on Homo sapiens.

Because if we want to coexist with machines that process information much better than we do, humanity will have to produce a leap in consciousness. That is, to evolve towards forms of consciousness less attached to information processes.

And do you think we can induce such an evolution?

Don’t expect such enlightenment from an academic, but I will tell you something that surprised me a lot.

I recently analyzed, with data from Facebook, what people have done in their spare time during the pandemic in Latin America. And the only activity that skyrocketed compared to normal times was meditation, both in people’s interests and in app downloads.

Women, who always led the use of these apps, doubled their use. And men tripled theirs, reaching the level women had in 2019.

And what does meditation seek? To disconnect you even from your own thoughts.

And persuasive technologies function as extensions of our minds, of that inner dialogue that we cannot stop.

Like when you are angry and argue in your head with the other person, telling them everything they have done wrong to you and everything they fail to understand.

These technologies connect to that internal dialogue, externalize it through social networks and there they grab you.

So it’s interesting that meditation, a possible antidote to all that, is what has exploded. 15% of Facebook users in Latin America already show interest in it.

Would it be contradictory that they look for the antidote in the same networks?

It is not about turning off the internet. That is not even an option if you want to be part of this society’s evolution.

In Silicon Valley, in fact, there is also quite a bit of interest in meditation. They are experimenting with sound frequencies, looking for those whose effects on the brain help induce detachment and rest from this constant connection.

And do you know what they found? That certain frequencies produce the same effect on your brain as a campfire.

Again, there is nothing new here, spiritual traditions sought that effect thousands of years ago to clear your head.

Because if you look inside yourself, there is not a single opinion in your head; there is a committee arguing. And when people sense again that they need to free themselves from those voices, it is because they discover they are the same ones that circulate on Facebook.

Now, detaching yourself from those voices is not as simple as downloading an app; that is a tall order.

But in the past, to even try, you had to leave your job and your family and go into the mountains to find a teacher. The idea is that now you can light your bonfire at 7 in the evening in your own apartment.

So you think the way out will not be to turn away from technology, but to fight it with more technology.

And this is because technology is normatively neutral: it can scale problems or solutions, depending on how we use it.

Now, I speak of this interest in meditation as a positive sign, but it is not going to be the magic potion.

Just as a baby discovers the contours of his body by biting his finger, we are just getting to know the contours of our digitally expanded minds.

But I am convinced that learning to distance yourself from these technologies will mean, in the long term, learning to distance yourself from yourself.

An egomaniac without the internet, in that sense, would not be part of the solution.

Is the idea of a chip in the brain compatible with what you are proposing? Or are they mutually exclusive?

If that chip keeps you at the neural level that processes information and translates it into reasoning and emotions, it would not serve that purpose.

Consciousness operates on another neural level; it seems to arise in a circuit called the DMN, the default mode network, which basically connects the entire brain.

And I imagine that with a neural interface it can also be stimulated, but it will be as always in technology: the first application will be for commerce and the second for pornography.

In the meantime, what hygiene measures could you recommend?

Wash your mind often for at least 20 seconds, especially after a mindless scroll on social media during which you were exposed to algorithms specialized in lowering your defenses.

Cover your mouth when you are about to spread hateful or unread content. And take responsibility for being a potential vector of contagion in this collective problem.
