Bringing Humanity Back to Technology

TRISTAN HARRIS AND THE CENTER FOR HUMANE TECHNOLOGY 

Tristan Harris is a former design ethicist at Google and the co-founder and president of the Center for Humane Technology. He has been called “the closest thing Silicon Valley has to a conscience.” Reflecting on his time at Google, he describes noticing that the motive there was to make the platform ever more addictive, and wondering whether anyone else was questioning whether this was healthy or sustainable for users. He drafted a proposal, A Call to Minimize Distraction & Respect Users’ Attention, and circulated it among his colleagues at Google, arguing for “ethical design” in their products. It gained traction for a while but ultimately produced no long-term change.

Since founding the Center for Humane Technology, he has given lectures, testimonies and, at this point, what I would call pleas to audiences of concerned citizens, hoping to enlighten the public about what he has experienced as an insider to the largely undisclosed operations of the technology industry, and why we should be frightened.

THE BUSINESS MODEL AND ITS EARLY DAYS

Computer scientist Jaron Lanier is one of the most fascinating people I have come across. He argues that social media and the internet are changing us as individuals on an extremely subtle scale, and that this subtlety is precisely why they are so effective. He is right in saying that younger generations have grown up with online connection as their primary medium of communication, sometimes preferred even over face-to-face interaction. That said, connection and communication online are rooted in manipulation by an outside party. As he puts it in a TED talk, “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them.” In other words, there is really no such thing as coincidence when our devices are driven by very specific algorithms.

In the 1990s, when internet platforms were emerging, there was a big discussion around whether or not to charge for access. To charge would mean to exclude, deepening the divide between socioeconomic groups. Making these platforms accessible to almost everybody, however, had unforeseen costs. For a company to be profitable while providing free access to its platforms, it had to sell advertising. At first, this was advertising in the original sense. But as the machines and the people behind them got better at their jobs, it became surveillance capitalism that gradually modifies the behavior of its users, no longer passive and no longer for the good of the masses.

Our attention became the product. Our daily choices and thoughts are slightly manipulated, slightly induced by whatever content the algorithm has chosen for us today: thoughts we might never have had, had we not opened our phones. But this change is so subtle that it does not affect our lives in a way we can measure or even really notice. Lanier puts it this way: it is “the gradual, slight, imperceptible change in your own behavior and perception that is the product”.

More on this in The Social Dilemma

“TIME WELL SPENT”

Humans have vulnerabilities. The sooner we acknowledge this, the closer we will be to understanding just how much our minds, our behavior and, most observably, our daily experiences are being modified over time by third parties. We have impulses, we are emotional, and we are becoming increasingly distracted.

“Time Well Spent” is a movement catalyzed by Tristan Harris, and it laid the foundation for what would become the Center for Humane Technology. He recognizes and articulates that, once we strip away most of the superficialities of today’s society, all we really have left is our time. And so much of that time is now spent on the internet. With the digital age, we are slowly transitioning into a digital society.

Now, this is not all negative. The digital age has brought innovation, ideas, and growth. But the side effects rival the benefits. Our phones have practically become attached to us: in our pockets, in our bags, in our hands, everywhere we go. In some sense, we could even be called accomplices to our own demise. Yet with the way the world is developing, it is nearly impossible to stay “connected” in this society without a smartphone. And this addiction has been normalized, because our phones are the most accessible medium we have for keeping up with an ever-changing world.

“A CALL TO MINIMIZE DISTRACTION AND RESPECT USERS’ ATTENTION”

In his presentation, Harris lists the human vulnerabilities that are strategically exploited by technology companies, most notably Google and Facebook. Harris, himself a former Google employee, acknowledges that technology was meant to bring ease and efficiency to our lives. Yet the industry has transformed into a black hole for our attention and time. He frequently calls it “the arms race to the bottom of the brain stem”: essentially, which big tech company can go deepest into human psychology to deceive us?

So, what are these vulnerabilities?

First, the way that we think. Thinking and reasoning are what set us apart from most other species. But the way we make decisions depends on how much time we are given to make them. If we react immediately and give in to impulse, we may not decide with the best judgment; given ample time to reason, we may change our minds completely. So when we interact with a bottomless, frictionless well of content, it is predictable that we will sometimes act against our own interest in the moment. Before picking up our phones after hearing a buzz, or refreshing our social media feed, there is no built-in moment of contemplation or risk analysis. Harris compares this habitual action to playing a slot machine: each time we do it, we are hoping to receive content that gives us temporary satisfaction, a hit of dopamine. But the rush only lasts so long.

Through these platforms, we are given intermittent variable rewards. Our brains have dopamine “pathways” that respond to different sorts of stimuli. Social stimuli are one example: social media features such as likes, reposts, friend requests and followers, through which we receive validation from peers. Through a process called long-term potentiation, regular use strengthens the association between refreshing our feeds or posting content and the short-term satisfaction of receiving agreeable content or feedback from friends. This pulls us into what Harris calls “short-term, dopamine-driven feedback loops”.
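
To make that reinforcement mechanism concrete, here is a minimal sketch of a variable-ratio reward schedule, the pattern behind those loops. It is purely illustrative, not drawn from Harris or any platform, and every number in it is an arbitrary assumption.

```python
import random

# Purely illustrative model of a variable-ratio reward schedule: each
# "refresh" pays off only sometimes, which is the schedule that conditions
# the strongest checking habits. All numbers here are arbitrary assumptions.

def simulate_checking(refreshes=1000, reward_probability=0.3, learning_rate=0.05):
    habit_strength = 0.1  # crude stand-in for the learned refresh-to-reward association
    rewards = 0
    for _ in range(refreshes):
        rewarded = random.random() < reward_probability  # a like, reply, or novel post
        if rewarded:
            rewards += 1
            # reinforcement: the association between refreshing and payoff grows
            habit_strength += learning_rate * (1 - habit_strength)
        else:
            # a miss barely weakens the habit, which is why the loop persists
            habit_strength -= 0.2 * learning_rate * habit_strength
    return rewards, habit_strength

rewards, strength = simulate_checking()
print(f"{rewards} rewarding refreshes out of 1000; habit strength is now {strength:.2f}")
```

Run it a few times: even though most refreshes pay off nothing, the habit strength climbs, which is the whole point of an unpredictable reward.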

Second, humans suffer chronically from FOMO, the fear of missing out, or, in better terms, loss aversion. It can be unnerving to be away from our phones or the internet for extended periods because we may miss an important email or news update. Understandable.

But imagine if users could be confident that they could disconnect from their virtual worlds more often without missing anything important. It may seem far-fetched, but the first step toward that kind of innovation is to recognize the issue and start brainstorming.

Last is our almost laughable inability to estimate how long anything online will take. When I click on a notification that pulls me back into my virtual world, my mind willfully underestimates how long I will spend there. What I forget to factor in is the addictive rabbit hole of content that will soon be one click away.

Of course, there are steps each of us can take to wean ourselves off this addiction and build better habits so we can be more productive in the lives we actually want. But this is not really an individual problem. The responsibility to interact healthily and conscientiously with a product built to work against our psychology should not fall on us alone.

“ALGORITHMS AND AMPLIFICATION: HOW SOCIAL MEDIA PLATFORMS’ DESIGN CHOICES SHAPE OUR DISCOURSE AND OUR MINDS”

Tristan Harris testified before the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law, voicing his concerns about the trajectory of technology platforms and their lasting impacts on humanity. He testified alongside Dr. Joan Donovan, a research director specializing in hate speech and disinformation on social media platforms. Working in such a desensitized and vast corner of these platforms, she speaks firsthand about watching hateful misinformation run rampant on the algorithms and how it threatens democracy. She describes the exponentially increasing power of big technology companies as “tech companies flying a plane with nowhere to land,” alluding to the fact that technology has been growing exponentially while our brains have hardly changed. That is a very volatile dynamic if it cannot be controlled.

In his testimony, Harris lays out a compelling cyclical phenomenon: polarization as a direct effect of social media and the internet. Given people’s short attention spans online, the posts that generate the most activity are those that are short and punchy. So when social discourse demands ideas that only grow more complicated, and we face character limits and the pressure to keep people engaged, we can only say a few words about an incredibly multifaceted and nuanced topic. That all but guarantees misinterpretation, because under these constraints it is nearly impossible to convey ideas to one another in the manner of a real conversation. Misinterpretation leads to rebuttal posts and emotional reactions, which drive up activity through reposts and outrage, which, as we have seen, is exactly what the algorithms thrive on.
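
As a toy illustration of that dynamic (my own hypothetical sketch, not any platform’s actual ranking code), consider a feed sorted purely by raw engagement: the short, outrage-provoking post wins by construction.

```python
from dataclasses import dataclass

# Hypothetical engagement-based ranking, not any real platform's algorithm:
# every reaction counts the same, whether thoughtful or furious.

@dataclass
class Post:
    text: str
    reshares: int
    replies: int
    angry_reactions: int

def engagement_score(post: Post) -> int:
    return post.reshares + post.replies + post.angry_reactions

posts = [
    Post("A 2,000-word nuanced explainer on the policy trade-offs", 40, 25, 5),
    Post("THEY are lying to you. Share this before it gets deleted!", 900, 300, 650),
]

# Sorting by raw engagement puts the short, outrage-provoking post on top.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), "-", post.text)
```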

Part of Dr. Donovan’s research is to find and study “fake news” on the internet. She makes the interesting connection that social media algorithms feed users repetitive content with similar keywords and ideas, so if someone sees a claim multiple times, or on different platforms, they are more likely to be persuaded by it, regardless of its validity. “Fake news” thrives on the algorithms because it usually contains more loaded keywords and generates intense feedback; whether in agreement or in outrage makes no difference. This is another symptom of polarization.
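
Here is an equally hypothetical sketch of the repetition effect she describes: a recommender that scores candidate posts by keyword overlap with what a user already engaged with will keep resurfacing the same claim, whether or not it is true.

```python
# Hypothetical sketch of keyword-driven recommendation: candidates are scored
# purely by how many keywords they share with posts the user already engaged
# with, so the same claim keeps coming back regardless of its validity.

def keywords(text):
    return {word.lower().strip(".,!?") for word in text.split() if len(word) > 4}

def recommend(history, candidates, top_n=2):
    seen = set().union(*(keywords(post) for post in history))
    # Higher overlap with familiar keywords means a higher ranking.
    return sorted(candidates, key=lambda c: len(keywords(c) & seen), reverse=True)[:top_n]

history = ["Secret documents reveal the election was rigged"]
candidates = [
    "Local garden club announces its spring schedule",
    "More documents surface about the rigged election",
    "Experts debunk rigged election claims in new documents",
]
print(recommend(history, candidates))
```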

Essentially, polarization is good for the algorithm but dangerous for humanity. We can no longer hear each other.

During this hearing, we also heard from heads of content/public policy of Facebook, YouTube, and Twitter. They all made similar claims. 

Facebook says it is trying to be more transparent with users about its algorithms and about the many controls users have for giving feedback, specifically around recommended content and even the ability to change how the timeline ranks content. In 2019 it launched the “Why Am I Seeing This?” feature to help users understand why certain posts appear, an inside look at how its content-ranking process works at the individual level.

Twitter points to its new “Responsible Machine Learning” initiative, through which it hopes to increase transparency about its practices and learn how to implement precautionary controls against undesirable effects of its technology on users. It is also tackling misinformation with “Birdwatch,” a pilot program that lets actual users step in and flag content they believe to be disinformation or harmful.

YouTube points to a new metric, the violative view rate (VVR), published in its Community Guidelines Enforcement Report, which estimates the proportion of views each quarter that come from content violating its guidelines; that percentage is supposedly decreasing as the company increases funding and research for combating harmful content. YouTube also launched the website How YouTube Works in 2020 in an attempt to increase transparency between the company and its users.

While their testimonies sound solid and positive, none really got to the core of the issue, which is the fundamental business model shared by all of these big tech companies. Alleviating the symptoms of the algorithms by filtering out bad content and helping users personalize their individual platforms is certainly a step in the right direction, but it frames this as an individual problem, which it is not. It puts the responsibility for managing addiction on the user rather than on the infrastructure, and it distracts us from the more important discussion of these platforms’ indisputably harmful effects on society.

REBUILDING THE INTERNET – A WORK IN PROGRESS 

At times this dilemma can feel hopeless, but we have to believe that a better internet made for the people can exist. Here are some loose steps to get there.  

1. Recognize the problem and start educating.

2. Public pressure, meaning we need to start a large-scale conversation, voicing our demands to policymakers and tech leaders.

3. Legislation and regulation.

4. More humane technology.

5. Sustainability.

The Dilemma 

There is an extreme imbalance of power between big technology corporations and their users. Correcting this asymmetry of information and power has to start from the top, from technology empires like Google, Facebook, and YouTube. Companies like these need to be held accountable for their effects on populations worldwide. We can start by acknowledging that technology is not neutral. It may not have emotions of its own, but algorithms are optimized toward a corporation’s own profit-oriented definition of success, one that does not necessarily align with the interests of the people. The business model therefore needs to change. Instead of making the goal to maximize screen time and online activity, imagine if technology optimized for our reflective brains rather than our lizard brains, and encouraged in-person connection rather than further isolating us behind our screens. Doing so would put people first. We have seen cases like Cambridge Analytica and Facebook’s role in the 2016 election, so we know these platforms continue to put user growth, monetary profit and commercial interests above the well-being and freedom of their users. What we need instead are common goals and values.

Regulation 

It has become apparent that big tech companies are unable to regulate themselves out of these problems; there is hardly an incentive to do so. There is little to no regulation or limitation in this field, because it is a relatively unprecedented sector of business and the immense power of these technology giants is intimidating to go up against.

Making platforms healthy for users to interact with will require concrete legislative work that reaches deep into the framework of these business models, covering consumer protection, non-discrimination law, fiduciary law, and “anti-monopoly” style measures. Introducing non-atomizing solutions would lay the groundwork for collective relief: if rights depended on each person’s individual situation (background, socioeconomic profile, etc.), then everyone would have to negotiate their rights one by one, once again putting too much responsibility on the individual. We need to push for a standard of privacy and respect owed by big technology companies to individual users. This would include precautionary approaches, so that companies must identify and internalize harms before a product is released to the public.

Complete equality and transparency may well not be feasible, but in that case, those holding the power should be expected to serve the public. The responsibility falls on those creating the algorithms to protect users when needed. The end goal should be sustainability, an internet for the people, because it must be possible to interact with these platforms in a healthy way.

Jaron’s Take 

Jaron Lanier offers the most even-handed take I have seen, which I think is the perfect note to conclude on. He clarifies that there are no “bad guys” in this dilemma, because ultimately the destruction of society is the destruction of the corporations as well. We are all human, and even those working behind the scenes at Google or Facebook can fall victim to the very mechanisms they helped create. He characterizes this power struggle as stupid rather than evil. Lanier’s hypothetical solution would be to have everybody pay for access to social media, the same way we pay for movies and streaming platforms like Netflix and Hulu. To some this would be a financial burden, but it might make us think twice about downloading applications or spending so much time online, and it would offer a more ethical way for these businesses to generate profit.

However, this approach is ultimately discriminatory towards families with lower incomes and could create new imbalances among communities. It is in no way ideal and there is a lot to reflect on and tweak. But again, these are just ideas and the conversation is only beginning. 

But of course, for now, this is all in theory…

To stay updated with the latest news regarding humane technology and confronting the power imbalance, check out the Center for Humane Technology and Your Undivided Attention, the associated podcast where co-hosts Tristan Harris and Aza Raskin discuss numerous solutions and ideas with leading thinkers around the world. 

More with Dr. Donovan on media manipulation and disinformation: 

https://just-infras.illinois.edu/speaker-series/joan-donovan/


Sources:

https://www.theverge.com/2018/1/17/16903844/time-well-spent-facebook-tristan-harris-mark-zuckerberg

https://www.nbcnews.com/tech/tech-news/facebook-living-breathing-crime-scene-says-one-former-manager-n837991

https://medium.com/what-to-build/dear-zuck-fd25ecb1aa5a

https://www.judiciary.senate.gov/meetings/algorithms-and-amplification-how-social-media-platforms-design-choices-shape-our-discourse-and-our-minds

https://www.humanetech.com/policy-principles

https://www.humanetech.com/policy-reforms

https://www.judiciary.senate.gov/imo/media/doc/Bickert%20Testimony.pdf

https://www.judiciary.senate.gov/imo/media/doc/Culbertson%20Testimony.pdf

https://www.judiciary.senate.gov/imo/media/doc/Veitch%20Testimony.pdf
