Enlightened Algorithms


I was recently struck by overlapping themes I encountered from some very different authors. Specifically, I’ve been enjoying Sapiens by Yuval Noah Harari, a work that surveys the history of humankind and the forces that led to our dominance as a species. One section explores religion as a unifier of disparate groups of people. Harari discusses many of the major religions, and his description of Buddhism caught my attention in relation to some of our current discussions in technology.

“He [Siddhārtha Gautama, on whose teachings Buddhism is founded], resolved to investigate suffering on his own until he found a method for complete liberation. He spent six years meditating on the essence, causes, and cures for human anguish. In the end he came to the realization that suffering is not caused by ill-fortune, by social injustice, or by divine whims. Rather, suffering is caused by the behavior patterns of one’s own mind.

Gautama’s insight was that no matter what the mind experiences, it usually reacts with craving, and craving always involves dissatisfaction. When the mind experiences something distasteful, it craves to be rid of the irritation. When the mind experiences something pleasant, it craves that the pleasure will remain and will intensify.”
Sapiens, Yuval Noah Harari

Contrast this with recent press coverage about YouTube’s recommendation algorithm. Zeynep Tufekci’s New York Times Op-Ed details her experience with YouTube’s recommendation engine. She began by watching videos from each side of the political spectrum, and in both cases ended up viewing increasingly radicalized content from the autoplay suggestions.

“Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.”
YouTube, the Great Radicalizer, Zeynep Tufekci
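To make the escalation dynamic Tufekci describes a bit more concrete, here is a minimal toy sketch in Python of a greedy recommender. It assumes, purely for illustration, that predicted watch time peaks for content slightly more intense than a viewer’s current baseline; every name and number, and the intensity model itself, are invented, and this is a stylized thought experiment rather than a claim about how YouTube’s actual system works.

```python
# Toy illustration (not YouTube's actual system): a greedy recommender
# that always serves the item with the highest predicted watch time can
# "up the stakes" if watch time is assumed to rise with intensity
# relative to the viewer's current baseline. Everything here is invented.

def predicted_watch_time(user_level: float, item_intensity: float) -> float:
    """Invented model: watch time is longest for content slightly more
    intense than what the viewer is used to."""
    overshoot = item_intensity - user_level
    if 0 <= overshoot <= 0.5:
        return 1.0 + overshoot   # engaging: a notch beyond the baseline
    return 0.5                   # too familiar or too extreme: shorter watch

def recommend(user_level: float, catalog: list) -> float:
    # Greedy choice: the single item maximizing predicted watch time.
    return max(catalog, key=lambda item: predicted_watch_time(user_level, item))

catalog = [round(0.1 * i, 1) for i in range(11)]  # intensities 0.0 .. 1.0
level = 0.0
for step in range(4):
    pick = recommend(level, catalog)
    print(f"step {step}: baseline {level:.1f} -> recommends intensity {pick:.1f}")
    level = pick  # watching the video shifts the viewer's baseline upward
```

Under those hypothetical assumptions, the recommendations ratchet from intensity 0.0 to 0.5 to 1.0 within a few steps; nothing in the objective ever asks whether the viewer is better off, only whether they keep watching.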

While the posited relationship between viewership and revenue certainly exists, Google countered claims about its prioritization of watchtime in a recent article in The Guardian.

“YouTube told me that its recommendation system had evolved since Chaslot [a former Google employee] worked at the company and now ‘goes beyond optimising for watchtime’. The company said that in 2016 it started taking into account user ‘satisfaction’, by using surveys, for example, or looking at how many ‘likes’ a video received, to ‘ensure people were satisfied with what they were viewing’.”
‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth, Paul Lewis
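Again purely as illustration, here is a toy re-ranking sketch of what “going beyond optimising for watchtime” might look like: blending a satisfaction signal (say, likes or survey scores) into the ranking objective changes which video surfaces first. The weights, scores, field names, and titles are all invented assumptions, not a description of Google’s implementation.

```python
# Hypothetical re-ranking sketch: mixing a satisfaction signal into a
# watch-time objective. All numbers and names are invented for illustration.

videos = [
    # (title, predicted watch minutes, satisfaction score in [0, 1])
    ("incendiary take", 12.0, 0.30),
    ("balanced explainer", 7.0, 0.85),
    ("light entertainment", 9.0, 0.60),
]

def score(video, satisfaction_weight: float) -> float:
    _, watch_minutes, satisfaction = video
    watch_term = watch_minutes / 15.0  # normalize against a nominal 15-min cap
    return (1 - satisfaction_weight) * watch_term + satisfaction_weight * satisfaction

for w in (0.0, 0.5):
    best = max(videos, key=lambda v: score(v, w))
    print(f"satisfaction weight {w}: top recommendation = {best[0]!r}")
```

With a weight of 0.0 (watch time alone), the incendiary video wins; at 0.5, the balanced explainer does. The point isn’t the specific numbers, which are made up, but that what an algorithm “concludes” about us depends entirely on what its authors told it to optimize.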

In reading these descriptions of YouTube, I was struck by how much a centuries-old dictum intersects with problems we are currently grappling with. Does the algorithm’s search to maximize satisfaction promote increasingly escalating content, and is that pattern in fact tied to the long-observed human tendency to crave? Is it possible that perceived algorithmic problems are actually problems of human nature, or is it more likely that they’re simply artificially introduced byproducts?

Algorithms are often viewed as mathematical formulas that are objective and logical. And while algorithms are indeed programmatic in their outputs, at their core they are representations of human thinking. They reflect our understanding of the world; this includes some amazing innovations, but it also includes our biases and human fallibilities.

If, as Gautama postulated, human nature is to crave more, then without careful consideration to the contrary our algorithms will reflect this (both by virtue of being programmed by humans and by virtue of being deployed on behalf of companies that profit from satisfying humans). If an algorithm seeks to maximize user contentment (and as a byproduct, maximize engagement with its product), is there any way to reconcile this definition of satisfaction with the Buddhist view that satisfaction comes from renouncing craving?

In a world where attachment rates are an important measure of success, it’s difficult to foresee companies “renouncing craving and attachment” in their customers or in the algorithms that serve those customers. However, it did seem worth highlighting this concept as one way that biases become embedded in algorithms.

I don’t claim to have any answers on the topic, but I did find the intersection of these ideas worth exploring. If you have any thoughts on how to reach algorithmic enlightenment (or whether that’s even a goal worth striving towards), I’d love to hear them!

Disclosure: Google is a RedMonk client.

One comment

  1. Hi

    The point you highlighted about “Gautama from Sapiens” is an interesting one.

    The main question is what one craves: is it for the good of the public at large, something that will benefit every living being, or is it self-serving? This difference is the most important, IMO.

    Craving as such is neither good nor bad. If we don’t crave or desire good things that benefit humanity, then the world will be filled with bad/evil things.

    Therefore, algorithms too are neither good nor bad, but the people who code them can make them work for better or worse, depending on the philosophical bent of their minds.

    Hence, it would help if the people working on algorithms read good books to gain a better philosophical understanding of human behaviour, which they could use in giving shape to algorithms that will help humanity.

    Negating craving/desire itself is not the right solution, and it is not possible to remove it from human beings.
