Personalization Algorithms, and Why They Don't Understand Us Creative Types
“Personalization is a process of gathering, storing and analyzing information about site visitors and delivering the right information to each visitor at the right time.” – Algorithms for Web Personalization
In 2011, Eli Pariser uncovered the filter bubble. In front of our eyes, Google and Facebook have become geniuses at giving us what we “want,” based on algorithms that guess our interests and concerns. Today, nearly every digital news outlet, search engine and social media app engages in an “invisible, algorithmic editing of the web.”
“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” – Mark Zuckerberg, Facebook.
According to Pariser, the idea is one of relevance – digital platforms racing to deliver the entertainment, opinions and news most relevant to you as the reader. Personal relevance has become the new watchword for internet communication companies. The “race for relevance” on the web has involved efforts to predict what consumers are going to click, watch, read or buy based on, for example, their previous web history (Netflix movie suggestions) or what their closest friends do and share online (Facebook).
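To make the mechanics concrete, here is a minimal sketch of history-based relevance scoring in Python – my own illustration of the general idea, not any platform’s actual algorithm. It assumes items come tagged with topics and treats a user’s “profile” as nothing more than a tally of the topics they have clicked on before.

```python
# Toy sketch of history-based relevance scoring (an illustration only,
# not any platform's real system). Items are tagged with topics; the
# user's profile is just a count of topics from their past clicks.
from collections import Counter

def relevance(item_topics, click_history):
    """Score an item by how often its topics appear in the user's past clicks."""
    profile = Counter(topic for item in click_history for topic in item)
    return sum(profile[topic] for topic in item_topics)

history = [{"film", "drama"}, {"film", "comedy"}, {"drama"}]
candidates = {
    "new_drama": {"film", "drama"},
    "physics_story": {"science", "physics"},
}

ranked = sorted(candidates, key=lambda name: relevance(candidates[name], history),
                reverse=True)
print(ranked)  # ['new_drama', 'physics_story'] -- the familiar topics win
```

Even this toy version exposes the built-in bias: whatever resembles the click history outranks everything unfamiliar, no matter how worthwhile the unfamiliar item might be.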
This concept is not unlike that of relevance to the reader in news values, or the criteria of newsworthiness that journalists often use to select and produce news from a wide range of potential stories. In a study of British journalists specializing in science, medicine and related subjects, Hansen (1994) found that these journalists “deploy conventional news-value criteria, but emphasize in particular the importance of a ‘relevance to the reader’ criterion in the selection of science news” (p. 111). A study of American science journalists published in 1979 also revealed a trend in science journalism toward consumer-oriented coverage of science “…so that readers can answer the question ‘what does it mean to me?’” (Dennis & McCartney, 1979, p. 13).
So it appears that Facebook, Google, Netflix and even Twitter are not alone in valuing and chasing relevance to their consumers. The functions of personalization filters embedded in these sites range from “simply making the presentation more pleasing to anticipating the needs of a user and providing customized and relevant information to the user” (Algorithms for Web Personalization). But just because journalists have used “relevance to the reader” criteria to evaluate the news value of potential stories for centuries does not mean that this news value serves the best interests of digital media audiences, or even that it was a sound assumption to begin with.
“In two important ways, personalized filters can upset this cognitive balance between strengthening our existing ideas and acquiring new ones. First, the filter bubble surrounds us with ideas with which we’re already familiar (and already agree), making us overconfident in our mental frameworks. Second, it removes from our environment some of the key prompts that make us want to learn.” – Eli Pariser, The Filter Bubble, p. 84
Personalization algorithms embedded in digital media search engines, news aggregators and social media platforms are not just neutral mathematical formulas for making our searches and news consumption more pleasurable and efficient. These algorithms have consequences for our views of the world: they may get in the way of creativity and innovation. According to Pariser, “the filter bubble isn’t tuned for a diversity of ideas or of people.”
Pariser quotes Siva Vaidhyanathan’s The Googlization of Everything: “Learning is by definition an encounter with what you don’t know, what you haven’t thought of, what you couldn’t conceive, and what you never understood or entertained as possible. It’s an encounter with what’s other – even with otherness as such. The kind of filter that Google interposes between an Internet searcher and what a search yields shields the searcher from such radical encounters.”
“The filter bubble has dramatically changed the informational physics that determines which ideas we come into contact with. And the new, personalized Web may no longer be as well suited for creative discovery as it once was.” – Eli Pariser, The Filter Bubble, p. 103
Essentially, personalization filters amplify confirmation bias by presenting us with ideas and topics Google and Facebook have already figured out we “like,” based on signals including what we click, where we live, who our friends are, and many more. (Confirmation bias happens in science too, when a scientist looks only for data that confirm a previous theory or desired conclusion.) In fact, personalization algorithms are designed, if indirectly, to amplify confirmation bias: bringing you ideas, opinions and news that confirm your prior attitudes and beliefs is the goal.
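The amplification is easy to reproduce in miniature. The simulation below is a hedged sketch under my own assumptions – not Facebook’s or Google’s actual pipeline – showing how a filter that ranks by past clicks, fed by a user who mostly clicks what they already agree with, narrows the feed round after round.

```python
# Minimal simulation of the confirmation-bias feedback loop (an
# illustrative sketch, not any platform's real pipeline). The filter
# shows what matched past clicks; the user mostly clicks what matches
# existing interests; each click reinforces the profile.
import random
from collections import Counter

random.seed(0)
TOPICS = ["politics_left", "politics_right", "science", "sports", "arts"]
user_interest = {"politics_left"}          # what the user already agrees with
profile = Counter({t: 1 for t in TOPICS})  # the recommender starts out balanced

for round_no in range(5):
    # Show the three topics the profile scores highest: the "personalized" feed.
    feed = [topic for topic, _ in profile.most_common(3)]
    for topic in feed:
        # Clicks mostly confirm prior interests, with a small chance of straying.
        if topic in user_interest or random.random() < 0.1:
            profile[topic] += 1
    print(round_no, feed)
```

With five topics available, the feed keeps recycling the same three, led by the one the user already agreed with; sports and arts never surface at all.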
But this is where I believe one of the faultiest assumptions of personalization algorithms is most readily apparent. Think about it: personalization filters assume you prefer information that confirms what you already know, believe or enjoy. Even Pariser reflects this idea when he writes, “[c]onsuming information that conforms to our ideas of the world is easy and pleasurable; consuming information that challenges us to think in new ways or question our assumptions is frustrating and difficult” (p. 88).
But is this always true? I recently conducted interviews with science communicators for a PhD project, prompting each communicator to tell me what news values they consider important as they translate science research into news for their audiences[1]. While a majority of those I interviewed did mention personal relevance to the reader as an important consideration when producing science news, the exceptions to this rule were the most interesting. A handful of science communicators seemed to consider this tactic counterproductive – even unethical – when the goal is to inspire broad-mindedness and an interest in science among their readers.
According to Shalom H. Schwartz’s universal typology of human values, people’s fundamental goals in life can be explained by several motivational dimensions, or value orientations. One of these is a scale running from conservation/traditionalism values to openness-to-change values. People at the conservation/traditionalism end of the scale tend to value conformity, respect for tradition and security, while people at the other end value curiosity, freedom, exploration, excitement and choosing their own goals in life.
Without going into any research on the topic, it would seem that catering to people’s respect for what they already know, and to their preference for ideas that conform to their pre-existing worldviews, is a tactic that would primarily engage people at the conservation/traditionalism end of this particular value orientation. While this might be a reasonable assumption for some demographics, for some people some of the time, it is probably a terrible assumption for people who value curiosity, open-mindedness and exploration of diverse ideas. People like me, I would hope.
So not only do personalization algorithms reduce my potential for creative thought, according to Pariser, but they also wrongly assume that consuming self-confirming information is something I value in my life. And if they get that wrong, then what is the point?
I think that the people who study and create personalization algorithms need to take a critical eye to the assumptions they make about people’s values and interests, people’s worldviews and concerns. In keeping with one of Pariser’s arguments, could incorporating more fuzzy logic and “drift” into personalization algorithms, as sketched below, better suit more creative, open-to-experience individuals? Whether the personalization filter is ethical at all is a different story (and one that doesn’t look too promising). But on top of ethical considerations, personalization algorithms might just plain and simple be getting our fundamental values, especially our orientation toward openness to change, wrong.
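As a thought experiment, here is what “drift” might look like in code – a minimal sketch under my own assumptions, where the `openness` parameter and function names are hypothetical illustrations rather than any real system’s API. The idea is to make the exploration rate itself a modeled, per-user value instead of a constant baked in for everyone.

```python
# Hypothetical sketch of "drift": a per-user openness parameter decides
# how often the recommender deliberately steps outside the familiar profile.
import random

def recommend(candidates, profile_score, openness=0.3):
    """With probability `openness`, surface an unfamiliar item; otherwise exploit.

    candidates    -- list of items to choose from
    profile_score -- function scoring an item's similarity to the user's history
    openness      -- hypothetical per-user openness-to-change estimate in [0, 1]
    """
    if random.random() < openness:
        # Drift: sample from the quarter of items the profile scores LOWEST,
        # i.e. the unfamiliar tail the filter bubble would normally hide.
        tail = sorted(candidates, key=profile_score)[: max(1, len(candidates) // 4)]
        return random.choice(tail)
    # Default personalization: the single most familiar, top-scoring item.
    return max(candidates, key=profile_score)
```

A curiosity-driven reader might sit at openness=0.6 and a conservation-oriented one at 0.1; either way, the assumption about what a person values becomes explicit and adjustable rather than silently imposed on everyone.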
Dennis, E. E., & McCartney, J. (1979). Science journalists on metropolitan dailies. The Journal of Environmental Education, 10, 9-15.
Hansen, A. (1994). Journalistic practices and science reporting in the British press. Public Understanding of Science, 3(2), 111-134. doi: 10.1088/0963-6625/3/2/001
[1] This data is currently unpublished but is intended for conference and/or journal publication.