And so Netflix has gone through several different algorithms over the years… They’re using Pragmatic Chaos now. Pragmatic Chaos is, like all of Netflix’s algorithms, trying to do the same thing. It’s trying to get a grasp on you, on the firmware inside the human skull, so that it can recommend what movie you might want to watch next — which is a very, very difficult problem. But the difficulty of the problem and the fact that we don’t really quite have it down doesn’t take away from the effects Pragmatic Chaos has. Pragmatic Chaos, like all Netflix algorithms, determines, in the end, 60 percent of what movies end up being rented. So one piece of code with one idea about you is responsible for 60 percent of those movies.
But what if you could rate those movies before they get made? Wouldn’t that be handy? Well, a few data scientists from the U.K. are in Hollywood, and they have “story algorithms” — a company called Epagogix. And you can run your script through there, and they can tell you, quantifiably, that that’s a 30 million dollar movie or a 200 million dollar movie. And the thing is that this isn’t Google. This isn’t information. These aren’t financial stats; this is culture. And what you see here, or what you don’t really see normally, is that these are the physics of culture. And if these algorithms, like the algorithms on Wall Street, just crashed one day and went awry, how would we know? What would it look like?
[Transcript of Kevin Slavin’s TED talk, How algorithms shape our world]
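Before going further, it may help to see the general shape of what Slavin describes in miniature. The sketch below is not Netflix’s Pragmatic Chaos (which is proprietary); it is a toy preference scorer, and every title, genre, and number in it is invented for illustration. It shows only the core move: build “one idea about you” from past ratings, then rank everything unseen against it.

```python
# A toy sketch of preference-based recommendation, invented for
# illustration; real systems like Netflix's are far more sophisticated.

ratings = {"Heist Night": 5, "Slow Meadow": 2}  # your past ratings

genres = {
    "Heist Night": {"thriller": 1.0, "crime": 0.8},
    "Slow Meadow": {"drama": 1.0},
    "Iron Alibi":  {"thriller": 0.9, "crime": 0.9},  # unseen candidate
    "Quiet Hours": {"drama": 0.9},                   # unseen candidate
}

# "One idea about you": a single taste vector inferred from your ratings,
# centred on 3 stars so that low ratings push genre weights negative.
taste = {}
for title, stars in ratings.items():
    for genre, weight in genres[title].items():
        taste[genre] = taste.get(genre, 0.0) + (stars - 3) * weight

def predict(title):
    """Score a candidate as the dot product of its genres with taste."""
    return sum(taste.get(g, 0.0) * w for g, w in genres[title].items())

unseen = [title for title in genres if title not in ratings]
print(max(unseen, key=predict))  # -> Iron Alibi
```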
When Pythagoras discovered that “things are numbers and numbers are things,” he forged a connection between the material world and mathematics. His insight “that there is something about the real world that is intelligible in mathematical terms, and perhaps only in mathematical terms,” was, according to Charles Van Doren, “one of the great advances in the history of human thought.” (p35) Are we at a similar precipice with culture and information, now that algorithms shape our world and our culture? When non-human actors can significantly affect the information we receive and the choices we make? And if so, what does that mean for museums, for culture, for the way we understand our world?
This is a question I sometimes find myself grappling with, although I’m not sure I have any answers. The more I learn, the less it seems I know. But I’d like to take a couple of minutes to consider one aspect of the relationship between the algorithm and the museum: the question of authority.
In 2009, Clay Shirky wrote a speculative post on the idea of algorithmic authority, in which he proposed that algorithms are increasingly treated as authoritative and, indeed, that the nature of authority itself is up for grabs. He writes:
Algorithmic authority is the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying “Trust this because you trust me.” This model of authority differs from personal or institutional authority, and has, I think, three critical characteristics.
These characteristics are, firstly, that algorithmic authority “takes in material from multiple sources, which sources themselves are not universally vetted for their trustworthiness, and it combines those sources in a way that doesn’t rely on any human manager to sign off on the results before they are published”; secondly, that the algorithm “produces good results”, which people consequently come to trust; and thirdly, that people learn that not only does the algorithm produce good results, the results are also trusted by others in their group. At that point, Shirky argues, the algorithm has become authoritative.
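To make Shirky’s description a little more concrete, here is a deliberately minimal sketch of an “unmanaged process of extracting value from diverse, untrustworthy sources”. Nothing in it comes from any real system; the sources, claims, and weighting scheme are all invented for illustration.

```python
# A minimal, invented sketch of algorithmic authority in Shirky's sense:
# unvetted sources are combined, and a result is published with no human
# signing off on it. All names and numbers here are hypothetical.

from collections import defaultdict

# Several unvetted sources each report an answer to the same question.
reports = {
    "source_a": "1912",
    "source_b": "1912",
    "source_c": "1921",  # disagrees; no human checks which source is right
}

# No individual source is trusted; initially all are weighted equally.
weights = defaultdict(lambda: 1.0)

def aggregate(reports, weights):
    """Publish whichever claim carries the most combined source weight."""
    tally = defaultdict(float)
    for source, claim in reports.items():
        tally[claim] += weights[source]
    return max(tally, key=tally.get)

answer = aggregate(reports, weights)  # -> "1912", published unreviewed

# Shirky's second characteristic -- the process "produces good results"
# that people come to trust -- could feed back into the weights, so that
# sources agreeing with published answers gain influence over time:
for source, claim in reports.items():
    weights[source] *= 1.1 if claim == answer else 0.9
```

The point of the sketch is only that the output’s credibility comes from the aggregation procedure, not from any vetted individual source, which is exactly the shift in authority Shirky is describing.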
Although I’ve previously touched on the idea of algorithmic curating, I’d never explicitly considered its relationship to authority and trust, so I decided to look a little deeper into these issues. Were there any commonalities between the kind of authority and trust placed in museums and the kind placed in algorithms?
Philosopher Judith Simon refers to Shirky’s post in an article considering trust and knowledge on the Web in relation to Wikipedia. She argues that people trust in Wikipedia’s openness and transparency, rather than in the individual authors. She writes that “the reason why people trust the content of Wikipedia is that they trust the processes of Wikipedia. It is a form of procedural trust, not a trust in persons.”
I think this procedural trust is also what we put in the algorithm. Blogger Adrian Chan puts it this way:
The algorithm generally may invoke the authority of data, information sourcing, math, and scientific technique. Those are claims on authority based in the faith we put in science (actually, math, and specifically, probabilities). That’s the authority of the algorithm — not of any one algorithmic suggestion in particular, but of the algorithmic operation in general.
We do not necessarily trust in the particularities; we trust the processes. Is the trust that people have in museums similarly procedural? Do we trust in the process of museum work, rather than in the individual results or in the people who work in museums?
We make myriad assumptions about the people working in museums: that they are well trained and professional; that they are experts in their particular domain. We implicitly trust the people, then, and the work that they do. However, in many cases, such as when we visit an exhibition, we don’t know who the specific people are who worked on it. We don’t necessarily know who the curator was, or who wrote the exhibition text. The lack of visibility inherent in many current museum processes obscures the individual and their work. The museum qua museum, therefore, acts as a mechanism for credibility: it purports to bring the best people together, and the people who work within it are known to be trained professionals who use scientific methods, regardless of whether we know specifically who they are or what their particular training is. Ergo, the trust we have in the museum must also be a form of procedural trust. (Amy Whitaker concurs: “Institutional trust is founded on process, on the belief that there are proper channels and decision-making mechanisms and an absence of conflict of interest.” p32)
Shirky also speaks to the social element involved in authority. He explains:
Authority… performs a dual function; looking to authorities is a way of increasing the likelihood of being right, and of reducing the penalty for being wrong. An authoritative source isn’t just a source you trust; it’s a source you and other members of your reference group trust together. This is the non-lawyer’s version of “due diligence”; it’s impossible to be right all the time, but it’s much better to be wrong on good authority than otherwise, because if you’re wrong on good authority, it’s not your fault.
Authority isn’t derived just from whether we can trust a source of information, but also from whether we can be confident in passing that information along, putting our name to the judgement that it is trustworthy. We shortcut the process of personal judgement by using known systems that are likely to give us accurate and trustworthy results; results we can share in good faith. We trust museums because museums are perceived to be trustworthy.
Do the film companies that run their scripts through Epagogix’s algorithms do so because it helps them shortcut the process of personal judgement too? Can algorithms provide better insight, or just safer insight? Eli Pariser says this of Netflix’s algorithms:
The problem with [the algorithm] is that while it’s very good at predicting what movies you’ll like — generally it’s under one star off — it’s conservative. It would rather be right and show you a movie that you’ll rate a four, than show you a movie that has a 50% chance of being a five and a 50% chance of being a one. Human curators are often more likely to take these kinds of risks.
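Pariser’s point is, at bottom, about expected value and variance. A small worked example (all numbers invented) shows why a conservative recommender behaves this way:

```python
# Two candidate recommendations for the same viewer (invented numbers).
# The "safe" pick is a near-certain four; the "risky" pick is equally
# likely to be rated a five or a one, as in Pariser's example.

candidates = {
    "safe_pick":  [4.0],        # effectively one certain outcome
    "risky_pick": [5.0, 1.0],   # two equally likely outcomes
}

def expected_rating(outcomes):
    """Mean rating, assuming each listed outcome is equally likely."""
    return sum(outcomes) / len(outcomes)

def variance(outcomes):
    mu = expected_rating(outcomes)
    return sum((r - mu) ** 2 for r in outcomes) / len(outcomes)

# Even a risk-neutral ranking by expected rating prefers the safe pick
# here (4.0 versus 0.5 * 5 + 0.5 * 1 = 3.0). A risk-averse scorer goes
# further and penalises uncertainty itself, so it would keep preferring
# safe picks even if the expected ratings were tied:
risk_aversion = 0.5  # hypothetical penalty weight on variance

def score(outcomes):
    return expected_rating(outcomes) - risk_aversion * variance(outcomes)

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # -> safe_pick (score 4.0 beats 3.0 - 0.5 * 4.0 = 1.0)
```

Human curators, in Pariser’s terms, are the ones willing to recommend the film whose variance a scorer like this would punish.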
Right now, museums that do not embrace technology and technology-driven solutions are often perceived as risk averse, because embracing them would challenge existing practice. I wonder whether, with time, it will instead be the institutions that choose not to let data drive their choices that come to be perceived as the risk-takers. This is a profession tied so strongly to notions of connoisseurship; what relationship will the museum have with the algorithm (internally, or with external algorithms like those that drive Google and other sites)? I don’t have any answers yet, but I think it’s worth considering that museums no longer share authority only with the user-generated world; authority is also being shared with an algorithmically-shaped one.
What do you think?