This was supposed to be the first of my post-MW2013 posts, wrapping up the conference and starting to pull together the underlying themes and ideas that emerged for me during the week in Portland. And then I arrived in Texas, and Google brought me God in the form of a thousand search results; an unexpected kind of creeping normalcy that painted the world a different colour to the way I usually see it. So I thought I’d detour from plan and spend a couple of minutes thinking about some of the immediate questions that this raised for me.
When I search on almost any issue back in Australia, I don’t get a lot of religion in my results. I don’t know whether that’s because we are a largely secular country, or because the profile of people who otherwise “look” like me to Google (ie, using a Mac, female) in Australia aren’t very religious. So to look into the Google mirror and find the results reflected back at me so distorted from their usual bent, and from my sense of self, was somewhat jarring. In The Filter Bubble Eli Pariser comments that “from within the bubble, it’s nearly impossible to see how biased it is.” (10) What I think I’ve experienced here in Texas is my first real opportunity to look at the search results presented to me from beyond my normal cocooned perspective. The sensation grates.
It also raises interesting questions for me about the idea of a canon of knowledge, because these kinds of personalised results surely make it much harder to form an agreed-upon body of ideas or frame of reference for history, much less the present. (This is something that Danny Birchall and I touched lightly on in our Museums and the Web paper about curating the digital world.)
I am not even close to making sense of what these kinds of distorting lenses mean for us in museums, but here are some first thoughts. We are all now at the mercy of these kinds of algorithms, whether we work in museums or not, because they are in some ways a necessary strategy for coping with the scale of non-hierarchical online information. The information we have access to, then, is rarely going to be everything we might need or want. This is ok, I think. With so much information in the world, it has surely always been the case that some of it is esteemed over the rest.
But the perniciousness of algorithmic invisibility, the fact that it is next to impossible to understand how and where those non-neutral search engine biases come from, seems to present museums with both a challenge and an opportunity. By declaring where our own knowledge is drawn from, as it relates to the collection or otherwise, and acknowledging when it is missing or known to be incomplete, we gain the opportunity to act as a different voice within the digital space, with different interests and values. Such an approach could also enable those who use our resources to contribute other perspectives, because they would know where our conclusions came from.
What do you think? Is this an issue that museums need to tackle, and if so, how should it affect their approach to knowledge sharing and gathering?