In response to the 2010 Edge Annual Question, “How Has The Internet Changed The Way You Think?”, classicist James O’Donnell proposed that his fingers have become part of his brain: “the sign of thinking is that I reach for the mouse and start ‘shaking it loose’… My eyes and hands have already learned to work together in new ways with my brain in a process of clicking, typing a couple of words, clicking, scanning, clicking again that really is a new way of thinking for me.
That finger work is unconscious. It just starts to happen. But it’s the way I can now tell thinking has begun as I begin working my way through an information world more tactile than ever before.”
I, too, often kind of “type” my thoughts into mid-air; a kinetic response to thinking. Is this just a direct response to the time I spend in front of a computer, thinking, and to the devices I use? Possibly. But it isn’t the only response to what O’Donnell terms “the living presence of networked information.” My greater response is far more encompassing than that. Because of my direct connection to my networks – to you – to people with shared interests and frames of reference, for whom my buy-in to the conversation is contributing interesting content or comment, the way I actually look at the world, the way I listen, and the attention I pay are all shifting. It is as though the network itself is a giant amorphous creature, and I am merely one eye that can see for it; a scout whose role is to look at the world for the information that will benefit the network itself, and bring it back.
When I last visited a museum, you were there with me. I carried you, reader, in my thoughts; in my pocket; in my devices. Though you had no eyes of your own, you stood behind my eyes when I looked at the objects; when I observed the space. Your presence changed my looking. How could I translate what was in front of my eyes and make it meaningful for you? My looking was looking on your behalf as much as my own. Simultaneously, I took note of what interested me and what might interest you. I tried to pay attention to the sorts of things you might want to know, so that if you asked questions, I’d have answers. I did so to have content for this blog; to have reasons for connection. I did so to ensure that I’d have something to contribute to our conversations; something to talk about.
My network, of which you are part, is shifting the way I understand the world. In one way, this is because much of the information that I encounter comes through you. You link to articles, you share news, you provide new perspectives in comments and discussion. You filter forward those things that you think are worth paying attention to, and in so doing shape the way I, as part of your network, understand the world.
But this is not the only way your presence is reshaping how I negotiate or interact with the world. It is actually changing the way I see, and hear. The sorts of Tweetable phrases and stories I now listen for, the photographs I take, the anecdotes I file away that might be of use or interest for this blog, they all shape the kind of attention I pay to the world. When I attend a conference without Twitter, I hear entirely different nuance in the presentations from what I hear when I am seeking to translate and Tweet the ideas. The looking that I do; the listening. It is all changed as a result of my interactions with the network; with you.
The perceptions that we have of the world shape the way we understand it, and new technologies lead to new perceptions. Perhaps my greatest shift in this direction came once I had an iPhone; once I began to carry in my pocket a device that would allow me to capture and share what I was seeing or hearing instantly via tools like Twitter – tools that were simultaneously personal and not directed at any single other person. This device, these capabilities altered the way I experienced the physical world. I no longer had to be stuck behind my desk to share content, to make connections; I could do so from the wild. This started changing what I saw by changing what I looked for.
I now see socially. I listen, not just for myself, but for what I can translate and share to my networks. I pay attention to the ideas that you, my network, are interested in, and in so doing, I encounter the world through that lens. The things I notice are not of interest to me alone. I notice those things that I think you would be interested in too, and I think of you when I am noticing them.
When I am in situ in the museum, I encounter the space through the framework of shared assumptions that my digital networks use. My network, who I am connected to, what their interests are, changes how I see and understand the museum and the world. It changes the way I encounter and read objects. It even changes where I go, because I attend events that will be of interest and relevance to the people in my network.
What are the implications of this? We know that the museum visitor does not encounter the object as a tabula rasa, a blank slate. He or she constructs meaning based on existing knowledge and past experiences. But if I’m right about this change in looking, if looking is now taking place socially, then there are new elements and influences at play as well.
What do you think? Are you aware of changes that connection to the network has made to how you look at and interpret the world? Is this any different from the way you encountered spaces or events prior to carrying a networked device on your person?
And so Netflix has gone through several different algorithms over the years… They’re using Pragmatic Chaos now. Pragmatic Chaos is, like all of Netflix algorithms, trying to do the same thing. It’s trying to get a grasp on you, on the firmware inside the human skull, so that it can recommend what movie you might want to watch next — which is a very, very difficult problem. But the difficulty of the problem and the fact that we don’t really quite have it down, it doesn’t take away from the effects Pragmatic Chaos has. Pragmatic Chaos, like all Netflix algorithms, determines, in the end, 60 percent of what movies end up being rented. So one piece of code with one idea about you is responsible for 60 percent of those movies.
But what if you could rate those movies before they get made? Wouldn’t that be handy? Well, a few data scientists from the U.K. are in Hollywood, and they have “story algorithms” — a company called Epagogix. And you can run your script through there, and they can tell you, quantifiably, that that’s a 30 million dollar movie or a 200 million dollar movie. And the thing is, is that this isn’t Google. This isn’t information. These aren’t financial stats; this is culture. And what you see here, or what you don’t really see normally, is that these are the physics of culture. And if these algorithms, like the algorithms on Wall Street, just crashed one day and went awry, how would we know? What would it look like?
[Transcript of How algorithms shape our world]
When Pythagoras discovered that “things are numbers and numbers are things,” he forged a connection between the material world and mathematics. His insight “that there is something about the real world that is intelligible in mathematical terms, and perhaps only in mathematical terms,” was, according to Charles Van Doren, “one of the great advances in the history of human thought.” (p35) Are we at a similar precipice with culture and information, when algorithms shape our world and culture? When non-human actors can significantly impact upon the information we receive, and the choices we make? And if so, what does that mean for museums, for culture, for the way we understand our world?
This is a question I sometimes find myself grappling with, although I’m not sure I have any answers. The more I learn, the less it seems I know. But I’d like to take a couple of minutes to consider one aspect of the relationship between the algorithm and the museum, being the question of authority.
In 2009, Clay Shirky wrote a speculative post on the idea of algorithmic authority, in which he proposed that algorithms are increasingly treated as authoritative and, indeed, that the nature of authority itself is up for grabs. He writes:
Algorithmic authority is the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying “Trust this because you trust me.” This model of authority differs from personal or institutional authority, and has, I think, three critical characteristics.
These characteristics are, firstly, that algorithmic authority “takes in material from multiple sources, which sources themselves are not universally vetted for their trustworthiness, and it combines those sources in a way that doesn’t rely on any human manager to sign off on the results before they are published”; that the algorithm “produces good results” which people consequently come to trust; and that, following these two processes, people learn that not only does the algorithm produce good results, the results are also trusted by others in their group. At that point, Shirky argues, the algorithm has transitioned to being authoritative.
Although I’ve previously touched on the idea of algorithmic curating, I’d never explicitly considered its relationship to authority and trust, so I decided to look a little deeper into these issues. Were there any commonalities between the type of authority and trust held by and in museums, and that held in algorithms?
Philosopher Judith Simon refers to Shirky’s post in an article considering trust and knowledge on the Web in relation to Wikipedia. She argues that people trust in Wikipedia’s openness and transparency, rather than in the individual authors. She writes “that the reason why people trust the content of Wikipedia is that they trust the processes of Wikipedia. It is a form of procedural trust, not a trust in persons.”
I think this procedural trust is also what we put in the algorithm. Blogger Adrian Chan puts it this way:
The algorithm generally may invoke the authority of data, information sourcing, math, and scientific technique. Those are claims on authority based in the faith we put in science (actually, math, and specifically, probabilities). That’s the authority of the algorithm — not of any one algorithmic suggestion in particular, but of the algorithmic operation in general.
We do not necessarily trust in the particularities; we trust the processes. Is the trust that people have in museums similarly procedural? Do we trust in the process of museum work, rather than in the individual results or in the people who work in museums?
There are a myriad of assumptions that we make about people working in museums; that they are well trained and professional; that they are experts in their particular domain. We implicitly trust the people, then, and the work that they do. However, in many cases, such as when we visit an exhibit, we don’t know who the specific people are who worked on the exhibition. We don’t necessarily know who the curator was, or who wrote the exhibition text. The lack of visibility inherent in many current museum processes obscures the individual and their work. The museum qua museum, therefore, acts as a mechanism for credibility because it purports to bring the best people together; because the people who work within are known to be trained professionals who use scientific methods, regardless of whether we know specifically who they are or what their particular training is. Ergo, the trust we have in the museum must also be a form of procedural trust. (Amy Whitaker concurs, “Institutional trust is founded on process, on the belief that there are proper channels and decision-making mechanisms and an absence of conflict of interest.” p32)
Shirky also speaks to the social element involved in authority. He explains:
Authority… performs a dual function; looking to authorities is a way of increasing the likelihood of being right, and of reducing the penalty for being wrong. An authoritative source isn’t just a source you trust; it’s a source you and other members of your reference group trust together. This is the non-lawyer’s version of “due diligence”; it’s impossible to be right all the time, but it’s much better to be wrong on good authority than otherwise, because if you’re wrong on good authority, it’s not your fault.
Authority isn’t just derived from whether we can trust a source of information, but additionally whether we can be confident in passing that information along and putting our name to the fact that we made a judgement on its trustworthiness. We shortcut the process of personal judgement using known systems that are likely to give us accurate and trustworthy results; results we can share in good faith. We trust museums because museums are perceived to be trustworthy.
Do the film companies that run their scripts through Epagogix’s algorithms do so because it helps them shortcut the process of personal judgement too? Can algorithms provide better insight, or just safer insight? Eli Pariser says this of Netflix’s algorithms:
The problem with [the algorithm] is that while it’s very good at predicting what movies you’ll like — generally it’s under one star off — it’s conservative. It would rather be right and show you a movie that you’ll rate a four, than show you a movie that has a 50% chance of being a five and a 50% chance of being a one. Human curators are often more likely to take these kinds of risks.
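Pariser’s point can be made concrete with a little arithmetic. A recommender judged on how close its predicted rating is to your actual rating will prefer the “safe” four-star film over the coin-flip film, even though only the risky one could become a favourite. A minimal sketch (the films and probabilities are invented for illustration):

```python
# Sketch of why an error-minimising recommender is conservative.
# The "safe" film is a certain 4; the "risky" one is a 50/50 split between 5 and 1.

def expected_rating(outcomes):
    """Expected star rating from a list of (rating, probability) pairs."""
    return sum(rating * prob for rating, prob in outcomes)

safe = [(4, 1.0)]
risky = [(5, 0.5), (1, 0.5)]

print(expected_rating(safe))   # 4.0
print(expected_rating(risky))  # 3.0 -- lower on average, so the algorithm passes
```

On average the risky film scores a 3, so an algorithm optimising for predicted rating will never surface it, which is exactly the conservatism Pariser describes.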
Right now, museums that embrace technology and technologically-driven solutions are often perceived to be the risk-takers, because doing so challenges existing practice. I wonder whether, with time, it will instead be those institutions that choose not to make choices driven by data that are perceived as taking the risks. This is a profession that is tied so strongly to notions of connoisseurship; what relationship will the museum have with the algorithm (internally, or external algorithms like those that drive Google and other sites)? I don’t have any answers yet, but I think it’s worth considering that museums no longer just share authority with the user-generated world; authority is also being shared with an algorithmically-shaped one.
During recent weeks, I’ve felt somewhat suffocated by the media and social media coverage of events like Sandy Hook. I’m normally a news junkie, but lately I’m struggling to cope with the onslaught of information, of bitsy and incomplete pieces of coverage and comment across every platform I use. This is probably no great surprise – public interest in Newtown coverage exceeds all mass shootings since Columbine, so there has been a lot of it. But I think the mass of live-time comment like this has really brought home to me just how much open publishing platforms are really shifting the way we communicate about significant news events. After all, there is no mediation with social media.
In the past, this chaotic process of journalistic sausage-making was kept mostly hidden from TV viewers and newspaper readers. Inside the newsrooms at these outlets, reporters and editors were frantically trying to collect information from wire services and other sources, verifying it and checking it as best they could, and then producing a report at some later point.
Even with checking and verification, news filed within the first day of an event is often shown to be inaccurate 24 hours later, as this piece on how continuous access to rich digital news archives is presenting complexity shows. The Guardian’s Chris Elliott writes:
Readers have hitherto accepted that each edition of a newspaper is a snapshot of the available information at the time the newspaper went to press. For instance, estimates of casualties in a catastrophic event, or details about a suspect in a crime, may change as more information becomes available.
Where these early iterations of the story remain on the site, the Guardian has relied on the date stamp and time of posting to indicate that this was the state of knowledge at the time it went up on the site. Where we find a story was significantly inaccurate or misleading based on knowledge at the time of publication, we amend the article and publish a footnote to explain the change as well as a published correction.
The news archive is flattening out, and becoming super-available (paywalls notwithstanding). Indeed, with so much of the Guardian’s content (nearly 40%) accessed more than 48 hours after it was originally posted, there is growing demand from some judges for news organisations to remove material from their online archives that could prove influential to jury members in criminal cases. But at least news organisations have some guidelines and a mandate to publish corrections to their work when it’s found to be inaccurate. Social media users have no such mandate.
Live-time coverage that feeds on and draws from social media still needs the establishment of codes and conventions, perhaps like declaring what a reporter won’t report as well as what he/she will. As this 2011 discussion of the practices of news curator (yeah, I said it) Andy Carvin reports, “[t]here are few established rules or journalistic policies” for real-time, crowd-sourced approaches to news. Still, this social-media form of reporting and comment is adding a new layer of publication on all events of significance.
So what does all of this mean for museums, exactly? Well, maybe it doesn’t mean anything directly. But if social media is, indeed, where the new first rough draft of history is being written, and museums are heritage institutions, then we need to be paying attention and trying to make sense of how it could impact curatorial practices. Or research. Or history. Or everything about how we understand our world.
How do we – as institutions or society – deal with and make sense of this increasingly unsettled discourse, filled with so many more voices than were ever possible before? I don’t know that we are equipped to do so yet. How do we capture the voices that are, and will be, important (without buying Flickr and other sites)? We cannot simply hold onto the front page of the local or national newspaper and feel that the job is done. It cannot work like that.
I leave you with some thoughts from an interview with David Stout, domestic correspondent for the Continuous News Desk of the NYTimes (2008):
I think we are indeed writing the rough draft of history, although some drafts are rougher than others. I think writing history, as opposed to daily journalism, requires a certain distance in time and dispassionate reflection. It’s also, of course, far more detailed. For that reason, I have read a lot of history. It helps me to keep my perspective, and reminds me that many things run in cycles.
What do you think? Do you agree that social media and live-time reporting is becoming the new first rough draft of history? And if so, what does that mean?
It’s not just my newly-found geekout obsession with Geocaching either. It’s been happening on the pages of the Cooper-Hewitt collection alpha, launched last week and designed to let you lose yourself in its pages. For me, at least, it’s been working. But there’s lots at play in this collection, so I thought I’d run you through some of the elements that catch my eye and mouse-clicks.
A collection that’s “of the web”

The opening gambit that the collection makes is that it’s the first one self-proclaimed to be “of the web”, linking to (edit – and pulling in from) outside sources like Wikipedia, Freebase and other museum collections. This idea that a museum can gain authority by pointing to/sharing other useful and authoritative content is something that Koven Smith, Nate Solas and others have been talking about for some time (the Walker being the first to take this approach to their website more generally), and it’s exciting to see it realised on a collection. Just as interesting is the way Cooper-Hewitt reaches out to its users to build the knowledge around the collection via external links, asking:
Do you have your own photos of this object? Are they online somewhere, like Flickr or Instagram? Or have you created a 3D model of one of our objects in SketchUp or Thingiverse? If so then tag them with ch:object=18452119 and we will connect ours to yours!
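That tag follows the “machine tag” convention popularised on Flickr – a namespace:predicate=value triple – so connecting user photos back to a collection record is, in principle, a matter of parsing. A minimal sketch of how such tags decompose (the function names are mine, not Cooper-Hewitt’s, and this assumes the ch:object=… form shown above):

```python
# Sketch: parsing Flickr-style machine tags of the form
# namespace:predicate=value, e.g. "ch:object=18452119".

def parse_machine_tag(tag):
    """Split a machine tag into (namespace, predicate, value), or None if plain."""
    if ":" not in tag or "=" not in tag:
        return None
    namespace, rest = tag.split(":", 1)
    predicate, value = rest.split("=", 1)
    return (namespace, predicate, value)

def objects_referenced(tags):
    """Collect Cooper-Hewitt object ids from a list of photo tags."""
    ids = set()
    for tag in tags:
        parsed = parse_machine_tag(tag)
        if parsed and parsed[0] == "ch" and parsed[1] == "object":
            ids.add(parsed[2])
    return ids

print(objects_referenced(["ch:object=18452119", "skyscraper", "night"]))
# {'18452119'}
```

The namespaced form is what lets the museum harvest matching photos unambiguously: a plain tag like “skyscraper” is ignored, while anything in the ch namespace points at a specific record.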
This is an exciting move, and I will watch with interest to see how it develops (I cannot yet find an example of an object where a user object has been incorporated in with the museum record, so I don’t know what it will actually look like in practice). This doesn’t seem to actually invite the audience voice into and onto the collection record directly (ie through comments); but it holds promise to weave in external interpretations and iterations of the museum object with the museum’s own interpretation, acting as a form of digital citation. The museum collection record can then act as anchor for discussions/interpretations around the object itself, which to me seems to be an interesting take on the idea of “authority”.
I am curious to see how people react to the invitation to link their images and interpretations of collection objects with the museum’s. Will amateur collectors share their images and knowledge about the objects, and if so, will that force more attention onto the collection itself as the centre for a bigger conversation? (An aside – the museum asks people not to steal their images, so I wonder what the implications are of providing links to other people’s own photographs of collection objects. Does this provide an interesting way to make available images of collection objects without the museum providing them itself?)
A collection that speaks using a natural tongue

While I think the inclusion of externally-derived links/information on the collection is the big move that will get museum people talking, there is much more to like in the collection alpha. One of my favourite touches is the plain-text descriptions of works (often also accompanied by images). I am totally charmed by these textual descriptions. Not only does the language seem far less confronting than traditional museum-ese, but in the cases where an image of the actual work is unavailable, this conjures up a beautiful sense of the object itself. I have a peculiar urge to create tshirts and art products from these descriptions, and hang them on my body or my walls (before photographing them and linking them back into the collection, of course). This is my current favourite.
Night scene of a skyscraper consisting of a massed cluster of low tiered sections below culminating in a monumental tower. The structure is illuminated by the city street lights below and streams of light from a chapel-like central section. A white cross is visible at the top of the tower. Pedestrians walk among silhouetted leafless trees below. Photostat #1964-5-13
How beautiful is that? Wouldn’t you love to get 50 different people to draw or create a work of art that met this description, and see what they all looked like?
Simple design solutions

These descriptions also serve as a stand-in (as do other natty little invisible design objects) for images where they are unavailable, in a gorgeous response to the problem of digitisation and permissions, otherwise spelled out in this disclaimer:
We can’t show you any images of this object at the moment. This may be because we have not yet digitized this object or, if we do have a digitized image, we don’t hold the rights to show it publicly. We apologize for any inconvenience.
The Cooper-Hewitt has found seemingly simple design solutions to the problems that all museums are facing, of digital rights and access and the cost of massive digitisation projects.
Working with what you’ve got

What else? I think it’s great that this collection has been done as an ‘alpha’ release, a minimum viable product. I like that the eccentricities of the raw data are acknowledged. I enjoy the nomenclature used when acknowledging the “village of people” involved in making an object, and the focus on people/creators as well as objects. I love the way that the inconsistencies in data are explained, such as in the “Periods” section.
Our curators use the term period to refer to a chronological timeframe, historical period, or artistic movement.
While dates may be fuzzy or unknown, and the boundaries of some artistic movements are open to interpretation, these descriptions help to create connections among works.
Different curatorial departments use descriptive terms in different ways. Over time, this creates tremendous diversity in the vocabulary that appears in collection records. Inconsistencies, incomplete information, and obsolete terminology are par for the course, and we are working to reduce the instances in which they occur.
These sorts of descriptions explain why an object fits under the umbrella of a particular term, and why some of the descriptions are imprecise or less than perfect. They allow for imperfections in the data, while also acknowledging why they exist. Each period description also includes the number of objects you’ll find within it, and the percentage of the online collection that it holds, ie “American Modern — there are 531 objects made around this time which is about 0.43% of our online collection”, which helps give context and proportion to the period in comparison to the larger collection.
Navigating a dozen ways, but still delightfully lost

There are nice ways of navigating this collection, which give weight to what the museum thinks is important (ie departments), to some classic parameters (countries, periods, media), and to the options for searching by people, their roles, or at random. I also like being able to click on a time period, like “1900”, and getting a page that says: “We may not know what everyone in our database did during the 1900s, but we know about a few of them.”
At the moment, the display seems to be weighted towards those individuals or periods with the largest number of objects, and therefore the largest percentage of the online collection, which is a logical choice in terms of highlighting collection strengths, or at least the weight of the collection. I’d be interested in whether there are future ways to weight the collection that might put emphasis on individuals who are considered to be important but who don’t necessarily have a lot of objects.
Because there aren’t many images at the centre of the navigation, I tend to click on the most interesting or random words, and I wonder whether this is typical search behaviour or not. I will be interested to see how other people navigate this collection, and whether they get as sucked into it as I do. But so far I have indeed been wandering serendipitously.
There is much more I could write about, but I’ll leave it here. This is a very exciting step for the Cooper-Hewitt, and for online museum collections in general. I look forward to seeing how it develops and is received. Congrats to Seb, Aaron and Micah on the launch. Also, I think that both Aaron and Micah will be at MCN2012, and Aaron is a keynote at NDF2012 in New Zealand. Now is the time to swot up on questions to ask at these conferences on both sides of the world next month.
Have you had a play in the Cooper-Hewitt collection yet? What do you think?
How different might museum catalogues be if they had been designed for public consumption from the start? A couple of weeks ago, Mia Ridge mused on this question via Twitter. Her timing was impeccable, for even as she asked I was setting up a chat with Matthew Israel, Director of The Art Genome Project at Art.sy, which could be a good starting place for those interested in an answer.
Here, I speak to Matthew about the project. (Fair warning – this is a long post, but an interesting read.)
Matthew Israel
[S] Hi Matthew. Can you tell me a little more about The Art Genome Project, how it started and how it all works? What exactly do you mean when speaking of “genes” in an art context? How do they differ from traditional classifications?
[M] Hi Suse. Thanks so much for your interest in the project and for including the project and Art.sy on museumgeek. Before I begin I should say that the best way to learn about how Art.sy and The Art Genome Project work is to go to the website and if you’re interested in learning more of the specifics of The Art Genome Project, we have set up a tumblr.
In short, The Art Genome Project is sophisticated, nuanced metadata that informs and enables related art search. Genes are our names for this kind of metadata (you can also think about them as the possible characteristics that one might apply to art). Examples include art-historical movements, time periods, techniques, concepts, content, geographical regions and even aspects of an artwork’s appearance. There are currently over 500 “core” genes in the project and another 400+ which capture influences on artists of both other artists and art-historical movements.
It’s important to note that the genes we have created are not by any means just invented by us. Fortunately we are the beneficiaries of hundreds of years of art-historical scholarship; we source from discussions in books, periodicals and on the web surrounding contemporary art; and most importantly, we have established consistent communication with all of our partners (i.e. the galleries, museums, foundations, collections and estates that feature their work on Art.sy) in order to understand their thoughts on the genome and the genoming process.
What’s also really significant to explain is that every artist and artwork has their own genome in order to show how different, for example, Pablo Picasso’s oeuvre (his collected works) is in comparison to individual works he created and how greatly individual works can differ from each other. For example, in the case of Picasso, this enables us to explain to users the differences between Blue and Rose period works or between papier-collé works done in 1912 and his almost surrealist works of the 1930s.
Additionally, genes are not tags — though we have many tags on the site — because tags are binary (something is either tagged “dog” or not). Genes, in contrast, can range from 0-100, thus capturing how strongly a gene applies to a specific artist or artwork. While the specific numbers are not important, this enables us to explain to users that Warhol is highly related to the term Pop Art, while an artist who might have been associated with Pop at points during their career–say Ray Johnson–can be represented as less associated to Pop.
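To make the tag-versus-gene distinction concrete: a tag set is effectively a 0/1 vector, while a genome is a weighted vector, which makes graded similarity between artists computable. A minimal sketch under that reading (the gene names and weights are invented for illustration, and this is a generic cosine measure, not Art.sy’s actual algorithm):

```python
# Sketch: genomes as weighted gene vectors (values 0-100), versus binary tags.
# Gene names and weights are invented for illustration only.

def similarity(genome_a, genome_b):
    """Cosine similarity between two sparse genome dictionaries."""
    shared = set(genome_a) & set(genome_b)
    dot = sum(genome_a[g] * genome_b[g] for g in shared)
    norm_a = sum(v * v for v in genome_a.values()) ** 0.5
    norm_b = sum(v * v for v in genome_b.values()) ** 0.5
    return dot / (norm_a * norm_b)

warhol = {"Pop Art": 100, "Silkscreen": 90, "Celebrity": 80}
johnson = {"Pop Art": 40, "Collage": 90, "Mail Art": 100}
rothko = {"Color Field": 100, "Abstraction": 90}

# Warhol relates to Johnson through a shared (but differently weighted)
# Pop Art gene, and not at all to Rothko, who shares no genes.
print(similarity(warhol, johnson) > similarity(warhol, rothko))  # True
```

With binary tags, Warhol and Johnson would simply both be “Pop Art”; the weights are what let the system say Warhol is strongly Pop while Johnson is only loosely so.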
[S] I’ve been following The Art Genome Project Tumblr for a little while now. Many of the posts are about the evolution of the genes; about how and why particular genes come into being. For instance, on this post on “Double Genes” Holly Shen discusses how, in early work, you sometimes “combined two separate but related characteristics into one gene, knowing that eventually the time would come when each characteristic would gather enough artworks and artists on its own”, whilst this post speaks of the evolution of economics-related genes. What prompts an evolution in the ‘genetic code’? Were any genes initially created that you’ve since discarded?
[M] It’s really so great for us to hear you’ve been following the tumblr. Please feel free to comment on any of our posts! Your question is a good one. It gets to the crux of how a new format for organizing knowledge evolves (and must evolve) over time. I think at base, since this is the first time anyone has tried to systematize the vocabulary for art history in this particular way, and as with any significant research project, initial ideas always need to be re-evaluated. People say writing is 10% writing and 90% editing and I would say the same of the genome project. That’s the fascinating part. It’s like more traditional art-historical work. You’re given a set of objects, you create sets according to certain similarities and then see how they hold up to scholarly inquiry and then you re-evaluate and re-evaluate again, etc.
A basic example of how we have split up a gene is the gene we created for Light. We initially thought it would be interesting to see works by Impressionists, in which light is a central aspect of the image, next to contemporary works that use light as a medium. However, as time went on, we realized this was confusing for users, that the two were more satisfying as their own categories, and that there were enough objects in each category to split them apart.
[S] How do you ensure consistency across the different genetic coders or mappers, particularly with a system that is still evolving?
[M] Consistency is a huge priority, maybe one of the top priorities of The Art Genome Project. Historically, the kind of data entry we are doing, and the production of a shared body of knowledge for how to use a system such as ours, has been undervalued or unsystematized, because the users have been specialists working in disconnected institutions or contexts. Hopefully our focus on the public, and our emphasis on the fact that this vocabulary is an educational tool, will bring more exposure to and appreciation for this type of work. Though we can still improve on consistency at Art.sy, we have various measures in place to establish standards: an Art Genome Wiki (a kind of knowledge base for genoming); weekly Genome Team meetings; and tools like Basecamp, Trello, Pivotal Tracker and teamwide chat, which keep revisions and discussions as transparent as possible. In this way, we have realized that maintaining consistency is not just about top-down review and establishing rules; it’s also about constant dialogue.
[S] When you and I recently spoke about the Project, you mentioned a desire to document aspects of art that are more integral to an artist’s practice or concerns than might be included in traditional classifications. What genes have emerged in response to this idea?
[M] Good question. Traditional art-object classification systems (and I generalize here, because there are various exceptions) have really focused on the specific details of objects (dimensions, medium, provenance) and a single subject heading. The Art Genome Project, while it captures all of those specific details, is focused much more on what is going on in the work and in an artist’s practice. It’s more like what one would lecture about to educate someone about an artist or artwork.
Regarding “new” genes: yes, we represent the well-known aspects of an artist’s work, but we also try to show additional aspects that most users may not know about. This gives voice to the diversity of ways to understand and interpret artists and works of art, and to the complexity of works of art, in a way that traditional avenues have perhaps not had the ability to do. What’s also interesting is that we have the advantage of not being held within the boundaries of a book or its formatting. This is definitely a liberation for art-historical thinking, yet at the same time it is something entirely different to work out what such an educational experience looks like, feels like and translates to for the user.
[S] Something else that resonated when we spoke was your expressed desire to create ‘valuable educational metadata’. There are a couple of things that I want to explore about this idea. The first is the implication that you are rethinking art classification with a public end user in mind; and more specifically, a public learner. What impact do you think this has had on the planning and execution of the Project?
[M] I would say this has been the major priority of the project. We see The Art Genome Project as the structure for a new pedagogical experience. Many of those involved in the project are educators or come from an educational background and I think this experience informs so much of what we do. One major example is the fact that we define our genes on the site, so that in the process of searching and clicking on things you like or gravitate towards or find interesting, you are being given educational texts that explain specifically why you might enjoy these connections. We also have made a real effort to create text on all parts of the site (but primarily in our artist biographies and gene definitions) that is very clear to the user but not “dumbed down.” I don’t think this kind of content is that available to the mass audience.
[S] You’ve written about mapping serendipity with the Project. Do you think that the Project could actually challenge or disrupt art education, by drawing equivalencies and parallels between works of art that might be “genetically” related, but not historically? In a Time Magazine article (behind paywall), the equivalencies drawn by The Art Genome Project are problematised thusly:
Another problem Art.sy faces is its classification system, which rubs some artists the wrong way. “I don’t think what I am doing has anything to do with Cindy Sherman,” says British artist Jonathan Smith after being told the site links his work to hers via a staged-photography gene. “That sounds like something a programmer would think of.”
Given that classification has played such a major role in the history of art, do you think drawing new equivalencies between historical works of art, or between historical and contemporary works, could have a disruptive effect? As an art historian, how do you feel about this?
[M] Honestly, while the term “disrupt” is often used with new websites to describe how they deal with a particular historical space, I don’t think of The Art Genome Project as disruptive. In truth, the job of art historians, and furthermore of art critics, curators, etc., is to draw new equivalences anyway, and we are just doing the same thing in a different way. Yes, here there is the term “gene”, or as you say “genetically related”, but it’s not really all that different from a historian exploring new relationships between artworks. I should also say that our genoming is based on historical information, so it would by no means contradict historical connections. Indeed, we have gotten some of our best feedback from art historians, which we have used to improve the site.
[S] Of course, the Art Genome Project isn’t being done with strictly educational outcomes in mind; it also has commercial ones. Do you think the overlapping interests of the Project could compromise its educational value?
[M] It’s funny, I don’t often get asked that. To be completely honest, there are really very few (if any) commercial constraints on The Art Genome Project. As I am sure you realize, this situation is extremely important as we have many non-profits involved (museums, foundations, estates). Art.sy’s goal is to make all the world’s art accessible to anyone with an Internet connection and that’s really been the focus of our efforts over these past few years.
[S] Finally, I am curious about the maintenance and scaling of such a labour-intensive approach to classification. Is the Project, and therefore Art.sy, limited in how big it can get? Do you have curators in the same way a museum does; making decisions about what is included in the Project?
[M] We’ve been thinking about this a lot lately. We do have a review process, of which I have already spoken, but we are definitely looking at how much we can scale without losing any of the quality we demand of our genoming. What’s amazing about being here is that Art.sy is half engineers and half art professionals, so we are not tackling this question alone (with the tools of art historians) but with significant input from our engineers. I’m consistently amazed at how helpful those involved in tech are in dealing with our problems, particularly regarding more efficient processes and workflow. One thing we are looking at is how much of the process we can automate, so that the inputs we make are processed as efficiently as possible. We also accomplished a lot this summer in a collaboration between members of the genome team and an engineer on our appearance genes. The eventual goal is to use our data to train a program to recognize the visual characteristics of artworks more specifically, and we’re currently in the process of evaluating some of our conclusions. You can read more about this on our most recent blog post.
[S] Ok, now it’s your turn. What haven’t I asked about that excites you with the Project? What else should the museum community know about it?
[M] Great way to end. Hmmm. What excites me? I hope you don’t mind that I made a list…
Creating a classification system that retains the nuances and mysteries within art and allows anyone with an Internet connection the opportunity to learn about art and art history.
Open sourcing our research on our tumblr.
Working on a truly collaborative project, with those from the arts but also with computer scientists, engineers and mathematicians.
Research we have undertaken on The Art Genome’s roots, specifically the history of art classification systems.
Giving people (who have wanted to learn about art but didn’t know where to start) a place to start.
Trying to capture “happy accidents” in the classroom, i.e. mapping the serendipitous connections that happen when you teach, to help educate.
Trying to create an educational experience that is active, exploratory, and self-motivated.
The possibility of educators using Art.sy as a teaching tool, to explore the history of a movement or to see how a term’s interpretation has changed over time, such as Collage or Documentary Photography.
Getting other people (like you) excited and involved in what we are doing!
[S] Thanks so much, Matthew! There is much to think about here.
Matthew Israel is an art historian (PhD, Institute of Fine Arts, New York University) and is currently the Director of The Art Genome Project at Art.sy. His book on American artists’ political engagement during the Vietnam War, Kill for Peace: American Artists Against the Vietnam War, is forthcoming in Spring 2013 from University of Texas Press. Matthew has taught modern and contemporary art history as well as critical reading and writing at New York University; Parsons, The New School for Design; and The Museum of Modern Art, New York. He has also written articles for Art in America, Artforum, Frieze and ARTnews and contributed to books and catalogues for The College Art Association, The Whitney Museum of American Art, the New Museum, Gagosian Gallery, Cheim & Read Gallery, and Marianne Boesky Gallery. Additionally, Matthew has worked as the Administrator at the Peter Hujar Archive, the Director of Operations at The Felix Gonzalez-Torres Foundation and as a research or press consultant for the New Museum, the Art Spaces Archives Project, Matthew Marks Gallery and Gagosian Gallery.
What do YOU think? Does this kind of tech-enabled and public-focussed classification have implications for museums? Could you imagine a similar idea working in a non-art context?
There must have been a collective intake of breath from museum professionals around the world last month when Matthew Inman from The Oatmeal put the call out to build a Goddamn Tesla Museum, inviting donations and support via Indiegogo. The crowdfunding project has now raised more than $1.2 million, with the city of New York promising to match $850,000 of that money. Imagine that. More than 30,000 people have pledged money towards an as-yet-nonexistent museum/science centre. Science! Nerds! Money for a museum! How totally rock and roll.
Despite the attention that has come to it since Inman’s involvement, the project isn’t a new one. The Tesla Science Center (formerly known as the Friends of Science East, Inc.) has been formally active since February 14, 1996, so although it has now come to the fore with the crowdfunding project, it has not simply appeared out of thin air. This has been a long-burning campaign that has just undergone a radical shift in prominence. From being a pet project and passion for the TSC, something that must at times have seemed no more than a pipe dream, the Tesla Science Center now has the potential to become real. What a colossal shift in the course of a month.
The shift in attention, prominence, and possibility brings with it all kinds of interesting questions. First, let’s assume that the TSC does acquire the property (there are other bidders, like Milka Kresoja). What then? Is the TSC’s Board of Directors in a position to capitalise upon this sudden rush of funds and support? Is the museum actually feasible? And how will the thousands of people who have contributed to the project feel if it takes years, rather than months, for the Tesla Museum to become real?
This is one of the as-yet-untested aspects of such a big crowdfunding project: can a project built on hype and excitement, which invites emotional and economic investment (some of it significant) from people all over the world, continue to hold attention and live up to its own build-up? Or is there an inevitable backlash when projects change, adapt, or even fail?
Back before I dedicated myself to solving the many mysteries of museums, I worked in the music industry, so hype is something in which I have a fairly keen interest. I have watched indie bands pick up buzz as early adopters gathered around and invested in them, knowing that they were in on something secret and special: a band with the compelling allure of potential. Once that buzz starts, capitalising upon it relies on timing and maintaining momentum. A band full of potential that waits too long to impress and live up to its early promise may all too soon be written off as a casualty of the hunt for the next big thing. Hype, buzz, potential – whatever word you want to use for it – can be all too fleeting, particularly if the return on investment is a long time coming.
Research and advisory firm Gartner uses hype cycles to characterise what happens following the introduction of new technologies. The hype cycle has five phases: a technology trigger, in which “Early proof-of-concept stories and media interest trigger significant publicity. Often no usable products exist and commercial viability is unproven”; a peak of inflated expectations; a trough of disillusionment, when “interest wanes as experiments and implementations fail to deliver… Investments continue only if the surviving providers improve their products to the satisfaction of early adopters”; a slope of enlightenment; and finally a plateau of productivity, in which “Criteria for assessing provider viability are more clearly defined. The technology’s broad market applicability and relevance are clearly paying off.” Although the methodology is intended for technology adoption, a similar cycle could well apply to this situation.
Gartner Hype Cycle
It is in this space that the Goddamn Tesla Project will prove to be an interesting test case. Mark Walhimer estimates that it takes between 5 and 10 years to start a museum, but if comments on The Oatmeal’s post like this one – “Good luck Matthew! This Goddamned Tesla Museum needs to happen. RIGHT MEOW!!!!” – give any indication, then the slow-burn from now to then might indeed cause supporters of the project to fall into the trough of disillusionment.
On the Indiegogo fundraising site, it is acknowledged that:
Even if we raise the full amount and end up with $1.7 million, this isn’t enough to build an actual museum / science center. But it will effectively put the property into the right hands so it can eventually be renovated into something fitting for one of the greatest inventors of our time.
Similarly, on The Oatmeal’s FAQs about the project, Matthew Inman has written:
If this is a success, can you build a museum right away? What happens next?
The property the laboratory is on is a bit of mess. It needs to be cleaned up, restored, and there’s a ton of work to be done to actually turn this into something worthy of Tesla’s legacy. The money we’re raising is simply to secure the property so no one can ever mess with it and guarantee that it’s a historic site. It opens up years and years of time to figure out how to build a proper Nikola Tesla museum. However, I would love to have some kind of Nikola Tesla festival on the property on July 10th of 2013 (Nikola Tesla Day), and have some kind of zany Tesla-coil-BBQ-cookout.
The short-term goal of a Tesla Festival may be enough to satisfy those who have invested in the project that it is worthwhile. Such an event would give a sense of culmination and momentum, both important for capitalising upon early hype and potential. But we aren’t likely to get real perspective for many years on whether crowdfunding a museum from scratch can prove a rewarding model for either the museum or its funders. It might be here that some real questions around museum innovation can be answered.
What do you think? Can interest in a project like this one be sustained over time, or is it inevitable that those enthusiastic geeks the world over will become disillusioned as the Museum takes years to move from idea to actuality?
It’s always a great start to a day when the first two links you click inspire a flurry of fresh thought. I have been getting stuck into some PhD writing this week, and fast losing myself in the doldrums of theory. So waking this morning to a little bit of inspiration was just what I needed.
The first shot of inspiration, which woke me far more than a coffee would, was this super-cool research on What Makes Paris Look Like Paris? (on Openculture via Jasper Visser & Seb Chan). I remember as a child someone telling me that all cities had colours; that some cities were grey, and some brown. Some were blue. These dominant colours reflected the materials that had been used in construction, the fashions that had shaped the way the city was built, the natural resources that were available to the builders. And so I often look out at new cities as I approach them, and watch to see what colour they are. This research reminds me of that, for it analyses Google Street View imagery to find the common visual elements of a city. It turns out that not only do cities have colours, they also have distinguishing architectural features.
Imagine what kind of new information and understanding lies within our collections, if similar techniques were deployed. What kinds of features are common to paintings of Paris? Are there common colours used to depict Paris, and are the architectural elements captured in the research above visible? And what else can we learn about our collections through these kinds of techniques? It is easy for me to think of this in terms of art collections, but I am sure there is much that could be found in archaeological and other museological data. (Not that most museum data is that great, as Mia Ridge recently discovered when playing in the Cooper Hewitt datasets.)
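As a toy illustration of the “cities have colours” idea (nothing like the actual feature-mining in What Makes Paris Look Like Paris?, which uses far richer visual features), here is a minimal sketch that reduces an image, represented as a list of RGB pixels, to its average colour. The pixel samples are invented:

```python
# A toy version of the "cities have colours" idea: reduce an image,
# given as a list of (r, g, b) pixels, to its average colour. Real
# projects mine far richer visual features; this only illustrates
# the simplest possible case.

def dominant_colour(pixels):
    """Integer-average an iterable of (r, g, b) tuples."""
    n = 0
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
        n += 1
    return tuple(t // n for t in totals)

# Hypothetical pixel samples from two paintings.
grey_painting = [(120, 120, 125)] * 50 + [(90, 90, 100)] * 50
warm_painting = [(180, 140, 90)] * 80 + [(60, 50, 40)] * 20

print(dominant_colour(grey_painting))  # → (105, 105, 112)
print(dominant_colour(warm_painting))  # → (156, 122, 80)
```

Run across a whole collection, even something this crude would let you ask whether paintings of one city really do skew grey while another skews warm brown.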
There is some work being done in this area, of course, but I’m interested in what else we can find in our collections using these kinds of techniques. This morning, I also watched What do they have? Alternate Visualizations of Museum Collections in which Piotr Adamczyk, when he was at the Met, talks about the possibilities for new information that might be possible in collections data. During the question time, he speaks of his interest in using data to look at provenance and figure out the history of the object in order to visualise who had it, when and where. For me, this is exactly the sort of flow of information that I would find so interesting about collections. I am really interested in the power structures and power makers in any sector. When I met curator Helen Molesworth recently, I asked her what I would discover about her influence on permanent collections, were I to look across the course of her career; who and what she had collected consistently or in different institutions over her life as a curator. It was a question that floored her, because it was one no one had ever asked before. But to me, this is the interesting stuff of museums. Who are the individuals that change the shape of our collections, and indirectly then, the shape of our material wants and expectations? Who has shaped the art market by collecting the works of an individual and increasing their value for other collections? Which individuals have really changed the shape of our cultural heritage, its value and its impact? Who has championed the work of previously unknown artists, and turned them into a hot commodity?
So my other early moment of inspiration was watching Koven Smith’s MuseumNext talk, which has just gone online. In it, Koven speaks about curators using algorithms to produce collection narratives: interpretive algorithms. It seems to me that this idea starts to coincide with the work described above, whereby collection researchers and communicators working in a museum could focus on a whole collection, and how it relates to the rest of the world, rather than museums only having curators (or researchers) whose focus is on exhibitions and material culture.
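To make the idea of an “interpretive algorithm” concrete, here is one hedged sketch, with invented records, field names and threshold, of a rule that assembles a chronological narrative from collection data rather than from a hand-picked list:

```python
# One possible reading of an "interpretive algorithm": a rule,
# rather than a hand-picked list, assembles a narrative from
# collection records. Records, field names and threshold invented.

works = [
    {"title": "Work A", "year": 1910, "genes": {"Collage": 80}},
    {"title": "Work B", "year": 1955, "genes": {"Collage": 95, "Pop Art": 60}},
    {"title": "Work C", "year": 1988, "genes": {"Collage": 30}},
]

def narrative(gene, collection, threshold=50):
    """Chronological storyline of works where a gene applies strongly."""
    strong = [w for w in collection if w["genes"].get(gene, 0) >= threshold]
    return [w["title"] for w in sorted(strong, key=lambda w: w["year"])]

print(narrative("Collage", works))  # → ['Work A', 'Work B']
```

The interesting design question is less the code than who writes the rule: it is an interpretive act, just expressed as an algorithm rather than a wall label.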
When I last wrote about big data and museums, I quoted Mia Ridge, who suggested that there are probably lots of other people who can do great things with museum data, perhaps more than museums themselves can, or even should. And I agree with that. But I also wonder whether making sense of our collections at a macro level, with these sorts of techniques and possibilities, isn’t also something museums should be doing. I don’t know, but I do think it’s something to think about.
What do you think? What would you like to see visualised using museum collections? Are there new ways of looking at the work we do that technology is making possible in ways that weren’t previously available? And should this work be done within the museum, or is it just the responsibility of the museum to enable others to do it?
track, collect and distribute curated news in order for newsrooms to focus on local reporting. “Providing context to everything we curate is vital to providing a comprehensive news report,” said digital projects editor Mandy Jenkins. “It’s our responsibility to bring these stories to each of our local markets.”
Steve Buttry, Director of Community Engagement & Social Media at Digital First, has written a long and interesting post on news curation techniques, types and tips that’s worth reading for insight into the approaches of news organisations to curation. But after reading these two posts, I wanted to investigate a little further. I stumbled upon the Register Citizen Open Newsroom Project and its Newsroom Cafe, and this makes for fascinating reading. The Newsroom Cafe is based in Torrington, Connecticut, and it offers more than just food and drink… There are no walls between the cafe area and the newsroom, and “readers are invited to find the reporter that writes about their community or area of interest – or editors – and talk about concerns, ideas, questions.”
There are a number of strategies that the project employs, like allowing public debate of internal policies, fact-checking programs, partnerships with other media and non-media organisations, and digital-first reporting, that give useful pause for thought for museums tackling the integration of digital and non-digital. In addition, the Newsroom Cafe offers a community media lab with a full-time community engagement editor. It also makes space for an artist-of-the-month program(!), featuring the work of local artists in the cafe itself, while the most popular feature of the Cafe has been public access to the newspaper’s archives from its 134-year history, both of which should catch the attention of museum people.
In addition, the Register Citizen holds their daily story meetings at a table on the edge of the cafe, and the community is allowed to sit in, listen or even participate. This is where it gets really interesting.
Video of the meetings is also live-streamed on RegisterCitizen.Com, and we use a live chat to allow readers to watch from home and type in a question or comment in real time. Those words are displayed on a large monitor above the conference table, so editors and readers can interact and respond to people tuning in from afar.
…A funny thing happened after we moved into the “open newsroom” last December. We stopped having “editorial board meetings,” at least in the traditional sense where an outside organization or politician meets behind closed doors with a committee of editors, reporters and the publisher. We were still getting requests, but when we did, we made sure the industry association or special interest in question knew how we do things now. We’ll meet with you, but it will be in public, our readers will be invited to attend and participate, and we’ll be live-streaming it on the web. For the most part, after that became clear, the party requesting the editorial board meeting said “No thanks.” Others, including Connecticut Governor Dannel Malloy, embraced it, and the public’s involvement, by all accounts, improved and advanced the discussion. An exciting opportunity has emerged for us to create a new kind of editorial board process.
These movements interest me for a number of reasons. For one thing, as a participatory location, the Newsroom Cafe draws the newsroom together with its community in a far more immediate way. Rather than simply seeking community involvement online, the local community is drawn into the process in the physical space, and that participation is integrated with digital processes.
The transparency and openness of the process has also changed the dynamics of editorial, in a way that seems to have upended previous practice. What would it be like to hold a curatorial or exhibition meeting in the museum cafe, when anyone could join in? Or, what would happen if museums opened up about the evolution of knowledge that occurs around their collections, and allowed the public into that process? Such a question recalls to me a paper by Bruno Latour on the revision of knowledge. He asks
does it distract visitors to know that there were paleontologists fighting one another, that fossils had a market value, that reconstitutions have been modified so often, that we “don’t know for sure”, or, as another label [in the NY Museum of Natural History exhibit A Textbook Case Revisited] states, “While it’s intriguing to speculate about the physiology of long extinct animals we cannot test these ideas conclusively”? The more fossils there are, we feel the more interesting, lively, sturdy, realistic, and provable are our representations of them; how come we would feel less certain, less sturdy, less realistic about the same representations when they multiply? When their equipment is visible? When the assembly of paleontologists is made visible?
It’s here, in the idea of opening up about the changes in museum knowledge, that I think museum transparency could really come into its own. But examples like the Newsroom Cafe further demonstrate the eroding demarcation of roles that were once more easily differentiated. The news becomes situated in space and time; it becomes woven into the community. The digital and the “real” have joined more seamlessly there, and such moves make it less easy to know what the role of the museum is to be, and what is the role of, well, some other organisation in the connected age. Boundaries are eroding, and while the material-culture role of museums seems unthreatened, the experiential, educational and knowledge roles might be up for grabs. Even shopping centres are getting in on the act, taking on the Internet by stressing experience. Such movements could well have implications for museums, which is why I think we need to be looking to these kinds of projects to learn if and how they work.
What do you think about the Register Citizen Open Project? How would you feel if the public was allowed to sit in on exhibition or curatorial meetings? Do you think that museums should make visible the revisions in their knowledge? Let me know.
This morning, Australia was greeted with the news that major media organisation Fairfax will shed 1900 staff, shift its two major newspapers from broadsheet to tabloid format, and erect paywalls around the websites of those major metropolitan dailies – all in response to decreasing ad revenue. It is expected that News Ltd. will follow suit, and make cuts in coming days.
Meanwhile, two US cities with metropolitan populations of more than a million (New Orleans and Birmingham) are about to become the first without daily newspapers. Such news heralds the latest movement in the ever-shifting media landscape as traditional broadcast organisations try to adjust to the changing information/media infrastructure.
These changes were the subject of the recent US FCC report on the Information Needs of Communities: The Changing Media Landscape in a Broadband Age. It is a long (468 pages) but interesting read about the changing media landscape in the US, and although the media sector differs from the museum sector in many ways, there are also plenty of similarities, as some museum bloggers have recently noted. As the report captures:
It is a confusing time. Breathtaking media abundance lives side-by-side with serious shortages in reporting. Communities benefit tremendously from many innovations brought by the Internet and simultaneously suffer from the dislocations caused by the seismic changes in media markets. (7)
In a just-published assessment of the Fairfax changes, journalist Jonathan Green argues that the Internet is not to blame for the media organisation’s failure; rather, poor revenue models were what dragged it down: “Because the business is not content, not journalism; the business is selling advertising.”
That argument recalls Clay Shirky’s famous observation:

Society doesn’t need newspapers. What we need is journalism. For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That’s been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we’re going to need lots of other ways to strengthen journalism instead.
What Shirky has written here could as easily have museums as its focus. Society doesn’t need museums. What it needs is mechanisms for selecting, preserving and communicating objects and information about our past and present in order that we can better prepare for the future. To date, museums have been an important vehicle for answering that need. But it is not the institution itself that is significant – it is the purpose it seeks to fill.
Even within the sector, we can see that this is true. When Ed Rodley started his making a museum from scratch series, the first post attracted all sorts of questions about why it was that his collection needed to be a museum. As Koven put it:
just because you have a collection, you don’t necessarily have to display it. Just because you have a building, that building doesn’t necessarily have to be used to display those collections, or as a place for people to visit.
So surely the question we should be asking, as individuals, institutions and as a sector, is: how do we achieve the purposes of selection, preservation and dissemination? Is it by collecting and storing physical objects, and selectively displaying those that have particular illustrative or narrative qualities, as has historically been the case? Or is it by investigating new models for publication, as the Walker has done, and integrating those models more closely with the physical building of the museum? Or does the problem need a completely new way of thinking through altogether?
A statement by museum scholar David Carr is of interest here. When reading, substitute the word Internet every time you see the word museum:
A museum is not about what it contains; it is about what it makes possible. It makes the user’s future conversations, thoughts, and actions possible. It makes engagements with artifacts and documents that lie beyond the museum possible. It constructs narratives that help us to locate our memories, passions, and commitments. The museum illustrates irresistible new thoughts and stimulates revisions of former thoughts. The museum invites us to reconsider how we behave and what we craft in the worlds of lived experience. The gift of a museum for every user is an appreciation of complexity, a welcoming to the open door of the unknown, the possible, the possible-to-know, and the impossible-to-know.
David Carr, “Mind as Verb,” in Museum Philosophy for the Twenty-First Century. 16. Author’s emphasis.
The environment and nature of the Internet means that it is innately set up to achieve many of the very things that Carr posits the museum seeks to accomplish. In fact, I would argue it is far better suited for making the user’s future conversations, thoughts and actions possible. The very existence of the Internet, then, raises questions about the role of the museum.
Museum leaders need to rethink digital, and look at it from a more strategic perspective, one which can really deliver on the mission of the institution and the needs of the public. Museum leaders need to recognise that a powerful website can deliver just as much as a powerful exhibition and fund the roles within the institution to produce something credible online.
Although I agree with his perspective, I don’t think it goes far enough. Digital does not just change modes of delivery. It changes the nature of the very problem that museums purport to solve. That the model we have had to date has largely worked may be more a happy accident than indicative of its superior design.
Of the Fairfax changes, Jonathan Green says:
There was a moment, maybe 10 years ago now, when a bold management at Fairfax might have picked the company up by the scruff of the neck, rationalised the staff, integrated the online and print operations, trimmed the paper size, and moved the content toward a premium mix of context and analysis.
They would have looked adventurous, bold, purposeful; they would have left the competition in their dust. But that was 10 years ago.
Now is that time for museums. We still need the things that museums do. We still need to know how to select, preserve and disseminate, whether objects or information. What we don’t need is museums. If those same needs can be met by other means (digital or otherwise), the impact on museums will be significant. I think it’s important to keep this in mind as we look to the future, particularly as we see the effects of the Internet on other traditional institutions.
What do you think? Does society need museums, or just the things that museums seek to do? And if the latter, what should that mean for museums as they approach the coming decade?
I’ve started to notice a couple of interesting patterns or trends in the digital museum dialogue over the last couple of weeks and months. Just taking a quick flick around the blogs and looking at some of my favourite museum thinkers, we have Koven speaking at MuseumNext about the Kinetic Museum, and asking, “What if a museum’s overall practice were built outwards from its technology efforts, rather than the other way around?” Ed’s 'making a museum from scratch' series is moving towards imagining a radically transparent museum – one in which labels might include information about who wrote them, objects might come with their whole histories, and information might lead visitors back outside the walls of the museum to continue their journey beyond the physical space. And Seb has proposed that “the exhibition as a form needs to adapt. Radically. And I don’t mean into a series of public programs or events.” His great post from last week, too, considered new ways of designing exhibitions as immersive events with digital parallels.
There are two things that I find fascinating about this. The first is that this dialogue is forming a kind of dispersed ‘Koinonia’, or collaborative thinking. Although each of us is physically removed from one another (in my case, across oceans, and for the others, at least a few hours of travel between), we are all bouncing off, and building upon, the ideas, questions and inspirations being shared by the others.
But the second reason this is interesting to me is that in each case, we are all starting to reimagine or redesign physical museum experiences with ideas drawn from digital experiences. The museum technology conversation seems to be shifting from merely how does technology impact museum practice to how should it impact the museum building and the physical design of museums. Of course, there is precedent for these conversations in Nina Simon’s approach to exhibition design, which draws upon Web 2.0 philosophies. But these new discussions seem to go further, exploring the idea of building the physical space of the museum upon the principles and values of the Internet.
So what are these values, and how could they apply to museum/exhibition design?
For me, the immediate ones that come to mind include transparency and openness, agility and responsiveness, customisable and personal experiences, and sharable, social and participatory interactions. Many of these ideas are ones that I’ve spoken about previously on this blog, but I’ve always focussed on how they might/should apply to museum online efforts.
Ed’s concept of radical transparency in the museum is provocative. In Too Big To Know, David Weinberger proffers that one of the basic elements of the Net experience is that “[t]he Net is a vast public space within which the exclusion of visitors or content is the exception.” (174.) He also points out the abundance of the Internet, where “there is more available to us than we ever imagined back in the days of television and physical libraries.” Taking these ideas into the physical museum space could see the size and complexity of a working collection made visible and public by default, whilst still distilling ideas through the use of selected objects chosen for formal exhibition or display. This approach also puts a contemporary spin on the idea of curation, where the curator draws attention to the things worth seeing within the abundant content available. As I commented, the recently opened MAS | Museum Aan de Stroom in Antwerp has a visible storage area that houses about 180,000 artefacts from the collection. Imagine being able to see the entirety of a collection, as well as its details. What kind of public value might such an approach have?
(Of course, such an approach would likely have implications for cost, security etc. – there are many as-yet-unresolved issues here.)
What else? I think one of the most enduringly appealing things about the Internet is that it is highly personal and customisable. My experience online is likely very different from yours. You and I will read different things, and be drawn to different sites. We will even visit the same sites, but on different browsers and devices, or at different times of day. So how could a museum make an experience that puts emphasis on “immersive exploration rather than a linear narrative”, as Seb has been asking? What kind of approach to exhibition design is needed to give individuals ownership over their experiences and yet still maintain the connective narrative tissue that makes sense of the core concepts and ideas at play?
Digital experiences are sharable, and frequently participatory. But they are also agile, kinetic, and scalable from global to local, and back again. Our conversations and interactions online are not limited to our physical proximity, but they are often related to it. I chat to people all over the world on Twitter, but also make a point of meeting up with them in person when circumstances allow. There is an overlap between my digital and physical experiences, a parallelism (as Seb recently observed). So how could these parallel experiences be incorporated into the museum setting? Could the museum tap into and contribute to global themes and conversations before and after the visit (online or offline), and then focus on the local and particular in the actual space? Would that be the right approach?
Matt Popke, in the comments on Seb’s mixtape post, joins in:
I just think the bar has been raised a bit in the “historical narrative” part of the equation. People live in a google age now. If you encounter something you are not familiar with you simply google it and find out whatever you want to know (or maybe you think you find it, that’s another issue entirely). People are accustomed now to having mountains of information available to them at a whim. Tiny tombstone labels on collection items or informational plaques near an exhibit just don’t satisfy like they used to.
The challenge is finding a way to incorporate *all* of the rich history and context of an item in the display of that item, or otherwise finding a way to deliver more in an exhibition than we’re used to, more context, more data, more story. We need to deliver this information in a way that feels explorative, like the audience is taking their own path through our collection and discovering their own version of the narrative. Hypertext, as a medium, is perfect for this kind of intellectual exploration when dealing with an individual. How do we create a hypertext-like experience in a physical space that multiple people can enjoy simultaneously?
There are lots of ideas here, and most of them are entirely unresolved. Still, this trend in the conversation seems increasingly to broach the divide between the physical and the virtual, and to rethink or disrupt current approaches to museum and exhibition design. Why this is happening now, I’m not sure. (And does it have implications for museum careers? Will your next exhibit designer be someone with an interest or background in tech?) But it is an interesting line of questioning to pursue.
What happens when museums begin to bring the values and ideas that are normally associated with the Internet into the physical design of the museum?