I often hear people comment that one thing museums should do as educators in the digital age is teach people how to critically assess the information they encounter on the Internet. Superficially, this sounds pretty smart. But if I’m honest, I don’t think I know for sure how to judge the validity (or otherwise) of information I come across online. There have been many times when I’ve believed something that wasn’t true, but sounded like it could be. There is enough amazing/crazy/surprising stuff in the world; how can I know which particular example is not true – especially when it falls outside an area in which I have a certain level of domain knowledge? I’m sure I’m not the only one who has erred in judgement.
So this is what I want to know: How do you decide what to trust on the Internet? And how much faith do you have in your own capacity to judge veracity and legitimacy?
I use the same techniques I use to evaluate information from any source… a library book, a journal, a news article, etc. This pretty much sums it up: http://lib.colostate.edu/howto/evalweb2.html
Rich is right — this is a generic skill, not an Internet specific skill. Real life skills apply online.
It’s all about the source — people who consistently cite their sources online (with links, captions, etc) are generally the most careful with the information that they spread. Also, a good bout of fact-checking never hurts!
Bruce, I do agree. Having said that, I do feel the conditions online differ somewhat from those offline, starting with the simple fact that you can be confronted with far more information online, much of which needs its own individual assessment because it isn’t embedded within an institutional space (itself no guarantee of validity, but something that can increase a feeling of reliability). Similarly, information can spread far more quickly online. So while the chances of encountering misinformation in (say) a newspaper have always been high, they might be higher still now. The same issues that have always existed still do, but there are new nuances in how they interact, I think. Do you?
This quick post was inspired in part by a Tweet from Àlex Kippelboy (@Kippelboy) a couple of days ago, that “If cheap journalism keeps going, I imagine the day newspapers won’t be accepted anymore as a reliable source on Wikipedia.” So I was thinking about source reliability and fact-checking, and the changing dynamics of news/information. Willa, where do you go to check your facts? Who do you regard as authoritative?
The concept of information literacy is one that the modern library world takes very seriously. It is not so much the raw information as its source that can at least be subjected to filters; i.e. is it a private website, .org or blog, a commercial .com website, or a government .gov site? Separating opinion from information is the hard part. For basic information enquiries, trusted websites can always be found through your local library website.
I am teaching a course this semester titled “Wikipedia as a scholarly research tool” because I really think Wikipedia can be used in that way. I became completely convinced of this when I was preparing the first assignment for my Museum Practices seminar, which deals with the ethics of repatriation, using the Elgin Marbles as a case study. I could not find a better single source to introduce the subject in a concise and holistic manner than the Wikipedia entry: http://en.wikipedia.org/wiki/Elgin_Marbles With 74 scholarly and popular references and lots of links, the Wikipedia entry seemed then, and I would argue remains today, an excellent starting point for students to explore multiple perspectives on the discussion. Is all of the information accurate and worthy of trust? Obviously, the British Museum links are telling their story, and so forth, but the Wiki entry is a fantastic intro to the subject.
What I enjoy about the availability of information today is that the development of critical thinking skills becomes more important. I am dismayed that sometimes in the classroom, students don’t want to apply this level of engagement with the material. Rather, the singular “correct” answer is sought.
As someone now in their 60s I recollect well the archaeological “Bibles” that I was meant to memorize as a student. I will never forget my M.A. advisor reviewing one of my papers, leaning back in his chair, exhaling smoke from his cigarette, and saying “Well now, here you’re taking on Jimmy Griffin, who wrote the book . . . ” – and the fact is, I got to know James B. Griffin later on, and the guy wanted to be taken on, not deified.
As others have commented, the trust must come through time and exposure. I will suggest, however, that .gov, .edu, .com, or .org are really not good indicators of where trust should be placed. Governments and educational institutions seem as quick to put their spin on a story as the “Fair and Balanced” Fox News or the Colbert Report!
Robert, the course on Wikipedia sounds really interesting. What kind of content do you cover in it?
And I agree. Critical thinking skills, and the skepticism and humility that Matt mentions below, seem so important now. Has that changed your teaching at all?
I am not completely certain how the Wikipedia course is going to end up. As a one-hour undergraduate honors course, and a first-time offering, it will be rather loose. I will base the content in large part on the experience of the students when they walk into the class. Minimally, we will thoroughly understand all things wiki, create pages, have an informal edit-a-thon, explore groups such as GLAM, and so forth – and, very importantly, learn how to use Wikipedia for research. The Wiki educators portal actually has a good bit of info on possibilities.
re Matt’s comments below, I find that I am getting more adamant that students not come to class looking for singular answers, but an understanding of the question that is being asked as well. The classic back and forth I have had with students is the “right” number of words for an exhibit label. One can find authoritative resources for not more than 50 and as many as 200. But what is the application and question?
A long time ago I read in David Hurst Thomas’ Ten Commandments of Statistics: “Thou shalt not worship the 95% Confidence Interval.” To me that is an example of a starting point. Particularly in the fields of social science and humanities in which I work, what really is the difference between 95% and 88%? Both are pretty damn significant, and either could likely be flipped, or biased a considerable amount, by the paradigm in which the research is operating.
So yes, there is a ton of stuff out there, but a good bit is completely spurious to the question being asked. I find a disturbing trend of simply throwing data at the wall to see what will stick. I recently had a back and forth on FB with someone about Stand Your Ground legislation and the Martin killing in Florida. The individual kept citing data to support his contention that Stand Your Ground actually benefited blacks in the States more than whites. However, when I looked at his data, it was consistently spurious to the question and actually better supported my argument that whites benefited from Stand Your Ground legislation at a disproportionate rate. After the fellow provided links to several studies, and I argued that the data in those studies either did not address the question or actually supported my contention, he just went away and ended the dialogue. This seems to point to a trend I see in the States where data/facts are grabbed and cited without an understanding of their relevance to the situation, regardless of whether they are “true” or not.
I read somewhere that a rabbinic practice in the Jewish tradition was that when you cite a line of scripture to support your argument, you must also cite all of the lines of scripture that counter your argument. Whether true or not, I think it’s a good practice. One of my favorite trick questions is: in the Abrahamic tradition, does scripture say “they shall beat their swords into plowshares, and their spears into pruning hooks” or “they shall beat their plowshares into swords, and their pruning hooks into spears”? The answer is that both are right, depending on the book you are reading – Micah, Joel, or Isaiah – which has to do with whether the reference dates to pre- or post-Babylonian exile, etc. – but you get the idea.
It’s not the answer that is so important as is the question.
Related/unrelated: there is an interesting post on taking the quest beyond Google searches and actually applying it to questions:
http://historytech.wordpress.com/2013/08/09/if-they-can-google-it-why-do-they-need-you/
Fact-checking is getting harder. There’s a lot of bad research and bad science out there that can only be distinguished from the good stuff by taking the time to read their methodology and really pick it apart. I can’t do that for most things. There are only so many hours in the day. In the end, it boils down to trust, but that is fraught with its own perils.
Simply citing your sources doesn’t really prove anything except that you know how to make things “look” credible. Unless the reader actually checks those sources, vigorously and not just superficially, what does it really mean to cite them? Not that this is a new problem, we’re just bombarded with so much more of it than we used to be.
I often wonder if the reason I trust a particular source or site is simply because I like the voice they speak to me with. I frequently worry that my logical processes are barely involved in this at all anymore. Since I know I can’t personally verify most of the information that’s flung in my direction, I must be trusting some kind of heuristic (and heuristics are always flawed) to determine what does or does not make sense to me. I don’t think anybody else has enough time to really verify everything either. Look at how easy it is for false information to spread online rapidly and with the air of real authority around it. And when it comes to evaluating a source’s biases, what of my own?
Knowledge isn’t really objective (even when facts are). I think the best approach is one of constant skepticism and humility, the acceptance that you will inevitably get it wrong sometimes, and the honesty with one’s own “audience” that you can and will.
Matt, my feelings on this issue are much closer to your comment than to some of the others. I often read something online that sets my spidey senses tingling, but I don’t necessarily know why it’s done so. When the piece that’s caught my attention is on a subject outside my area, I might try to triangulate the information with other sources, but depending on what it is I can find it hard to sort legitimate or likely information from the ‘other stuff’. Fact-checking is getting harder. And this is just the stuff that catches my attention.
I don’t know that I am equipped to judge validity of information on the sheer number of different topics I come across online and, like you, I certainly don’t have the (or take the) time to do an in-depth analysis of everything. While I endeavour to be selective and use my discretionary judgement – particularly when looking for academic or professional sources – I too think that I am more likely to trust particular authorial voices. But as you say, this can be hugely subjective, and is often quite intuitive. It isn’t necessarily rational, but can be based on something that seems to ‘make sense’ or be plausible, based on my existing understanding of the world.
And just look at the results when it goes wrong; when poor discretionary judgement meets speed and the capacity for easy sharing. (Such as discussed in this NYTimes piece on Reddit’s possible culpability in a suicide a few months ago http://www.nytimes.com/2013/07/28/magazine/should-reddit-be-blamed-for-the-spreading-of-a-smear.html?pagewanted=all&_r=0)
I disagree a bit with Rich and Bruce – the monetary barriers to creating (apparent) “authority” are lower online – you don’t have to have a building, a printed book, or show up in person to give the appearance of an authoritative source. You don’t need real people to vouch for you online; you can invent them and then link across in support of your content. Malicious fabrication is easier and cheaper than before.
There’s also the problem in a world of web-scraping bots that “bad” information gets instantly copied all over the place, so you can no longer rely on the (admittedly crude) tool of “Do *lots* of people say this?” Poor or misleading information gets copied around as much as good. That was much less true when copying involved human effort, time, and expense.
As an example, I run a sort of IMDB of art history at http://www.the-athenaeum.org. Sometimes I get an e-mail that thus-and-such painting (uploaded by a member, and who knows where they got their information) is incorrectly attributed, has the wrong title, etc. When I go to verify online, more often than not I find the search results dominated by copies *web-scraped from our own original incorrect record*. Here I am trying to correct a fact, and 99% of what I find is copies of our bad data. Literally hundreds of sites sometimes.
I think “trust” still has to be earned over time, and backed up by something concrete at the end of a chain of references, and we just don’t have the tools to automate that (yet!). It’s an argument for the continuation of institutions full of specialised people, whether they be academics, librarians, museum professionals, or journalists.
Chris, your experiences with the bad data on http://www.the-athenaeum.org are a really good example of why I do think this is more complicated than Bruce and Rich both suggest. The fact that something appears in lots of places does often seem to lend it credibility – rightly or wrongly – when you are trying to see whether something is true or not. It’s that idea of triangulating your information from multiple sources. But even when you know the subject area well, it is possible to be fooled (as the Sokal Affair demonstrates http://bigthink.com/neurobonkers/why-the-number-29013-will-go-down-in-the-history-of-bad-science), much less when you aren’t an expert. And as you say, malicious fabrication is easy. So is non-malicious fabrication: the sort of exploration of a subject done out of eagerness but not expertise.
Robert’s point above is also really important and interesting: relating the right data to the right question is hard and may be critical. Of course, how that is defined or articulated becomes a really challenging thing as well. I am sure that sometimes data used to make a point is not directly relevant, but can still lead to a critical insight or a breakthrough in research or an argument. But who is to define when and how the use of data is appropriate? It’s all quite complex.
PS – sorry that this wasn’t approved straight away. It somehow ended up in the spam folder, and I only just found it.
Suse,
I wrote about something similar not long ago. There is a lot of clutter out there on the ‘Net trying to masquerade as expertise. Much of it is anonymous, and the sources can’t be vetted.
Museums, as experts in their field, have an opportunity to stand out and provide deep, reputable knowledge:
http://www.pleinairinteractive.com/blog/2013/04/09/curating-clutter/
A good book on the nature of the Web, and what it’s done to the quality of information, is The Cult of the Amateur:
Hmmm . . . Andrew Keen . . . don’t hear much about him anymore. I have a far different take on his Cult of the Amateur book: http://wp.me/pJf2X-Aq
I just finished a lecture this morning in the Honors Forum that discussed how bad scholarship is not a result of the media type, but of the scholarship itself. I used as examples erroneous archaeological data published in the Smithsonian Contributions to Anthropology that is more accurately reported in Wikipedia. So it goes.