(by Eileen Dombrowski) Who needs critical thinking when we have Google? A team of computer scientists working for Google has proposed an improvement on what comes up when we enter our terms in its search window. They suggest a method of calculating a “trustworthiness score” for webpages based on their factual content: “We call the trustworthiness score we computed Knowledge-Based Trust (KBT).” An avid Googler myself, I am awash with both admiration and amusement. What would our students, many of them also consummate Googlers in the face of essay assignments, make of the knowledge questions that instantly arise about the nature of facts, truth, and reliable sources?
Reliability: consistency of knowledge claims within the Knowledge Vault
The proposed system, explained briefly in New Scientist (“Google wants to rank websites based on facts not links”), would push a web source up the Google rankings no longer by counting the number of links to it within the interconnected Web, but by internally checking its facts and bumping down the “trustworthiness score” of those pages whose information contradicts what is otherwise established across the web.
But how can Google rate the factual accuracy of the information and knowledge claims of websites? Google doesn’t do independent research, nor does it have a method of establishing a Reliable Source independent of the knowledge claims repeated throughout the internet.
It can only check inside the web, within what it calls its Knowledge Vault, an immense compilation of information extracted from the internet by bots and algorithms. And it is this data store that the proposed software will use to establish consistency between sources of information. It will establish trustworthiness through coherence – that is, internal consistency and freedom from contradiction. (See the coherence check for truth, IB TOK Course Companion, chapter 3.)
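The coherence idea can be made concrete with a toy sketch. This is my own illustration, with invented sites and “facts”, and nothing like Google’s actual extraction pipeline: it simply scores each source by how often its claims agree with the majority view across all sources making a claim about the same subject.

```python
# Toy coherence check (illustrative only, not Google's algorithm):
# a source's score = the fraction of its claims that match the
# majority value asserted for the same subject across all sources.
from collections import Counter

# Hypothetical extracted "facts": (source, subject, value) triples.
claims = [
    ("site-a", "capital_of_france", "Paris"),
    ("site-b", "capital_of_france", "Paris"),
    ("site-c", "capital_of_france", "Lyon"),
    ("site-a", "boiling_point_c", "100"),
    ("site-c", "boiling_point_c", "100"),
]

def coherence_scores(claims):
    # Find the majority value for each subject.
    by_subject = {}
    for _, subj, val in claims:
        by_subject.setdefault(subj, Counter())[val] += 1
    majority = {s: c.most_common(1)[0][0] for s, c in by_subject.items()}
    # Score each source by agreement with those majority values.
    totals, agree = Counter(), Counter()
    for src, subj, val in claims:
        totals[src] += 1
        if val == majority[subj]:
            agree[src] += 1
    return {src: agree[src] / totals[src] for src in totals}

print(coherence_scores(claims))
# → {'site-a': 1.0, 'site-b': 1.0, 'site-c': 0.5}
```

Notice what this little model cannot do: if most sites repeat the same error, the majority value wins, and the dissenting accurate source is penalized. That is exactly the worry the knowledge questions below press on.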
To what extent will you grant Google your own KBT (Knowledge-Based Trust)?
We can imagine the vast compilation of electronic information using many different metaphors: the web, the Vault, or a self-contained bubble, for instance. Perhaps the metaphor we choose might affect what knowledge questions we pose about the Google measure of trustworthiness – and using the metaphor of “an echo chamber” ought to provoke a few!
I suggest sharing the short article from New Scientist with your students to find out what questions they might want to ask before they grant Google their own KBT (Knowledge-Based Trust). I’ll share a few knowledge questions of my own – ones I consider extremely important as, like so many students, I hunt down information on the web. (Google is one of my most active verbs.)
A few knowledge questions — on coherence, reliable sources, and circular reasoning
First, I’m inclined to step back from coherence as a way of establishing truth:
- If all sources agree on information, is it true?
- If the information provided by a minority of sources contradicts the information provided by the majority, is it false?
- Is the probability of accuracy of a source reliably measured by the degree of accord with other sources?
- What is a “fact”?
As students could well point out, we often use coherence as a way of checking the truth of knowledge claims, not just in everyday knowledge but also in the methodologies of our areas of knowledge. Mathematical proofs aim for freedom from internal contradiction and perfect logical consistency within bodies of statements. History, working with very different subject matter, still uses consistency as a test for truth: it aims, as one measure of accuracy, to find accord between documents to establish what really happened in the past. (Historians, of course, expect to find human inconsistencies and all the variability of our limitations and perspectives.)
Given that internal consistency is a widely used, and often appropriate, check for truth, we might be pushed to tackle a few more knowledge questions as we withhold from Google, for the moment, our KBT. These questions close in more tightly on reliable sources, and are ones to which our students have assuredly already given some thought:
- Are all sources of information equally reliable? Why or why not? Do all websites contribute equally to accurate understanding of an issue?
- What is an “expert”? To what extent do our expectations of an expert source vary according to the area of expertise?
- If a source is expert on one topic, is that same source necessarily reliable on unrelated topics?
As students are likely to point out, we should choose whom we ask depending on the topic, and we should weight our trust according to the knowledge of our sources. If we want to know whether a medical procedure is safe, we should turn to doctors, not to a democratic vote of our friends. We’d be wise, similarly, to listen to the consensus of climate scientists regarding climate change rather than looking for the most frequent assertions in the media.
(Or…at least we probably acknowledge that we should trust appropriate experts. In practice, according to cognitive scientists, we are appallingly inclined to be convinced by anecdotal evidence – including the stories of our friends! And we are also sadly subject to confirmation bias, filtering information to accept only what confirms what we want to believe.)
The Google research team has proposed a way of dealing with this problem of numerous unreliable sources. They’ll weight a source more heavily as reliable in their scoring system if it has a higher score for factual accuracy. Ah. Sounds like a good idea, Google. But if students don’t instantly raise knowledge questions about circular reasoning, you might want to prompt them to do so:
- To what extent can we evaluate the trustworthiness of a source by the trustworthiness of its information?
- To what extent can we evaluate the trustworthiness of information by the trustworthiness of its source?
- Can we simultaneously evaluate the quality of the source according to the accuracy of its information, and the accuracy of the information according to the quality of its source?
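There is, in fact, a standard way out of this circle in data-fusion research generally: estimate both quantities together, by iteration. The toy sketch below is my own illustration with invented sites and claims, not the Google team’s KBT estimator. It starts every source at equal trust, then alternately (1) picks the best-supported value for each subject, weighted by current source trust, and (2) re-scores each source by agreement with those values, until the estimates stabilize.

```python
# Toy fixed-point estimation of source trust and "true" values
# (illustrative only; the published KBT model is far more elaborate).

claims = {  # subject -> {source: asserted value}  (hypothetical data)
    "capital_of_france": {"site-a": "Paris", "site-b": "Paris", "site-c": "Lyon"},
    "boiling_point_c":   {"site-a": "100",   "site-c": "100"},
}

sources = {src for votes in claims.values() for src in votes}
trust = {src: 0.5 for src in sources}  # uninformative starting point

for _ in range(20):  # iterate toward a fixed point
    # (1) Pick, per subject, the value with the most trust behind it.
    best = {}
    for subj, votes in claims.items():
        weight = {}
        for src, val in votes.items():
            weight[val] = weight.get(val, 0.0) + trust[src]
        best[subj] = max(weight, key=weight.get)
    # (2) Re-estimate trust: fraction of a source's claims matching "best".
    for src in sources:
        hits = total = 0
        for subj, votes in claims.items():
            if src in votes:
                total += 1
                hits += (votes[src] == best[subj])
        trust[src] = hits / total

print(best)   # converges to Paris and 100 as the "true" values
print(trust)  # site-c, the lone dissenter on Paris, ends with lower trust
```

The circularity doesn’t vanish here; it is tamed by a starting assumption (all sources begin equally trusted) and by convergence. Whether the fixed point lands on truth or merely on the most mutually reinforcing story is precisely the knowledge question.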
Am I being unfair to Google?
I fear I may have missed something truly significant in this proposal to change the methods of a Google search. I certainly don’t understand all the supporting methodology – and I encourage you to try to read the equations and diagrams yourself. This is not my field! I would welcome response from someone more knowledgeable to comment on my comments and what I’ve missed.
Altogether, though, I’m left with appreciation of what Google aims to do in making this change. If it moves away from ranking websites on a search according to how many other sites refer to them with links, it moves away from ranking by popularity – popularity that can be gained for reasons that have nothing whatever to do with reliability. I’m concerned, though, that what it takes on in ranking for reliability is much more complex and may create trust – KBT! – that has a shaky foundation.
And so, despite my appreciation of Google’s goals, I’m also left with an even greater appreciation of what eludes data collection by bots and internal analysis by algorithms: that is, critical thinking and human judgment. We can be glad to have better information come up on a Google search, but we’ll still want to do the critical processing ourselves as we tackle a knowledge question that is extremely important within IB Theory of Knowledge and relevant to student research within the entire IB Diploma: How can we best determine the likelihood of a source being trustworthy?
Hal Hodson, “Google wants to rank websites based on facts not links”, New Scientist, February 25, 2015. http://www.newscientist.com/article/mg22530102.600-google-wants-to-rank-websites-based-on-facts-not-links.html#.VQD7qUJNy0v

Xin Luna Dong, Evgeniy Gabrilovich, Kevin Murphy, Van Dang, Wilko Horn, Camillo Lugaresi, Shaohua Sun, and Wei Zhang (Google Inc.), “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources”, February 12, 2015. Cornell University Library. http://arxiv.org/abs/1502.03519v1