(Eileen Dombrowski from OUP blog) Could the development in artificial intelligence dubbed “deepfakes” really “trigger social unrest, political controversy, international tensions” and “even lead to war”? Have our previous methods of telling fact from fiction been irremediably undermined? As teachers, we’re careening down new paths in evaluation of knowledge claims, trying to learn to steer in time to teach our students to drive!
Technology just got even more amazing, and our everyday critical thinking just got even more challenging. “Deepfakes” are not merely a mini-advance in the digital adjustment of images and videos. They are a development in machine learning: artificial intelligence learns and applies algorithms that enable users to replace elements of a video with others not part of the original. It is now possible for users to swap one person’s face with another’s, such as (in the technology’s early applications) replacing a porn performer’s face with a celebrity’s. It is now possible to create convincing videos of world leaders firmly saying things they did not say – in fact.
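For teachers or students curious about the mechanics, the face-swapping described above rests on a surprisingly simple idea: train one shared encoder together with a separate decoder per person, then feed person A’s video frames through person B’s decoder. The sketch below is a toy illustration of that structure only, assuming Python with NumPy; the random matrices stand in for trained neural networks, and nothing here produces a real image.

```python
# Toy sketch (NOT a working deepfake) of the shared-encoder /
# per-person-decoder architecture behind early deepfake face swaps.
# Random matrices stand in for trained neural networks.
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 16, 4                          # toy "image" and latent-code sizes

W_enc = rng.standard_normal((DIM, LATENT))   # shared encoder weights
W_dec_a = rng.standard_normal((LATENT, DIM)) # decoder "trained" on person A
W_dec_b = rng.standard_normal((LATENT, DIM)) # decoder "trained" on person B

def encode(face):
    """Shared encoder: compress a face to a small code (pose, expression)."""
    return face @ W_enc

def decode(code, W_dec):
    """Person-specific decoder: render a face from the latent code."""
    return code @ W_dec

frame_a = rng.standard_normal(DIM)           # one video frame of person A

recon_a = decode(encode(frame_a), W_dec_a)   # normal reconstruction of A
swapped = decode(encode(frame_a), W_dec_b)   # A's expression on "B's" face

print(recon_a.shape, swapped.shape)
```

The swap happens in the last step: because both decoders read the same latent code, routing A’s code through B’s decoder yields B’s face performing A’s expression — which is why the results can be so convincing once real networks are trained on enough footage.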
Your students will be quick, I’m sure, to imagine possible uses of this technology if directed against them or against others. Indeed, it’s been around for just long enough that they may have their own examples to offer, and may know that deepfake pornography has been banned from leading social media sites.
And you will be quick, I’m equally sure, to see the increased difficulties of distinguishing fact from fiction and evidence from fakery. On social media, the fog just thickened. Or, I should say, it just “deepened”: the term “deepfake” is a fusion of the deep learning of artificial intelligence and more familiar fakery.
Handy resources for class
To introduce this topic to class – or to respond to students who are introducing it already – one good explanation I’ve found is from about six months ago, from the British Broadcasting Corporation (BBC) programme Click: “Deepfakes and the technology behind it” is available on YouTube, and opens with an explanation of the new technology and a commentary on its significance. I recommend the first 7.5 minutes.
Click also makes short clips available on the BBC website, useful in class for being concise and effectively illustrated. In just 1 minute 33 seconds, Click demonstrates face-swapping software and raises issues of falsified representation, privacy and consent, and legality: “Deepfakes: the face-swapping software”.
Click also shows the creative use of such software in film: “War for the Planet of the Apes visual effects”. Whether applied to deception or to more innocent storytelling, machine learning for image swapping has become highly sophisticated – so we have to become highly aware.
For text-based explanation and commentary, a useful article from the Associated Press at the beginning of last month appeared in numerous Canadian and American news sources: “I never said that! High-tech deception of ‘deepfake’ videos”.
What NOT to do with “deepfakes” in class
Clearly, the whole topic of fake news and of technology for fakery is appallingly relevant to Theory of Knowledge, as we aim for critical thinking and evaluation of evidence. It shakes some of the guidance we gave fairly easily in the past. However, I hope that it reinforces some of our determination as educators NOT to fall into the fog. I have three major resolutions. You’re with me in this, right?
Resolution 1: not to teach defeat
We’ve just been handed a troubling development in fabricating evidence, and it seems that some commentators feel overwhelmed. But in TOK we’re not.
Of the two extreme reactions to finding complexity and difficulties oppressive, we’re certainly unlikely in TOK to have the first one: that is, to reject complexity in favour of easy answers and pat generalizations. The whole design of our course encourages us to engage with multiple perspectives and ambiguities. So, first, we acknowledge the problems – typically through posing questions about their impact on knowledge.
- How might deepfakes affect our knowledge of present realities and records of the recent past? What areas of social exchange particularly use video? What areas of knowledge use video as evidence?
- Is it only sense perception as a way of knowing that is affected by deepfakes? Or are other ways of knowing also affected?
- To what extent do deepfakes present genuinely new difficulties in assessing evidence?
- As media consumers, what adjustments should we make, if any, in our acceptance of circulated videos, and our further circulation of them?
- Do deepfakes play more strongly to our cognitive tendency toward confirmation bias than did the simpler fakes of the past? Or — given the predisposition to reinforce our past beliefs even with shoddy evidence — do they actually make any difference?
We’re not likely to have the other extreme reaction, either: that is, to treat conflicting versions as indistinguishably valid, or to entertain ambiguities to the point that we lose the boundaries of definitions and evidence. So, second, we don’t treat the problems as blurring truth beyond recognition! The job is tough. But we’re onto it!
To update our critical skills, we can work in partnership with other IB courses that are facing the same challenges in examining and developing student research skills. Our role is both reflective, in considering broad knowledge questions, and practical, in considering implications for the critical skills we need to teach:
- What role does awareness of a problem play in finding a solution?
- For a technological problem, to what extent do we seek technological solutions? Is it inevitable that we also need human judgment?
- To what extent do you think we will simply adapt to this latest technological simulation of reality, even when (or especially when) videos purport to show events with political implications? How do we corroborate or dismiss a video report at present?
- What counts as a reliable source? How do we know? Do “deepfakes” make the evaluation of the source even more significant than it was before? Does the nature of the video material circulated on social media — or indeed all material so circulated — make it increasingly important to value the quality journalism accessible to us?
We Theory of Knowledge teachers have an important role to play in education, conveying to our students a respect for truth as precious – precious for making sound personal decisions, for understanding other people and human interactions, and for creating reliable shared knowledge. In treating topics such as deepfakes, and other unprecedented deceptions that come up on a changing horizon, we also convey to our students the need to keep developing awareness and thinking skills in a world that doesn’t stay still.
Resolution 2: not to retreat from the controversies of the world
We haven’t seen the last of deepfakes, and we can see that they offer new challenges to the methodologies of our areas of knowledge, as they do to our everyday exchange of knowledge. But that’s nothing new for TOK. We have long recognized that the knowledge we deal with is totally entwined with the real world and its complex issues. It would be so much easier for us if there were a simpler sphere, solely academic and calm, into which we could retreat. But that mythical ivory tower – well, it has always been built upon the ground!
We recognize that building reliable knowledge is a continuing human enterprise, conducted in the real world, with all its frequent messiness and duplicity.
Resolution 3: not just to groan but also to cheer
But surely, we can also enjoy the advances in knowledge that mess up the way we’ve hitherto dealt with knowledge! In the arms race between deceptions and methods of detecting them, we need — what? More knowledge!
“Deepfakes” are impressive breakthroughs in technology – amazing, at least for today. From detached discussion to hands-on play with the technology, we have many different entry points for alerting our students to this recent development in videos circulated on their networks – and for marvelling with them over what it’s now possible to do.
We might also appreciate some of the creative uses of technology akin to what is used in deepfakes. It would be hard to resist the enthusiasm of Dan Lemmon, the Visual Effects Supervisor for War for the Planet of the Apes, who is interviewed in the clip from Click that I cited earlier. He is concerned only with the “creative challenge”: “How can we take our technical tools and bend them to tell this story? Or what can we invent or make up to be able to tell this story?” In my opinion, his comments on his work could equally apply to ours as teachers: “One of the things that’s so great about our job is not knowing what the next thing is, and that for us is the thing that’s so much fun.”
References
Ariel Bogle, “‘Deep fakes’: How to know what’s true in the fake-Obama video era”, ABC News (Australia), March 3, 2018. http://www.abc.net.au/news/science/2018-03-04/deep-fakes-and-obama-videos/9490614
“I never said that! High-tech deception of ‘deepfake’ videos” (Associated Press), Canadian Broadcasting Corporation (CBC), July 3, 2018. https://www.cbc.ca/news/technology/deepfake-politics-1.4731665
“The fight against ‘deepfake’ videos includes former U.S. ambassador to Russia Michael McFaul”, The Current, Canadian Broadcasting Corporation (CBC), July 20, 2018. Radio program (with transcript) available via live streaming or podcast. http://www.cbc.ca/radio/thecurrent/the-fight-against-deepfake-videos-includes-former-u-s-ambassador-to-russia-michael-mcfaul-1.4754674
Rashmee Roshan Lall, “Deepfake technology could create huge potential for social unrest and even trigger wars”, The National, July 31, 2018. https://www.thenational.ae/opinion/comment/deepfake-technology-could-create-huge-potential-for-social-unrest-and-even-trigger-wars-1.755842 (about The National: https://www.thenational.ae/about-us)