Two pieces caught my eye this morning, as I was trying to make sense of the strange world we all now inhabit. The first was by Douglas Murray, on the subject of Big Tech and its encroachments into the realm of censorship. I'm a fan of Murray's. He is a rare example of a clear-eyed thinker in the complex and emotionally charged times in which we live. In his piece - published here by the equally excellent UnHerd - he argues that the recent forays of Big Tech towards censoring opinions on their platforms that they find unacceptable set a dangerous precedent.
His point of departure is YouTube's decision, this week, to axe the channel of the British station TalkRadio, which had featured a number of "lockdown-sceptics" on its shows, thereby (supposedly) contradicting the "approved" government narrative on Covid. Murray goes further, however - as one might expect. The main tech platforms, he argues, such as Facebook, Google or Twitter, wield a power that far outweighs that of any publisher the world has ever seen, exercising "more control over information than any group of people in history". With great power, of course, comes great responsibility, but one has to wonder if the Googles of this world are up to this complex task. Is it even possible for a tech platform to keep a check on the content that is being published on its pages? And, if so, is it ethically justifiable? I hate conspiracy theorists as much as any sane individual, but the idea of censoring ideas and discussion sits uncomfortably with me. And, on a deeper level, one must ask: who polices the internet police? Who should decide what is "unacceptable", and what isn't? A Google intern? Nick Clegg?
Of course, the counter-argument is that those platforms are businesses, not a universal human right, and they have the power to decide what they publish and what they don't. But, given their enormous power, I wonder if that defence can be allowed to stand unchallenged. For those who would like to dismiss such concerns as merely peripheral, a glance at the coverage of the Hunter Biden story just prior to the US election last autumn might serve as a healthy corrective. I'm no fan of the oafish soon-to-be-ex-President Trump, but the active media and tech censorship of a news story that might have damaged his opponent stank to high heaven, and is not the sort of thing that one would expect to see in a mature democracy.
Murray's conclusion to this conundrum, unsurprisingly, is freedom: that the debate should be as wide as possible. My argument on this has long been that crazy ideas should not be censored, and thereby driven underground to bask in the furtive glamour of the banned shadows, but instead should be exposed to debate, and if necessary ridicule. Isn't that the way that knowledge progresses - through ditching the bad ideas and adopting the good? Look at the example of Copernicus. Isn't "heresy" sometimes a healthy corrective?
But, then, as I read Bellingcat's investigation of the life arc of Ashli Babbitt - the protestor shot and killed in the storming of the Capitol in Washington this week - I checked my libertarian impulses. The article - which is here - charts the political journey of a young woman, a 35-year-old Air Force veteran, from an Obama voter in 2012 to a pro-Trump protestor who, in 2021, paid for her protest with her life. By looking back at Babbitt's own Twitter postings, Bellingcat recreate her "journey": from her rejection of Hillary Clinton in 2016, to her posting of more explicitly anti-establishment messages in 2019, to the rabbit-hole of conspiracy theories purveyed by the likes of QAnon. "Nothing can stop us," she wrote in her last Twitter post. "The storm is here."
It's hard to know what role Babbitt's environment, family and friends may have played in her radicalisation. Did they share her views? Challenge them? Or maybe just roll their eyes and move the conversation on? Perhaps time will tell, perhaps not. But that journey is clear to see on her Twitter feed, and it's hard not to conclude that it was primarily online - in a host of echo-chambers - that the ideas that spurred her on, the conspiracy theories about the stolen election, were amplified.
Does this not change things? That an ordinary American can be so moved by conspiracist nonsense that she would storm her own parliament building and lose her life in the process? Shouldn't that make us pause? I don't know the answer, and - of course - as soon as we go down the road of censorship, all the ethical questions raised above very quickly apply. But, given the proliferation of online conspiracy theories - and of the platforms that share and amplify them - one has to wonder if the old idea of "publish and be damned" is still fit for purpose.