It would be easy to dismiss Elon Musk’s recent purchase of Twitter as just the latest whim of a notoriously mercurial billionaire. In fact, its significance is far greater.
The prospect of a Musk-owned Twitter has reignited the long-simmering debate over free speech and censorship online. And Musk, whose irreverent, scatter-gun use of Twitter is key to his public profile, was clear about which side of the debate he’s on. Immediately following the purchase, he tweeted that “free speech is the bedrock of a functioning democracy.”
What Musk will actually do as the owner of Twitter is unclear. Nevertheless, his commitment to limiting moderation on the platform has drawn polarised responses. While it’s easy to find both supporters and critics of his approach, others have stressed that trying to promote free speech on social media is easier said than done.
But the implications of this debate go further than whether Twitter can truly become the “digital town square” that Musk envisions it to be. In this blog, we’ll look at how the problem of moderation and censorship online is less about what content is deemed acceptable and more about who gets to decide. And this means that it’s a question of whether the web will continue to be dominated by a small number of highly centralised platforms, or whether a different future is possible.
As we’ll see, the proponents of Web3 are not just convinced that it’s possible – they’re working to bring it about.
Social media’s free speech problem
While Musk’s vocal commitment to free speech might be partly self-serving, he’s certainly not alone in arguing that social media platforms have become unduly prone to censorship. The ongoing controversies over “cancel culture” reflect the growing belief, particularly on the right, that social media platforms are unilaterally constraining free speech for manifestly political reasons. In a post made just days after he bought the platform, Musk seemed to echo these views.
For Musk, the choice facing Twitter is clear: free speech or censorship. And the truly democratic solution is unambiguous – the former wins every time. Musk even ran a Twitter poll to prove it.
But the reality is more complex than this simple opposition. Even Musk’s subsequent clarification that by “free speech” he means “that which matches the law” doesn’t help much. Laws vary between jurisdictions, which is a big problem for a global platform. What is more, laws around speech are notoriously tricky to apply in practice – especially when you’re dealing with hundreds of millions of posts per day.
Given this, it should come as no surprise that it’s not just Musk and his fellow free speech proponents who think social media companies are failing to fulfil their public responsibilities. In 2018, the term “techlash” gained currency as a way to describe the increasingly popular view that, far from becoming too censorious, tech companies were doing too little to protect users.
For some, social media is not an artificially created “safe space” lacking in countervailing views; instead, it’s a haven for hate speech, conspiracy theories, and harmful content. From the spread of vaccine misinformation to the riot at the US Capitol, the past year has provided significant support for this view – yet opponents would point to Donald Trump’s Twitter ban as yet another case of the policing of political speech online.
Amidst the complexity of the issues facing social media companies today, one fact must be acknowledged: nobody is happy. Facebook, Twitter, and others are accused both of doing too little to combat hate speech and misinformation, and of acting as politicised tools of a widespread “cancel culture” that is damaging free speech.
When debates become so intractably polarised, it’s often worth taking a step back and looking at some of the assumptions that both sides make. As we’ll see, not only is the debate over free speech poorly framed, but it also fails to see beyond the narrow limits of the Web2 era.
Content moderation: An impossible task
The fundamental issue that the free speech debate tends to obscure is quite simple: every platform needs some form of moderation.
While this might seem controversial, on closer inspection, it’s inarguable. Even free speech maximalists will agree that no “digital town square” could survive an endless onslaught of spam. And once it’s been decided that some content should be disallowed, the choice is no longer a binary one between free speech and moderation. Instead, it’s about where and how you draw the line. And this, of course, is where things get complicated.
For most proponents of free speech, it’s not a question of being able to say anything at all. Instead, they’re concerned about the freedom to share contrasting viewpoints and engage in open debate. They inveigh against “cancel culture”, arguing that it has profoundly damaged such debate by positioning opposing viewpoints as offensive or harmful. Musk himself frames the issue in these exact terms, noting that he hopes even his “worst critics” stay on the platform because hearing dissenting views is what free speech means.
But this way of presenting things sidesteps the problem that spam brings to the fore: namely, that some types of speech can actively undermine open debate. In essence, spam is just an excess of speech that drowns out other voices. Allowing harassment and hate speech to spread unchecked will serve the same purpose. By driving people away from the platform or making them refrain from sharing their beliefs, it will limit the number of viewpoints that can be heard. Given this, as Mike Masnick has argued, content moderation can support free speech rather than limit it.
But it’s important not to undersell the challenges here. Masnick himself has proposed what he calls Masnick’s Impossibility Theorem, which simply states that “content moderation at scale is impossible to do well.” Definitions of what constitutes harassment and hate speech are hard to agree upon, even in a legal context, and the reliance on automation exacerbates the problem. Manual moderation couldn’t possibly manage the traffic of a platform like Twitter, but having moderation rules applied by automated analysis systems inevitably leads to huge numbers of false positives: posts taken down when they shouldn’t be, and users banned without justification. This can lead to a mistaken perception that certain kinds of speech are unfairly targeted.
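To see why automated moderation generates false positives, consider a toy sketch (not any real platform’s system) of the crudest possible approach: a banned-substring filter. The blocklist and posts below are invented for illustration, but the failure mode is real – innocent posts get caught because the rule can’t see context.

```python
# Toy illustration of automated moderation's false-positive problem:
# a naive banned-substring filter flags innocent posts alongside
# genuinely abusive ones. The blocklist here is hypothetical.

BANNED_SUBSTRINGS = ["ass", "hell"]  # illustrative blocklist

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any banned substring anywhere."""
    text = post.lower()
    return any(bad in text for bad in BANNED_SUBSTRINGS)

posts = [
    "You ass, get off my feed!",        # intended target: flagged
    "Check out my classic car photos",  # innocent, but "classic" contains "ass"
    "Hello from sunny Brighton",        # innocent, but "Hello" contains "hell"
]

for p in posts:
    print(is_flagged(p), "-", p)
```

All three posts are flagged, two of them wrongly. Real systems are far more sophisticated than this, but the underlying trade-off between scale and context-sensitivity persists.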
Indeed, this is a complicated issue to navigate. But the question of how to moderate content is perhaps no easier than the question of whether to moderate in the first place. Rather than trying to decide who’s in the right in the ongoing debate over moderation, it’s worth noting what both sides share: fundamental powerlessness. Nobody on either side of the debate has a say in the decisions that are ultimately made on the biggest platforms – unless, of course, they can afford to buy one. And whether you would prefer more moderation or less, this should be a cause for concern.
So, what is the alternative?
Safeguarding free speech in Web3
As we’ve discussed previously, the Web2 era can be best understood as the era of platforms. Platforms act as intermediaries, their main value being to connect people. As a result, they rely heavily on network effects, with their value being proportional to the number of people using them. For the same reason, they also tend toward centralisation, seeking to maximise their user base at the expense of competitors. They build proprietary algorithms, implement addictive design choices, and limit direct interaction with other platforms, all in order to attract and retain users. They are walled gardens.
It should not be overlooked how much the free speech debate is shaped by this reality. Defenders of content moderation note that social media platforms are private companies and have no obligation to allow certain types of content; opponents, like Musk, argue that their outsized influence means they are responsible for avoiding censorship. The former argue that platforms should refine and strengthen moderation, the latter that they should limit it or remove it entirely. In both cases, they tend not to ask whether the owners and operators of highly centralised platforms should have this power in the first place.
The emergence of Web3 raises the possibility of a radical alternative. At the core of Web3 is the prospect of breaking big tech’s stranglehold over online activity – and this includes decisions about what content can and cannot be shared. This democratising potential is a core feature of blockchain technology. Public blockchains are permissionless and transparent. By design, they eliminate the position of centralised gatekeeper, opening up the potential for collective, bottom-up decision-making.
Though steps in this direction are in the earliest stages, the Web3 ecosystem will offer a range of significant alternatives to centralised moderation. These include:
- Governance tokens to facilitate collective moderation. Large-scale moderation decisions can be made collectively using on-chain governance. Tokens determining voting rights can be distributed based on activity or engagement, allowing users to shape how the platform develops.
- Reputation-based systems to incentivise positive contributions. By tying token rewards to high-quality contributions, pseudonymity can be maintained while trolling and harassment are discouraged, leveraging the inherently public nature of blockchain-based systems.
- Data portability and interoperability to prevent user lock-in. Maximising the ability of users to navigate between platforms means that they don’t need to tolerate content moderation practices they don’t agree with. Instead, they can easily move to a platform that better aligns with their preferences, taking their data with them.
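The first of these mechanisms – token-weighted collective moderation – can be sketched in a few lines. The structure below is a hypothetical illustration, not any specific protocol’s design: token balances, proposal names, and the simple-majority rule are all assumptions made for clarity. Real on-chain governance systems typically add quorums, vote delegation, and time locks.

```python
# Hypothetical sketch of token-weighted collective moderation:
# holders vote on a moderation proposal, with voting power
# proportional to the governance tokens they hold.

from dataclasses import dataclass, field

@dataclass
class ModerationProposal:
    description: str
    votes_for: int = 0       # token-weighted tally in favour
    votes_against: int = 0   # token-weighted tally against

@dataclass
class GovernanceVote:
    balances: dict                    # address -> tokens held
    proposal: ModerationProposal
    voted: set = field(default_factory=set)

    def cast(self, voter: str, support: bool) -> None:
        """Record one vote per address, weighted by token balance."""
        if voter in self.voted:
            raise ValueError("address has already voted")
        weight = self.balances.get(voter, 0)
        if support:
            self.proposal.votes_for += weight
        else:
            self.proposal.votes_against += weight
        self.voted.add(voter)

    def passed(self) -> bool:
        # Simple majority of tokens cast; real systems add quorums etc.
        return self.proposal.votes_for > self.proposal.votes_against

# Illustrative balances and proposal
balances = {"alice": 100, "bob": 40, "carol": 10}
vote = GovernanceVote(balances, ModerationProposal("Restrict spam bots"))
vote.cast("alice", True)
vote.cast("bob", False)
vote.cast("carol", True)
print(vote.passed())  # the token-weighted majority decides
```

The key difference from centralised moderation is visible even in this sketch: the outcome is determined by the distribution of tokens among users, not by a decision handed down from a platform owner.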
Of course, there is no simple solution to such a complex problem, and there will be many challenges to face. It’s been noted, for instance, how the very immutability of the blockchain could cause problems when it comes to harassment and the sharing of harmful content. Nevertheless, these and other Web3 innovations will allow us to fundamentally reframe the debate about free speech online.
Instead of relying on individual platforms to align their decisions with our own preferences, a vibrant and user-controlled ecosystem of social DApps would make the web itself the “digital town square” that Musk is seeking. And all without needing the good graces of a single billionaire to defend it.
Help Cudos build the infrastructure for Web3
As exciting as the prospect of a truly open and decentralised future for social media may be, it will require a significant number of technical innovations to make it possible. And this is where Cudos comes in. We are committed to ensuring that Web3 is truly decentralised, which is why our blockchain network is fully interoperable.
What is more, we’re working on an ambitious project to provide Web3 with an open and decentralised source of cloud computing. Social media platforms must process huge amounts of data to function smoothly and efficiently, but continuing to rely on centralised cloud providers like AWS and Google would undermine any ambition toward true decentralisation.
If you’d like to support our push toward a decentralised future, there are many ways you can get involved. We recently launched the alpha version of Cudo Compute, and we’re offering the chance to win a £1000 Cudo Compute voucher and a £150 Amazon voucher for those who take part in our survey.
Cudos is powering the metaverse, bringing together DeFi, NFTs, and gaming experiences to realise the vision of a decentralised Web3 and enabling all users to benefit from the growth of the network. We’re an interoperable, open platform launchpad that will provide the infrastructure required to meet the 1000x higher computing needs for the creation of fully immersive, gamified digital realities. Cudos is a Layer 1 blockchain and Layer 2 community-governed compute network, designed to ensure decentralised, permissionless access to high-performance computing at scale. Our native utility token CUDOS is the lifeblood of our network and offers an attractive annual yield and liquidity for stakers and holders.