If it were not already apparent as a mob beset the US Capitol on January 6th – their anger inflamed by widespread, weaponised misinformation – then Frances Haugen’s recent revelations, shared with news organisations and lawmakers across the US and Europe, have made it almost impossible to deny: the major social media platforms, with Facebook first among them, are leading to increasingly dangerous and unpredictable social consequences.
What’s more, it is now fully evident that the companies responsible are unable – or, in some cases, unwilling – to tackle the social, political, and ethical problems that their platforms are generating. As Haugen herself put it, “I had such profound distress because I was seeing these things inside of Facebook, and I was certain it was not going to be fixed inside of Facebook.”
Of course, Facebook would like us to think otherwise. It, Twitter, and the other major platforms have spent the better part of a decade insisting that they are perfectly capable of regulating themselves – of fixing the problems they have created, of putting the genie back in the bottle. Yet their actions have offered little to encourage trust, and lawmakers are becoming increasingly unwilling to accept their assurances. Proposals for significant, far-reaching legislation to curb the reach and influence of the social media giants are being refined and debated by legislative bodies including the US Congress and the European Parliament. However, the form such legislation will take, and the impact it could have, remain subjects of intense debate.
Yet amid the ubiquitous news stories and think-pieces on how Facebook can best be curbed, one question is easily lost from view: is the choice between these two forms of regulation one we should be forced to make at all? That is, instead of asking whether governmental regulation is better than self-regulation, we should be asking whether our only option for controlling or mitigating the power of a small number of enormously influential, highly centralised organisations is to hope that we can trust another set of powerful, centralised organisations to do the job for us.
The latter question points us toward a way out of this impasse. The vision of a decentralised, open future for the web – a web that would require neither trusting in nor receiving permission from any centralised authority – is becoming increasingly prominent as the platforms that will enable it begin to emerge and establish themselves. If we can make this imagined future a reality, then the question of how to rein in the disastrous social consequences of centralised, monopolistic social media platforms will become moot.
In this post, we’ll discuss how Frances Haugen’s recent revelations have sounded the death knell for big tech’s attempts at self-regulation, and how the apparent alternative of greater government intervention may be a case of “out of the frying pan, into the fire”. Finally, we’ll consider how Cudos is establishing itself as a foundation for the emerging decentralised alternative increasingly referred to as Web3.
“The Facebook Files” and the end of transparency
On September 13th, the Wall Street Journal published the first in a series of articles it called “The Facebook Files”. The pieces were based on a cache of more than 10,000 internal documents shared by former Facebook employee Frances Haugen – documents that reveal just how much Facebook knows about the harm its platform does and how little the company is willing or able to do about it.
Unsurprisingly, the headlines were at best deeply troubling, at worst outright damning. Facebook, the WSJ reported, had secretly exempted VIP users from its rules, allowing them to post potentially harmful content with impunity. Facebook’s own research showed that Instagram was a toxic environment for teen girls, linked to increased body image issues and eating disorders. Changes to Facebook’s ranking algorithm designed to increase user engagement had resulted in the amplification of divisive and harmful content. And, finally, Facebook was being used to facilitate human trafficking and the incitement of ethnic violence across the developing world.
Of course, in some respects, these revelations are nothing new. Facebook has had its public image tarnished by a whole host of troubling revelations in recent years – with the Cambridge Analytica scandal being only the most visible – and other tech giants have not been immune to similar issues. By 2018, the sense that big tech was running rampant was so widespread that the OED included “techlash” on its shortlist for word of the year, defining the term as “a strong and widespread negative reaction to the growing power and influence of large technology companies, particularly those based in Silicon Valley.” That year, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey appeared before the US Senate to answer questions about their platforms, while Google was empty-chaired after it failed to send a representative – but not much ultimately changed.
Nevertheless, there is an essential difference with Haugen’s revelations. While public opinion has been gradually turning against Facebook over the past few years, the company has robustly defended itself, issuing a myriad of public statements seeking to downplay or defuse the growing sense that its platforms have a damaging impact on users and society more generally. Key to the importance of the documents Haugen exposed is that they make clear Facebook’s own internal findings have directly contradicted these statements.
In March 2021, for instance, Zuckerberg, Dorsey, and now-cooperative Google CEO Sundar Pichai testified before a US congressional hearing on their efforts to combat misinformation on the platforms they operate. During the hearing, Zuckerberg was asked about the potentially damaging effects of social media on children’s mental health. Zuckerberg responded: “I don’t think that the research is conclusive on that.” He went on to highlight the positive mental health benefits of using social media to connect with others.
Yet the documents released by Haugen indicate that, at the time, Facebook’s own internal research was painting a very different picture. A slide from a 2019 presentation by Facebook researchers stated their findings in stark terms: “We make body image issues worse for one in three teen girls.” Another slide read: “Teens blame Instagram for increases in the rate of anxiety and depression […] This reaction was unprompted and consistent across all groups.” In even more extreme cases, Facebook’s researchers found that 6% of US teens who reported struggling with suicidal thoughts directly attributed this to Instagram. Among British teens, the number was 13%.
Following Haugen’s revelations, we can see just how consistently unforthcoming Facebook has been about its own internal research – even when specifically asked to share it. In August this year, US senators asked Facebook to send over its internal findings on Instagram’s impact on children’s mental health. The company sent a six-page letter – but did not include any of its own research on the topic.
This raises a fundamental question: if Facebook is unwilling to be transparent about its platform’s effects, how can we expect it to take steps to limit or moderate them? And if it does claim to be taking such steps, how can we trust that they are being adequately implemented, if at all?
But this is not just a question of the secretive nature of Facebook’s practices and its unwillingness to accept external oversight – it also raises the question of how its business model disincentivises precisely the changes that might benefit users and address the harm the platform does.
Attention, algorithms, and the impediments to change
Following the initial revelations published by the Wall Street Journal, Haugen began a still-ongoing tour of the major legislative bodies currently considering new regulations for big tech. Significantly, much of her testimony before politicians in the US and Europe stressed that Facebook’s combination of public denial and internal reluctance to change was not simply a matter of objecting to external scrutiny – though that was undoubtedly part of it. It was also that meaningful change would undermine the company’s business model, with potentially fatal consequences.
Put differently, it’s not that Facebook simply didn’t want to help ensure its users were safe – it’s that doing so would call into question the way it generates revenue.
Core to the problems that Haugen identified at Facebook was its reliance on engagement-boosting algorithms that prioritise serving users content that keeps them scrolling, commenting, liking, and sharing. As has been increasingly stressed in recent years, the social media space is fundamentally an attention economy. That is, it operates by capturing the attention of its users and selling this attention to advertisers. Thus, maximising the time users spend on the platform is vital. The more time users spend scrolling, swiping, and engaging, the more adverts they will see, and the larger Facebook’s revenue.
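To make that relationship concrete, here is a back-of-envelope sketch in Python. Every figure in it – session length, ad load, revenue per impression – is a hypothetical placeholder rather than a real Facebook number; the only point it illustrates is that, under an attention-economy model, revenue scales roughly linearly with the time users spend on the platform.

```python
# Back-of-envelope model of attention-economy revenue.
# Every number below is a hypothetical placeholder, NOT a real Facebook figure;
# the point is only that revenue scales roughly linearly with time on platform.

daily_active_users = 1_900_000_000   # roughly the scale cited later in this post
minutes_per_session = 30             # assumed average session length
ads_per_minute = 1.5                 # assumed ad load
revenue_per_impression = 0.01        # assumed revenue per ad shown, in dollars

def daily_revenue(session_minutes: float) -> float:
    """Revenue as a simple linear function of time spent per session."""
    impressions = daily_active_users * session_minutes * ads_per_minute
    return impressions * revenue_per_impression

baseline = daily_revenue(minutes_per_session)
shorter = daily_revenue(minutes_per_session * 0.99)   # sessions 1% shorter

print(f"Baseline daily revenue:   ${baseline:,.0f}")
print(f"With 1% shorter sessions: ${shorter:,.0f}")
print(f"Revenue given up per day: ${baseline - shorter:,.0f}")
```

This is why, as we will see below, even a 1% reduction in session length reads internally as roughly a 1% cut in revenue.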
In this sense, social media is not much different from commercial television, which is similarly reliant on serving up viewer attention to advertisers during breaks in and between programmes. What is entirely new, however, is the range of complex – and often inscrutable – tools that social media platforms have at their disposal to capture attention and serve up exactly the right advertisement at exactly the right moment. The engagement-based ranking algorithms frequently criticised by Haugen are among the most valuable of these tools. But how do they work in practice?
Of course, Facebook is highly secretive about the precise nature of the algorithms it uses, making oversight even more difficult and understanding elusive. But, in simple terms, we know that Facebook uses a set of algorithms to populate users’ feeds with the content judged most likely to capture their attention and induce them to engage with it in turn. This judgement is based on vast swathes of data amassed both from the user’s previous behaviour on the platform and from various aspects of the content itself, including the earlier activities of the user who posted it and the reactions and comments it has already attracted.
Notably, such algorithms are agnostic about the nature of the content they are evaluating – they cannot distinguish between innocuous posts about healthy recipes and meal plans, on the one hand, and actively harmful posts promoting eating disorders on the other. They will simply evaluate how effective the content is at keeping users interacting with the platform. According to this elementary criterion, if a certain kind of content is “working”, the algorithm will try to serve up more of it. As former Google employee and tech campaigner Tristan Harris notes, the algorithm doesn’t know what “anorexia” means. It simply knows that in specific cases, content using this term promotes engagement on the part of a given user – and so it offers them more.
Just as these algorithms are unaware of the nature of the content they rank, they are similarly indifferent to the type of reaction it generates – interactions are construed as positive, regardless of their nature or source, because they correlate with time spent on the platform and the likelihood of engaging with adverts. Thus, an interaction provoked by anxiety, anger, or despair is just as valid and as valuable as one provoked by happiness or gratitude. In fact, as stronger and more intense reactions are liable to generate higher engagement, there is a tendency to prioritise the former over the latter.
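To illustrate the structural point being made here – and emphatically not Facebook’s actual, proprietary system – the following minimal Python sketch shows what a purely engagement-based ranker looks like. The fields, weights, and the user_affinity parameter are all invented for the example; what matters is what the code never looks at: the content of the post or the emotional register of the reactions it provokes.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of engagement-based ranking.
# This is NOT Facebook's algorithm; the fields and weights are invented for
# illustration. Note what is missing: nothing here inspects what a post says,
# or whether the reactions it attracts are driven by joy, anger, or despair.

@dataclass
class Post:
    text: str        # never interpreted by the ranker
    likes: int
    comments: int
    shares: int

def predicted_engagement(post: Post, user_affinity: float) -> float:
    """Score a post purely on signals that correlate with further interaction.

    user_affinity: an assumed 0-1 estimate of how often this user has
    interacted with similar posts in the past (from their behavioural history).
    """
    interaction_signal = (
        1.0 * post.likes
        + 3.0 * post.comments   # comments keep people on the platform longer
        + 5.0 * post.shares     # shares push the post into new feeds
    )
    return user_affinity * interaction_signal

def rank_feed(posts, user_affinity):
    # "Healthy recipes" and content promoting eating disorders are
    # indistinguishable here: only the interaction numbers matter.
    return sorted(posts, key=lambda p: predicted_engagement(p, user_affinity),
                  reverse=True)

feed = rank_feed(
    [Post("innocuous recipe post", likes=40, comments=5, shares=2),
     Post("harmful but provocative post", likes=40, comments=80, shares=30)],
    user_affinity=0.7,
)
print([p.text for p in feed])   # the provocative post ranks first
```

The flaw this sketch exposes is exactly the agnosticism Harris describes: nothing in the scoring path touches what a post says, so whatever provokes the strongest reactions – from whatever source – floats to the top.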
The result of this situation is an extremely powerful disincentive for Facebook to make significant changes to its platform, even if the current situation is leading to harm for users. Haugen noted in her testimony before MPs in the UK that suggestions such as “slowing down” the platform and adding “selective friction” to limit the spread of harmful material are not palatable because they would result in a loss of revenue. As Haugen puts it, “They don’t want to lose that growth. They don’t want 1% shorter sessions because that’s 1% less revenue. They’re not willing to sacrifice little slivers of profit.”
The result, in stark terms, is a platform that “amplifies polarising content” because “anger and hate is the easiest way to grow.” Given that this same platform can boast nearly 2 billion daily active users, the implications are deeply troubling.
The death knell for self-regulation
If it is becoming difficult to deny that platforms like Facebook are having an increasingly deleterious social impact – though Mark Zuckerberg, of course, continues to do his best on this score – the solution is perhaps less clear than ever.
Over the past few years, tech giants – Facebook foremost among them – have sought to head off the growing calls for increased regulation and oversight by promoting the idea that they are willing and able to regulate themselves. As recently as last year, Facebook launched its semi-independent Oversight Board. Zuckerberg described it as a kind of “supreme court” for Facebook, evaluating and overturning content moderation decisions and influencing future policy developments.
Yet such self-regulation practices have only served to highlight the lack of transparency that Haugen’s revelations brought so clearly to light. Indeed, in its first transparency report, released last month, Facebook’s Oversight Board criticised the company for not being “fully forthcoming” about one of its key programmes. Haugen, for her part, was even more damning: she claims that Facebook repeatedly lied to the Oversight Board and sought to “actively mislead” them.
Facebook is not alone in this combination of strong public commitments and private reluctance regarding self-regulation. Twitter CEO Jack Dorsey acknowledged publicly in 2018 that the platform had a problem with “toxic” content and committed to being more “transparent” about the decisions it made about how the platform operated. His remarks were surprisingly open and direct, but the consequences were not as far-reaching as his words might suggest. While Twitter made some adjustments to its platform over subsequent years – such as allowing users to limit who can reply to their tweets – progress has been slow. An external researcher brought in to help Twitter’s efforts noted that “[t]he impression I came away with from this experience is that [Twitter was] more sensitive to deflecting criticism than in solving the problem of harassment.”
As the issues generated by Facebook and Twitter grow in magnitude, the continually deferred promise of meaningful change from within becomes less convincing – and lawmakers are getting ready to step in. But is this truly the change we’ve been hoping for?
Centralised solutions to centralised problems
As we’ve seen, the contrast between Facebook’s public pronouncements and private practices – and recall that Zuckerberg boldly affirmed the company’s commitment to “providing the [oversight] board with the information and resources it needs to make informed decisions” – is at the heart of Haugen’s revelations. This general sense that Facebook simply cannot be trusted to be open and transparent about its operations has accelerated plans in many countries – particularly the US, UK and EU – to impose much more stringent external regulations on Facebook.
At present there are, in The Guardian’s words, a “slew of bills” being discussed by the US Congress, each taking a different approach to reining in Facebook’s power, and there is little agreement over how effective any of them will be. Commenting on a Democrat-supported bill that would hold companies responsible for amplifying harmful content, for instance, digital rights campaigner Evan Greer described it as “well-intentioned but […] a total mess”, arguing that it is “playing right into Facebook’s hands.”
Part of the challenge in crafting a bill to curb Facebook and the other tech giants is tied to the very issue that has provoked the current sense of urgency: their complete failure to be open and transparent about how they operate. As a recent article in The Atlantic put it, Facebook is a “black box” – an entity “whose inner workings are virtually unknowable to people on the outside.” While this remains the case, external authorities trying to design workable regulations will be at a pronounced, perhaps insurmountable, disadvantage. Ultimately, they will be trying to solve a problem whose scope and parameters they cannot fully see.
And this is, of course, assuming that legislation can successfully be passed in an increasingly polarised political environment – polarisation that, research suggests, has been driven by social media itself. While some bills before the US Congress have bipartisan support, translating this into the working majorities needed to pass them is easier said than done.
The EU and UK, meanwhile, are both working on ambitious, sweeping new pieces of legislation to police online content. Still, there is little consensus on how effective these will be, as well as significant concerns about unintended consequences. The free speech organisation Article 19, for instance, has sharply criticised the proposed UK bill, calling it a “deeply disquieting […] attempt at regulating the totality of human communications and interactions online.”
This final point reveals one of the fundamental problems with expecting governments to solve the problems of big tech – and with the assumption that we must choose between these two types of regulation. Ultimately, we are looking for centralised solutions to problems fundamentally caused by centralisation – that is, by the concentration of immense economic and social power in the hands of a small number of companies. Having this power offset, limited, or overseen by political institutions, even (nominally) democratic ones, is hardly a reliable solution – and certainly not one that matches the truly radical vision that inspired the web’s creation.
Imagining an alternative – a trustless, permissionless web
The World Wide Web Foundation, founded by Sir Tim Berners-Lee, offers an account of the “revolutionary” ideas that emerged from the early web community – ideas that those whose web experience has been confined to the era of the tech giants may find unrecognisable.
This includes the idea that the web should be decentralised: that we should build a web in which “no permission is needed from a central authority to post anything,” where “there is no central controlling node,” and where there is “freedom from indiscriminate censorship and surveillance.”
This certainly does not describe a world in which Facebook’s algorithm determines, in some mysterious and inscrutable way, what content does or does not appear on your news feed – but neither does it point to an alternative where governments, instead of companies, decide on these matters.
There is a growing recognition that the choice between self-regulation in the tech sector, on the one hand, and elaborate, externally imposed legal frameworks, on the other, is a false one. Consequently, an alternative has slowly begun to emerge and gain visibility under various names – with Web3 being the most recognisable of them at present.
Web3 is envisioned as a wholesale transformation of the infrastructure of the web. Built on blockchain technology, it would be a trustless, permissionless space in which no central authority – be it Facebook, Twitter, the US Congress, or the European Parliament – sets the terms of engagement or acts as a gatekeeper for the ways people can interact and the content they can share.
Such a vision remains in the early stages of implementation, of course – but the technologies and platforms that will underpin it are already in place and gaining traction. With the recent launch of phase two of our incentivised testnet, we are taking significant steps toward establishing a fully decentralised cloud computing platform. Without that kind of decentralised infrastructure, a range of widely touted Web3 developments – including the much-discussed prospect of the metaverse – will either remain unachievable or, more likely, be co-opted by precisely the same tech giants that set the terms for Web 2.0, albeit perhaps under new names.
Instead of deciding, then, whether we’d prefer self-regulation or government intervention, let’s continue to imagine and pursue a future for the web that these apparent alternatives only serve to obscure.
How you can support our alternative
The great thing about decentralisation is that everyone has a role to play. You can contribute towards the effort to create an ecosystem for a decentralised metaverse by partnering with us.
We need data centres and cloud service providers. If you can contribute to this goal, please reach out to us to explore a collaboration.
If you’ve missed our latest announcements, take a look at the recent partnerships we’re excited about.
Lastly, if you already have your CUDOS tokens, you can make the most of them by staking them on our platform and securing our network.
Let us create a computing ecosystem that is decentralised, transparent, and responsible!
About Cudos
The Cudos Network is a layer-one blockchain and layer-two computation and oracle network designed to ensure decentralised, permissionless access to high-performance computing at scale, enabling computing resources to scale to hundreds of thousands of nodes. Once bridged onto Ethereum, Algorand, Polkadot, and Cosmos, Cudos will enable scalable compute and layer-two oracles on all of the bridged blockchains.
Learn more: Website, Twitter, Telegram, YouTube, Discord, Medium