Understanding Web3 Content Moderation

While many hope that Web3 technologies will transform the internet by decentralizing and redistributing control of essential services and infrastructures, the problems of moderating user-generated content aren’t going anywhere. Content moderation has been a major concern for Web2 platforms and will almost certainly be one for Web3 apps as well. Those running NFT marketplaces, decentralized social media apps, file storage nodes, and anything else that involves user-generated content will have to make decisions about which content, users, and behaviors to permit and which to remove.

This blog post, the first in a series exploring Web3 Trust & Safety issues, examines Web3 content moderation. Content moderation presents a series of difficult questions and problems. Rather than offering a comprehensive or definitive take on the matter, this post opens the conversation with a 40,000-foot view of Web3 content moderation, presenting a few thoughts on what might be new about it and what is not.

1. Web3 is and will likely be a diverse set of apps, communities, and technologies; there will be no one-size-fits-all solutions to content moderation.

Amid the explosion of interest in Web3 over the past year, discussions often treat Web3 as though it were a singular technology. But Web3 is still in the process of being imagined, designed, and built. Much of Web3 remains hypothetical.

And whatever it turns out to be, it will not be singular. If the core goal is to build a more decentralized internet, this almost by definition means there will be many different types of apps, infrastructures, and communities that embrace different technological designs, norms, structures, and governance systems. When it comes to content moderation, there is both risk and opportunity in this diversity: while there are unlikely to be any one-size-fits-all solutions, there will be plenty of space for creative experimentation with new moderation approaches.

2. Web3 apps will face significant pressure from countries, partners, and users to have detailed content policies and enforcement systems.

Broadly, there are three types of pressure on Web2 companies to develop detailed content policies and moderation practices. These pressures will also likely push Web3 developers to build content moderation plans and tools.

First, many countries have laws prohibiting certain types of speech, and platforms must be able to remove this illegal material. For example, while the US government imposes few restrictions on speech, some things, including child pornography, terrorism, and fraud, are prohibited. Section 230 of the Communications Decency Act cannot be used as a defense against intermediary liability for content that violates federal criminal law; intermediaries must answer for the illegal content they knowingly leave up.

Second, many companies encourage or require partners to restrict and moderate content. Recently, AWS and the Apple and Google app stores all banned Parler for deficiencies in its content moderation efforts. These actions are occasionally referenced by Web3 proponents as examples of the too-powerful companies Web3 will ultimately replace. But while decentralization might mean there are more companies, communities, and infrastructure providers, it won’t change the desire of some to work only with partners that keep out illegal and illicit material.

Third, many of us would rather be in online communities free of illegal or problematic content, spam, fraud, and harassment. Because it is based on open protocols, Web3 technology may give users more ownership and control over their data and a greater ability to switch between apps that provide the services and environments they want.

3. There are already a range of approaches to Web3 content moderation.

Some Web3 apps and communities have already begun crafting content policies and enforcement systems. For example, Coinbase recently released a statement on its account removal and content moderation policy (see figure below) that offers a minimal approach rooted in removing illegal content and activity.

In contrast, the Aragon DAO Network has released a DAO Charter with community guidelines that lays out a more comprehensive content policy. For example, it specifies “examples of unacceptable behaviour,” such as illegal activity, hate speech, and harassment, but also, oddly, “Misleading or passive-aggressive comments.”

Taking a different sort of approach, Murmuration Labs, working for Filecoin, a decentralized storage provider (and funder of the Decentralized Future Council), released guidance for Filecoin node operators. While it specifies that operators have legal requirements to report illegal content, it ultimately observes that the burden is largely on node operators to decide which content they wish to host.

4. Web3 apps and communities may employ community-led content moderation. Tokenization could provide new incentives for wider community participation.

Many of the biggest Web2 platforms employ hundreds or thousands of human content moderators across the world. Frankly, there is little stopping a Web3 app or community from also employing permanent human content moderators.

A solution perhaps more in line with the Web3 ethos of tokenized incentives and community participation would be for apps to rely on community members to moderate content. Many prominent Web2 platforms, from Wikipedia and Reddit to Facebook, employ some form of community-led content moderation. And the Web3 promise of smaller, more decentralized communities lends itself well to this approach.

Perhaps most notably, blockchain technologies may provide a way to incentivize and reward the labor of content moderation. It takes a great deal of work to moderate an online discussion. By granting tokens to those who contribute to moderation, Web3 platforms could provide a way of moving beyond the volunteer labor that has defined many Web2 platforms. Of course, with little precedent, it remains to be seen whether this sort of incentivization is financially sustainable or will result in successful content moderation.
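
To make the idea concrete, here is a minimal, purely hypothetical sketch in TypeScript of how a community might grant tokens for moderation work: members flag content, other members confirm the decision, and once enough confirmations accumulate everyone involved is credited. The class, threshold, and reward amounts are illustrative assumptions, not any existing protocol.

```typescript
// Hypothetical sketch: token rewards for community moderation decisions.
// Names, thresholds, and reward amounts are illustrative only.

type Address = string;

interface ModerationReport {
  contentId: string;
  confirmations: Set<Address>; // members who agree the content should be removed
  resolved: boolean;
}

class ModerationRewards {
  private reports = new Map<string, ModerationReport>();
  private balances = new Map<Address, number>();

  constructor(
    private confirmationThreshold = 3, // community votes needed to resolve a report
    private rewardPerModerator = 10    // tokens granted to each participant
  ) {}

  // A community member flags a piece of content.
  flag(contentId: string, reporter: Address): void {
    if (!this.reports.has(contentId)) {
      this.reports.set(contentId, {
        contentId,
        confirmations: new Set([reporter]),
        resolved: false,
      });
    }
  }

  // Other members review the flag; once enough agree, all participants are rewarded.
  confirm(contentId: string, moderator: Address): void {
    const report = this.reports.get(contentId);
    if (!report || report.resolved) return;

    report.confirmations.add(moderator);
    if (report.confirmations.size >= this.confirmationThreshold) {
      report.resolved = true;
      for (const participant of report.confirmations) {
        this.credit(participant, this.rewardPerModerator);
      }
    }
  }

  balanceOf(moderator: Address): number {
    return this.balances.get(moderator) ?? 0;
  }

  private credit(moderator: Address, amount: number): void {
    this.balances.set(moderator, this.balanceOf(moderator) + amount);
  }
}
```

In practice, logic like this would more likely live in a smart contract or governance module, and the hard questions remain open: how to prevent collusion, how to handle appeals, and whether token rewards distort moderation decisions.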

5. Open protocols may allow users to easily switch between platforms/interfaces or integrate third-party moderation solutions.

Building new Web3 platforms around open protocols would likely permit users to switch between interfaces more easily, bringing along their data and networks. This could give users more agency to support interfaces that provide the forms of content moderation they want. However, it remains unclear whether content moderation is a high enough priority for most users that they would switch interfaces because of it.

Protocol-based environments could also permit users to choose and integrate third-party content moderation filters or services. Companies, non-profits, or other organizations could plug in between platforms and users to filter or moderate content, allowing users to select and support organizations they trust to apply content policies with which they agree. This flexibility (which already exists for some platforms, such as Twitter and Mastodon) could result in a large market for content moderation service providers offering some mix of automated and human curation.
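
As a rough illustration of what such a plug-in layer might look like, the hypothetical TypeScript sketch below defines a generic filter interface that third-party providers could implement and that a client could compose, taking the strictest verdict across whichever filters a user has chosen. The interface, the verdicts, and the example keyword filter are assumptions for illustration, not any real protocol’s API.

```typescript
// Hypothetical sketch: a client composing user-chosen third-party moderation filters.
// Interface and provider names are illustrative; no real protocol or API is implied.

interface Post {
  id: string;
  author: string;
  text: string;
}

// A third-party moderation service decides whether to show, label, or hide a post.
type Verdict = "allow" | "label" | "hide";

interface ModerationFilter {
  name: string;
  review(post: Post): Promise<Verdict>;
}

// Apply each filter the user has subscribed to, keeping the strictest verdict.
async function moderate(post: Post, filters: ModerationFilter[]): Promise<Verdict> {
  let strictest: Verdict = "allow";
  for (const filter of filters) {
    const verdict = await filter.review(post);
    if (verdict === "hide") return "hide";
    if (verdict === "label") strictest = "label";
  }
  return strictest;
}

// Example: a simple keyword-based filter a non-profit might publish.
const spamFilter: ModerationFilter = {
  name: "example-spam-filter",
  review: async (post) =>
    /free crypto|guaranteed returns/i.test(post.text) ? "hide" : "allow",
};

// The user's client renders only what survives their chosen filters.
async function renderFeed(feed: Post[], filters: ModerationFilter[]): Promise<void> {
  for (const post of feed) {
    const verdict = await moderate(post, filters);
    if (verdict !== "hide") {
      console.log(verdict === "label" ? `[flagged] ${post.text}` : post.text);
    }
  }
}
```

Taking the strictest verdict is just one possible policy; a client could equally let users rank providers or apply labels rather than hiding content outright.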

6. Automated content moderation is unlikely to work better for Web3 than it does for Web2.

Major Web2 platforms have invested heavily in automated tools for content detection and moderation. While these tools are good at detecting certain types of illegal or illicit content (such as nudity), they struggle to identify and action others (such as hate speech). Meaning is often deeply contextual: context can justify otherwise problematic content (like graphic images from a war) or make otherwise innocuous language deeply problematic (like some forms of harassment).

At the same time, expression on and offline changes rapidly, and AI systems may struggle to keep up, especially as some users will work hard to skirt automated content moderation systems. While most current platforms use AI-based moderation, they usually supplement it with some form of human moderation. Web3 is defined by the effort to use code to ensure trust and compliance. It would be a mistake, however, for Web3 developers to assume that technical means alone can solve the problems of content moderation.

There is little to suggest AI-based content moderation will work any better for decentralized, blockchain-based apps than it does for Web2 platforms. In fact, smaller, more decentralized apps may have more difficulty accessing the training data needed to develop sophisticated and highly accurate models.