A proposal for opt-in content moderation in Nostr

Background reading: What is Nostr, and User Generated Content and the Fediverse (A Legal Primer)

The problem(s)

Every social media platform eventually has to reckon with bad actors: human traffickers, terrorists, people trying to share and consume child porn, etc. Any sufficiently powerful social network will attract those users, and they will congregate wherever they can best fly under the radar.

Decentralized social media, like Nostr, has looser (currently nearly non-existent) content moderation. We cannot ignore the fact that Nostr will become a hotbed for content most of the community would despise. I want to start this conversation so we can think through the problem and come up with strategies before it becomes irreversible.

Copyright infringement

In many countries it's difficult to operate an internet service that allows communication and/or file sharing, because such services often get used to share music, movies, and other copyrighted material.

Relay operators will probably be subject to the same difficulties. They will receive takedown notices, and if they don't make a best effort to proactively block the sharing of copyrighted material, they may face legal action anyway.

Human Trafficking / Child Porn

Helping human traffickers and spreading child porn are obviously not among the reasons the Nostr community is creating the tech it's creating. But we don't have great tools to prevent them.

It can't just be on the user and client side that this content gets blocked, because people who want to use Nostr for these purposes will simply use clients that don't block it. There must also be a relay-level solution, so that relay operators can do their best to prevent their relays from being used for bad purposes.

Violent or other unsavory content

There's always going to be content that's not necessarily illegal but simply not something people want to see. Eventually there will be adolescents on Nostr, and their parents may want some control over what their kids see.

The problems cannot be ignored

If we don't solve these problems, the only people who will be able to run relays will be those operating in jurisdictions that don't prosecute companies for ignoring laws around the aforementioned types of content.

Most of the Western world, and certainly China, would be out of the realm of possibility at that point, which severely restricts the future of Nostr.

Proposal: Subscribe-able Blocklists

NIP-51 already outlines a Nostr-native way to publish lists of users (by pubkey). There's even a carveout for the concept of a "mute list." I think this could be expanded to "topical" mute lists used to block content publishers, with topics like "known sources of violent content" or "known sources of child porn," instead of just general mute lists. I propose we call these blocklists.

Since these kinds of lists are replaceable, whoever manages a list can continuously update it with the pubkeys of accounts they've determined deserve the topical tag (porn, violence, copyright violation, etc.).
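
To make this concrete, here's a rough sketch of what a topical blocklist could look like on the wire. The kind number (30000, NIP-51's parameterized replaceable "people list"), the "d" identifier, and the placeholder pubkeys are illustrative assumptions, not a finalized format:

```typescript
// Sketch of a topical blocklist as a NIP-51-style parameterized
// replaceable event. The kind and tag layout are assumptions for
// illustration, not a finalized spec.
const blocklistEvent = {
  kind: 30000,                        // parameterized replaceable list
  pubkey: "<blocklist-provider-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["d", "known-sources-of-violent-content"], // the list's topic/identifier
    ["p", "<blocked-pubkey-1>"],               // one "p" tag per blocked account
    ["p", "<blocked-pubkey-2>"],
  ],
  content: "",                        // could carry metadata about methodology
  // id and sig omitted; they'd be computed and signed per NIP-01
};
```

Because the event is replaceable, republishing with the same kind, pubkey, and "d" identifier overwrites the previous version, which is exactly the continuous-update behavior described above.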

That way, users, clients, and relays could select the blocklists they want to use to mute or block content (on their feeds, on their trending pages, or stored on their relays).

This would be consent-based: users, clients, and relays have to opt in to the lists they want to use to filter their events.
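
A minimal sketch of the client-side half, assuming the event shape above: collect the "p" tags from every blocklist the user has opted into, then filter the feed against that set. The function and type names here are mine, for illustration:

```typescript
// Types and helpers for client-side filtering against opted-in blocklists.
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
  created_at: number;
}

// Collect the pubkeys from the "p" tags of every subscribed blocklist.
function buildBlockedSet(blocklists: NostrEvent[]): Set<string> {
  const blocked = new Set<string>();
  for (const list of blocklists) {
    for (const tag of list.tags) {
      if (tag[0] === "p") blocked.add(tag[1]);
    }
  }
  return blocked;
}

// Drop any event whose author appears in the blocked set.
function filterFeed(events: NostrEvent[], blocked: Set<string>): NostrEvent[] {
  return events.filter((e) => !blocked.has(e.pubkey));
}
```

Relays could apply the same membership check before storing or serving events; a sketch of relay-side rejection appears later under "Client and Relay support."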

An economy of paid blocklist providers

Maintaining blocklists is valuable labor, and for this system to work, it can't run on goodwill, volunteers, and donations. Those who use the blocklists should pay the providers. Since this business is very scalable, the cost to end users could be very low once Nostr hits critical mass.

On top of that: different people look at the same content and classify it in completely different ways. Instead of trying to reach one consensus on how to tag things like porn, violence, and copyright violations (as Facebook, Twitter, and TikTok attempt to), we can let the market create different methodologies for different people.

People will vote with their money, and the blocklists that do the best job (not letting much bad stuff through, while not being overzealous in blocking) will naturally make more money.

Those that are able to add more technology to the tagging process without compromising quality will reap handsome rewards.

This proposal is not about shoving blocklists on users. Users, clients, and relays will be able to opt in to using these blocklists. This proposal is about giving people tools to moderate their own domain.

Ideally only users would need to subscribe to these blocklists, but for relays to avoid being taken down by law enforcement, they may need to subscribe to some as well. And if users don't like the way a relay moderates content, they can move to a more permissive one. Nostr allows this!

I don't like clients using blocklists the user hasn't chosen, but people will be free to switch clients if they don't like what a client is doing. So none of this is irreversible.

Evolution of blocklists

Expiration of entries: some blocklist providers may let pubkeys fall off the list over time. This would save space in the blocklist (improving the performance of clients and relays) and allow accounts to recover their reputation instead of being blocked permanently.

Blocking by more than just account pubkeys: the simplest way to build a system like this is to list the pubkeys of accounts sharing illegal or unsavory content. We may discover other things to add to these blocklists: note IDs, hashtags, etc., to make the lists more robust in achieving their goals.
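
As a sketch of where that could go, here's one way a client might consume such richer entries. The BlockEntry format, including the expiry field from the previous point, is invented for illustration; nothing like it is specified yet:

```typescript
// Invented richer entry format: pubkeys, note IDs, and hashtags, each
// with an optional expiry timestamp. Purely a sketch, not a spec.
type Note = { id: string; pubkey: string; tags: string[][] };

interface BlockEntry {
  type: "pubkey" | "note" | "hashtag";
  value: string;
  expiresAt?: number; // unix seconds; the entry is ignored after this time
}

function isBlocked(event: Note, entries: BlockEntry[]): boolean {
  const now = Math.floor(Date.now() / 1000);
  // Expired entries fall away automatically, per the expiration idea above.
  const live = entries.filter((e) => e.expiresAt === undefined || e.expiresAt > now);
  const hashtags = event.tags.filter((t) => t[0] === "t").map((t) => t[1]);
  return live.some(
    (e) =>
      (e.type === "pubkey" && e.value === event.pubkey) ||
      (e.type === "note" && e.value === event.id) ||
      (e.type === "hashtag" && hashtags.includes(e.value))
  );
}
```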

Appeals system: Any system for blocking, muting, or shadow-banning folks at scale needs an appeals system to be fair. I'd imagine a large part of the list providers' time and tech will be dedicated to processing requests to be re-evaluated and removed from any of the lists they've landed on.

Decentralized reputation system

How email reputation works

Everyone has issues with how email reputation is handled, but it is fairly decentralized, it operates at large scale, and it has been evolving to become more effective over time.

Blocklist providers: since the dawn of email, there have been folks who maintained records of "known spammers" or senders suspected of sending spam. Over the decades this has evolved into hundreds of such providers, each with their own methodology and tech for tagging spammers and publishing those lists to email providers (Gmail, Hotmail, Yahoo, etc.).

Inbox placement logic: each email provider (Gmail, Hotmail, Yahoo, etc.) has its own set of blocklists it uses to inform the algorithm that marks emails as spam or not. Each provider has incentives to make its spam filter as accurate as possible, so they constantly work to refine which blocklist providers they rely on and to build their own.

Appealing being on blocklists: If you are an email sender that was wrongfully put on a blocklist you can usually go to the blocklist provider and appeal that decision.

Nostr equivalent

This sounds a lot like what could evolve if this proposal gains traction.

Sender reputation: Nostr blocklist providers will be helping to generate data points on the reputation of Nostr accounts.

Inbox placement logic: Users, clients, and relays can utilize that sender reputation to help inform the content they want to allow (on their feed, on their trending page, on their relay).

Relay reputation: as this system develops, we'll be able to tell which relays are using which blocklists. That should inform a future reputation system for relays. If a relay isn't blocking content you think should be blocked, that's useful information.

If you're a client or a relay operator and you notice relays that aren't blocking content that needs to be blocked in your jurisdiction, you can cordon off those overly permissive relays.

Over time, groups will self-separate: folks with nefarious goals will have to operate in a separate set of relays and clients from those who just want to use social media to connect, share, converse, and speak truth to power.

It'll make the job of law enforcement easier, and squash any notion that all Nostr usage is contaminated by a small group of bad actors.

Future feed algorithms: over time, some Nostr users will want more algorithmic feeds. Those algorithms will benefit from the combination of sender and relay reputation to build feeds that aren't offensive to each user's tastes.

This will also help in a future where Nostr clients want to attract advertisers in order to offer a cheaper experience to users who can't or don't want to pay directly for using Nostr. Algorithmic feeds that proactively sift out content offensive to advertisers will attract those advertisers much more easily.

Technical Challenges

Paywall for the blocklist

I don't think NIP-51 alone can be the right architecture for publishing blocklists behind a paywall, because these events are unencrypted and public.

Creating a way to manage and publish the blocklists will keep the community plenty busy for some time, but without a way to technically require payment before a blocklist can be used, the system won't be self-sustaining. I'd love ideas if you have any!
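
One speculative direction, to seed the discussion: rather than public "p" tags, a provider could publish the list in the event's content field, encrypted separately to each paying subscriber with something like NIP-04's ECDH-based encryption. In this sketch the encryption helper is passed in as a parameter, and the delivery scheme, kind number, and "d" identifier are all assumptions:

```typescript
// Speculative paywall sketch: keep the list out of public tags and
// encrypt it separately to each paying subscriber. The `encrypt`
// parameter stands in for a NIP-04-style helper (ECDH + AES); nothing
// about this scheme is specified anywhere yet.
function buildPaidListEvents(
  encrypt: (subscriberPubkey: string, plaintext: string) => string,
  subscribers: string[],            // pubkeys of paid-up subscribers
  blockedPubkeys: string[]          // the actual list being sold
) {
  const payload = JSON.stringify(blockedPubkeys);
  return subscribers.map((sub) => ({
    kind: 30000,                                      // same assumed list kind as above
    tags: [["d", `paid-blocklist:${sub}`], ["p", sub]], // addressed to one subscriber
    content: encrypt(sub, payload),                   // only the subscriber can decrypt
    created_at: Math.floor(Date.now() / 1000),
  }));
}
```

The obvious drawback is that the provider must publish one event per subscriber, so publishing cost grows linearly with the subscriber base. Whether that scales is part of the open question.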

Client and Relay performance with large lists

Clients and relays attempting to respect these blocklists would have to cross-reference every event they process against the blocklists to ensure that no unwanted content moves forward.

If these lists get long, as they will if Nostr (and this system) scales, is there enough performance optimization possible to keep the user experience usably fast?
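
One direction worth exploring: a plain in-memory Set gives constant-time lookups but stores every entry, while a Bloom filter trades a small false-positive rate for a much smaller footprint. A minimal sketch follows; the hashing here is simplistic and only for illustration:

```typescript
// Minimal Bloom filter sketch: `size` bits, `hashes` hash functions.
// Real deployments would use better-distributed hashes and tuned sizes.
class BloomFilter {
  private bits: Uint8Array;

  constructor(private size: number, private hashes: number) {
    this.bits = new Uint8Array(Math.ceil(size / 8));
  }

  private hash(value: string, seed: number): number {
    let h = 2166136261 ^ seed; // FNV offset basis, perturbed per hash
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i);
      h = Math.imul(h, 16777619); // FNV prime
    }
    return (h >>> 0) % this.size;
  }

  add(value: string): void {
    for (let s = 0; s < this.hashes; s++) {
      const idx = this.hash(value, s);
      this.bits[idx >> 3] |= 1 << (idx & 7);
    }
  }

  // False positives are possible; false negatives are not.
  mightContain(value: string): boolean {
    for (let s = 0; s < this.hashes; s++) {
      const idx = this.hash(value, s);
      if (!(this.bits[idx >> 3] & (1 << (idx & 7)))) return false;
    }
    return true;
  }
}

// Usage: ~1M pubkeys at ~10 bits each with 7 hashes ≈ 1% false positives
const filter = new BloomFilter(10_000_000, 7);
filter.add("<blocked-pubkey>");
if (filter.mightContain("<some-event-author>")) {
  // confirm against the authoritative list before actually blocking
}
```

Since false positives are possible, a hit in the filter would be confirmed against the full list before anything is actually blocked; the common case (not blocked) stays fast and memory-cheap.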

Can a blocklist provider run profitably?

Can human labor update the blocklists accurately and quickly enough to make them useful at scale? If so, can they do it at a price that users, clients, and relays are willing to pay?

Or do the economics require machine learning or AI agents to assist humans in tagging accounts and updating the blocklists?

No way to tell until we start trying, but that'll be an ongoing technical challenge.

How to get started

Blocklist manager

I'm planning to build an open-source "blocklist manager" where a "blocklist provider" as described above could start adding pubkeys to topical blocklists and publishing these lists via Nostr as NIP-51 lists.

The manager would likely connect to various relays and comb through Nostr events of the "user reported content" kind to find users who should potentially be added to its blocklist(s).
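
For instance, NIP-56 defines kind 1984 "report" events for exactly this purpose. A sketch of the combing loop over the raw NIP-01 wire protocol, assuming a placeholder relay URL and a runtime with a global WebSocket (browsers, or Node 22+):

```typescript
// Sketch: comb a relay for NIP-56 "report" events (kind 1984).
const ws = new WebSocket("wss://relay.example.com");

ws.onopen = () => {
  // ["REQ", <subscription id>, <filter>] asks the relay for matching events
  ws.send(JSON.stringify(["REQ", "reports", { kinds: [1984], limit: 500 }]));
};

ws.onmessage = (msg) => {
  const data = JSON.parse(msg.data as string);
  if (data[0] === "EVENT" && data[1] === "reports") {
    const report = data[2];
    // Per NIP-56, the reported account's pubkey rides in a "p" tag
    const reported = report.tags.find((t: string[]) => t[0] === "p")?.[1];
    if (reported) {
      // a real manager would queue this pubkey for human/ML review here
      console.log(`candidate for review: ${reported}`);
    }
  }
};
```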

The manager should also make it easy to publish multiple lists, so the provider can apply their unique logic for adding/removing folks across blocklists on various topics. There are economies of scale for blocklist providers: once they are set up to maintain one list, it's much less effort to maintain several. Imagine tagging a pubkey with several tags so that it shows up in each of the blocklists corresponding to those tags. Much easier than having each blocklist provider focus on one kind of tagging (porn, violence, copyright violation, etc.).

Client and Relay support

Once there are blocklists to subscribe to, clients and relays would need to implement ways to use them: rejecting events from being stored on the relay, and hiding notes from users' feeds.
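
On the relay side, rejection could look something like the sketch below, answering with a standard ["OK", ...] command result as defined in NIP-01. The blocked set and the storage/send hooks are stand-ins for a relay's own internals:

```typescript
// Relay-side sketch: reject blocklisted events before storage and answer
// with a NIP-01 "OK" command result. `blocked` would be built from the
// blocklists the operator has opted into; storage is stubbed out.
function handleIncomingEvent(
  event: { id: string; pubkey: string },
  blocked: Set<string>,
  send: (msg: string) => void,
  store: (event: unknown) => void
): void {
  if (blocked.has(event.pubkey)) {
    // third element false = rejected; "blocked:" is the standard reason prefix
    send(JSON.stringify(["OK", event.id, false, "blocked: author is on a subscribed blocklist"]));
    return;
  }
  store(event);
  send(JSON.stringify(["OK", event.id, true, ""]));
}
```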

This isn't something one person can build, it would need to be a compelling enough offering that the major clients and relays add support for it.

Support for paying for blocklists

As stated above, blocklists are the valuable results of specialized labor. Paying for this product will allow the system to be self-sustaining.

Before that's possible we'll need to make it easy for users, clients, and/or relays to pay for use of the blocklists.

Feedback

Let me know what you think! If you have questions, ideas, or just want to tell me how terrible the idea is, please do! My Nostr NIP-05 address is gregwhite@nostrplebs.com.

PS: Child-safe Nostr

If you invert this idea, so that list providers create allowlists instead of blocklists and add only trusted, "child friendly" content providers to their allowlists, you can open social media to children in a safe way, as described in this other post of mine.