How to safely open the internet to children

How it works now

Currently, content providers and publishers specifically tag content as child-friendly, and when a user is designated as a child, that user can only see content tagged child-friendly.

Problems

This system is better than no moderation, but it has some glaring flaws.

What's ok for your kid isn't right for mine

There are some generally accepted principles for what is kid-friendly or age-appropriate, but each platform has its own standard.

Some parents may want to lock down the content their child sees until they turn 18, while others may want to expose their children to the realities of the internet earlier. What about those with a religious preference? Or an anti-religious preference?

Parents currently have very little choice over what is deemed "child friendly" for their child.

Content generated by the platform

The best child-friendly platforms are YouTube, Netflix, and Disney+, and the latter two succeed because they generate content for kids specifically. If they've published content to their kids' section, it's because they created it to be kid-friendly.

That's not possible for every platform, and certainly not at the volume needed to compete with these bigger platforms.

Reviewing content before it's viewable by children

YouTube does this. They have armies of bots and human content reviewers tagging content as "child friendly," which makes that content available to kids using YouTube.

This is an incredibly expensive operation that only an organization the size of YouTube can afford. YouTube has a unique critical mass of AI expertise and mass adoption for both content creation and consumption that allows them to do this.

YouTube also has a semi-official certification process for channels that want their content shown to children, but it is capricious and opaque, and it only applies to content published to YouTube.

This level of sophistication isn't feasible for new entrants to the market who may have a better offering.

Nothing for preteens and teens

Preteens and teens aren't watching the kids' section of Netflix anymore, so they get none of the protections. It's a binary: kid or not kid; either access to only G-rated titles, or to violence, serial killers, and rape (the subjects of some of Netflix's most popular shows).

There needs to be something that allows parents to expose older kids to the internet without throwing them to the wolves.

An option for parent's-choice moderation

We could have cross-platform, multi-media certifications for content creators. Creators would opt in to certification under a given standard; platforms that support that standard would verify their accounts and surface their content to users whose parents want content that follows it.

Opt-in Certifications

Imagine the MPAA (which assigns the movie ratings G, PG, PG-13, R, etc.) wanted to generalize its ratings system to work for content platforms like YouTube and TikTok.

They could create a set of standards governing what rating a given piece of content gets. They could then offer a certification program where content creators pay the MPAA to review their channel and confirm its content meets a certain rating.

Example: imagine Mark Rober submitted his YouTube science channel for review to certify that all of its videos are PG-rated.

Once a channel is certified, platforms can assume that content coming from that channel will be PG-rated. If it isn't, the MPAA would decertify the channel, and platforms would stop surfacing its content to children.

Media apps adopting certifications

If companies like YouTube, TikTok, Twitter, and Facebook adopted a set of certifications (maybe the MPAA offers one, some religious groups band together to offer another, and so on), then content creators could get certified under a few standards, helping them reach the largest audience.

Parents setting up their kid's account would choose which certifications a channel must hold for its content to show up in their kid's feed. This takes the burden of deciding what's kid-friendly off the media platform and gives the choice to the parent.

A varied market of certifications

This system would probably start with a few certifications from existing players (the MPAA, or platforms like YouTube offering comprehensive options). Over time, smaller players would come up with better, more nuanced standards and work to get those standards supported by the big media platforms.

I imagine that popular parenting podcasts or authors would come up with certifications that support their philosophy on parenting, and parents can adopt the standards that help them with their parenting goals.

We could also compare the health, happiness, and intelligence of kids across the moderation regimes their parents enforce, to see which ones actually serve kids best. That would let market forces help improve the health and happiness of our kids, instead of the incentives being aligned, as they are now, to do the opposite.

Content creators' incentives

Content creators will be incentivized to make content that adheres to the standards most popular with parents, not just the content most addictive to children. They'll also be incentivized to adapt their content as parents' standards evolve so they can stay in front of their intended audience.

Why is this better?

This version of content moderation would align the incentives of content creators, parents, kids, and platforms much more closely than the current system.

This version of content moderation would protect young kids from anything not rated G, while also exposing preteens and teens to content at an appropriate maturity level.

This version of content moderation gives parents choice, based on what they think is best for their child, not what the content moderators at Facebook or YouTube think is best.

What's needed

Honestly, I think the idea needs to spread and be refined first. Then there's an opportunity for parents to lobby child-friendly platforms (smaller ones at first, larger ones later) to add support for parent's-choice moderation.

Then it's a matter of parents choosing platforms that support this model, so other platforms are forced to add support too. That's better than the future we're headed for: one where parents just limit screen time and device access without any nuance, because they have no other choice.