Fri. Apr 10th, 2026

Forget AI slop. We’re entering our generative smog era



Inauthentic content is eroding the fabric of online communities, warns Marie Boran


Image: Tara Winstead via Pexels


I think I’ll have to coin a term for what we’re seeing more of in online communities like Reddit. When AI slop is pervasive enough to degrade the experience of users on these platforms, it has progressed beyond mere slop and become full-on ‘generative smog’, if you will.

Here’s the research: A study from Cornell describes AI-generated content as a serious moderation problem on Reddit, based on interviews with 15 moderators overseeing over one hundred subreddits. Those moderators warned of a “triple threat”: declining content quality, disrupted social dynamics, and governance that becomes difficult to enforce at scale. This matters because there are more than 110 million people active on Reddit each day.

Apparently AI content causes a 50% drop in quality, and 74% report seeing a decline in trust.

 

And honestly, we don’t need a precise percentage to name what’s happening. We can feel it. The smog isn’t one obviously fake post; it’s the accumulated haze of maybe-real content. The kind that forces you to read everything twice: Is this a person with a life, or a machine with a prompt?

The Cornell researchers’ triple threat framing gets at why this is so corrosive. It’s not just that AI text can be low-quality. It’s that it is cheap, it floods, and it changes norms. Moderators told the researchers they fear AI will “squeeze some of the humanity out” of a site that sells itself as “the most human place on the Internet”.  That’s the smog problem: even the posts that aren’t AI start to feel like they might be, because the atmosphere has changed.

If you want a live example of a community trying to cough its way back to clear air, look at what happened this month on r/programming. This subreddit temporarily banned all LLM-related content, with moderators citing falling discussion quality and an influx of repetitive, low-signal posting. That’s not anti-AI so much as an emergency ventilation measure.

Wired magazine reported similar complaints across Reddit communities, with users and moderators saying suspected AI-written posts are showing up broadly and degrading participation. “Impossible moderation” is becoming a daily part of the job: moderation used to be mostly about removing bad behaviour, but now part of it is verifying human-ness while knowing that this isn’t an exact science – and that false accusations can blow up a community’s social fabric.

Slop as a taste problem

Generative smog also makes everything easier for the worst people. The Internet already had spam, scams, and coordinated manipulation; cheap generation just gives it more volume and more believable variation. The hardest thing for communities to defend is not a single lie, it’s the steady erosion of confidence in what’s actually authentic. Once that goes, you don’t just lose content quality, you lose the reason people showed up: to talk to other human beings.

And there’s an economic angle hiding in plain sight. Slop is often framed as a taste problem: robotic writing, lifeless posts, uncanny images. But at scale it’s an incentives problem. Platforms reward frequency and engagement, and AI generation makes frequency a cinch. Meanwhile the cost of reading stays the same for all of us. So users pay the price: more time spent filtering, more suspicion, more scrolling to find something real.

What does a sane response look like? It’s not a blanket ban on generative AI, because that’s not realistic, and it’s not even desirable in every context. But the Cornell moderators’ comments point to a blunt truth: rules are only as good as enforcement, and enforcement is only as good as the tools moderators are working with and the level of support they get from the platform itself. Volunteer moderators can’t be expected to ‘hold the door’ for platforms worth billions.

If these social media companies want to keep online communities worth visiting, they need to treat generative smog like pollution: something you don’t solve with personal-responsibility speeches. You solve it with enforceable measures like default limits on industrial-scale posting. You make it easier to be human than to fake being one.

Because the bleakest outcome isn’t just the enshittification of the Internet; it’s that we real, actual humans quietly leave these valuable communities because it stops being worth the effort to breathe.


