
Badges, Bans and Building Better Behavior

02-04-2026

When it comes to fostering more civil, community-minded posting on social media, user bans are a powerful but blunt tool. Research on social media moderation and content generation from Zaiyan Wei and coauthors Xiaohui Zhang, Qianzhou Du and Zhongju Zhang shows that bans often spur more posting, but not necessarily better conversations, and that carefully designed “in-group” recognition can flip bans from punitive shocks into catalysts for healthier behavior.

Social platforms have leaned heavily on bans to tackle misinformation, harassment and spam, yet most evidence has focused on deleting content or shutting down entire communities, not individual users. The research team saw a gap: platforms were taking away people’s freedom to participate without really understanding what that does to their behavior, their emotions or the health of the broader ecosystem. Reactance theory, which looks at how people push back when they feel their freedom is restricted, offers a powerful lens for unpacking those consequences.

Zhihu, China’s leading Q&A platform (roughly a hybrid of Quora and Stack Overflow), provided unusually rich, user-level data on both bans and contributions. For a large random sample of 34,258 active users, the platform records when bans start and end, how long they last and, crucially, the labeled reason for each ban (unfriendly content, illegal content, advertisements, or other). Unlike on Twitter or Reddit, ban status on Zhihu is publicly visible, allowing researchers to observe both the timing and cause of bans and then follow what users actually do afterward: how much they post, how others respond, and how the linguistic and topical quality of their answers change over time.
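To make the data concrete, here is a minimal sketch of what one such user-level ban record might look like; the field names and values are hypothetical illustrations, not Zhihu’s or the authors’ actual schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Literal

# Hypothetical sketch of one user-level ban record as described above;
# field names are illustrative, not Zhihu's or the authors' actual schema.
@dataclass
class BanRecord:
    user_id: str
    ban_start: date
    ban_end: date  # bans in the study are temporary, so an end date is recorded
    reason: Literal["unfriendly", "illegal", "advertisement", "other"]

    @property
    def duration_days(self) -> int:
        return (self.ban_end - self.ban_start).days

# Example: a one-week ban for unfriendly content
record = BanRecord("user_123", date(2024, 3, 1), date(2024, 3, 8), "unfriendly")
print(record.duration_days)  # -> 7
```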

Because Zhihu operates in China’s heavily moderated environment, reviewers pushed hard on whether the findings might simply reflect that context. To address this, the authors removed politically motivated bans from the data; the core patterns (more posting but mixed quality, and strong differences by recognition status) remained as strong or stronger, suggesting the behavioral mechanisms are not purely artifacts of China’s political environment.

The study tracks users before and after their first temporary ban. After a temporary ban ends, posting volume rises: on average, banned users post about 12.7 more answers in the six weeks after their first ban than they would have been expected to post absent a ban. Those answers are shorter, more subjective, more negative and less logically structured; they also drift away from knowledge topics toward more social, expressive content and are less closely matched to the questions they answer. In other words, bans can “wake up” users, but the response often looks more like venting than value-adding knowledge sharing.
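The article doesn’t spell out the estimation strategy, but a minimal sketch of the kind of before/after comparison behind a number like “12.7 more answers,” assuming weekly post counts for banned users and a never-banned control group, could look like this (the file name and columns are hypothetical):

```python
import pandas as pd

# Minimal difference-in-differences sketch; the study's actual estimation
# is surely richer (e.g. matching, controls, event-study dynamics).
# Assumed columns: user_id, week, n_answers, banned (user was ever banned),
# post (True for the six weeks after the first ban / the matched window).
posts = pd.read_csv("weekly_posts.csv")  # hypothetical file

cell_means = posts.groupby(["banned", "post"])["n_answers"].mean().unstack()
change_banned = cell_means.loc[True, True] - cell_means.loc[True, False]
change_control = cell_means.loc[False, True] - cell_means.loc[False, False]
print(f"Estimated post-ban change: {change_banned - change_control:.2f} answers/week")
```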

Recognition changes everything. Users who have earned platform recognition — badges or highlights on their answers — react very differently. They reduce their posting volume after a ban and improve the quality and appropriateness of what they do share, suggesting that feeling part of the “in‑group” makes them treat bans more as tough love than personal attacks.

Effects are dynamic, not permanent. The surge in posting and the dip in quality are strongest after the first ban and fade with repetition; by the third ban, the effects have diminished significantly.

Taken together, the study points toward a rebalanced, user-level moderation playbook:

Use bans sparingly — and pair them with education. Platforms should treat bans as a last resort and always follow them with clear, specific feedback on what triggered the ban and how to participate constructively to avoid future sanctions.

Build “in‑group” connection. Recognition tools like badges and highlighted answers reward positive activity, fostering a sense of belonging and shared norms. Expanding inclusive, tiered recognition systems can turn later sanctions into nudges toward higher standards rather than triggers for toxic blowback.

Tailor bans by violation type. Zhihu’s labeled bans reveal that advertisers, users posting illegal content and others respond differently from those flagged for being “unfriendly.” Short, harsh bans work better for spam-like behavior that simply exploits reach, while harassment and unfriendly speech seem to require a mix of temporary restriction plus education: surfacing rules, explanations and perhaps structured “re-entry” prompts that reinforce community norms (see the sketch after this list).

Design for dynamism. Because reactance and behavior change over time, a static “one-size-fits-all” ban policy is a recipe for either escalating toxicity or quiet disengagement. Monitoring post‑ban behavior in windows (for example, the first 2–6 weeks) will allow for adaptive responses.
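Combining the last two recommendations, a minimal sketch of what windowed, violation-aware follow-up logic could look like; every threshold, name and action here is a hypothetical illustration, not something drawn from the study:

```python
from dataclasses import dataclass

# Hypothetical sketch of adaptive, type-aware post-ban monitoring;
# the window, thresholds and actions are illustrative, not from the study.
POST_BAN_WINDOW_WEEKS = 6  # e.g. watch the first 2-6 weeks after a ban

@dataclass
class PostBanStats:
    weeks_since_ban: int
    answers_per_week: float   # posting volume in the post-ban window
    baseline_per_week: float  # the user's pre-ban average
    quality_score: float      # e.g. scored relevance/civility, 0-1

def choose_followup(stats: PostBanStats, violation: str) -> str:
    """Pick a follow-up action from windowed post-ban behavior."""
    if stats.weeks_since_ban > POST_BAN_WINDOW_WEEKS:
        return "no_action"  # outside the monitoring window
    surging = stats.answers_per_week > 1.5 * stats.baseline_per_week
    if violation == "advertisement":
        # Spam-like behavior: escalate quickly if it resumes at volume.
        return "longer_ban" if surging else "no_action"
    if surging and stats.quality_score < 0.4:
        # "Venting" pattern: high volume, low quality -> educate first.
        return "norms_reminder_and_reentry_prompts"
    return "recognition_nudge"  # reward improvement to build in-group ties

# Example: an "unfriendly" user posting at 3x their baseline, low quality
print(choose_followup(PostBanStats(3, 12.0, 4.0, 0.25), "unfriendly"))
# -> norms_reminder_and_reentry_prompts
```

The point is the shape of the logic, not the numbers: follow-up decisions key off both the post-ban window and the labeled violation type, rather than a single static penalty.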

The Zhihu setting is large, data-rich and transparent, making it an ideal starting point, but it is only one culture, one platform and one type of community (public Q&A). The authors envision future research that includes cross-country comparisons (high- vs. low-censorship environments) and studies of user-level bans in marketplaces like Amazon, Etsy or eBay, where livelihoods are at stake. By understanding and harnessing reactance, platforms can design moderation that promotes both community health and meaningful freedom of expression.