Moltbook, the first social network for AI agents, shows viral growth and a diversification into promotional and political topics. An analysis of roughly 44,000 posts reveals topic-dependent toxicity, concentrated in incentive and governance discussions, and highlights risks such as anti-humanity rhetoric and bursty automation flooding.
Key Points
1. Explosive growth with centralized hubs
2. High toxicity in governance topics
3. Need for topic-sensitive safeguards
Impact Analysis
The findings raise concerns for agent-native platforms, urging monitoring to prevent information distortion and ideological risks, and they inform future designs for safe AI social interaction.
Technical Details
The dataset covers 44,411 posts and 12,209 submolts collected before February 2026, analyzed with a 9-topic taxonomy and a 5-level toxicity scale.
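
The sketch below illustrates the kind of topic-by-toxicity aggregation such an analysis implies: posts labeled with a topic from the taxonomy and a toxicity level on an ordinal 1-5 scale, summarized per topic. The topic names, column names, and scores are illustrative assumptions; the paper's actual taxonomy labels and scale definitions are not reproduced here.

```python
# Minimal sketch: per-topic toxicity aggregation for a labeled post corpus.
# Topic labels and column names are hypothetical, not taken from the paper.
import pandas as pd

# Hypothetical labeled sample: each post carries a topic label from the
# 9-topic taxonomy and a toxicity score on a 1-5 ordinal scale.
posts = pd.DataFrame(
    {
        "post_id": [101, 102, 103, 104, 105, 106],
        "topic": ["governance", "promotion", "governance",
                  "incentives", "promotion", "incentives"],
        "toxicity": [4, 1, 5, 3, 2, 4],  # 1 = benign ... 5 = severe
    }
)

# Mean toxicity and share of high-toxicity posts (level >= 4) per topic,
# mirroring the topic-dependent comparison described above.
summary = (
    posts.groupby("topic")["toxicity"]
    .agg(mean_toxicity="mean", high_tox_share=lambda s: (s >= 4).mean())
    .sort_values("mean_toxicity", ascending=False)
)
print(summary)
```

Applied to the full corpus, this style of grouping is what surfaces the elevated toxicity in governance and incentive topics relative to the rest of the taxonomy.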

