Harassment and outrage often drive clicks, shares, and algorithmic visibility, creating a direct conflict between user safety and profit. Moderation, meanwhile, remains reactive rather than preventive: abuse is typically addressed only after it has spread. Should platforms be legally liable for sustained abuse on their services? And how can engagement-driven business models change without sliding into censorship?