What strategies do social media platforms use to moderate content, and with what implications?
The public debate over content moderation has focused on removal: social media platforms delete content and suspend users, or decline to. But removal is not the only remedy available. Reducing the visibility of problematic content is becoming a common part of platform governance. Platforms use machine-learning classifiers to identify content that is misleading, harmful, or offensive enough that, while it does not warrant removal under the site's guidelines, it does warrant reducing its visibility, whether by demoting it in algorithmic rankings and recommendations or by excluding it from them entirely. This conversation reflects on that shift and explains how reduction works.