Online harms: Content moderation and models of regulation

Many forms of online harm are perceived, and remedies are sought from both platform providers and the state. These harms range from online bullying and intellectual property violations to the incitement or facilitation of violence.

Because platforms enable extremely rapid and wide dissemination of user-generated content (UGC), remedies obtained through court orders or administrative action are rarely fully responsive to aggrieved parties and state authorities. The responsibility for remedial action to remove, or reduce the reach of, UGC perceived as causing harm therefore tends to fall on platform providers.

Most platform providers have accordingly put in place various modalities of content moderation, ranging from algorithmic takedowns and de-prioritization, through moderation by human agents, to bans and suspensions of those deemed to be repeat offenders. This moderation may take the form of purely “private regulation” or of soft or hard co-regulation, whereby the state requires platform providers to act in specified ways.

The state also seeks to control UGC directly, usually through ex-post prosecution of content originators and disseminators deemed to have committed an offense defined in law. While the Virtual Dialogue sought to focus attention on content moderation and its regulation, it became clear during the Dialogue that stakeholder positions on the regulation of content moderation practices were influenced by the actions or plans of state authorities regarding direct control of UGC; direct control is therefore discussed in the report where relevant.

The report has been published by LIRNEasia and can be accessed via the link below.