“Can anyone explain my ban?” – The heatwave for mandatory moderation transparency should be cooled

IP Society Blog Series

Submission by IP LLM student Eason Y. Zhai

 

According to Bernholz et al., platforms have become “private governments” because they can set their own rules and force users to follow them. The ECtHR has even ruled that removing anti-religious remarks from a religiously operated platform does not violate free speech (see here). Making moderation transparency mandatory has therefore become a global issue, because platforms and users hold unequal power. This is even more apparent on platforms that rely on automatic algorithmic moderation. The algorithm ignores human context, subtle emotional differences and cultural background (here) when blocking, downgrading or deleting content. The resulting “black box” makes content moderation more opaque and creates a risk of over-censorship: the Venus of Willendorf may be banned as “pornographic” content.

Compared to the eCommerce Directive, the Digital Services Act (“DSA”) has abandoned the requirement that intermediaries play only a “passive” role under the “mere conduit”, “caching” and “hosting” conditions in order to enjoy secondary liability immunity. This was achieved by adding Art.7, which supports “good faith” measures, and Art.14, which requires platforms to provide users with information on “any policies, procedures, measures and tools…of content moderation”. Art.15 also requires transparency in the form of “a qualitative description, a specification of the precise purposes, indicators of the accuracy and the possible rate of error” of automatic algorithmic moderation. However, Art.15(2) only applies to Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), and Art.8 emphasises that intermediaries are not obliged to carry out “general monitoring”. As a result, non-VLOPs/VLOSEs that use algorithmic review may withhold essential facts such as the algorithm’s purpose and accuracy, so user knowledge is not guaranteed. Additionally, Art.14 specifies neither the quality of the transparency required nor any right for users to modify the rules, so disclosing “any” information, however low in quality, will qualify, even where the rules themselves are inherently discriminatory. How about making moderation transparency mandatory?

Admittedly, making transparency a legal requirement would have benefits. First, it would alert users to potential penalties, and transparency rules would promote user trust and safety, which in turn benefits platform revenue, given that platforms act as “private governments”. Second, mandated transparency could be better implemented through Art.17 of the DSA, which requires platforms to notify users whose content has been removed or disabled and to give detailed information and reasons; with this clear and specific information, users can appeal such decisions. Third, mandated openness deters unfair filtering, because everyone can observe whether others are treated equally.

However, transparency as an obligation carries clear risks. First, transparency alone will not solve Art.14’s difficulties without any change to users’ rights; it will merely make a group of people (the recipients of the service) aware of the discrimination. Second, obligatory openness cannot dissolve the “black box”, because it does not change the fact that some algorithmic conclusions cannot be explained. The concealed scope therefore stays the same, and there will always be information that individuals can never know.

One type of discrimination cannot be revealed by transparency at all, because it is “embedded” in the algorithms themselves, possibly owing to the “parroting” characteristic of LLMs, whose logic mimics and reproduces social discrimination rather than understanding it.

Furthermore, obligatory transparency would increase the costs of small and medium digital intermediaries, especially in the US. “Transparency” presupposes “moderation”, yet some small platforms are run by a five-person studio and currently rely on section 230 of the Communications Decency Act for immunity when they encounter harmful content (“knowledge” does not affect that immunity). In addition, small platforms must sometimes spend 20 minutes manually translating foreign content. Making transparency mandatory adds a “disclosure” burden on top of this, requiring costly moderation, and thereby strengthens VLOPs’ monopolistic potential.

Thus, if the legal status quo is to be changed, the way to increase transparency is not only to make it mandatory, but also to consider the financial health of small and medium-sized platforms, the algorithmic limitations of automatic review, the interests of wider society, and whether users should have a right to shape the rules.