A New Approach to Engagement: Top Priority for Digital Publishers

While most publishers are focused on harnessing the value of audience engagement, many are wrestling with how best to manage growing and established communities. Comment sections have long been plagued by harassing trolls armed with abusive language, and by spam, all of which drives away online audiences and stops user engagement in its tracks. This is why some major content publishers have turned off their commenting feature entirely.

As we’ve established, the Business Case for Engagement is strong: active, engaged users spend more time on-site and are more likely to convert into loyal readers. One of the primary ways users engage on-site is in the comments section. Even those who don’t contribute user-generated content spend their time reading comments; after all, 68 per cent of online audiences spend more than 15 per cent of their time reading comments.

Human Moderation: Problems With The Status Quo

The groundswell of problems indicates that the current commenting and moderation model is broken: trolls leave negative comments, bully community members, and engage in uncivil discourse.

So how have digital publishers contended with this problem? Until recently, human moderation was the only option other than throwing in the towel and turning off commenting altogether. But as digital publishers have come to discover, human moderation is time-consuming and very costly, yet still not available 24/7. In many cases, publishers do not have dedicated moderators who can vet every comment, so the task falls to editors and journalists. And when audience engagement becomes a moderation duty for the newsroom, engagement turns into a garbage-removal exercise: a laborious and thankless chore.

Pre-Moderation: Killing Engagement

If you’re a publication with a large community, you’ll need a team of moderators to sift through all the user-generated content. Not only is manual human moderation labour-intensive (and expensive), but it also creates a poor user experience: users see a delay between submitting comments and watching them go live, which discourages them from engaging at all.

Post-Moderation: Window of Exposure

Moderating after comments are published opens publishers up to having potentially libellous, harmful and irrelevant material on-site (even if it’s only temporarily live). And while human moderators can account for context while approving and rejecting comments, their intrinsic biases may impact which comments are filtered — this can lead to a frustrated audience. As a result, human moderation is not sustainable or scalable as a community grows.

Social Moderation: The Limitations

Current moderation tools also fall short when it comes to filtering out negative comments.

Tools like Facebook Commenting simply don’t offer strong enough filters to catch all the variations on profanity, abusive comments, and bullying that often plague comment sections. With approximately 6.5 million variations of a single English word, it’s easy for trolls to hide profanity and abuse within their comments. Facebook also limits the number of characters in its banned-words list, which makes it even more difficult for moderators to filter out abusive language.
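To see why a fixed banned-words list falls short, consider the look-alike character substitutions trolls use to disguise a word. Below is a minimal sketch; the substitution map and the harmless stand-in word are illustrative, not an actual filter:

```python
from itertools import product

# Illustrative look-alike substitutions commonly used to evade word filters.
SUBSTITUTIONS = {
    "a": ["a", "@", "4"],
    "e": ["e", "3"],
    "i": ["i", "1", "!"],
    "o": ["o", "0"],
    "s": ["s", "$", "5"],
}

def variants(word):
    """Generate every spelling of `word` reachable via the substitutions."""
    choices = [SUBSTITUTIONS.get(ch, [ch]) for ch in word.lower()]
    return ["".join(combo) for combo in product(*choices)]

spellings = variants("noise")  # harmless stand-in for a banned word
print(len(spellings))          # 36 spellings from a single five-letter word
print(spellings[:3])           # ['noise', 'nois3', 'noi$e']
```

Layer spacing tricks, punctuation and repeated letters on top of substitutions like these, and the space of spellings explodes toward the millions of variations cited below.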

Did you know:

  • Each word in the English language has about 6.5 million variations
  • Facebook Page Moderation allows only 10,000 characters in its banned-words list
  • The average English word is 5.1 characters long, capping Facebook moderation at roughly 1,961 words (a quick check follows below)
  • Facebook’s moderation profanity filter is reactive, not customizable, and built on community complaints
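
The word cap in that list is simple arithmetic; here is the back-of-envelope check, using the figures above:

```python
# Back-of-envelope check on the Facebook moderation figures listed above.
CHAR_LIMIT = 10_000    # characters allowed in the banned-words list
AVG_WORD_LENGTH = 5.1  # average English word length, in characters

print(round(CHAR_LIMIT / AVG_WORD_LENGTH))  # -> 1961 words, at most
```

Set a roughly 1,961-word cap against millions of variations per word, and it’s clear a static list cannot cover even a handful of words exhaustively.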

The Solution: Viafoura Automated Moderation

So what’s the solution that empowers digital publishers to stop online harassment, trolls and abuse? Viafoura Automated Moderation.

Leveraging machine-learning algorithms to automate comment moderation across owned and social channels, SaaM protects digital publishers, journalists and community members while increasing engagement.

SaaM moderates comments in real time and learns from post-moderation changes. It parses comments as they’re made and publishes or flags them based on your community guidelines. Using natural language processing and machine learning, SaaM automatically detects and hides inappropriate comments, from personal attacks and foul language to political hostility and spam, before they’re seen by online audiences.
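Viafoura hasn’t published SaaM’s internals, so the following is only a minimal sketch of the publish-or-flag flow described above; the toy scorer, labels and threshold are hypothetical stand-ins, not the product’s actual model:

```python
from dataclasses import dataclass

FLAG_THRESHOLD = 0.8  # hypothetical cut-off, tuned per community guidelines

@dataclass
class ModerationResult:
    decision: str       # "publish" or "flag"
    score: float        # estimated probability the comment violates guidelines
    reason: str | None  # e.g. "personal attack", "spam"

def classify(text: str) -> tuple[float, str | None]:
    """Toy scorer standing in for a trained NLP/ML model."""
    blocklist = {"idiot", "moron"}  # placeholder guideline terms
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & blocklist:
        return 0.95, "personal attack"
    letters = [c for c in text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.8:
        return 0.85, "shouting"
    return 0.05, None

def moderate(comment: str) -> ModerationResult:
    """Parse a comment as it is made; publish it or flag it for review."""
    score, reason = classify(comment)
    if score >= FLAG_THRESHOLD:
        return ModerationResult("flag", score, reason)
    return ModerationResult("publish", score, None)

for comment in ["Great analysis, thanks!", "You're an idiot."]:
    print(comment, "->", moderate(comment).decision)
```

In a production system, the scorer would be a model trained and retrained on moderators’ own approve/reject decisions, which is how a system can learn from post-moderation changes.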

Once flagged, these comments can either be deleted automatically or reviewed and approved/deleted by a publisher’s in-house moderator or community manager. As a result, digital publishers can foster real-time dialogue between users, since the automated system monitors every post to ensure community standards are upheld.
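Continuing the sketch above, that auto-delete-or-review split amounts to a simple routing rule on the violation score; the second threshold here is again hypothetical:

```python
AUTO_DELETE_THRESHOLD = 0.97  # hypothetical: near-certain violations only
review_queue: list[str] = []  # flagged comments awaiting an in-house moderator

def route_flagged(comment: str, score: float) -> str:
    """Delete near-certain violations outright; queue borderline ones."""
    if score >= AUTO_DELETE_THRESHOLD:
        return "deleted automatically"
    review_queue.append(comment)
    return "queued for human review"

print(route_flagged("obvious spam link blast", 0.99))  # deleted automatically
print(route_flagged("borderline sarcasm", 0.85))       # queued for human review
```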

Digital publishers can let SaaM do the heavy lifting of comment moderation and focus instead on engagement.