2018 European Roundtable

Overview

May 23, 2018, 13:00 - 17:00
Paddington, London

FOSI’s 2018 European Roundtable explored the challenge of moderating problematic online content from both a technological and community perspective. Experts discussed the many ways that industry is tackling the complex issue of controversial content online, a growing area of focus in online safety as it encompasses everything from nudity and sexual content to hate speech and harassment. Technology companies have come under pressure to both improve their reporting systems and provide greater transparency about how moderation decisions are reached, making this a timely topic for exploration.

To start, a clip from the documentary film The Moderators was screened, providing insight into how moderation staff around the world are trained to make decisions about content that has been flagged or reported as inappropriate. As the film shows, moderators must understand each platform’s policies, definitions, and criteria for unacceptable content in order to be effective. Attendees agreed that the need to make nuanced, culturally appropriate decisions about content illustrates the importance of humans in the moderation process. The film also highlights the need to hire staff who are geographically and linguistically native to the regions they cover, enabling them to moderate content based on trends, pop culture, and other cues that technical tools would be far less effective at judging.

On the technical front, artificial intelligence (AI) and algorithms will play a growing role in the future of moderation. AI was viewed with optimism as the fastest-growing tool available to industry: on platforms with hundreds of millions of users, AI can process potentially harmful content faster and in far greater volume than human moderation teams. However, it was generally agreed that human judgment remains a necessity when making decisions about content in sensitive contexts. Participants also noted that some bad actors will attempt to manipulate technical tools, and that AI can be fooled by altered audio or video content. As for algorithms, some raised concerns about how they are created and the potential for bias.
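To make that division of labor concrete, the sketch below illustrates the human-in-the-loop triage pattern discussed at the roundtable. It assumes a hypothetical classifier that scores flagged content for likely policy violations; the threshold values are purely illustrative and not drawn from any company’s actual system.

```python
# A minimal sketch of AI-assisted triage with a human fallback.
# The scoring model and both thresholds are hypothetical.

AUTO_REMOVE = 0.95    # high confidence: act without waiting for a human
HUMAN_REVIEW = 0.60   # uncertain: route to a human moderator

def triage(violation_score: float) -> str:
    """Decide what to do with a flagged item given a model score in [0, 1]."""
    if violation_score >= AUTO_REMOVE:
        return "remove automatically"
    if violation_score >= HUMAN_REVIEW:
        return "queue for human review"
    return "leave up, log for auditing"

for score in (0.99, 0.75, 0.10):
    print(f"score={score:.2f} -> {triage(score)}")
```

The key design choice is the middle band: automation handles the clear-cut volume, while ambiguous or context-sensitive items are escalated to people, which is exactly where attendees felt human judgment remains essential.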

The concept of hashing images and videos was also explored as a technical means of expediting moderation for content that is frequently and repeatedly reported. Downsides of automated systems included the potential for false negatives, and the difficulty of defining and enforcing policies for content that may be suitable for some age groups but not others. Because social media is used globally, automated moderation also runs into challenges with the different geographical meanings of some images, symbols, and speech.
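As a rough illustration of the hash-matching idea, the sketch below fingerprints content that moderators have already removed and blocks exact re-uploads automatically. Production systems use perceptual hashes, which survive resizing and re-encoding, rather than the plain cryptographic hash used here, and all names in the sketch are hypothetical.

```python
import hashlib

# Fingerprints of content already reviewed and removed (hypothetical store).
known_bad_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a fingerprint of an upload's raw bytes.

    Real systems use perceptual hashing so that resized or re-encoded
    copies still match; SHA-256 keeps this sketch self-contained but
    only catches byte-identical re-uploads.
    """
    return hashlib.sha256(data).hexdigest()

def register_removed_content(data: bytes) -> None:
    """After a moderator removes content, remember its fingerprint."""
    known_bad_hashes.add(fingerprint(data))

def check_upload(data: bytes) -> str:
    """Block known content instantly; everything else proceeds normally."""
    if fingerprint(data) in known_bad_hashes:
        return "blocked: matches previously removed content"
    return "accepted: eligible for normal review if reported"

# A re-uploaded copy of removed content never reaches the review queue.
register_removed_content(b"<bytes of a removed image>")
print(check_upload(b"<bytes of a removed image>"))  # blocked
print(check_upload(b"<bytes of a new image>"))      # accepted
```

This is why hashing expedites moderation for repeatedly reported material: each item needs only one human decision, after which every subsequent copy can be handled without re-review.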

Philosophically, stakeholders agreed that technology is “human” and powered by human interaction. This led to a discussion of community moderation and the ways that companies can encourage users to act positively on their platforms. Some called this “behavior by design”: offering incentives for better behavior, or simply removing features that can lead to negative interactions, such as disabling text chat in a children’s game to prevent bullying. Participants observed that some users view moderation as punishment, whereas clearly and positively stating expectations creates an atmosphere of pro-social rather than antisocial behavior.

The importance of parents’ role was also discussed. Many felt that there is still a critical knowledge gap around parental controls, safety practices on social media, and how to handle bullying or have tough conversations with kids about sensitive content online. Without parental guidance, experts say, children do not receive the most effective “onboarding” to social media, which lowers their aptitude not only for safety but for good digital citizenship. While there are many programs and awareness campaigns on these topics, supporting and increasing the engagement of parents was identified as a top priority for all stakeholders.

FOSI will be following the development of these issues as experts and industry continue their work to make social media and the digital world a better, safer place for users.

For further reading on these topics, see these FOSI Policy Briefs:
Technical Solutions to Problematic Content Online
Community Solutions to Problematic Content Online

Agenda

Welcome
Technology Solutions to Problematic Content
Break
Community Solutions to Problematic Content
Reception
