It’s no secret that these days people hide behind anonymous accounts on social media to post hate.

 

It’s a problem that is getting worse, especially as more and more people join social media.

 

Facebook welcomes approximately 500,000 new accounts per day.

 

But it doesn’t have to be a problem.

 

Researchers believe this problem could actually be solved if Facebook stepped up.

 

A team of Australian researchers have been looking more closely at Facebook’s hate speech moderation.

 

The team looked at Facebook’s current definition of hate speech, and how it has specifically impacted LGBTQI+ community pages in five countries across the Asia-Pacific region: Australia, India, the Philippines, Myanmar, and Indonesia.

 

The study was funded by Facebook itself, a positive sign that the company acknowledges the problem.

 

We interviewed Fiona Martin from The University of Sydney, who co-authored the study.

 

“Part of the problem with the way that Facebook monitors and filters hate speech is that it cannot automatically, or with its human moderators, capture all the hate that gets posted on its platform.”

 

“So, though it’s been improving its AI (artificial intelligence) or machine learning moderation for years, it still misses a small percentage of the sorts of really violent and discriminatory speech that people post.”

 

Identifying what exactly counts as “hate speech” has been an ongoing concern. While Facebook recently adjusted its definition to be more nuanced and capture more hate, perpetrators are inventive, and often find ways around language filters and bans. For example, the team came across an incident where a vomit emoji was posted in the comments below a gay couple’s wedding photos. This went undetected by the algorithms. And the potential impact of this negative sentiment is damaging, not just for the couple, but for the wider LGBTQI+ community.

 

“…Hate speech is powerful because it targets people who are already marginalised in their communities, people who are already subjected to nasty comments on the bus or the train, who are already bullied in the schoolground, who are already discriminated against in legislation. That’s why it has that power against people.”

 

While we can’t make people accept everyone, we can do more to make sure there are safe places for marginalised individuals to go, even something as simple as a welcoming and supportive online community. And it starts by holding social media companies accountable for what they let pass on their platforms.

 

“The thing with Silicon Valley platforms, with Facebook, Insta and Twitter, is that they promote a “frictionless” experience. They don’t want anything to slow you down. This is one of the difficulties with trying to suggest recommendations to Facebook that might actually mean its users have a more challenging experience of using the platform. They don’t want to do that.”

 

Even when content is reported, the moderators themselves, who are often poorly paid and poorly trained, are sometimes not equipped to identify or combat it.

 

“Long-term, it is a problem that we have a private company that is regulating speech in national contexts, because as we have discovered in our report, there are different cultural nuances to speech that aren’t necessarily captured by Facebook’s universal community standards.”

 

The team believe that mandatory training for content moderators and page admins is the first step towards eliminating hateful content. Even then, there is more that can be done.

 

“I’d love to see Facebook consult more. It consults a lot already with academics and with what it calls “experts in the field” but I’m not convinced it’s getting to the people on the ground who are actually experiencing this hate.”

 

“So, you really need to keep this dialogue going with marginalised groups, all the time, and it has to be an ongoing process; you can’t just consult two or three groups once and then never come back to it. We would love to see an annual round table in the region where some of the bigger groups get together and exchange details on what sort of hate speech is emerging, what sort of trends are emerging.”

 

In fact, the team still have not heard back from Facebook about whether the company is going to act on any of the recommendations. So, if Facebook doesn’t make these changes, then it comes down to what we can do.

 

“There’s a lot that everyday Australians can do. If you spot what you think is hate speech on a Facebook page, report it. Get familiar with the reporting mechanism and get familiar with what hate speech is. It’s a real responsibility for all of us to try and, where we can, minimise this discriminatory speech.”

 

Imogen Brooks reporting for Brisbane Line on 4ZZZ radio.

 

 

On-Air Story Introduction

Facebook has stepped up its hate speech moderation in recent years. But has it done enough? Australian researcher Fiona Martin was part of a Facebook-funded investigation into the effectiveness of the platform’s online moderation and control in the Asia-Pacific region. Imogen Brooks reports.

 

 
