Online abuse against women

Women online experience a disproportionate amount of harm and abuse, but not all of it is based on gender alone. This “cyberviolence” is also shaped by a range of other intersecting factors such as race, religion, social class, caste and disability.

Our ongoing research involves collecting case studies from India and Australia to understand how various marginalized identities can shape young women’s experiences of online violence, and how social media companies including Facebook, Twitter and Instagram do not do enough to stop it.

India is a rich case study for this research: it is a country where women express identity in many different ways, and where racial, religious and social tensions persist.

Although Australia and India have very different cultures, women in both countries are victims of online crimes such as cyberstalking. And those with marginalized identities face more stigma and targeting.

Worse still, the platforms’ content moderators often fail to recognize this cyberviolence, frequently because they do not understand the nuances and contexts in which stigma operates.

What is cyberviolence?

Cyberviolence can be understood as harm and abuse facilitated by digital and technological means.

In 2019, there was a 63.5% increase in the number of cyberviolence cases reported in India, compared to 2018. There has since been a further increase in cases against women from marginalized communities, including Muslim and Dalit women.

A striking example is the “Bulli Bai” application, which appeared on GitHub in July last year. The app’s developers used the images of hundreds of Muslim women without their permission to put them “for sale” in a fake auction. The aim was to denigrate and humiliate Muslim women in particular.

A similar pattern is seen in Australia. Young Indigenous women are vulnerable to cyberviolence that targets them not only by gender, but also by race.

A 2021 research report from eSafety found that Aboriginal and Torres Strait Islander women were victimized by racist and threatening comments made online, typically in public Facebook groups. They said the abuse made them feel unsafe and had a significant impact on their mental health.

Another example comes from New South Wales Greens Senator Mehreen Faruqi, who has endured an extremely high level of online abuse as Australia’s first female Muslim senator. Speaking on behalf of women from marginalized backgrounds, Faruqi said: “It depends on where I come from, how I look, what my religion is.”

Young women with marginalized identities

Our research on cyberviolence against women in India reveals how hatred towards certain religions, races and sexual orientations can make gender-based violence even more harmful.

When women voice their opinions or post photos online, they are targeted because of their marginalized identity. For example, Kiruba Munusamy, a practising advocate at the Supreme Court of India, received racial and caste-based slurs for speaking out against online sexual violence.

And women with marginalized identities continue to be victimized online, despite attempts at regulation.

Consider Australia’s “Safety by Design” framework, developed by the eSafety Commissioner. Although it has gained traction in recent years, it remains a voluntary code that encourages tech companies to prevent harm online through product design.

In India, hate speech against Muslims in particular has increased. India has laws (though imperfect) that can be used to tackle online abuse, but better implementation is needed.

Amid Hindu-majority politics and rising radicalization, victims can find it difficult to report incidents. They worry about their safety and about secondary victimization, in which they may face further abuse as a result of reporting a crime.

It is difficult to know the exact extent of cyberviolence perpetrated against women with marginalized identities. Yet it is clear these identities are linked to the amount and type of abuse women face online.

A study by Amnesty International found that Muslim women politicians in India faced 94.1% more ethnic or religious slurs than women politicians of other religions, and that women from marginalized castes received 59% more caste-based slurs than women from general castes.

Recognition in platform design

Five years ago, Amnesty International submitted a report to the United Nations highlighting the need for moderators to be trained in identifying gender- and identity-related abuse on platforms.

Similarly, in 2019, Equality Labs in India released an advocacy report detailing how Facebook failed to protect people from marginalized Indian communities. This is despite Facebook listing caste, religion and gender as “protected” categories under its hate speech guidelines.

Yet in 2022, social media companies and moderators still need to do more to address cyberviolence through an intersectional lens. While platforms have country-specific moderation teams, moderators often lack cultural competency and knowledge about issues of caste, religion, sexuality, disability, and race. There can be a variety of reasons for this, including a lack of diversity among staff and contractors.

In a 2020 report from Mint, a moderator working for Facebook India said she needed to maintain a minimum accuracy score of 85% to keep her job. In practice, that means she cannot spend more than about 4.5 seconds on each piece of content under review. Structural issues like these also contribute to the problem.

The way forward

In March 2022, the Australian eSafety Commissioner joined a global partnership to end cyberviolence against women. But much work remains to be done.

Content moderation is complex and requires the collective expertise of communities and advocates. One way forward is for social media companies to increase transparency, accountability and resource allocation when developing solutions.

In November last year, the Australian government released draft legislation intended to hold social media companies accountable for content posted on their platforms and to protect people from trolls. These regulations are expected to ensure platforms are held responsible for harmful content that affects users.

The Conversation

Read the original article
