Section 6: Who is doing what to respond to online harassment, and with what efficacy?

    Responses by social media platforms

    Social media companies have been developing community guidelines to regulate content and provide responses to online harassment. Whether through reporting fake profiles or requesting the removal of harmful and defamatory content, social media companies have provided a number of potential solutions for their users, the efficacy of which will be discussed further below. To understand the current proposed solutions, however, it is important to first provide a brief historical context of the policies and criteria used by tech companies to regulate content.

    The below timeline documents significant events and critical moments in the history of social media platforms' responses to online harassment, whether in the form of guidelines or technology. The timeline also includes contentious moments that triggered conversations about the role of companies in combating harassment. It was created through the collection of various online sources, including official announcements by companies, news reports on events, and opinion pieces written by advocacy and women's rights groups.7

    The landscape of available responses by tech companies changes quite rapidly based on critical political events, and we would very much like to keep this resource alive and updated. If you think this timeline should include additional resources, please email us at ttc@tacticaltech.org.

    Please see the timeline here

    Tech companies have been developing criteria to regulate content since the early 2000s. The earliest examples are policies and administrative guides developed by Wikipedia and LiveJournal.8 In 2006, YouTube developed its policy on content moderation, which received widespread global attention. Following this, the timeline shows that 2014 marked significant momentum in the number of dedicated announcements by companies on harassment and abuse. This is the year in which Twitter was criticised for online harassment in relation to the Gamergate scandal,9 which led the company to undertake actions such as suspending user accounts and to establish a partnership with Women, Action and the Media (WAM).

    Another significant moment occurred in 2016, when Facebook, Google and Twitter announced that they would be working with women's rights groups and NGOs globally to fight harassment and hate speech. Historically, tech companies have only responded to online harassment when faced with public pressure, such as following scandals, during which time their responses have primarily been simple public relations fixes as opposed to structural shifts.

    The 2016 decision to liaise with women’s rights groups is important because it is a public acknowledgement of the need for tech companies to act in conjunction with civil society to design solutions to online harassment. It also signified an acknowledgement that actions pertaining to the governance of social media need to be informed by the lived experiences of women at the receiving end of violence.

    Understanding the historical development of the attitude of social media platforms toward online harassment helps us to contextualise and evaluate the two prime avenues proposed today. Each of the following two approaches developed by social media companies will be presented and evaluated below:

    • Human review process
    • Technical solutions

    A. Human review process or moderation

    As described above in the timeline, the use of content moderators was one of the earliest mechanisms implemented by tech companies to flag and take down content deemed to be in violation of platform rules. Early examples of content moderation include standards and guidelines, such as the moderation and copyright regulation criteria developed by Fark.com, AOL/Yahoo, Flickr and large segmented online communities such as Something Awful,10 in response to malicious editing, trolling and copyright issues. However, while formative, none of these attempts enjoyed the social reach that YouTube acquired through its decision to hire content moderators in 2006. That year, YouTube also wrote its own set of rules, and in 2007 it introduced the Content Verification Program to flag copyrighted content.

    There have, however, been a number of issues raised about the efficacy and ethical implications of using human content moderators to address online harassment.

    Firstly, although human content moderation was deployed as an early mechanism, it is only recently that we, as the general public, have been exposed to the world of content moderators. Media reports have started to emerge about who they are, how much they are paid, how they conduct their jobs and, ultimately, the emotional effects this job has upon them. The 2017 Facebook Files revealed the working conditions of Facebook moderators as well as the inner workings of Facebook's Community Standards, which include moderating content such as underage sexual abuse, suicide and beheadings. Facebook currently has 7,500 content moderators. While these moderators are offered training and counselling, Facebook has not been transparent regarding the types of support offered to moderators to cope with their daily exposure to violence and abuse. It is at the discretion of moderators working for these companies to speak up, sue the company, or become whistleblowers. And ultimately, it is left to an undertrained, overworked, and perhaps inadequately-supported moderator to draw the line between free speech and violence. In April 2018, following years of advocacy by activists and users, Facebook publicly released for the first time a version of its Community Guidelines describing what users are prohibited from saying on the platform.

    Secondly, there are significant issues with using a global workforce of content moderators who are not attuned to the cultural or contextual specificity of the content, or who are simply incapable of responding to the volume of requests that they receive at a global level. Given the magnitude of these challenges, it is unrealistic to assume that a global workforce of content moderators can adequately respond to online harassment.

    Thirdly, the mandate of moderators to remove content has been the subject of much criticism from freedom of expression advocates due to its ethical implications. Free speech defenders have criticised the monitoring of content as an infringement of free speech and a form of censorship. By regulating content on their platforms, companies exert substantial power over shaping free speech. A recent report by ProPublica exposed the poor standards implemented by tech companies to distinguish between hate speech and political expression.12 ProPublica's research also demonstrated large inconsistencies within the Facebook Community Guidelines, especially in relation to protected groups. However, lawmakers remind critics that a private platform cannot, by definition, violate freedom of expression: the right protects individuals from government censorship, not from censorship by private entities.

    Relatedly, there have been issues with how Facebook defines “protected categories” of people, with direct implications for who is protected. As revealed by [The Guardian Files](https://www.theguardian.com/news/series/facebook-files), Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at protected categories, which are determined based on the overarching categories of race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease. In other words, the Facebook Community Guidelines protect broad categories of individuals rather than subsets. The implication is that individuals at the intersection of historically-excluded identities are not sufficiently protected, despite being subject to further vulnerabilities, as the broader power dynamics found within societies are left untouched. Accordingly, “white men” are considered a protected group because both traits are protected, whereas “female drivers” and “black children,” like “radicalised Muslims,” are subsets, because one of their characteristics is not protected. Legal scholars have pointed out that this practice is arguably in contradiction with international law and policies pertaining to the protection of minorities and protected groups. They have similarly pointed out that the criteria are arguably in contradiction with US law, which permits preferences such as affirmative action for racial minorities and women, for ensuring diversity or for redressing discrimination.11
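
    To make this subset logic concrete, the leaked rule can be reconstructed as a simple predicate: an attacked group is shielded only if every one of its qualifying traits falls within a protected category. The sketch below is our illustrative reconstruction of the rule as described in The Guardian Files, not Facebook's actual code, and the trait list is a hypothetical abbreviation of the real categories.

```python
# Illustrative reconstruction of the moderation rule described in the
# Guardian Files -- NOT Facebook's actual code. Traits are abbreviated
# stand-ins for the protected categories (race, sex, religion, etc.).
PROTECTED_TRAITS = {
    "white", "black",        # race
    "men", "women",          # sex
    "muslim", "christian",   # religious affiliation
}

def group_is_protected(traits: set[str]) -> bool:
    """A group is shielded from attacks only if ALL of its traits are protected."""
    return all(trait in PROTECTED_TRAITS for trait in traits)

print(group_is_protected({"white", "men"}))       # True: both traits protected
print(group_is_protected({"black", "children"}))  # False: age is not protected
print(group_is_protected({"women", "drivers"}))   # False: occupation is not protected
```

    Under this rule, the more specific (and often more vulnerable) the targeted group, the less likely it is to be protected, which is precisely the intersectionality problem the critics describe.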

    Finally, attempts by tech companies to apply a single set of rules to a global community of localised contexts are likely to open the door to misinterpretation and, moreover, are anti-democratic at heart. The task of addressing harassment online is left up to the companies, who are empowered to determine which content is considered violent, abusive or constitutes harassment, and likewise which content qualifies as free speech. In the absence of adequate oversight by public institutions, tech companies use content moderation rules and community guidelines to decide which content stays and which is removed. It is dangerous for companies to set the policy agenda in relation to free speech, when this ought to be determined by public institutions that represent the will of the people.

    B. Technical solutions

    On the other end of the spectrum from human moderation is the use of algorithms – or a combination of algorithms with minimal human moderation – by social media companies to regulate speech and curb online harassment. These technological solutions, introduced by tech companies without public oversight, seek to pre-emptively determine and prevent online harassment.

    One such tech solution was announced by Facebook in November 2017 as a new approach to tackling revenge porn.13 The approach involves Facebook digitally hashing14 sexually explicit images submitted by women in anticipation of having this form of violence used against them; Facebook then uses the resulting digital fingerprint to prevent the image(s) from being uploaded to the platform again. Although primarily relying on an algorithm, the hashing process also includes a human review step, through which a systems engineer or a content moderator reviews the content to determine whether or not it constitutes revenge porn.
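
    In mechanical terms, such a scheme amounts to keeping a blocklist of image fingerprints and checking each new upload against it. The sketch below is a minimal illustration of that general idea using a cryptographic hash; it is our assumption-laden simplification, not Facebook's actual (undisclosed) implementation.

```python
import hashlib

# Minimal sketch of hash-based upload blocking -- an illustration of the
# general technique, not Facebook's undisclosed implementation.
known_hashes: set[str] = set()

def register_image(image_bytes: bytes) -> None:
    """Fingerprint an image submitted in advance so future uploads can be matched."""
    known_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def should_block_upload(image_bytes: bytes) -> bool:
    """Block an upload if its fingerprint exactly matches a registered image."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes
```

    Note that an exact cryptographic hash like this matches only byte-identical files; as discussed below, that brittleness is one reason experts question how hard such filters are to evade.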

    In response to this approach, the company received caution and mixed reviews from women's rights advocates. Several women's rights activists, unnerved by this approach, argued that the suggested response requires women who are anticipating violence to compromise their data security and privacy to a stranger. As expressed by one interviewee: “I shouldn't have to expose more of my intimate and vulnerable data in order to be protected from online abuse. They need to come up with an approach that doesn't further violate my data privacy.” (Interview, December 2017) Others pointed out that the technology cannot be fully trusted to respond to abuse. Moreover, like many Facebook processes, this one is largely non-transparent. Experts note that there is no telling whether the system can be easily tricked into missing a match by altering aspects of a photo, sometimes in subtle and even imperceptible ways. Successful attempts at tricking machine vision systems are well documented through the use of “adversarial images,” raising questions as to the efficacy of this approach.15
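
    The brittleness concern is easy to demonstrate with an exact hash: flipping a single bit of an image file, a change no human would notice, produces an entirely different digest. This is a toy example of the evasion problem, not a claim about Facebook's specific matcher; robust “perceptual” hashes tolerate small edits, but are in turn the target of the adversarial-image techniques cited above.

```python
import hashlib

# Toy demonstration of why exact hashing is trivially evaded: a one-bit,
# visually imperceptible change to the file yields an unrelated digest.
original = bytes(range(256)) * 64   # stand-in for raw image bytes
altered = bytearray(original)
altered[0] ^= 0x01                  # flip one bit

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(bytes(altered)).hexdigest())  # completely different
```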

    Facebook's approach described here directly relates to the debate on platform solutionism, or the idea that social media platforms could effectively solve all problems pertaining to online harassment, if they so desired, through the use of algorithms and code. The long history of companies developing technological solutions to address unwanted or harmful content would seem to support this. More recently, Google and its offshoot Jigsaw announced that they had developed an API called Perspective that “uses machine learning to spot abuse and harassment online.” Such tools usually rely on automation run by algorithms, whereas content reporting tools often require an additional layer of human moderation.
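
    For a sense of what such an API looks like in practice, the sketch below queries Perspective for a toxicity score. It follows the publicly documented request shape at the time of writing; the endpoint, attribute names and response format may have changed since, and the API key is a placeholder.

```python
import json
import urllib.request

# Sketch of a Perspective API call, based on its public documentation at the
# time of writing; endpoint and field names may have changed. API_KEY is a
# placeholder that must be replaced with a real key.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text: str) -> float:
    """Return Perspective's 0-1 estimate of how toxic a comment is."""
    body = json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example: scores above a chosen threshold could be routed to human review.
# print(toxicity_score("some comment to check"))
```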

    Yet many disagree with platform solutionism and do not believe that tech solutions, like algorithms, alone will resolve the issue of online harassment. Instead they call for a more multidimensional approach – namely one that incorporates policy, technology and public education. We agree with the viewpoint that relying solely on tech solutions or quick techno fixes is inadequate to ensure a holistic response to online harassment. This is because – as pointed out by many critics – quick techno fixes are themselves products of wider societal power imbalances, due to their paternalistic, top-down nature, and they fail to give people real tools to protect themselves.16 An effective human review process is integral to providing comprehensive and long-lasting solutions to online harassment, as well as key to the intersectional approach that is needed.

    Should tech companies have the power and responsibility to address online harassment?

    Regardless of whether human moderation or tech solutions are used, the debatable efficacy and ethics of tech companies addressing online harassment have raised the question of whether social media platforms ought to bear primary responsibility for tackling online harassment, and whether they have the will and capacity to do so.

    With regard to the ethical implications of tech companies regulating content, it is important to note that in April 2016, The Guardian examined the 1.4 million comments blocked by its moderators since 1999 and discovered that the 10 writers who attracted the most blocked comments were predominantly women; feminist writer Jessica Valenti topped the list. As US-based law professor Jeffrey Rosen observed, social media platforms have “more power in determining who can speak and who can be heard around the globe than any Supreme Court justice, any king or any president.” With the power to shape whose voice is heard, the ethics implicated in social media platforms navigating the issue of online harassment are of great concern. Yet this imbalanced picture, in which women writers bear the brunt of abuse, does not provide a convincing argument that tech companies ought to be the first port of call for online harassment. It also raises questions as to whether tech companies have the will to comprehensively address online harassment. Jessica Valenti argues, “If Twitter, Facebook or Google wanted to stop their users from receiving online harassment, they could do it tomorrow.”

    Addressing gendered online harassment would first require tech companies to centre the experiences of their users and use these as a pivot point from which to develop solutions and build trust – something which they have yet to do, and which severely limits their efficacy. This point was reiterated in our research: the majority of women who participated in our interviews stated that the remedies put forth by tech companies were ineffective in responding to widespread gendered online harassment. In particular, these women cited the guidelines as ineffective in maintaining their safety and wellbeing online. 22 out of 25 interviewees expressed that they distrust social media platforms for not doing more. One of them expressed: “Social media platforms are not even doing the bare minimum. I agree that online harassment is a social problem that requires a social response. But social media platforms need to do more.”

    In fact, there are even concerns that the current mechanisms used by social media platforms to regulate speech have strengthened the hand of authoritarian governments, rather than provided safeguards for women and other historically-excluded groups (such as religious minorities, women with different abilities, queer women of colour and others who are at the receiving end of this particular form of violence). Such was the case in Indonesia and Cambodia, where the available content regulation mechanisms were utilised by pro-government actors to suppress dissenting voices by reporting opposition figures' webpages to the social media platforms en masse.17

    It is clear that the use of technology itself mirrors broader societal inequities with regards to power and political agency, due to the way that technology is designed and applied, and also to the degree to which users are equipped to modify their own experiences via code. Given the extent of power embedded within social media platforms, combined with the complexity of the sources of gendered online harassment and the questionable efficacy, will, and capacity of tech companies to address it, we should be hesitant to overemphasise the capacity of tech solutions to provide quick fixes to what are deeply complicated social problems. Online harassment is a large-scale social problem, and the solutions put forth by tech companies are just one potential approach within the broader debate of how to address online harassment holistically.

    Responses by states and international governance and legal structures

    For many politically-active women, the ability to enjoy the freedom of expression is inherently tied to safeguards that ensure they can express themselves without being attacked or intimidated. For example, one activist interviewed for this research expressed: “I am a young black woman. I am harassed online constantly as a direct consequence of my gender and racial identity and political work. My right to political participation is attacked, and my freedom of expression is stifled.” Another interviewee added: “Why do I, as a trans woman, have less of a right to exist politically on online platforms? Why does their right to attack me with words have more of a priority than my right to express myself?” (Interview, November 2017) Therefore, arguably, the lack of a rights-based approach privileges those who, as a result of their identity, are not likely to be targets of gendered online harassment and will thus enjoy a relatively greater degree of free expression.

    While one might expect the prime response to this gendered, unequal realisation of the freedom of expression at the national and international levels to be of a legal nature, as of yet there are no clear regulations or safeguards for addressing online abuse. Some countries, such as Canada, Kenya, the Philippines, South Africa and Brazil, have adopted legislation focused on various forms of online abuse, such as non-consensual pornography or cyberstalking. However, there is still no clear overarching international standard or exemplary national-level legislation to manage the various forms of online abuse that target women, whether flaming, name calling, impersonation or another form of abuse.

    A. International responses or frameworks

    In light of the lack of comprehensive regulation, international law provides some grounds for responding to online harassment within the framework of gender-based violence and can be built upon to develop a globally agreed-upon framework. For example, international law sets clear safeguards for freedom of expression through Article 19 of the International Covenant on Civil and Political Rights.

    Article 19

    1. Everyone shall have the right to hold opinions without interference.
    2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.
    3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary: (a) For respect of the rights or reputations of others; (b) For the protection of national security or of public order (ordre public), or of public health or morals.

    Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which is further elucidated by the UN Human Rights Committee in General Comment 34, makes clear that the right to freedom of expression is a key right that can only be restricted under a limited set of circumstances. It is both an individual right of personal self-fulfilment and a collective right, allowing all members of society to receive information and ideas and inform themselves on matters of public interest. Within the framework of this report, women journalists have a special role to play in this democratic process. As the UN Human Rights Committee, which oversees compliance with the ICCPR, frames it: “A free, uncensored and unhindered press or other media is essential in any society to ensure freedom of opinion and expression … It constitutes one of the cornerstones of a democratic society.” For the states that are party to the ICCPR, or one of its regional counterparts, the European Convention on Human Rights, American Convention on Human Rights or African Charter on Human and Peoples’ Rights, this also entails an obligation to ensure a diverse media landscape, both online and offline.

    The Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) is also key, in that it sets up an agenda for national action to end all forms of discrimination targeting women. The CEDAW Committee, which oversees states' compliance with the Convention, explicitly states in its recent General Recommendation 35 (updating General Recommendation 19) that gender-based violence against women constitutes discrimination against women.18 Online harassment of politically-active women falls within this international norm. The General Recommendation states that “Harmful practices and crimes against women human rights defenders, politicians, activists or journalists are also forms of gender-based violence against women.” Accordingly, it is the obligation of the state to put into place a domestic legal system that is capable of responding adequately to threats, ensuring that perpetrators are prosecuted.

    B. National responses or frameworks

    National-level responses to address online harassment are severely lacking, and those that exist present a number of issues. Institutionalising and implementing international norms on harassment or freedom of expression at the national level has been done most effectively within the existing legal frameworks that address offline harassment. The problem, however, largely lies in how law enforcement can apply those frameworks to the online context. It can be argued that it is not specific legislation that is needed, but rather the training of law enforcement to apply existing frameworks to the online context.

    In cases where national-level legislation or mechanisms do exist, an issue lies in making people aware of the potential mechanisms for complaint or redress. 18 out of 25 of the women interviewed in our research expressed that they are unaware of whether there is a legal remedy in their national contexts to respond to online harassment. Eighty percent of all women interviewed expressed that they are unaware of any support mechanisms other than the several reporting options run by social media platforms, which they deemed ineffective.

    Limits of legal responses

    Legal remedies to online harassment have limitations that must also be considered. Soraya Chemaly of the Women's Media Center argues that legal solutions on their own will be inadequate to respond to online harassment. In 2017, she wrote that online harassment is a “social problem that requires a social solution”. This is because “Laws, globally, are woefully behind technology. In addition to which, the murky question of jurisdiction alone impedes justice.” However, the law alone won't fix these problems, which require “cultural change”. The politically-active women we interviewed agree that relying on a single government institution or technology company is inadequate for providing a holistic response to online harassment.

    Yet for many politically-active women the reality is more nuanced. As the interviewees quoted at the opening of this section made clear, their ability to enjoy the freedom of expression is inherently tied to safeguards which ensure that they can express themselves without being attacked or intimidated, and the lack of a rights-based approach privileges those who, as a result of their identity, are unlikely to be targets of gendered online harassment.

    The feminist critique of linear and one-sided remedies also highlights a lack of clarity and a general lack of trust with regards to the existing legal framework that is established and controlled by the values inherent within patriarchy.

    The limits of US/Euro-centrism in developing legal remedies

    One large obstacle to developing a global understanding of how to address online harassment is that much of the current debate around its root causes and remedies places disproportionate emphasis on US and European legal frameworks. Although the American Convention on Human Rights and the African Charter on Human and Peoples' Rights set standards for region-specific debates, more emphasis needs to go toward rendering visible harassment and ‘cyberbullying’ outside of US or European contexts. What is urgently needed is a context-specific debate that can guide us on the role of non-state as well as state actors. The US-centric legal framework that informs the (de-)regulation of online harassment is further empowered by the fact that three of the largest social media companies – Facebook, Twitter, and several Alphabet initiatives – are physically headquartered in the US and therefore defer to US law. One journalist who participated in our research rightfully noted that “Social media platforms cite First Amendment protections to justify their unwillingness to respond to online harassment. However, we must be mindful that even though their servers are US-based, the effects of their technology, including online harassment, are global.” (Interview, November 2017)

    The prominence of the US context lies in how it has centred the global debate on online harassment around the US First Amendment. With the US providing the most protective free speech framework in the world, the US-based American Civil Liberties Union (ACLU) has used its position as a free speech advocate to become a pioneering voice in countering efforts to regulate online speech. It argues that online speech regulation would crack down on opposing and unfavourable views. The ACLU cites Ashcroft v. ACLU and Reno v. ACLU to highlight that the First Amendment does not permit targeting speech merely because it is offensive, reprehensible, or even harmful to the unsuspecting listener, and that online speech therefore ought to be similarly protected – even if it risks constituting cyberbullying. The ACLU states that it is a parent's responsibility, not the government's, to regulate access to content, adding that “… the only way for the Internet to remain a true marketplace of ideas is to push for the free exchange of information and speech, with the understanding that online speech can be as beneficial or as hurtful as speech occurring offline.” However, in August 2017, the ACLU made a statement specifying that it would not defend groups that incite violence or march “armed to the teeth.”

    One clear case that illustrates the limits of deferring disproportionately to Global North contexts is that of Colombia. Between 2009 and 2012, the Colombian feminist organisation Mujeres Insumisas received a series of online threats in response to its campaigns for women's rights. Paramilitary groups were among the harassers, sending the NGO 12 threatening emails urging it to stop its women's rights campaigning. One email stated that “we will not be responsible for what might happen to the leaders of these organisations ... we have begun to exterminate each one of them without mercy”. It would be difficult to discuss the Colombian context of such harassment through a US-centric legal lens without taking into account the context-specific historical realities of the country – namely, through a lens that puts emphasis on the role of non-state as well as state actors in facilitating gender-based violence and discrimination. Therefore, when contemplating a universal, comprehensive legal definition of and approach to addressing the online harassment faced by politically-active women, it is essential to include a context-specific approach.

    Given the global nature of the problem, the discussion also needs to be diversified, incorporating the actual lived realities of women in the Global South, with legal and political solutions suited to these contexts. Limiting the discussion to US and/or Europe-based contexts - with a specific focus on Facebook, Twitter and Google - renders invisible the global implications of digitally-facilitated online gender-based harassment.

    7 Resources on responding to online harassment on social media – links to the platforms' support centres: Twitter; Facebook

    8 Wikipedia Content Policies; LiveJournal

    9 Gamergate is used as a blanket term for the harassment campaign and actions of those participating in it. Beginning in August 2014, the harassment campaign targeted several women in the video game industry, including game developers Zoë Quinn and Brianna Wu, as well as feminist media critic Anita Sarkeesian. For more information on Gamergate please visit: Gawker, 2014; Washington Post, 2014

    10 SomethingAwful.com is a platform offering daily internet news; reviews of movies, games and social networking; and anime and adult parody. It is one of the internet's largest forums, covering games, movies, computers, sports, anime and cars.

    11 The Guardian, 2017; The Guardian, 2017

    12 ProPublica, 2017; The Guardian, 2017; The Guardian, 2017; Fortune, 2017

    13 At Tactical Tech, we prefer not to use the term “revenge porn” and instead use “non-consensual image sharing”. We believe the term “revenge porn” trivialises the experiences of survivors, while rendering invisible the many different tactics used by adversaries in disseminating non-consensual images of women.

    14 Hashing is a technique whereby a specific algorithm is applied to an arbitrary amount of input data to generate a fixed-size output called the hash.

    15 Adversarial images are digital manipulations designed to trick AI algorithms, either by making a facial recognition system think someone looks like someone else, or by forcing a piece of image recognition software to think it is looking at an object that is in reality just a noisy mess of geometric shapes. The Verge, 2017

    16 Efficiency and Madness – Using Data and Technology to Solve Social, Environmental and Political Problems

    17 New York Times, 2018; New York Times, 2018

    18 General Recommendation No. 35 on gender-based violence against women, updating General Recommendation No. 19, Committee on the Elimination of Discrimination against Women (CEDAW)

    Further reading:

    • Facebook is hiring moderators – The Guardian – www.theguardian.com
    • More than Words: Complexities of Platforms in Responding to Online Harassment – Jillian York – https://xyz.informationactivism.org/en/
    • Insults and rape threats. Writers shouldn't have to deal with this – The Guardian – www.theguardian.com
    • If tech companies wanted to end online harassment, they could do it tomorrow – The Guardian – www.theguardian.com
    • CEDAW – United Nations – www.un.org
    • Online Harassment Is a Social Problem That Requires a Social Response – Soraya Chemaly – www.huffingtonpost.com
    • Einige Gedanken zum Prinzip der Rechtsstaatlichkeit [Some Thoughts on the Principle of the Rule of Law] – Antje Schrupp – www.antjeschrupp.com
    • Free Speech and Cyber-bullying – ACLU – www.aclu.org
    • Revenge porn – The Guardian – www.theguardian.com