REPORT ON THE CENTER FOR MEDIA AT RISK’S 2021 ANNUAL SYMPOSIUM
This report by Steering Committee Member and Annenberg doctoral student Sophie Maddocks presents findings from the Center for Media at Risk’s 2021 annual symposium on image-based abuse, which Maddocks co-organized. The event was the first of its kind, bringing together experts from around the world in focused discussion of one of the media’s most pressing issues. This report offers recommendations and shared priorities for policymakers, technologists, scholars and activists.
What is image-based abuse and why does it matter?
Drawing from Mary Anne Franks’ keynote speech, this section defines image-based abuse and explains why it is an urgent threat to democratic life.
You may not have heard the term “image-based abuse,” but as Mary Anne Franks explained in her keynote address, you are probably familiar with its fallout. Image-based abuse refers to the non-consensual creation and distribution of intimate images, and it is becoming an ever-present threat to privacy, speech and democracy. In 2014, a nude photo leak first put image-based abuse in U.S. headlines when 500 explicit pictures of celebrities, mostly women, were stolen and disseminated on public social networks. The hack’s most high-profile victim, Jennifer Lawrence, described it as “so unbelievably violating that you can’t even put it into words.” A few years later, a secret Facebook page called “Marines United” came to light. This all-male group of Marines and Navy corpsmen traded hundreds of naked images of their women colleagues, including images taken while the women were unconscious, identifying them by full name and rank. As Franks explained, these examples, while shocking, are becoming increasingly commonplace. A recent multi-country study found that one in three people report being victimized by image-based abuse.
These are not ‘one-off’ attacks that cause hurt feelings; they are broad efforts to silence and shame women in public space. In her remarks, Franks described image-based abuse as a warning to girls and women not to “trip the wire of people’s misogyny.” It is also a severe invasion of sexual privacy that decimates the lives of its victims. The mere threat of this harm actively stops women from entering public-facing professions like politics, the military or the law. How does an elementary school teacher keep her job after her ex-boyfriend sends her intimate images to students’ parents? How does a young activist continue to speak out when deepfake technology is used to digitally strip her without her consent? What is it like for girls growing up today, when adult men regularly message them on social media platforms coercing them to share nudes? Altogether, these individual realities add up to serious personal and communal harm.
According to Franks, by allowing image-based abuse to continue unchecked, we are normalizing sexual surveillance, the use of sexuality as a punishment and the sexual use of people without their consent. Historical indifference to violence against women, combined with the new features of digital technology, creates the conditions for a much broader resurgent misogyny that silences women.
But the communal harms of image-based abuse are not a foregone conclusion. Franks’ advocacy over the last ten years has led to huge strides against this harm. After a traumatic personal experience of image-based abuse during her graduate studies, Dr. Holly Jacobs united colleagues from across her university in 2012, including Mary Anne Franks, to launch a website and online petition calling for the criminalization of what was then called “revenge porn.” The following year, she founded the non-profit Cyber Civil Rights Initiative. As one of the organization’s original members, Mary Anne Franks played a central role in changing legal and public attitudes towards this harm. First, she and her colleagues pushed back against initial news coverage of image-based abuse, which trivialized its consequences and blamed its victims. These activists rejected the inaccurate and victim-blaming term “revenge porn,” replacing it with the term “non-consensual pornography,” and later “image-based abuse.” Alongside its public advocacy and individual victim-support services, the Cyber Civil Rights Initiative led successful campaigns to criminalize this harm in almost every U.S. state and conducted the first representative study of its traumatic impacts on victims.
Almost ten years after they began their advocacy work, everyone else is finally catching up. As Center Director Barbie Zelizer explained in her symposium introduction, we are now at the center of a storm. New laws against image-based abuse are being passed worldwide; the United Nations launched “bodyright,” a copyright-style system for the body; Facebook launched a new tool for removing image-based abuse; and TikTok explicitly banned misogyny from its platform. This is the biggest lesson we take from Mary Anne Franks’ opening remarks: Law, technology, society, the way a platform operates and how people think are all things we can change. And we do this by listening to women. Franks describes women as “canaries in the coalmine.” By listening and responding to women’s experiences online, we can anticipate and prevent future harm. Beyond this, Franks also demands a change in our thinking. Instead of considering what we should do to protect or regulate speech, we should instead focus on the conditions that will allow free speech to flourish. When someone worries that their most private moments will be exposed and traded, leading to potential stalking, rape or death threats, their privacy, speech and ability to participate in society are eviscerated. This report presents an alternative vision. Through the findings and recommendations of image-based abuse experts from various contexts, new possibilities emerge for a safer and more accessible internet.
Key Findings: An emergent threat to speech, privacy and democracy
Incorporating Asia Eaton and Karuna Nain’s closing remarks, this section summarizes the symposium’s central themes.
A crime of misogynistic entitlement. The central theme echoed in every panelist’s remarks was the sexist and heterosexist nature of this harm. Although men can be victims and women can be perpetrators, image-based abuse is a regulatory practice rooted in misogynistic and heterosexist norms. Panelist Mikiba Morehead, reporting from the United States, noted that women are more likely to be targeted, especially bisexual women, younger women, women of African descent and Native American women. Her research also finds that African American men are disproportionately vulnerable to this harm. Morehead encourages us to look at image-based abuse within the context of other forms of sexual violence. She uses the term “poly-victimization” to explain how adult victims of image-based abuse typically experience multiple other forms of violence as well.
Many panelists talked about how violence against women is normalized across on- and offline spaces. Reporting from Nigeria, panelist Richard Aborisade said that victims of image-based abuse are automatically blamed for the creation and distribution of their private images and are typically perceived as perpetrators, not victims. Panelist Nighat Dad, reporting from Pakistan, also spoke of the rape-supportive culture that undermines women’s bodily consent and blames them for their victimization. In response, panelist Paz Peña, reporting from a coalition of organizations in Latin America, explained that we must first denaturalize gender-based violence online by taking it seriously as a life-ruining harm. In her study of perpetrators in the United States and Europe, panelist Amy Hasinoff found that many feel regret. Two-thirds of her sample said they would not have distributed images had they known how much it would harm the victim. By denaturalizing this harm in the minds of policymakers and the public, we can prevent its spread.
Severe, wide-ranging and lifelong impacts. Another concern shared across all panelists was the severity of this harm. Victims experience harms similar to those of sexual assault survivors: PTSD, anxiety, depression and suicidal thoughts. Image-based abuse is also a direct threat to physical safety, leading to stalking, sexual assault and even murder. Victims report experiencing financial, emotional and interpersonal harms for many years after their images were shared. Panelist Annie Schmutz Seifullah, reporting from the United States, shared her own experience of image-based abuse, which led to myriad harms, including the loss of her career as an educator, despite her having been one of the highest-performing school leaders in her state.
A growing global problem. Panelists across the nine countries represented in this symposium spoke of image-based abuse as a rapidly growing problem. Henry Ajder, reporting from the UK, explained that the prevalence of pornographic deepfakes has increased significantly over the last 18 months. Deepfake pornography uses AI face-swapping technology to create fake nude images or videos. While deepfake pornography was already a growing harm that almost exclusively targets women, Ajder attributes its recent rise to the conditions of the COVID-19 pandemic: during this period, people have had more time to develop these technologies. Panelist Jinsook Kim saw a similar pattern in South Korea, where the use of hidden spy cameras to capture explicit images and videos of women in bathrooms or bedrooms is a common form of image-based abuse. By planting these cameras or hacking into existing household security cameras, perpetrators make money from selling footage online. In recent years, this hidden camera technology has also become more advanced. Panelist Paz Peña also raised the issue of image-based abuse in remote workplaces, which have become commonplace during the COVID-19 pandemic.
Limited legal solutions. Legal systems consistently fail victims despite successful attempts to criminalize image-based abuse across contexts. Reporting from the UK, panelist Clare McGlynn observed that legal categories don’t fit women’s experiences. Even new offenses like image-based sexual abuse can’t easily be captured by legal frameworks that don’t center consent. McGlynn raised several examples: downblousing (the act of taking images down a woman’s top without her consent), breastfeeding images consensually posted by parents to social networking platforms and the distribution of images from someone’s OnlyFans account without their consent. To protect the consensual distribution of breastfeeding images and prevent the non-consensual distribution of downblousing or OnlyFans images, lawmakers must center consent. Panelist Nighat Dad talked about the consequences of laws that don’t account for context. She argues that bad laws are being made everywhere, but that they are particularly dangerous when made by countries in the Global North, because they give countries in the Global South a precedent for implementing similar laws. Dad described the damaging impacts of poorly made laws from the Global North being applied in Pakistan and its region. In many cases, legislation claiming to promote online safety is used against victims and weaponized to silence women journalists and human rights defenders.
Insufficient funding. One of the most common challenges cited by panelists was a lack of funding. They critiqued the reality that there is barely any money in this space, even though these advocates serve an enormous global population of internet users. Reporting from the UK, panelist Seyi Akiwowo spoke about the dangers of being underfunded and under-resourced: it forces advocates to work in silos, unable to build infrastructure or connect with one another because they don’t have the money. She argues that survivor-run organizations that provide resources for victims shouldn’t be made to jump through complex funding hoops or offered token sums by big corporations. Many of the panelists represented organizations that do the work of social networks: supporting victims, helping them get their private images removed from sites and providing education about online safety. These advocacy groups with tiny budgets are doing the work of the world’s most profitable companies. Social networks should be investing in what is needed, not in “exciting” technological quick fixes. Instead of funding AI tools that claim to remove abusive content, they should be investing in the victim-support services and helplines that are actually helping victims. This funding should include care networks and support systems for advocates, who are educating lawmakers, judges, law enforcement officers, schools, social media companies and the public – as well as supporting individual victims. They are doing the work of social networks, and they are burning out.
Think Big, Think Together, Think Fast: Recommendations for policymakers
Drawing from all three panel discussions, this section offers five recommendations for those in positions to shape policy, regulations and norms around image-based abuse.
Our panelists urge policymakers to think big. Instead of minimizing individual harms, they should focus on changing the platform architectures and societal biases that enable image-based abuse. Panelists also urge policymakers to think fast. Image-based abuse spreads in seconds, and in some contexts it is a death sentence for the victim. As Nighat Dad warns, a slow response from a social network can be life-threatening. As well as thinking big and thinking fast, policymakers must think together. This harm requires collective solutions at every stage of the process, from challenging social norms to funding victim-support services. The following recommendations offer concrete ways to shift our thinking.
1. Center Survivors
Policy responses must be survivor-informed. As panelist Annie Schmutz Seifullah explains, many advocates in this space are also survivors of image-based abuse. Because the abuse is so public, they are often empowered to challenge it as part of their healing process. Victims of image-based abuse differ from other victims of sexual violence because the harms they experience are often ongoing, with photos continually re-posted over years and anonymous cyber mobs taking over the harassment. For example, images from the Marines United Facebook group continue to circulate actively today. To center survivors and account for trauma, policymakers should take an intersectional approach. Solutions must work across groups, from cis male victims in one country to trans sex workers in another. Simply by partnering with the advocates in this space, social networks can begin to adjust their approaches. For example, the dating app Bumble recently partnered with the survivor-led organization Chayn to provide on-platform trauma support for users. By listening, partnering and power-sharing with survivors, policymakers could significantly improve the design of new tools and regulations.
2. Imagine Rights Differently
As Mary Anne Franks explained in her opening address, we need to focus on creating the best conditions for speech to flourish. Expanding this idea, panelist Paz Peña challenges us to use the creative power of feminism to imagine our online spaces differently. What would it look like to shift the libertarian ideologies of “Big Tech”? Peña describes how technology has historically extracted, colonized and oppressed, but it also offers new ways to challenge these norms through feminist internet infrastructures, feminist digital security methodologies and collective solutions. The idea that information “wants” to be free and cannot be controlled is a fallacy. Social networks closely monitor everything from spam to child pornography on their platforms. By reimagining our rights to speech and privacy online, these panelists encourage platforms to prioritize the needs of minoritized groups and reject the techno-libertarianism that has historically harmed them.
3. Think Globally
Panelist Seyi Akiwowo urges policymakers to reject a “splinter internet.” Inconsistency across contexts creates spaces where online harms are more prevalent and help is harder to access. Although small numbers of advocates with few resources have made enormous progress in a range of countries, fostering healthy public spaces across contexts requires global solutions. This begins with listening to advocates in the Global South. Panelist Paz Peña finds that social networks have little interest in addressing image-based abuse outside of the United States, even though Latin America is a fundamental part of their market: “these platforms have left us to our own devices on issues such as local content moderation, which is nonexistent, which has led to the rise of misogyny and hate speech.” Peña points out that many platforms do not even translate their terms of service into Spanish and calls on those in the Global North to hold platforms accountable to their users in other regions. She warns against “gender washing,” in which technology companies claim to work with victims in the Global South but do nothing concrete to help them. Panelist Seyi Akiwowo points out that the technology industry is underregulated compared to almost any other industry. Without regulation, innovation becomes a race to the bottom, as new technologies like deepfakes further threaten the human rights of minoritized groups. Akiwowo suggests a global ranking for social media companies that incentivizes them to innovate in ways that contribute toward democracy, safety and speech.
4. Communicate Thoughtfully
As panelist Henry Ajder warns, when companies communicate about deepfakes incorrectly, they end up spreading the abuse further and minimizing its harms. Much early communication around deepfakes focused only on political forgeries, even though over 90% of deepfakes are pornographic. This completely misrepresented a harm that is predominantly sexual and almost exclusively targets women. Communication about deepfakes has also fallen into the opposite trap: by publishing the names of deepfake apps, outlets have inadvertently given publicity to bad actors and spread this harm even further. When policymakers communicate about image-based sexual abuse, they must ensure their content is accurate, provides anonymity for victims and conveys this harm as a serious form of abuse.
5. Develop Shared Research Priorities
In addition to working with survivors and advocates, policymakers can benefit from the wealth of knowledge developed in the image-based abuse research community. Spanning law, criminology, psychology, communication and public health, research on image-based abuse has already documented the impact, scope and prevalence of this harm. To ensure that policy responses are intersectional, further research is needed on the experiences of trans people and individuals with physical and mental disabilities. Such research should be conducted in partnership with these individuals, centering their needs and expertise. Research is also urgently needed on the efficacy of new approaches to platform regulation. This will provide essential evidence that victim-centered policies and practices promote speech, democracy and healthy online spaces.
Stay connected:
Follow the Center for Media at Risk and the Cyber Civil Rights Initiative on Twitter.
Contact Sophie Maddocks to join the image-based abuse research list.
Five ways to help: Here are some actions you can take to address image-based abuse.
Five-minute self-care: Discussing abuse can be traumatic and triggering; here are some exercises to help you take time and space to care for yourself.
Resources: If you or someone you know is worried about image-based abuse, follow this link to find help.