Surveillance Capitalism: How Targeted Ads Solidify the Gender Binary

PART I: THE PROBLEM

Using Gender as a Proxy

“Unequal knowledge about us produces unequal power over us, and so epistemic inequality widens to include the distance between what we can do and what can be done to us.”

-Shoshana Zuboff, Surveillance Capitalism1

I signed up for a Facebook account in 2008, when I was fourteen. I was presented with two gender options: “male” or “female.” I chose “female”—a decision that, at the time, seemed to carry no weight or implication. It was automatic, a simple piece of information needed to access friends outside of school and post pictures of my new bangs. But nearly half my life later, that information seems to define every interaction I have with the internet. It is used to send me—and countless others—constant advertisements for bras, dresses, jewelry, and diarrhea-inducing “detox” smoothies.2

This is one effect of “surveillance capitalism,” a term that has become synonymous with tech companies’ invasion of privacy. It is well known that social media companies collect information about us, including our demographics and purchasing history, so that advertisers can target us based on that information. Social media companies justify this invasion of privacy by telling us that they rely on it to survive. They’re not exaggerating: In 2019, Facebook alone made $69.7 billion from advertising, more than 98% of its total yearly revenue.3

For most social media users, including myself, these ads are helpful at best and annoying at worst—an inconvenience we put up with in order to reap the benefits of connecting with others on the internet. But for others, including nonbinary, gender nonconforming, and trans folks, receiving gendered ads like these can be harmful if the ads don’t match their own gender identity. “I do absolutely think there are ways in which non-affirming advertising can really fuck with people,” says Kendra Albert, lecturer on Women, Gender, and Sexuality at Harvard and an instructor at the Harvard Law School Cyberlaw Clinic.4 If social media companies like Facebook, Twitter, and Google are supposed to “know” you, and yet they serve you ads that misgender you, what does that mean about how the world sees you?

In this way, surveillance capitalism is so much more than just an invasion of privacy; it is a way of creating and maintaining power, shaping how we see ourselves by telling us how others see us, and solidifying pre-existing dynamics—including the gender binary. Because there is little regulation to stop them, tech companies freely harvest and sell our personal information. In doing so, they legitimate and perpetuate a major power imbalance that favors their own profit over the individuals whose data they exploit.

Data and Behavioral Targeting

Explaining the harms of gendered ads requires us first to take a step back and examine surveillance capitalism and advertising more generally. Shoshana Zuboff, professor emeritus at Harvard Business School, delivered a powerful critique of surveillance capitalism in her 2019 book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.5 At the outset, Zuboff defines “surveillance capitalism” as “[a] new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales” and “[a] movement that aims to impose a new collective order based on total certainty.”6 To a layperson, perhaps the most familiar mechanism of surveillance capitalism is ad targeting. In basic terms, ad targeting is the intentional directing of ads to a user based on data collected about that user. Advertisers who buy ad space from Google or Facebook can ask those sites to display ads to users who fit a certain profile, and the site will use what it knows about its users to figure out who fits that profile.
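Mechanically, that matching step amounts to a filter over stored user attributes. Below is a minimal, hypothetical sketch in Python; the field names and criteria are invented for illustration and describe no real platform’s systems.

```python
# A hypothetical sketch of ad-targeting audience selection. All field names
# (gender, age, interests) and matching rules are illustrative only; real
# platforms use far richer data and proprietary ranking systems.
from dataclasses import dataclass, field


@dataclass
class User:
    user_id: str
    gender: str                          # as recorded (or inferred) by the platform
    age: int
    interests: set[str] = field(default_factory=set)


@dataclass
class TargetingProfile:
    genders: set[str]                    # categories the advertiser selected
    min_age: int
    max_age: int
    required_interests: set[str]


def matches(user: User, profile: TargetingProfile) -> bool:
    """Return True if the user fits every criterion in the advertiser's profile."""
    return (
        user.gender in profile.genders
        and profile.min_age <= user.age <= profile.max_age
        and profile.required_interests <= user.interests   # subset check
    )


def eligible_audience(users: list[User], profile: TargetingProfile) -> list[User]:
    """The platform's side of the bargain: find everyone who fits the profile."""
    return [u for u in users if matches(u, profile)]
```

The platform never reveals who these users are; it simply promises the advertiser that the ad will reach people who fit the requested profile.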

Targeted advertising may be based on data collected directly from users (e.g., gender, age, occupation) or on information collected as you browse the internet. Most users are familiar, at least in the abstract, with cookies. You may see an advertisement on your Facebook page for a pair of shoes you were looking at yesterday on a different website; this is made possible by Facebook reading a cookie, a small data file placed on your device by the online shoe store you visited. However, you could also see that same shoe ad on your phone later, even though there’s no cookie stored on your phone. In that case, advertisers are using “probabilistic matching,” which combines a number of signals, like IP address and browser, to match the same user across all of their devices with high accuracy.7
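Probabilistic matching can be pictured as a similarity score computed over device signals. The toy sketch below assumes a few invented signals and hand-picked weights; real systems rely on many more features and actual statistical models rather than fixed thresholds.

```python
# A toy illustration of probabilistic cross-device matching: score how similar
# two browsing sessions look and, above a threshold, treat them as one person.
# The signals, weights, and threshold are all invented for illustration.
from dataclasses import dataclass


@dataclass
class Session:
    ip_address: str
    timezone: str
    browser_language: str


def match_score(a: Session, b: Session) -> float:
    """Crude weighted similarity between two sessions, from 0.0 to 1.0."""
    score = 0.0
    if a.ip_address == b.ip_address:         # e.g., same home Wi-Fi network
        score += 0.6
    if a.timezone == b.timezone:
        score += 0.2
    if a.browser_language == b.browser_language:
        score += 0.2
    return score


laptop = Session("203.0.113.7", "America/New_York", "en-US")
phone = Session("203.0.113.7", "America/New_York", "en-US")

if match_score(laptop, phone) >= 0.8:
    print("Probably the same user: show the shoe ad on the phone too.")
```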

Targeted ads may also be based on behavioral information that sites have collected about you. The most insidious form of behavioral targeting was highlighted in the Facebook-Cambridge Analytica scandal following the 2016 United States presidential election. Cambridge Analytica, a British consulting firm, used raw data from more than 50 million Facebook profiles to develop models that could predict user attributes including political views, IQ, neuroticism, and life satisfaction. The firm then used these psychographic profiles to target voters with political ads for Donald Trump that were highly tailored to appeal to specific users.8
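The modeling approach behind such psychographic profiling can be sketched schematically: train a classifier that maps behavioral features, such as page likes, to a trait label. The example below uses fabricated data and an ordinary logistic regression; it makes no claim about Cambridge Analytica’s actual features, data, or methods.

```python
# A schematic sketch of trait prediction from behavioral data. The "likes"
# matrix and labels are fabricated; this illustrates only the general shape
# of such models, not any firm's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are users; columns record whether each user liked a given page (1/0).
likes = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])
# The trait to predict, e.g., a self-reported political leaning (0 or 1).
leaning = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(likes, leaning)

# Predict the trait for a user who never disclosed it.
new_user = np.array([[1, 0, 1, 1]])
print(model.predict_proba(new_user))   # predicted probability of each class
```

Once every user carries a predicted trait, that trait becomes one more targetable attribute, exactly like the demographics users disclosed voluntarily.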

Since each trait presents a targeting opportunity for advertisers, targeted advertising thrives on categorizing users by their demographic and personal information. Certainty in categorization is a goal of surveillance capitalism, since the most effective surveillance gets to “know” people at the individual level. As Zuboff writes,

The aim [of surveillance capitalism] is the comprehensive visibility, coordination, confluence, control, and harmonization of social processes in the pursuit of scale, scope, and action… The result is the application of … power to societal optimization for the sake of market objectives: a utopia of certainty.9

Social media platforms supply this certainty—the fruits of their surveillance—in exchange for advertisers’ capital.

Given the need for certainty, surveillance capitalism runs into problems when users cannot be put into a fixed category. Gender presents one such complication. In the world of targeted advertising, “[g]ender has emerged as one of the defining demographics of focus.” It “is a recurrent determinant in devising marketing and advertising strategies, with electronic commerce research indicating that gender is a key attribute and predictor of intent to purchase.”10 Yet, companies do not seem to know what to do with users who identify as neither male nor female; with such a key piece of information “missing,” targeting becomes more difficult.

“Part of the problem is that gender is used as a proxy for a lot of different stuff when it comes to advertising,” says Kendra. “So sometimes, when you’re asking the question about what [you want] to be served ads for, the question you’re actually asking is, ‘Do you have boobs and wear bras?’ And it turns out the best way some people think to ask that question is, ‘What gender are you?’ But there isn’t a one-to-one correlation between even the people that answer ‘woman’ and the people that want to buy or need bras.”

The inability (read: unwillingness) of surveillance capitalists to understand gender as a spectrum instead of a binary is illustrated by the ways in which most social media sites handle ad targeting based on gender. In response to criticisms of signup pages offering only “male” or “female” options, more and more social media sites are allowing users to sign up with an “unspecified” or “other” gender.11 The ability to choose a gender outside the binary options, however, is a double-edged sword for gender nonconforming individuals. According to research conducted in 2016 by Rena Bivens and Oliver Haimson, every one of the ten most popular English-language social media sites allows advertisers to target users based on gender. Half of those sites (Google, YouTube, Blogspot, Yahoo, and Pinterest) allow advertisers to target some combination of male, female, or “other” users.12 In practice, this means that these sites allow companies to cut nonbinary individuals out of their ad targeting—even when they’re advertising jobs, housing, or other essential amenities where gender discrimination is otherwise outlawed.
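The exclusion mechanism is as simple as it is consequential. The self-contained toy example below uses invented data to show how a gender filter silently removes an “other” user from, say, a job ad’s audience.

```python
# A toy illustration of how gender-based targeting can exclude nonbinary users.
# The users and categories are invented for illustration.
users = [
    {"id": "u1", "gender": "female"},
    {"id": "u2", "gender": "male"},
    {"id": "u3", "gender": "other"},   # the user's own signup choice
]

targeted_genders = {"male", "female"}  # what the advertiser selected

audience = [u["id"] for u in users if u["gender"] in targeted_genders]
print(audience)  # ['u1', 'u2']: u3 never sees the job listing
```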

Shadow Gendering and Its Harmful Effects

Bivens and Haimson’s research uncovered another disturbing—and even more insidious—trend. As mentioned, five of the ten social media sites allow advertisers to target male, female, or “other” users. The other five—LinkedIn, Twitter, Instagram, Facebook, and VK—allow only a binary gender categorization in their advertising portals.20 As of 2015, Facebook’s signup page allows new users to customize their gender; existing users may also revise theirs.21 Though Instagram’s signup page does not include gender information, its connection to Facebook means that data is shared between the sites. Twitter and LinkedIn also have genderless signup and profile pages. Nonetheless, each of these sites “use[s] user data and actions to algorithmically infer a binary gender category to satisfy their advertising and marketing clients.”22 In other words, Facebook, Instagram, Twitter, and LinkedIn assign a shadow binary gender to users who don’t identify their own gender, without those users’ input.23
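In data-model terms, shadow gendering amounts to keeping two gender fields: the one the user sees and controls, and a binary one the ad system actually consults. Everything in the sketch below is hypothetical, including the inference rule; it illustrates the structure Bivens and Haimson describe, not any platform’s actual code.

```python
# A hypothetical sketch of a "shadow" binary gender field. The inference rule
# is invented; the point is that the user's self-identification never enters it.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Profile:
    display_gender: Optional[str]   # what the user chose, e.g., "nonbinary" or None
    ad_gender: str                  # the binary field advertisers can target


def infer_ad_gender(behavioral_score: float) -> str:
    """Collapse some behavioral signal into a binary category for advertisers."""
    return "female" if behavioral_score >= 0.5 else "male"


user = Profile(display_gender="nonbinary", ad_gender=infer_ad_gender(0.7))
print(user)   # Profile(display_gender='nonbinary', ad_gender='female')
```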

While neither as plainly illegal nor as outright damning as allowing job advertisers to opt out of showing ads to users of “unknown” gender, assigning a “shadow” binary gender can have insidious consequences for genderqueer users. As Kendra told me: “Shadow gendering replicates a lot of ways in which transphobia manifests in technology [and real life]—specifically against nonbinary people, there’s this idea that you can never actually be what you say you are—there’s something about your gender that people actually need to get to the bottom of.”24

Indeed, the targeted advertising facilitated by shadow gendering can have harmful real-life consequences. Marketing experts and sociologists have long known that targeted ads can shape our understanding of ourselves and our place in the world, and behavioral targeting is especially powerful in this regard. For example, a series of studies in 2016 showed that users who believed they were behaviorally targeted with ads for a “sophisticated” restaurant began to label themselves as “sophisticated.”25 According to the study, “participants saw the targeted ad as reflective of their own characteristics. The ad told them that, based on their browsing history, they had sophisticated tastes. They accepted this information, saw themselves as more sophisticated consumers, and this shift in how they saw themselves increased their interest in the sophisticated product.” The same was true of users who believed they were targeted with ads for “green” products. Once they’d labeled themselves as “sophisticated” or “green,” the consumers were more likely to buy the “sophisticated” or “green” product.

But the implications of this self-labeling go beyond a consumer’s willingness to buy a specified product. According to the study, “behaviorally targeted ads lead consumers to make adjustments to their self-perceptions to match the implied label; these self-perceptions then impact behavior including purchase intentions for the advertised product and other behaviors related to the implied label.”26

No study has addressed the effect that labels may have on young people who are figuring out their own identities beyond the gender binary. But “[r]esearchers have… found links between gendered advertisements and increased development of highly gendered attitudes and beliefs, which supports the claim that advertisements do have an effect on gender identity development in children.”27

The label does need to be at least semi-accurate in order for it to influence self-perception. For example, “If you have never engaged in any behavior online that would suggest that you are interested in upscale dining… an ad for an upscale restaurant isn’t going to make you suddenly feel like someone with extremely sophisticated dining preferences.”28 That is, if you know that a label is wholly irrelevant to you and is not based on a category to which you belong, it will probably not affect your understanding of yourself.

Thus, it is not likely that nonbinary and gender nonconforming folks will begin to see themselves as either male or female because of targeted ads. However, knowing that the social media platforms you spend so much time on perceive you as fitting into an inaccurate category can affect your understanding of how the world sees you. This can be especially confusing when users are misgendered by a platform that allows them to customize their gender during signup. As Kendra puts it: “When Facebook allows you to customize your gender, they’re really virtue signaling to the public that they understand that gender isn’t binary. But then they assign you a secret binary gender. And maybe you’re being targeted ads for bras, but that’s triggering for you because now you think others see you as someone who would buy or want a bra.”

This mismatch between your identity and others’ perceptions can lead to gender dysphoria. The DSM-5 defines gender dysphoria as “[a] marked incongruence between one’s experienced/expressed gender and assigned gender.”29 Gender dysphoria has been described by those experiencing it as being “like when you know to your core that something is true and everything else around you, including what people say and do and the feedback you get from the world, says otherwise.”30 Gender dysphoria often leads to anxiety, depression, and a host of other psychological harms.31

The link between gendered advertisements and gendered beliefs implies that gender nonconforming individuals may be especially vulnerable to advertisements based on a shadow gender. As Bivens and Haimson point out, in a surveillance capitalist economy,

The very definition of gender is filtered through a “marketing logic of consumption” and the meaning of that category is often algorithmically determined, operating as a modulating force by constantly shifting in tune with an invisible feedback loop…This feedback loop has the effect of perpetually conditioning us via the suggestions and recommendations that populate as we surf and interact online, imperceptibly nudging us toward conformity.32

In other words, in catering to advertisers’ desire to target ads based on a gender binary, social media companies solidify that very binary. In a form of deep psychological capture, these companies take in data from nuanced individuals, categorize and compartmentalize it for their advertising clients, and send that flattened and manipulated portrait back to users, claiming it reflects their place in the world.

PART II: CORPORATE PROMISES AND FAILURES

Corporate Control

Social media companies are able to collect so much information about us—to get to “know” us—because there are very few data collection laws in the United States, and no laws preventing targeted advertising. This lack of regulation has its roots in the beginnings of the internet, which has been envisioned as a “new frontier” almost since its inception. As John Perry Barlow insisted in his influential 1996 “Declaration of the Independence of Cyberspace,” the internet is a sacred space devoid of government influence. “You [governments] claim there are problems among us that you need to solve,” he wrote.

You use this claim as an excuse to invade our precincts. Many of these problems don’t exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different.33

This libertarian ideal of the internet as an unregulated, collaboratively constructed last refuge of freedom persists even now. But the internet of today is not the one Barlow imagined. Instead, its corners have been monopolized by a handful of large tech companies that capitalize on Barlow’s vision.

Through this “frontier” narrative and black-box decision-making, tech giants have been able to fend off most regulations for decades, all while appearing to be responsive to user concerns. In the political spotlight now is Section 230 of the Communications Decency Act, which protects social media platforms from liability for content posted on their sites.34 The invulnerability that Section 230 affords tech companies was threatened last year, when then-President Trump made it one of his missions to reform the law, on the premise that social media platforms are unfairly biased against conservatives.35 As a form of appeasement, Mark Zuckerberg suggested measures to change Section 230—though critics argue that most of them would hurt smaller tech companies while leaving Facebook intact.36

The general dearth of regulation in the tech space has the added advantage of making tech companies look responsible when they make voluntary changes. As Zuboff points out, “The public’s intolerable knowledge disadvantage is deepened by surveillance capitalists’ perfection of mass communications as gaslighting.”37 Facebook’s creation of a custom gender field on its signup page, for example, received positive attention from LGBTQ advocates.38 Its internal shadow gendering policy remained intact. Similarly, less than a year ago, Facebook voluntarily changed its practices around targeting ads based on race after receiving an email from the U.S. Department of Housing and Urban Development.39 However, Facebook still allows advertising based on categories that are often proxies for race: while it deleted the “African American Affinity” category from its advertising platform, it continues to allow targeting based on interest in “African American Culture.”40 As Facebook itself stated in its announcement of the change, “when possible, we will guide advertisers to options that are similar to ones that have been removed and that should provide comparable performance.”41

For its part, Google has recently promised to end behavioral profile building—a move that has also been criticized as an appeasement to lawmakers rather than a good-faith effort to protect consumer privacy and independence. Indeed, “Google isn’t changing any policies for how publishers collect or use data gathered directly from users. So, a publisher that uses Google’s ad tech will still be able to sell ads that are targeted based on the publisher’s first-party data,” including gender.42

The (In)efficacy of Ad Targeting

A growing body of evidence suggests that there may be a tragic irony to the invasive collection and weaponization of our data: ad targeting does not work as well as tech companies say it does. Tim Hwang, a former Google employee, and Sinan Aral, a tech entrepreneur and director of the MIT Initiative on the Digital Economy, have both written books about the myths of ad microtargeting.43 According to Aral, “it’s common for platforms and media agencies to triple (at least) its apparent value by wrongly crediting digital ads for purchases that consumers would have made anyway.”44

This is a major issue for companies that rely on advertising revenue to exist. The promise of microtargeting is that a social media company can categorize users with certainty, and thus know consumers in a way other companies cannot. It is on that basis that tech companies harvest so much data about us, and it is on that basis that advertisers pay them so much money. The lack of regulation in the data collection and targeted advertising space means that social media companies can continue to profit off of what may well be a big charade, while making users and advertisers pay the price.

CONCLUSION

In an Atlantic article published in early 2012, journalist Alexis C. Madrigal described his wariness about the increasingly fine-tuned capabilities of social media companies to target ads based on data and consumer behavior. “Perhaps there are natural limits to what data targeting can do for advertisers and when we look back in [ten] years at why data collection practices changed, it will not be because of regulation or self-regulation or a user uprising. No, it will be because the best ads could not be targeted. It will be because the whole idea did not work, and the best minds of the next generation will turn their attention to something else.”45

Reading that quote almost ten years later, it is hard to share its hopefulness. While Madrigal is likely right—the best ads cannot be targeted—there is no indication that data collection or targeted advertising practices are going to change drastically anytime soon. They certainly won’t change at the behest of social media companies that have the vast majority of their profits to lose. Lawmakers will need to pass legislation that actually protects consumers and their privacy. However, because of the vast influence that social media companies have over their own domain, such legislation is unlikely to pass anytime soon.

In the meantime, Kendra has a suggestion for social media companies that want to target ads without feeding into the gender binary or potentially triggering gender nonconforming users: stop shadow gendering and ask people what ads they want. “Asking is a more respectful way of engaging with people around these potentially sensitive issues where you’re using things as proxies,” they say, “but also you probably shouldn’t be using it as a proxy in the first place. You should be able to opt in for ads with bras. But that’s not often how we think about these things.”46
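From an engineering standpoint, the opt-in model Kendra describes is also a far simpler data structure than inference from proxies. A minimal sketch, assuming invented category names:

```python
# A minimal sketch of opt-in ad preferences: the platform records only the
# categories a user has explicitly requested. No gender field, no inference.
from dataclasses import dataclass, field


@dataclass
class AdPreferences:
    opted_in: set[str] = field(default_factory=set)   # empty until the user chooses

    def opt_in(self, category: str) -> None:
        self.opted_in.add(category)

    def wants(self, category: str) -> bool:
        """Show an ad category only if the user asked for it."""
        return category in self.opted_in


prefs = AdPreferences()
prefs.opt_in("bras")
print(prefs.wants("bras"))         # True: explicitly requested
print(prefs.wants("detox teas"))   # False: never inferred from a gender proxy
```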