In a moment of potent stress during winter break, I went on a brief frenzy searching for summer internships and research positions on Google. I opened a few tabs and feverishly scrolled through a few job-hunting sites before ultimately deciding the effort was fruitless and closing them all. Within a few minutes, I’d all but forgotten the venture.
My browsing cookies, however, didn’t. I know because within a few hours, all my promoted ads on Twitter featured job-hunting websites, with taglines like “We are hiring a Compensation Analyst. Are you a good fit?” and references to various tech-based internship programs. The browsing history on my computer had been catalogued by targeted advertising agencies, which then promoted relevant ads on my Twitter feed. And this was odd, given that I’d already explicitly requested that Google not store my browsing history or distribute it to targeted advertising agencies.
When it comes to avoiding targeted advertisements, I’m a little bit on the paranoid side. I’ve been careful to delete cookies from my computer. I ask the social media websites I frequent not to distribute my history to advertising companies in the name of showing me more “relevant ads.” I use Google Chrome add-ons that protect me from snoopy code and tell me which outside sites are attempting to store my browsing data.
Consequently, it’s frustrating that despite all of these precautions, I still see advertisements tailored to my age bracket, location, gender and browsing history. There appear to be underlying methods through which my Internet activity is observed and categorized, methods my attempts at achieving privacy have not eradicated. This is neither a new issue nor one specific to me. Individuals regularly experience similar intrusions into their privacy, often to much more detrimental effect.
Political advertising is a relevant example. On platforms such as Facebook, campaigns are given options for which target audiences to pursue with particular advertisements. A campaign could specify that it wants to target women who live in a particular region and who have liked the ACLU Facebook page. Audiences are narrowed until they form a select cohort most likely to respond well to a particular advertisement. In these cases, targeted advertisements can become so narrow, and speak so particularly to individuals, that their voting preferences are swayed. The same machinery contributes to the increasing polarization of political attitudes, driven by the way accounts and advertisements are presented on Facebook. And, of course, there is the Cambridge Analytica scandal, in which Cambridge Analytica illegally harvested the personal data of millions of people and opened up a broader conversation about Facebook’s misuse of data. These platforms are hunting grounds for data.
Other instances of targeted advertising can have more direct effects on mental and physical health. A Vice reporter noted that when she began receiving targeted advertisements about depression and bipolar disorder based on her recent Internet activity, her therapist explained that the ads were likely making her more depressed. Notably, this was exacerbated as the ads became more and more specific, based on her browsing history and on which ads she chose to click. I, too, have experienced this; after looking up information about anxiety disorders on the Internet, I began receiving advertisements for mental health professionals and services on my Twitter feed. This kind of occurrence is not only invasive, it can be damaging. Seeing advertisements implying that one has an anxiety disorder can have serious consequences, not least convincing individuals that they have a mental health issue before they have ever seen a mental health professional. Facebook asserts that it combats this with an advertising policy that does not allow ads to imply or assert individuals’ personal attributes. Yet it is hard to mitigate the prevalence of such ads, and they continue to circulate.
Another example comes from a New York Times article, which noted that it is possible for advertisers to reasonably guess when a woman is pregnant based on her shopping habits, and in an extreme case, to recognize that she is pregnant before anyone else does. The Times reported that purchases of certain lotions and supplements helped Target determine the likelihood that a shopper was pregnant. As a result, one young woman began receiving pregnancy-related mailings before she had told her family she was pregnant. And by extension, companies can use this information to target individuals with more pregnancy-related commodities and services. This extrapolation is not a far reach; it is done regularly.
There is another element to keep in mind. Social media platforms could not afford to be free to users unless they were selling some sort of product, and that product is you. You, and your shopping habits, are what social media platforms peddle, and as such it pays them to make your data and interests available to advertising companies. Recognizing this is vital to understanding how major companies that make nearly all of their revenue from advertisements function with regard to data and privacy. The more information about you that is available, the better companies can figure out what you want and sell it to you.
The old line from social media platforms, which make the vast majority of their money through advertising, is that users would surely prefer to receive advertisements for things they are already interested in and would find valuable. Wouldn’t they? When an individual changes privacy settings on Twitter or Facebook, a small text box attempts to dissuade the choice: “Are you sure? Your advertisements may be less relevant to you.” Which is true. If changing the settings works, your advertisements should be less relevant to you, and that is exactly the point. Preventing online advertisers from formulating a profile of you as an online shopper means that you have more agency over what you purchase, and where, and when, and from whom.
It’s not all dire. There are a number of options for combating the pervasiveness of targeted advertisements. The first thing to do is to change the advertisement settings on all relevant apps, notably Twitter, Facebook and Instagram. These are generally found under settings, and then privacy settings. Ignore the attempts to convince you that seeing more relevant ads will improve your experience of the app.
Next, change the settings on your Google account. From the privacy settings page, you can navigate to ad settings and turn ad personalization off. You can also take this opportunity to explore exactly what information of yours Google is storing. Is your YouTube watch history saved? Consider turning that off.
If you’re looking to go a step further, there are a couple of browser tools that I’ve found useful. Setting your default search engine to DuckDuckGo, which stores no search history, is a way to ensure your searches are not saved; you can also download it as an app for smartphones. The Disconnect add-on for Chrome is also useful: when you navigate to certain web pages, it informs you which outside sites are attempting to gather information from you. One website that is particularly interesting, and tailored to women and nonbinary folks, is Chupadados, which identifies how companies target specific groups, such as pregnant people or people who regularly purchase period products. The site provides well-written, insightful information on how to stay on your toes on the web, especially as a woman or nonbinary person.
I use all of these services, and yet I still constantly receive advertisements relevant to my age, location, gender, and interests. I won’t assert that using them will make all such ads disappear. But it’s a beginning, a step toward claiming your identity on the Internet as a human rather than as a consumer or, worse, as the product itself.
Emmy Hughes is a member of the Class of 2020 and can be reached at ebhughes@wesleyan.edu.