Can Humans Be Hacked? A Semi-Technical Investigation Into Whether Artificial Intelligence Can Control Our Minds (Yet)



Written by Stacey Svetlichnaya
Guest edited by Jonathan Stray

Illustration by Christina Lastovska

Published June 10, 2020


Most of my casual discovery happens online. Spotify finds amazing artists I’d never hear otherwise. Medium recommends self-help articles perfectly balanced between tough and comforting. Facebook shows me vegan food delivery services and yoga pants with an illustrated mushroom motif. 

I feel seen, and sometimes spooked — did I post some relevant keywords recently, or am I just that predictable from demographics? Is it possible that I can’t tell the difference between what I really, genuinely want, and what an artificially-intelligent advertising platform would like me to buy? There’s a subtle hysteria around this question, but it deserves an earnest answer.


[[This article appears in Issue One of The New Modality. Buy your copy or subscribe here.]]

Yuval Noah Harari, historian and author of bestselling books on humanity’s evolution such as Sapiens, recently warned of the “ability to hack humans” at an event hosted by the Stanford Institute for Human-Centered Artificial Intelligence and documented by Wired. Harari says that AI, especially when linked with biological knowledge, may allow us to produce “an algorithm that understands me better than I understand myself, and can therefore manipulate me, enhance me, or replace me.” 

Every new technology can amplify power and control, and every new medium has carried a threat of malevolent persuasion, from print to radio to television. Yet data-driven, individually-targeted online messaging seems like something new. 

For many, this first got real when the news story about Facebook and Cambridge Analytica erupted in 2017. Recap: Cambridge Analytica (CA) compiled a database of tens of millions of Facebook profiles. They collected this data at a time when connecting Facebook to another app revealed not just your information, but information on all your friends. They used this data to build models to predict personality from Facebook profiles — or at least, to guess scores on a standard psychology test. The personality results were then used to customize Republican political messages in the 2016 election. CA claimed they could tell each individual prospective voter whatever would be most persuasive for that person. 

Alexander Nix, CEO of CA, described how this allegedly works in a January 2017 interview with Vice. To understand the methodology, let’s consider two strategies for marketing the concept of gun rights: 

  • One could use the threat of burglary, and a photo of a break-in, “for a highly neurotic and conscientious audience;” 
  • On the other hand, one could use an image of a father and son hunting together at sunset “for a closed and agreeable audience... who care about tradition, and habits, and family.” 

The idea is to figure out which formulation of the message would inspire each person, and then deliver the message with enough frequency or force — enough, but not too much, or the target audience might get suspicious and defensive — to make the audience act differently in the real world. This plausible-sounding approach is hard to test, and Cambridge Analytica has never offered public evidence that it works.
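To make the alleged method concrete, here is a minimal, purely illustrative sketch in Python of the kind of trait-to-message rule Nix describes. The trait thresholds, scores, and ad copy are invented for this example; nothing here comes from Cambridge Analytica’s actual (and unverified) system.

```python
# Purely illustrative sketch of psychographic message matching.
# All trait thresholds and ad copy are invented for this example.

MESSAGE_VARIANTS = {
    "fear_of_crime": "A break-in can happen to anyone. Protect what's yours.",
    "family_tradition": "A father, a son, a sunset hunt: keep the tradition alive.",
    "generic": "Learn more about gun rights in your state.",
}

def pick_message(ocean):
    """Choose an ad variant from a dict of Big 5 scores, each in [0, 1]."""
    if ocean["neuroticism"] > 0.7 and ocean["conscientiousness"] > 0.6:
        return MESSAGE_VARIANTS["fear_of_crime"]
    if ocean["openness"] < 0.4 and ocean["agreeableness"] > 0.6:
        return MESSAGE_VARIANTS["family_tradition"]
    return MESSAGE_VARIANTS["generic"]

print(pick_message({"openness": 0.3, "conscientiousness": 0.5,
                    "extraversion": 0.5, "agreeableness": 0.8,
                    "neuroticism": 0.2}))
# -> the "family_tradition" variant for this low-Openness, agreeable profile
```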

Despite this lack of evidence, many scary articles were published about CA’s “psychological warfare mindfuck tool” (The Guardian, 2018). After the scandal got kicked around for a while, David Karpf of the George Washington University School of Media and Public Affairs wrote a thoughtful review of the situation for Civic Hall, in which he argues against the idea of a “secret plan through weaponized online propaganda.” Karpf notes that CA’s outrageous development costs were typical of vaporware (i.e., software that is announced with great fanfare but never actually materializes); that most favorable testimony about CA’s effectiveness comes from experts with careers staked on psychometrics; and that a company whose technology actually worked would not need to resort to bribes and honeypots, as The Atlantic reported CA did, to tempt its customers. 

Even Alex Kogan, a data scientist who helped CA with initial data collection, “didn’t believe that Cambridge Analytica, or anyone else, could produce an algorithm that effectively classified people’s personality,” according to the book Outnumbered by math professor David Sumpter. The most charitable interpretation of the CA situation is a gap between theory and practice, which is often enormous in artificial intelligence. 

Still, Facebook data does say something about our personality. At least, it says something about our Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (a.k.a. the OCEAN or “Big 5” personality traits widely used in psychology research). In 2012, a team of Microsoft and Cambridge researchers tried to deduce personality traits from the Facebook data of 180,000 people. They found meaningful correlations between some OCEAN scores and user profile metrics, that is, a person’s number of Facebook friends, groups, likes, photos, statuses, and tags in others’ photos. 

The strongest correlation between Facebook stats and personality scores is intuitive: As a user’s relative number of friends increases, their relative Extraversion score does too. But the researchers found that, of the five traits, only Extraversion and Neuroticism could be predicted by an algorithm using Facebook data better than a human could guess. Plus, even when personality correlations are reliably detectable, they don’t necessarily give any persuasive power. What matters is not just what machines can learn about us, but what they can do with that knowledge.
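For a sense of what those correlations look like in code, here is a small sketch using synthetic data rather than the study’s: it fabricates 1,000 users whose friend counts loosely track their Extraversion, then computes the Pearson correlation between the two. The numbers are made up; only the method resembles what the researchers did.

```python
# Sketch of a correlation check like the 2012 study's, on synthetic data:
# does a user's friend count track their Extraversion score?
import random
import statistics

random.seed(0)

# Fabricated users: Extraversion in [0, 1]; friend count loosely tied to it.
extraversion = [random.random() for _ in range(1000)]
friends = [max(0, 50 + 400 * e + random.gauss(0, 80)) for e in extraversion]

def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

print(f"r(friends, Extraversion) = {pearson_r(friends, extraversion):.2f}")
```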

Most of what we know about online persuasion comes from advertising, such as the classic “banner ad” (the picture ads you see on most websites). In 2009, Jun Yan at Microsoft Research Asia led a study on maximizing the effectiveness of banner ads with behavioral targeting and fine-grained user segmentation. Across 6 million search users and 18,000 real-world ads, Yan’s team increased the probability of users clicking on the ads by 670%, and even up to 1,000% with more sophisticated techniques. These improvement rates may be less impressive than they seem, because the original click rates hover below 1% — i.e., at most, the ads got clicked 10 times out of every 1,000 times they were viewed — and few studies in the open literature report the absolute numbers.
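To see why a huge relative lift can still be small in absolute terms, here is the back-of-envelope arithmetic, assuming an illustrative baseline click rate of 0.1% (the study reports relative improvements, not absolute rates):

```python
# Back-of-envelope: what a large relative lift means on a tiny baseline.
# The 0.1% baseline is an assumption for illustration; the study itself
# reports only relative improvements.
baseline_ctr = 0.001           # 0.1%, i.e., 1 click per 1,000 views
lift = 6.70                    # a "670% increase" over baseline

improved_ctr = baseline_ctr * (1 + lift)
extra_clicks_per_1000 = (improved_ctr - baseline_ctr) * 1000

print(f"improved click rate: {improved_ctr:.2%}")                    # 0.77%
print(f"extra clicks per 1,000 views: {extra_clicks_per_1000:.1f}")  # 6.7
```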

On baselines that small, a tiny absolute improvement can double or triple an advertiser’s sales. But it’s probably not the sort of revolution in persuasion worthy of a dystopian description like “psychological warfare mindfuck tool.”

Individualized targeting seems to be more powerful on social networks, perhaps because they know more about us, or perhaps because they are already more engaging spaces: Click rates on social networks are about ten times higher than on banner ads at baseline, even before research teams get involved. Personalization can increase click rates further. Pinterest — the third largest social network in the US, and one of the few that shares details about its personalization strategies — reports a 48% jump in engagement from its latest effort to improve the relevance of recommended content. In other words, Pinterest reports a rise from 10 to 15 clicks out of 1,000 when the system guesses what would best suit that user. Interestingly, the effect is about the same for an improved version of the alleged Cambridge Analytica technique: A 2017 field study of 3.5 million users, published by Matz et al. in the Proceedings of the National Academy of Sciences and provocatively titled “Psychological targeting as an effective approach to digital mass persuasion,” simply matches Facebook ads to a user’s estimated Openness and Extraversion. This yields 40% more clicks (and 50% more purchases) compared to no personalization.

While advertising and shopping are the more lucrative applications of personalized persuasion, and headlines are full of lurid tales about geopolitical mind control, this type of technology can also be used to improve our health and habits. Customizing advice directly to individuals’ experience and preferences, the way a doctor or coach might, appears to be more persuasive than trying to sell them something. A 2013 “Meta-analysis of web-delivered tailored health behavior change interventions,” published by Lustria et al. in the Journal of Health Communication, found that 40 online programs (on physical activity, diet, quitting tobacco, etc.) showed “significantly greater” effects at post-testing and at follow-up when they used personalized interactions. When it comes to making positive behavior changes, willing human participation is likely to beat mere external “hacking” every time. 

So data-driven personalization is persuasive — to some extent, in some circumstances. Yet evidence for Cambridge Analytica’s core claim is sparse: there are few experiments testing the influence of personalized messaging on voting outcomes. 

There is plenty of evidence that campaigns leverage data analytics to effectively target voters by demographics. According to a Guardian article exploring “Obama, Facebook, and the power of friendship: the 2012 data election,” Barack Obama’s Democratic presidential campaign customized a single donation request to 26 voter segments, while Republican Michele Bachmann showed slightly different online ads for Republican voters in each of 99 counties in Iowa, and Republican Rick Perry’s ads praising God only displayed to self-described “evangelical” Iowans on Facebook. 

On the foreign interference side, anthropologist Damian Ruck’s investigation of a Russian Twitter bot campaign in the 2016 election (published in the journal First Monday) finds a 1% poll increase for the Republican candidate for every 25,000 weekly re-tweets of their posts. So it appears that technology has optimized existing strategies to increase the volume, variety, and precision of advertising — but targeting an individual is still challenging, and of questionable value. A study by Lennart Krotzek called “Inside the Voter’s Mind” (International Journal of Communication, 2019) reports that, although personality-congruent ads significantly improve voters’ feelings towards a candidate, they don’t actually make someone more likely to change their vote. 

And yet I know from my own experience on the internet that there are marginal cases, especially where the stakes are lower. For example, I might see a banner ad for yoga pants similar to ones I’ve already shopped for, or be recommended those pants on Pinterest in the context of other beautiful and/or related photos, or see them on Facebook with the right text for my Openness or Extraversion level. Is this a special discovery, pants that deeply resonate with my unique aesthetic, or just a lucky calculation by the machine? If we assume these platforms are using the best available persuasive strategies, then out of 1,000 random users in my pants-related situation, about 3 more would click on the banner ad and about 6 more would click on the Facebook ad (compared to a scenario without personalized advertising). 
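Those “3 more” and “6 more” figures are rough reconstructions from the numbers above: a sub-1% banner baseline combined with the behavioral-targeting lift from the Yan study, and a social-network baseline near the Pinterest figure combined with the personality-matching lift from Matz et al. The exact baselines below are assumptions for illustration:

```python
# Rough reconstruction of the "3 more" and "6 more" estimates. The baselines
# are assumptions for illustration: ~0.5 banner clicks per 1,000 views and
# ~15 social-network clicks per 1,000; the lifts come from the studies above.
def extra_clicks_per_1000(baseline_per_1000, relative_lift):
    return baseline_per_1000 * relative_lift

print(extra_clicks_per_1000(0.5, 6.70))   # behaviorally targeted banner ad: ~3.4
print(extra_clicks_per_1000(15, 0.40))    # personality-matched Facebook ad: 6.0
```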

Whether I’m persuaded by the technology to do something I wouldn’t have otherwise — to consider buying the pants — depends on whether or not I happen to be one of those extra 3 or 6 users. The effects of this “hack” are relatively small in an absolute sense, but there are nevertheless some people who never would have bought a product, formed a belief, or voted for a candidate were it not for personalized persuasion. 


The ultimate manifestation of this technology might be an AI agent that talks to each voter, and uses the best persuasive strategies based on that person’s individual responses as the conversation unfolds. An automated system of customized canvassers could hold millions of conversations in parallel; develop personal relationships with each voter; aggregate feedback across entire states; and adapt — i.e., optimize on whatever works best for folks of a certain psychological profile — much faster than mere human pollsters. And this might happen soon. The latest research from Google and OpenAI on projects like the Meena chatbot and the GPT-2 text generator demonstrates that systems that learn from real-time distributed experience are already feasible, and conversational agents can already pass as humans in certain contexts. We don’t yet know how persuasive customized chatbots will be, but it’s possible that anthropomorphic agents will have a big advantage over mere advertisements. In this not-entirely-fictional story of an inhumanly persuasive AI brain with a hundred million voices, all posing as expert salesmen, what does self-determinism even mean?
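What would “adapt much faster than human pollsters” look like in software? One common pattern is a bandit-style loop: try different message framings on people with similar profiles, track which framings get positive responses, and increasingly favor whatever works. The sketch below is hypothetical; the profile labels, framings, and epsilon-greedy strategy are chosen only to illustrate the loop, not drawn from any deployed system.

```python
# Hypothetical sketch of the adaptive loop described above: an epsilon-greedy
# bandit per psychological profile that learns which framing gets responses.
import random
from collections import defaultdict

FRAMINGS = ["fear", "tradition", "community", "economy"]

# profile -> framing -> [positive responses, attempts]
stats = defaultdict(lambda: {f: [0, 0] for f in FRAMINGS})

def choose_framing(profile, epsilon=0.1):
    table = stats[profile]
    if random.random() < epsilon:
        return random.choice(FRAMINGS)   # explore: occasionally try anything
    # exploit: otherwise pick the framing with the best observed success rate
    return max(FRAMINGS, key=lambda f: table[f][0] / (table[f][1] or 1))

def record_response(profile, framing, persuaded):
    wins, tries = stats[profile][framing]
    stats[profile][framing] = [wins + int(persuaded), tries + 1]

# Simulated usage: each "conversation" picks a framing and logs the outcome.
for _ in range(10_000):
    profile = random.choice(["anxious_traditionalist", "open_extravert"])
    framing = choose_framing(profile)
    record_response(profile, framing, persuaded=random.random() < 0.05)
```

The epsilon parameter controls how often such a system experiments with untested framings rather than exploiting what it has already learned.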

One could argue that we should be protected from such possibilities, whether by professional standards around AI development or by relevant law. At present, neither exists. 

Indeed, both the tech industry and its regulators are still struggling with the previous generation of tech-related challenges, though there are bright spots. For example, regulators in Europe managed to create the General Data Protection Regulation (GDPR) requirements for data privacy and clarity in communicating a company’s practices to users. Google is iterating on simpler privacy policies, and Facebook is adding explanations to its interface like “why am I seeing this ad?” — both increase transparency and give humans more control. 

In addition to government mandates, establishing stronger consensus and norms across research and industry on AI ethics, best practices, and metrics that prioritize user well-being would target the problem closer to the root (like the Center for Humane Technology’s idea of “Time Well Spent,” a campaign that encouraged tech companies to respect the time and attention of their users). In Wired, Yuval Harari even wished for an “AI sidekick” that would entirely belong to and serve an individual, not a corporation or government, and protect that individual from manipulation by all other AIs.  

Between the technological nudge and the human response, we make a choice. Awareness of the nudge — of the particular incentives of most social media and content-sharing platforms — may be enough to reground us in our sincere intentions. In his book The Power of Habit, Charles Duhigg surveys behavioral science research and concludes that “simply understanding how habits work makes them easier to control.” Practically, this could mean adopting anticipatory behavioral countermeasures. For instance, we can set a designated time interval for using addictive apps; take a long pause to reflect before making a purchase that’s likely to be impulse-driven; and avoid Internet browsing late at night, when willpower is lowest.

But if that’s all we have, the scariest possibility remains: We might simply manufacture retrospective stories about our choices, never quite becoming aware of the persuasive technology shaping our lives.

Our best hope may be, as Harari recommends, to get to know ourselves better than the algorithms can, so we can better distinguish what we want — and crucially why we want it — and then program our own nudges and enhancements. With luck, perhaps we’ll collaborate with AI to become the best versions of ourselves.

 

[[This article appears in Issue One of The New Modality. Buy your copy or subscribe here.]]


Transparency Notes

This was written by Stacey Svetlichnaya, an AI engineer at Weights & Biases who is building developer tools for accessibility, transparency, and collaboration in deep learning. It was guest edited by Jonathan Stray, a researcher at the Partnership on AI, a collaboration of over 100 industry, civil society, and academic organizations dedicated to ensuring AI technology is beneficial for humanity. Finally, it was edited and lightly fact-checked by Lydia Laurenson.

There's more about our transparency process at our page about truth and transparency at The New Modality.
