When launching a digital campaign, online tracking and profiling can be a little like glimpsing grime beneath your fingernails: once you become aware of it, it’s irritating, difficult to ignore and a tad unsavoury. 

If you look at search trends for ‘ad blocking’ – a fairly good proxy for the online privacy vs. customer profiling debate – interest in tools for blocking trackers appears to have peaked in August 2015.

However, with recent advances in tracking and profiling technology, the debate is growing more complex. One reason for this is that there’s currently no way to block personalised ads that are delivered to you via media you don’t directly control (which is, I suppose, kinda obvious).

For example, as you stroll around a shopping centre, it won’t be long before CCTV facial recognition identifies you, and all the ads you walk past become tailored to your preferences. If this sounds like something straight out of Minority Report, think again. This isn’t far from reality.

Just take a look at what was revealed on this Swedish pizzeria’s screen when the ad it was showing crashed. You can clearly see that facial recognition software is in action, building eerie profiles on customers. Data includes details such as how many people were smiling or wearing glasses, and how much attention they were paying to the ad before it crashed.

Swedish pizza screen facial recognition

Influencing emotion

Then, there’s Facebook. In May 2017, The Australian reported that Facebook had touted its platform’s ability to detect the moods of pre-teens during a pitch to a bank – which is against the law in Australia.

This questionable behaviour is a return to form. Back in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged in, concluding that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.”

The problem is that the data scientists forgot to ask permission. This finding is even more relevant, given their recent problems with fake news being promulgated across the platform.

And how about this report from the Washington Post? It details how a start-up business profits from strip-mining data from social media profiles to sell to landlords, employers and online dating sites.

However, not all profiling is nefarious and creepy. Examples of good profiling include:

  • Netflix’s recommendation engine
  • Spotify’s mood analysis
  • iTunes “for you” recommendations

Ultimately, though, if tracking and profiling works and it stays behind the scenes (as is the case with iTunes, Spotify, Netflix), no-one cares except privacy advocates.

Mood vs. sentiment

Mood analysis is one of the newer data technologies, and it’s being incorporated into streaming music services right now. Mood analysis and sentiment analysis sound alike, but they’re not the same thing. For example: I can state that I hate marmite, but that statement may not reflect my mood – I could be smiling as I say it, because I actually love marmite and I’m lying.

The statement (‘I hate marmite’) would be analysed as having a negative sentiment, but the sentiment of that single sentence (or tweet) is not enough to predict my mood. For that, you need a series of posts, tweets, or the fact that I’ve been listening to Leonard Cohen for the past 24 hours, to gauge my mood.
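The distinction above can be sketched in a few lines of code. This is a deliberately toy example – a hypothetical word-list scorer, not any real sentiment library – showing how a single statement yields a sentiment score, while a mood estimate aggregates over a series of posts:

```python
# Toy lexicon-based scoring: hypothetical word lists for illustration only.
POSITIVE = {"love", "great", "happy"}
NEGATIVE = {"hate", "awful", "sad"}

def sentence_sentiment(text):
    """Sentiment of a single statement: +1 per positive word, -1 per negative."""
    words = text.lower().replace(",", "").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

def estimate_mood(posts):
    """Mood needs context: average the sentiment across a series of posts."""
    if not posts:
        return 0.0
    return sum(sentence_sentiment(p) for p in posts) / len(posts)

# One negative statement ('I hate marmite') scores as negative sentiment...
print(sentence_sentiment("I hate marmite"))  # -1

# ...but a run of posts gives a (crude) mood signal that can still be positive.
posts = ["I hate marmite", "What a great day", "I love this song"]
print(estimate_mood(posts))
```

Real systems use far richer signals (listening history, posting frequency, time of day), but the shape is the same: sentiment is a property of one utterance, mood is an inference over many.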

So, mood analysis needs a bit of context. As we’ve seen, Facebook has been actively contextualising posts in terms of mood since at least 2012. It doesn’t take a leap of logic to predict that they’ll be using mood for ad targeting.

Spotify laptop

It’s only when profiling stops being invisible – when it advertises a TV to you that you’ve already bought, say, or you begin to notice the same ad following you around the internet – that you get that uneasy feeling of being watched.

However, if perfectly relevant content is delivered to you when you are most receptive, you won’t notice that you’ve been expertly profiled. It’s only when the content is inopportune or irrelevant (e.g. if I’m presented with Norwegian death metal when I’m in the mood for 70s novelty songs) that it obtrudes and, like dirty fingernails, becomes noticeably unpleasant. This is doubly true for adverts, which need even greater permission to obtrude.

So, the fine line for marketers to negotiate is this: if you don’t go far enough in profiling your audience, you will deliver tone-deaf advertising. But, if you go too far, you’re likely to annoy, unnerve and – at worst – become deeply unethical.