OPMG Collective: Facebook and Cambridge Analytica

5 April 2018

This is a new segment that we’ll be doing semi-regularly, where each of the editors weighs in on a topic or story with their own thoughts and opinions. We hope it will help to promote discussion around the various issues, and showcase the range of perspectives that exist on a topic, both in general and within OPMG itself. Without further ado, our first collective article — on the story of Facebook and Cambridge Analytica.


Image credit: Chesnot (Getty Images)

To me, in one sense, this story wasn’t particularly surprising. How much data are we giving companies these days, without really thinking about how they might use it? How many apps require us to sign into Facebook, and allow them to access our public profile, or our friends list, or our likes? How many sites want access to our Google account, and our contacts? It made sense that sooner or later one of them would be shady. I’m more surprised that it’s taken this long for one to slip up, or be found out.

Of course, the problem is that most of us want to keep using sites like this. They keep us connected, offer a lot of functionality in our day-to-day lives, and increasingly intersect with our work. And so, for the most part, the question isn’t so much what data we hand over, because it seems we’re willing to hand over just about anything in return for the services we get. Rather, the question becomes — who can we trust? And that’s a much harder question.

Some people say that the issue is with Facebook specifically; with more and more people deleting their Facebook accounts, and the company’s share price plummeting, that seems to be most people’s quick analysis. But chances are, any place you give your data to will let some part of it end up somewhere you don’t want it to go, and you can’t see where it goes. And the thing is, now that Facebook has had this issue, it will probably tighten up considerably around it, and perhaps become more trustworthy than some other companies or sites; or, at least, appear that way.

So what’s the answer? Well, perhaps there isn’t one. Perhaps this is our new reality that we need to get used to — that when we exchange data for services, that data can, and at some point probably will, be used against us. So we need to be ready for that; and not be steered so easily by companies thinking that they know us by our data points. Be more than the likes on your Facebook page.


I tend to agree that nowadays it isn’t a matter of what we hand over, but who we can trust. The reality is, these services make our lives easier, and most people aren’t willing to make their lives harder again. So while I personally don’t feel comfortable with my private information being stored and data mined and so on, it simply isn’t realistic to expect people to change, especially when services like Facebook play such a central role in our day-to-day lives.

To play devil’s advocate for a moment, data collection has proven useful to both companies and consumers in marketing and advertising. It allows for targeted micro-marketing, which means consumers aren’t shown a bunch of random products or services, but things that are actually relevant to them. Some people are happy with that. Others, myself included, still don’t like their information being accessed without consent.

However, consent is also open to interpretation. Let’s face it, most of us don’t read the terms and conditions of an app before we download it and set up an account. If you ever have, you’ve likely noticed the language is often far more complicated than it needs to be. What companies like Facebook could do is write the terms and conditions using simple language so that everyone understands what it is they’re signing up for.

It isn’t a one-way street, though. Users need to step up and take an active interest in what they’re doing. As I argued in an article on fake news, the onus is on us to make informed decisions about what we use. I doubt many users will read the terms and conditions regardless of what kind of language is used, but some will.

Bringing it back to that point of who we can trust, this is a step in the right direction. Companies with greater transparency and clear ethical guidelines are ultimately more successful than organisations that don’t emphasise these things. And in the age of social media and fragile online reputations, once consumers feel duped they are unforgiving, and it will cost the offender big time, as the #DeleteFacebook movement has demonstrated.


There is an understanding that users of a social media service or network receive access in return for their personal data. There is an expectation by the users that their data will be used for certain purposes, such as targeted advertising, and perhaps even sold in anonymised form for research purposes. This is the cost of using the service, and as long as this expectation continues to be met, all is well.

Users’ expectations of how their data will be used are encouraged by the benevolent public image carefully crafted by the providers of these services. Google’s motto ‘don’t be evil’ contrasts with its track record of questionable activity, and users appear to be realising that service providers are businesses that have spent considerable effort and money to attract users who are not in fact customers, but the product itself.

Expectations are changing as a result. It is no longer enough to be perceived as benevolent; there must be actions to back up that perception. Users are starting to demand greater transparency and greater accountability, two features that carry enormous public relations currency. The terms and conditions of websites are beginning to be viewed as the commercial arrangements they are, and users want more guarantees in return. They want to be kept informed of how their data is actually being used, without vague provisions buried in legalese, and they want to see consequences for misrepresenting the nature of the arrangement.

Users must take an active and critical interest in the use of their data, and they are now beginning to do so. They are demanding transparency and accountability, and service providers can no longer respond by simply pointing to the terms and conditions.


The facts of the Cambridge Analytica story have been well litigated and analysed ever since the story broke in March, and rightly so – not only because it feeds into the ongoing narrative of the 2016 election, but because it touches on the foundational concern of privacy in our technological and increasingly interconnected society.

The issue seems simple enough: how did the data of up to 87 million users come to be accessed without their consent? To answer that, and to develop responses to it, requires us to put on two different hats.

The first angle is to examine the perpetrators. After all, Cambridge Analytica only got access to vast amounts of user data through the deliberate exploitation of a loophole in Facebook’s third-party API; the resulting data, whilst obtained legally, was then onsold in direct contravention of Facebook policy. And this happened with no indication to the 270,000 users who took the quiz that their data would be used for anything other than ‘academic purposes’, or that their friends’ details would be used at all. It cannot be overstated that this controversy would not have occurred if those at the centre of it had steered their inquiries and research with a more honourable moral compass. And part of the solution may be to strengthen the external safeguards that stop people from veering off course when entrusted with the data of thousands of people.
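
To make that fan-out concrete, here is a minimal, purely illustrative sketch in Python. The graph, the names and the affected_accounts function are all invented for the example; this is not Facebook’s API, just the arithmetic of how a few hundred thousand consenting quiz takers can expose tens of millions of friends’ profiles.

```python
# Toy illustration (invented data, not Facebook's API): a small number of
# consenting app users can expose a much larger set of accounts when the
# app is also granted access to each user's friends.

def affected_accounts(consenting_users, friends_of):
    """Return every account reachable through the consenting users."""
    reached = set()
    for user in consenting_users:
        reached.add(user)                          # the user's own profile
        reached.update(friends_of.get(user, []))   # plus their friends' profiles
    return reached

# A tiny made-up social graph.
friends = {
    "alice": ["bob", "carol", "dave"],
    "bob":   ["alice", "erin"],
    "frank": ["grace", "heidi", "erin"],
}
quiz_takers = ["alice", "frank"]   # only these two consented to the app

print(sorted(affected_accounts(quiz_takers, friends)))
# ['alice', 'bob', 'carol', 'dave', 'erin', 'frank', 'grace', 'heidi']
# Two consenting users expose eight accounts; scaled up, the same fan-out is
# how roughly 270,000 installs could touch tens of millions of profiles.
```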

But the second angle is a more complicated one, and puts the focus squarely on Facebook. Firstly, their handling of the potential breach, once discovered, left much to be desired. If your solution to finding a loophole in your API is to quietly patch it and tell no one (as Facebook did in 2015), then you’ve probably missed a couple of steps, especially when you continue to sit on the problem for around three years. But more generally, the Cambridge Analytica scandal goes to the root of the psyche behind the operation of Facebook – the modus operandi that drove the actors in this story to their parts. The experience of end users and the protection of their data will always be a secondary concern if your idea of responsibly connecting people includes allowing suicide bombers to coordinate. All Cambridge Analytica did was skim private data to get information in much the same way that Netflix might use your viewing history to suggest movies. And they came up with a model that could predict your political preferences with 70% to 80% accuracy.
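
For a rough sense of what a likes-based prediction model looks like, the sketch below uses Python, scikit-learn and entirely synthetic data (the page weights, user counts and resulting accuracy are made up for illustration). It is a stand-in for the general technique of fitting a classifier to “liked page X” signals, not Cambridge Analytica’s actual model.

```python
# Toy illustration (synthetic data, not Cambridge Analytica's model):
# fit a logistic regression on binary "liked page X" features to guess
# a binary political preference.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))   # 1 = user liked that page

# Synthetic ground truth: preference loosely driven by a handful of pages.
weights = np.zeros(n_pages)
weights[:5] = [1.5, -1.2, 1.0, -0.8, 0.9]
noisy_signal = likes @ weights + rng.normal(0, 1.0, n_users)
preference = (noisy_signal > noisy_signal.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, preference, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# Even this crude setup lands well above chance, which is the general shape
# of the 70-80% figures quoted for real likes-based models.
```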

We can count our likes and delete our accounts all we want – in a world where big data is already being used to make a real impact in how we interact with each other and the world around us, the onus of taking the first steps towards improvement needs to fall on the social media giants themselves. And this won’t happen if your end goal is more users without any thought to their protection.