Marc Roger Gagné / Gagné Legal Services

Mark Zuckerberg is keeping privacy watchdogs busy these days…

Facebook thinks some of its users are gaming the system it has set up to combat fake news. This, the company claims, warrants the use of a ‘Trustworthiness Score’ for users. Also called a ‘Reputation Score’, it sounds a lot like the Chinese government’s Social Credit System or, to cite a more benign-sounding example, Uber’s passenger rating.

The problem is that Facebook has far more data on its users than, say, Uber, or almost any other entity, for that matter. Therefore, with any scoring system it devises, the potential for harm is much greater.

Is Facebook stepping over the line again?

Social Scoring: What’s the Danger Here?

Social scoring is made possible by the millions of data points that digital platforms like Facebook collect from users. They store the data, develop algorithms to extract insights, and build scoring systems like the Trustworthiness Score.

Facebook’s system is, essentially, a social scoring algorithm that claims to rate user ‘trustworthiness’. It’s designed to give credence to the company’s fake-news-fighting operations, whereby suspicious content is flagged and reported by users. Some users game that system by reporting posts they merely disagree with as fake. Facebook then looks at their behaviour on the platform to determine whether a hidden agenda is fueling their flagging activity.
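Facebook has not published how the score is computed, but the basic idea, weighting a user’s reports by how often their past flags matched fact-checkers’ verdicts, can be sketched in a few lines. Everything below is a hypothetical illustration: the names, the 0-to-1 scale, and the smoothing prior are assumptions, not Facebook’s actual method.

```python
from dataclasses import dataclass

@dataclass
class FlagRecord:
    """One past report by a user: was the flagged post later confirmed fake?"""
    post_id: str
    confirmed_fake: bool  # verdict from independent fact-checkers

def trustworthiness(history: list[FlagRecord],
                    prior_hits: float = 1.0,
                    prior_total: float = 2.0) -> float:
    """Hypothetical 0-to-1 score: the smoothed fraction of a user's past
    flags that fact-checkers confirmed. The Laplace-style prior keeps a
    brand-new user at a neutral 0.5 instead of an extreme 0 or 1."""
    hits = sum(1 for record in history if record.confirmed_fake)
    return (hits + prior_hits) / (len(history) + prior_total)

# A user whose flags were confirmed only 1 time out of 4:
history = [FlagRecord("p1", True), FlagRecord("p2", False),
           FlagRecord("p3", False), FlagRecord("p4", False)]
print(round(trustworthiness(history), 2))  # 0.33 -> this user's future flags count for less
```

Even this toy version shows why the privacy stakes are high: the score only works if the platform retains a detailed, per-user behavioural history indefinitely.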

Imagine hackers stealing this type of “behaviour data”. Imagine if it were used for anything other than ferreting out bad-faith reporters of fake news. Would that violate the GDPR? Most certainly.

The Right to Know: Facebook Trustworthiness Scores vs. Credit Scores

Facebook does not make it possible for users to find out their score. That’s one strike against the company when it comes to complying with the GDPR. Not knowing the score also makes it impossible for users to dispute it, the way they can dispute their credit scores.

Credit scores are heavily regulated. In the U.S., there’s the Fair Credit Reporting Act. There’s also the Fair and Accurate Credit Transactions Act of 2003 (FACTA), which guarantees consumers the right to receive a free copy of their credit report.

Similarly, in Canada, there’s the Credit Reporting Act. Details vary by province, but at the federal level it’s a crime for anyone to access your credit data except for the purposes for which it was gathered. Users of the data must also inform people when their credit data has been used to deny them a service (such as a loan), and they must divulge the source of the data.

With multiple layers of protection in place for credit data, the behavioural data Facebook collects seems strikingly exposed and vulnerable… as if nobody’s tending the flock. That’s problem number one. Problem number two is the flimsy defence Zuckerberg has come up with for his reputation scoring system.

The Fake News Explanation of Why FB Scores Its Users

Facebook defends reputation scoring as necessary for combating fake news.

That’s a convenient explanation that dovetails nicely with current events. Because FB has been collecting this type of data since before the fake news problem emerged, it feels as if the company is leveraging current public interest to gain approval for something whose ramifications run much deeper than fixing fake news. And it should be noted that some question whether FB is even determined to combat fake news, or takes the matter seriously to begin with.

Seen in this light, the practice of scoring users based on behavioural data seems overly risky, privacy-wise. It’s a controversial solution, especially given Facebook’s tepid response so far to the problem of fake news. Why dance on the edge of privacy norms? Why put your users’ well-being at risk? Why tarnish the reputation of your platform when you’re not that concerned about the end results?

The answer is that fake news is only part of the reason Facebook is interested in online user behaviour. As we all know, this kind of data is useful to third-party advertisers for marketing purposes. It would also prove useful to insurance companies, government agencies, law enforcement, healthcare companies and more. Social signals and behavioural data need protection just as much as our Personally Identifiable Information (PII) and financial data do. And if Facebook can get the public used to the idea of social scoring at this seemingly benign level (only those abusing the fake news reporting system need worry), maybe it can edge us all closer to full-on, Chinese-style reputation scoring.

So why is nobody doing anything about it? After all, look at the protections we have for financial data.

Financial Data Protection vs. Behavioural Data Protection

Financial data has been fenced in for decades because politicians and voters understand its power. A bad credit score can have a ripple effect on someone’s life: from denial of credit to higher interest rates on loans to obstacles in finding housing, the effects are palpable.

But the impact of a bad behavioural score is harder to imagine. We have yet to see how social scoring will be used, because it hasn’t reached full scale and no catastrophic events resulting from it have been reported. Most people don’t like to deal with abstractions… they need cold, hard facts and clearly visible outcomes before they take action.

But while we wait for that big event to goad us into action, companies like Facebook are quietly churning away at their Big Data collection machines, storing not just our personal information but our behaviour as well. Twitter announced a reputation score eight years ago!

Once we realize that this type of data matters as much as our credit scores and our PII, we can begin to close the gap in the fence that encircles one of our most precious assets: our online privacy.

Marc-Roger Gagné CCIE, CCII, CIPP/G/C, MAPP

@OttLegalRebels
