Social networks are an incredible source of information and they connect the world in a click. But they can also be a means to spread hate, fake news and terrorist propaganda.
On March 15th, 2019, in Christchurch, the third-largest city of New Zealand, a white supremacist killed 51 people and injured 49 in two mosques, and the shooting was live-streamed on Facebook.
In early 2018, a whistle-blower revealed to The Guardian and to The New York Times that Facebook had allowed Cambridge Analytica, a political consulting firm, to obtain the data of as many as 50 million users for electoral purposes. The ads shown to millions of Americans were targeted towards the candidate they might like the most, based on their personal habits and information.
These are just two examples that show how social networks have failed, at least partially, to regulate themselves.
Last February, Facebook CEO Mark Zuckerberg openly asked for more rules from governments in a letter to the Financial Times. He asked for the state to write the regulation and to take the responsibility out of the hands of Big Tech. It is clear that the industry needs more oversight, but what road should governments follow when it comes to issues as sensitive as freedom of speech and data protection?
Right now, the social media sector is extremely concentrated, with a few companies like Facebook and Google holding 75% of the market share. The result is less competition and less innovation. Take, for example, Snapchat. The app was created in 2011 by three students at Stanford and allows users to share text and photos for a limited amount of time, after which they are automatically erased. In 2016, after Snapchat’s founder refused its multibillion-dollar offer, Facebook added Snapchat’s unique features (the most popular being 24-hour stories) to its own platforms. As a result, it was able to expand and consolidate its user base.
Since 2010, Facebook has bought up 78 companies, acquiring their technology and ensuring that its platforms continue to attract users by removing any possible competitor. Ten years ago, the list of the ten most popular social networks included names like MySpace, hi5, MyYearbook and Bebo. In 2019, five of the ten most-used social networks are owned by either Facebook or Google, and the other five are Chinese (like WeChat and QQ). In ten years, Facebook and Google have been able to solidify their lead in the world of social media, while competing US companies have been pushed to the side.
Given the role that social networks play, a plurality of players is important in ensuring a healthy public conversation: more companies would mean different values and different approaches to self-regulation. Social networks have different audiences and meet different needs: we do not share our thoughts on sport or politics on Instagram; instead, we use Twitter. For the same reason, we should welcome as many different approaches as possible to how our activity online is regulated.
Moreover, it would benefit freedom of speech to allow a greater set of rules and values to apply across diverse virtual communities. Right now, most of our social interaction online happens on a handful of websites and apps and is therefore filtered and reorganised according to a limited set of values and priorities. Competition could prevent the centralised moderation of public discourse and reduce the influence big platforms like Facebook can exert on users and governments. Stronger antitrust and competition laws would also ensure that start-ups are not bought up too early by bigger companies, and in doing so would foster innovation.
Many companies have proven unable to remove harmful content, fake news or hate comments effectively and promptly. In general, social networks lack the incentives to regulate content effectively: in fact, they have every reason to show whatever appeals to people the most in order to attract as many users as possible.
Moreover, when it comes to political ads, social networks are very slow to ensure that false or misleading content is removed. Although Facebook announced that income from political ads amounts to only 0.5% of its total revenues, such ads remain a powerful way for social networks to gain influence and visibility.
A key issue when deciding how to regulate a social network is liability. It should be clear that we are past the idea of social networks as mere technology companies; instead, they should be considered key players in managing and curating the public sphere. It is therefore reasonable to think that they should take responsibility, to some extent, for what is posted on their platforms. How much exactly is the tricky question. For example, governments could hold companies responsible for paid advertisement, such as political ads, and for failing to act within a reasonable time to remove certain content. In particular, distributor liability for paid advertisement would offer a strong incentive to companies like Facebook to check ads effectively, both political and not, without risking collateral censorship, that is, the situation in which companies refuse to publish large amounts of content for fear of being held accountable.
Companies like Facebook or Twitter are already the arbiters of public discourse, with all the duties that come with that role. Their algorithms decide what we see first and what we never see at all. It is reasonable to require that illegal content be removed within a certain period of time, or that the companies face fines. This control can of course only be exercised ex post, so a reasonable window to check posts or comments is required. Nevertheless, this method would incentivise companies to monitor the content on their websites more efficiently and to enforce their own terms and conditions, without imposing governmental standards.
Data and user privacy
Social networks like Facebook have failed in the past to deliver effective data protection strategies. The underlying problem is that the business model most tech companies use treats user data as an asset, rather than focusing on the individuals behind that data. In fact, social networks make their money by analysing every possible piece of data they can obtain from users and selling advertising space, so they are driven to collect as much data as they can, often disregarding the user’s right to privacy.
Theoretically, companies should be given more incentives to take on fiduciary obligations towards their users, meaning they would be held to a higher standard when it comes to user data. People who use social networks are fundamentally in a position of weakness relative to companies; they do not attribute the right value to their data and therefore use it to “pay” for access to websites without correctly understanding the exchange. This is easy to see when we consider that the benefits of using social networks are immediate, while the cost of giving up personal information is very difficult to calculate. Moreover, a lot of data is produced without users even knowing it, for example geolocation data, preferences, recurring searches and so on. Social networks hold the trust of their users, and should therefore have special duties of loyalty and confidentiality towards them. If companies were accountable for their use of user data, they would have an incentive to exercise care when handling it. The aim is to avoid scandals like the Cambridge Analytica one by pushing companies to reconsider how their business model deals with user data.
Let’s look at the example of the EU General Data Protection Regulation, or GDPR. In effect since 2018, the regulation makes user consent the key element for companies to process data. It addresses data privacy by requiring whoever collects sensitive data to acquire consent from the user, who can revoke it at any time. GDPR also had other consequences, notably unifying regulation within the EU and designing a framework in which user privacy is at the centre (for example, data is not publicly visible by default). Under GDPR, every citizen has the right to data portability, i.e. moving their data to another service provider, and the right to have their data removed (the right “to be forgotten”). On the other hand, GDPR has put a burden on many companies in terms of costs. For example, in the two years prior to the regulation coming into effect (it was passed in 2016), large U.S. companies spent 7.8 billion dollars on GDPR preparation.
So far, GDPR has been an effective measure in tackling the important challenge of data protection and has shifted the focus onto individuals and their rights.
Social networks are very young: it has only been 16 years since Facebook was founded, but the industry has come a long way since then. In this limited number of years, social networks have steadily grown in popularity around the world, and this trend will most likely continue as more people gain access to the internet. As the number of users increases, so does the power these companies can exercise over our privacy and our democracy. If we want to foster and protect these essential rights, it is important to act now and write the rules to do so.