
Privacy and the Internet, what does Big Tech know about us? Sorice (Luiss): “They can systematize our personal data”           

"Regulating such a complex system based on sharing and which – in ideological terms – was conceived for networking purposes - is extremely difficult”, said Michele Sorice, Professor of Democratic innovation and political sociology at LUISS “Guido Carli” University. “I am not an advocate of catastrophic forecasts: I don’t believe that technology could ever replace the human touch. However I am not hyper-optimistic either and I can’t deny the risks linked to invasive – in some cases pervasive - technology” , he pointed out.

“If we were to plan a development model different from the present one, we might be able to recover a certain amount of privacy, as global profiling requirements would become less important.” This is the view of Michele Sorice, Professor of Democratic innovation and political sociology at LUISS “Guido Carli” University, where he coordinates the “Massimo Baldini” Centre for Media and Democratic Innovations.

 

The Facebook-Cambridge Analytica case drew global attention to the issue of protecting personal information on social networks, and to the problem of enforcing that protection. Why did it take so long?
Regulations and measures regarding a constantly evolving domain like the Internet are bound to be marked by delays; in most cases they describe a situation that has already changed by the time the rules enter into force. On top of this, “governing” such a complex system, one based on sharing and which was – ideologically – conceived for networking, is extremely difficult. Moreover, it is hard to establish rules on a “platform” that is primarily run by large economic groups whose turnovers often exceed national budgets. There is also a critical dimension, which could partly be described as apocalyptic, albeit marked by some interesting aspects: in some cases those aspects have emerged only now because it was deemed convenient. For now, the outcome of the Cambridge Analytica controversy is that social media activity has become more complex, coupled with the adoption of rules meant to hide more than they show.

What differentiates the United States from Europe in the protection of online personal data?

Europe’s position is more consumer-oriented than that of the United States, which by tradition is more sensitive to the needs of corporations. However, it should be noted that the many public-opinion movements actively engaged in the US make it easier to “counteract” through class actions.

To what extent is Big Tech’s use of personal data out of control? What about users’ responsibility in unreservedly accepting seemingly free-of-charge services?
I don’t know whether it has spiralled out of control. For sure, big tech companies’ massive know-how and skills – even greater than the “surveillance files” kept by the secret services of totalitarian regimes in the 1950s and 1960s – enable them to systematize our personal information in a way that would have been inconceivable a few decades ago. In fact, the focal point is not confined to the possibility of accessing information; it includes the ability to process data so as to create predictive or simply analytical models.

Users are accountable to a certain degree, not so much for accepting seemingly free-of-charge services as for pouring personal information into public “repositories.”

We don’t stop at providing information on our political leanings – which might partly be public in any case – or on our favourite pizza. We provide data that contributes to our self-profiling. In most cases we accept standardization methods that range from our curricula vitae to the rules of communicative engagement. Whether or not this constitutes a risk should be the object of analysis and discussion. Whatever the case, it should prompt in-depth reflection.

What are the main sources of income of social networks and search engines, if not the “personalization” of advertisements thanks to users’ personal information?
Traditional TV broadcasting companies used to “sell” – and they still do – their share of viewers to advertising companies. Social networks sell profiles.  In other words, personalized digital advertising is an important vehicle for creating economic value. There are other business models based on registration or subscription, but they are obviously less “appealing”.

Apple and Goldman Sachs have teamed up to release a credit card. Amazon has patented a wristband that can track workers’ movements and knows all our buying preferences; Facebook is informed about our sexual and political orientations; robotics and artificial intelligence provide important contributions (for example in the field of health) but jeopardize the job market. What future should we expect?

It’s hard to tell. I am not an advocate of catastrophic forecasts: I don’t believe that technology will ever replace the human touch. But I am not hyper-optimistic either, and I can’t deny the risks linked to invasive, in some cases pervasive, technology. However, I consider these to be different phenomena. I am much less worried about Amazon knowing my consumption preferences than I am about an electronic wristband meant to “optimize” workers’ performance, ultimately amounting to labour exploitation. The fact that Facebook knows all about my political leanings is less frightening than young workers forced to comply with the requirements of digital platforms organized by algorithms, without trade-union rights.

People reduced to commodities are a greater cause for concern than drug-dispensing robots.

In essence, the future is in the making, and it partly depends on us all. However, it can be said that neo-liberal trends pose greater risks than technological innovation or the loss of a certain amount of privacy.
