On a sunny day at Brand Week Istanbul, MediaCat Magazine’s Business Editor, Selin Ozkan, sat with Dutch media theorist, media activist, internet critic and author, Geert Lovink, to discuss social networks, the AI revolution and the future of media.
Do you like any social platforms?
It’s not so much if I like them or not. My main field of study is the devastating aspect of [social media], especially on the mental health of young people. For us, it is not necessary to be on the social media platforms themselves. You have to take a step back to find out.
Our intention is to empower young people to understand more about the mental health implications [social media] has on them, to work with them and show there are alternatives. There are other values that the original internet had at some point a long time ago that were very different from the situation we find ourselves in today.
When social platforms first arrived, they had promises: we’re going to connect people, you’re going to build relationships… it’s going to be easier to organise, beneficial for movements, good for business, and make commerce easier. How do you think they’ve delivered on these promises?
I would say there is a very small subclass that further professionalised and took up these claims, and they’re, of course, the influencers. They are a very small group of users that became superusers and then influencers. The influencers would obviously say yes to your question.
However, if we don’t make a distinction between the influencers and the ordinary users, your question becomes very hard to answer because ordinary users are not necessarily out there to promote a product, for instance. So for them, this question is more what you just said, the very early aspect that you mentioned, namely to come in contact with friends, family.
In my book, I show how from this original promise of social networking, we get to monopolist platforms. The networks are now no longer important. They have been established, they have been extracted. And the platform dynamic is a very different one today. It’s got nothing to do anymore with the original promise of the networks.
Do you think there’s a danger in framing social platforms as evil, because it takes away some of the responsibility from other parties that are involved? For example, what you just said, the reason that social networks became monopolies is because the governments were so slow to regulate them. Of course, in any industry, in any field, if you let one company grow seamlessly or extravagantly, they’re going to establish a dominance, a monopoly. How do you think about the misbehaviour of other involved parties, when you think about monopolies that we have in the social sphere right now?
Yeah. I know it’s not that simple, but my message would still be: ‘Internet good, platforms evil’. Let’s start with that. There is no way back to the village. Even if there was a way back, I don’t want to go there. We want to create new relationships. But we want to be more in charge of the social architecture. Maybe also other related things like privacy, data extraction, and selling [of] your data. But your question is very interesting and it’s a historical question. Why did, at some point, others not intervene or start to create social network alternatives? Why did we end up in this monopoly? And it’s not even a recognised one, because governments haven’t even thought about it. It’s still a rebel idea to claim that the internet is ruled by monopolies.
I’ve been involved in this internet thing since 1989. In the mid-to-late ’90s, society thought [the internet] was kind of cute, but not so relevant. They didn’t really take notice. They were laughing at the internet because it was slow. You had to use a modem and websites were very primitive. They started to wake up maybe 20 years ago and started to notice. But by then, basically, all the major players were already very big. This dependency was already established.
At the moment, politicians are primarily only focused on one, for me, not-so-relevant aspect, namely fake news. But platform dependency is large and news is only one aspect. Politicians have not yet looked at the algorithms and how they are recreating this whole landscape, let alone what’s now ahead of us with the coming of AI.
You mentioned AI and you also mentioned the beginning of the internet. If I were to classify digital life in three steps or three big revolutions, I would say the invention of the internet is the first big step, the invention of social networks is the second, and now AI is the third. When we look at these first two steps, they were built in completely different ways. How do you read these efforts and how do you think the third step is going to be built?
The effects of AI are going to be very much on the field of work — the optimisation of work. Relatively low-paid work can and will be automated. And funny enough, AI will also have a big impact on programming, on coding, on what is taught in universities of computer science, informatics and interface design.
AI is itself, in my understanding, parasitic. It parasites on the internet as a whole. It’s very interesting that one of the first big clashes we have inside the corporations will be the question of copyrights. What the AIs are doing is an enormous amount of scraping. But no one was asked if they agreed or [not]. This is again in the tradition of the extractivist companies that have been doing these practices behind our backs.
At the moment it’s a question whether they will be allowed to do this again in the coming AI revolution. And to be honest, I don’t really know yet what the outcome will be, because unlike what we just discussed about the internet, with the elites, education and the media being very slow, I have the feeling that especially Western elites are a bit more cautious at the moment. Maybe also because they have seen what happens when you sleep through a revolution like that. The introduction and the implementation of AI will probably come with much more conflict.
We talk a lot about bias in AI systems, but there’s an argument there that the systems reflect already existing structures. So when we look at social spaces and when we consider things like trolling, discrimination, misogyny, racism, misinformation — do you think they’re a result of broken social systems? And can we expect social platforms to be peaceful environments before we solve the societal aspect of these things?
No, I would say no. And it’s very clear that at least from my scene, from the social and political struggles that I come from, the last thing we would do is repeat this mistake of what we call ‘techno-solutionism’. Solutionism meaning, how Evgeny Morozov phrased it — ‘a solution in search of a problem’. AI is very often now just presented like that. We have AI as a solution and maybe there are some problems out there, but that’s not going to work and society will not accept that. Are we going to repeat the same mistakes we made over the past ten years? Or can we learn a thing or two?
I think many people these days have a firm belief and share your opinion, namely that these societal problems should be resolved and there are no technical solutions for that. The only thing we could say is maybe in the organisation towards that we could use this or that tool. But I wouldn’t go further than that.
You’ve obviously been in this space for a lot longer than I have, but one thing that drives me towards pessimism is the way that people who develop technology look at talent. They value engineers, mathematicians, people heavy on that side of thinking rather than social scientists. But we talk about these issues — an engineer doesn’t know how to approach these things. We need social scientists, philosophers, artists. We need them at the table as well. Personally, I don’t see that happening.
No, I don’t see that happening either. These systems will be developed, will be introduced. But will society again be so naive? Because they still work with the old model of venture capital, right? This is their weak point. They still use the same rhetoric, the same models as the venture capital developed in the ’90s. They thought, ‘okay, this worked for 20 or 30 years. This time it will work as well.’ However, where this might not work is exactly in the last and crucial phase, not in the development, as you say, because there we can be quite pessimistic.
What this all needs to function is that nobody questions it, everybody is dazed and confused, and that in the implementation there is something like a ‘scale-free network’. This means that there is an almost unlimited possibility to scale up from a relatively small group of a few thousand early adopters and developers to 1-2 billion users. We have seen this scaling lately in 2022, unfortunately, with the enormous uptake of a company like OpenAI. It is this question of scaling up, which in the Silicon Valley model needs to be done very quickly, and at that moment in time they need the optimal amount of money and resources to scale. If we frustrate them in that last phase, that model will no longer work. And this is a very interesting option. This is why this period is so different from ten or fifteen years ago — we’re in Turkey here, this is an era of deep geopolitics.
There is no more naive idea of a US-led globalisation, but Silicon Valley has not yet woken up to this new reality. They haven’t. They still think that everything will work as it did ten, twenty, thirty years ago. And that’s simply not the case. There’s China, of course. But there’s also Russia, there’s Turkey. There [are] so many other players. There’s so much else happening, which will sabotage this naive idea of the internet as a global market, where you can scale up from zero to a hundred in a couple of months.
Featured image: Google Deepmind / Pexels