Generative AI, copyright and the deadlock dilemma
As presented at Plenary Session 2, "The ethics and governance of AI in future-proofing the CCS," at the 10th World Summit on Arts & Culture in Seoul, South Korea.
AI & Copyright: The Zero-Sum Game
AI and the creative industries seem to be at a deadlock, fighting one against the other: two opposing forces of similar strength, entangled in an impossible knot.
And the choices seem binary and mutually exclusive:
- Regulation on one side, innovation on the other.
There is no middle ground: either we protect the creative industries, or we allow the tech industry to scale.
We see this clearly framed as the copyright-vs-AI narrative, a zero-sum game where any concession means that both sides lose.
Ethics: the elephant in the room of Generative AI
In this landscape, ethics is the elephant in the room.
AI ethics tries to deal with the devastating side effects of AI progress, in a very fragmented way:
We talk about bias, misinformation, appropriation, and exploitation as if they were separate compartments, instead of different expressions of the same root problem.
And, like the parable of the blind men and the elephant, we're each describing only the part that we can touch.
In the creative sectors, by narrowing the debate to copyright vs AI, we focus on just one part of the AI ethics elephant.
It’s a reductionist approach that misses the bigger picture. These issues are deeply interconnected, because AI involves not just intellectual property, but other types of data and intellectual labor.
(For example, think of the ghost workers who sanitize the systems we use so they are not harmful. They are paid 2 dollars an hour and deal with traumatizing images of violence and abuse to keep our systems safe.)
We need to consider the ethics of AI in a holistic way that reflects the structural, systemic and multidimensional nature of the problem.
A third way out: cutting the Gordian knot
We can continue this tug-of-war. Or we can proverbially cut the rope and think of a third way out of this entanglement.
The real challenge is structural.
This isn’t just about artists. It’s about invisible labor, sustainability, the future of water and energy, data extractivism, and much more.
For artists it is about consent and attribution, but fundamentally it is an economic issue: a problem of fair redistribution of wealth.
The problem is not that AI is taking our jobs; the problem is that AI is taking our income.
So, since the problem is structural, licensing alone won’t untie the knot.
Worse still, expanding copyright law risks reinforcing data monopolies. It risks transforming BIG TECH into BIG AI, because only the biggest players can afford to license everything.
So, can answers come from outside the copyright framework? Yes! We can think of taxation and other policy interventions that would also have a beneficial impact on these ethical problems.
We can think of Universal Basic Income, or Creative Basic Income for artists.
Because let’s be honest: any licensing money will go straight into shareholders’ pockets, not to the cultural workers producing knowledge for them. I don’t see The New York Times distributing any license money it gets. Do you?
The alternative is to think of AI as a public good: open source and transparent, like the replicator machines in Star Trek. Generative AI could be a technology of abundance, allowing us to overcome the scarcity of the physical world.
And as food for thought, I will close with my favorite quote from Star Trek, which sums up a brighter way to think about the future of technology.
“The acquisition of wealth is no longer the driving force in our lives. We work to better ourselves and the rest of humanity.”