
The Mirror and the Mind

Generative AI, Copyright, and the Illusion of Originality

Chapter 1: The Rise of the Machines—How Generative AI Learned to Create

In the summer of 2022, an unusual artist began to make waves. It didn’t hold a brush, nor did it sit at a piano. It didn’t even have a body. Yet, within months, it could write poetry that moved readers to tears, generate paintings that sold for thousands, and compose music that sounded uncannily human. This artist was an algorithm, a large language model trained on the collective works of humanity, distilled into lines of code. It was not the first of its kind, but it was the most powerful: a culmination of decades of research, ambition, and a quiet, unspoken bargain with the past.

Generative AI did not emerge overnight. Its roots stretch back to the mid-20th century, when scientists first dreamed of machines that could mimic human thought. Early experiments, like ELIZA in the 1960s, were crude, rule-based chatbots that fooled users into believing they were talking to a real person. But the real revolution came with the rise of neural networks and deep learning. By the 2010s, researchers had cracked the code on how to train machines not just to follow instructions, but to learn—to recognize patterns in vast amounts of data and generate new content that felt original. The breakthrough was the transformer, a type of neural network architecture that allowed models to process language with astonishing fluency. Suddenly, machines could write essays, answer questions, and even tell jokes.

Companies like OpenAI, Google, and Mistral raced to build bigger, smarter models. They scoured the internet for data: books, articles, websites, social media posts, and code repositories. They ingested Wikipedia, Reddit threads, and digital libraries. They absorbed the works of living authors, dead philosophers, and anonymous forum posters. The logic was simple: if a model could be trained on enough text, it could learn to predict what words should come next, what images should look like, what music should sound like. The result was a new kind of creativity, one born not from a single mind, but from the collective output of millions.
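The mechanism the paragraph describes, counting patterns in a body of text and using them to guess what comes next, can be sketched in miniature. This is a toy illustration under a drastic simplification (a bigram word model), not how any production system actually works:

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word tends to follow which,
# purely from patterns in the training text.
corpus = "the sun rises and the sun sets and the moon rises".split()

# Count every (current word -> next word) pair in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "sun" appears after "the" more often than "moon"
```

A real large language model replaces these simple counts with billions of learned parameters and looks far beyond the single previous word, but the underlying objective, predicting the next token from what came before, is the same.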

But there was a catch. Much of this data was copyrighted. Novels, songs, and lines of code were protected by law, yet they were fed into these models without explicit permission. The companies behind the AI argued that this was fair use, that training a model was no different from a student reading a library of books. The output, they claimed, was transformative, not derivative. Besides, the internet had always been a free-for-all. Search engines indexed everything; social media platforms monetized user content. Why should AI be any different?

The answer, as it turned out, was not so simple. Unlike a student, an AI does not forget. It does not merely learn from the data; it reproduces it, in fragmented, recombined forms. When an artist discovered that an AI had generated a painting eerily similar to their own style, or a musician found their melodies echoed in an algorithm’s composition, the question arose: Who owns the reflection in the mirror?

Chapter 2: The Ancient Echo—When Ideas Belonged to Everyone

Long before the age of patents and copyrights, ideas flowed freely across borders and centuries. In the 15th century, as the Renaissance dawned in Europe, two men, Johannes Gutenberg in Mainz and, according to Dutch tradition, Laurens Janszoon Coster in Haarlem, independently developed the printing press. Neither knew of the other’s work, yet both arrived at the same invention within years of each other. Who, then, was the true inventor? The question would have baffled their contemporaries. In a world without intellectual property laws, innovation was a communal act. Knowledge spread through apprenticeships, oral traditions, and hand-copied manuscripts. If you had a good idea, your neighbor was free to borrow it, improve it, and pass it on.

This phenomenon was not rare. The wheel appeared in Mesopotamia and Central Europe within centuries of each other. Paper was invented in China, perfected in the Islamic world, and only later reached Europe. Calculus was discovered independently by both Isaac Newton and Gottfried Wilhelm Leibniz, sparking a bitter priority dispute that raged for decades. Even the stories we tell ourselves about lone geniuses (Archimedes in his bath, Edison in his lab) are often myths. Most breakthroughs are not sudden sparks of inspiration, but the slow accretion of shared knowledge.

Let’s call this Common Derived Knowledge: the understanding that most of what we consider “original” is actually the product of countless influences, passed down and reshaped over time. Shakespeare borrowed plots from Italian tales; Picasso drew from African art; every scientist stands on the shoulders of those who came before. The idea of a single, solitary creator is a romantic fiction. In reality, creativity is a conversation, one that spans generations.

Yet somewhere along the way, we decided that ideas could be owned. The 18th and 19th centuries saw the rise of patent offices and copyright laws, designed to protect inventors and artists from exploitation. But these systems were built for a world where creation was slower, where influence could be traced and credited. They were not designed for an age where a machine could ingest the entire canon of human art in an afternoon and spit out something new, or at least, something that seemed new.

The tension between Common Derived Knowledge and modern copyright is at the heart of the AI debate. If human creativity has always been a remix, why do we bristle when a machine does the same?

Chapter 3: The Human Algorithm—Are We Any Different?

Consider the life of a single person. Born into the world with no language, no memories, no skills, they are shaped by everything they encounter. A child learns to speak by listening to their parents. A student learns to write by reading books. A musician learns to compose by studying the greats. By the time they create something “original,” they have already absorbed thousands of stories, images, and sounds. Their mind, like an AI, is a product of its training data.

Are their ideas truly their own? Or are they simply the latest iteration in a long chain of influences?

Take a painter, for example. She grows up surrounded by the works of the masters, Van Gogh’s swirling skies, Monet’s water lilies, the bold lines of modern abstract art. When she picks up a brush, her hand moves in ways she didn’t consciously choose. Her style is a fusion of what she’s seen, what she’s loved, what she’s rejected. If she paints a sunset, is it her sunset, or is it a remix of every sunset she’s ever admired?

The same could be said for a scientist. Einstein’s theory of relativity didn’t come from nowhere; it was built on the work of Newton, Maxwell, and countless others. Even his famous “eureka” moment, the image of a man falling from a roof, was inspired by a thought experiment he’d read years earlier. His genius lay not in creating something from nothing, but in seeing connections no one else had.

This is where the comparison to AI breaks down, and where it becomes most interesting. A human can transcend their influences. They can ask questions no one has asked, imagine worlds no one has seen. An AI, for all its sophistication, can only predict. It does not understand the words it writes or the images it generates. It cannot dream up a theory that rewrites the laws of physics. It is a mirror, not a mind.

And yet, the mirror is becoming harder to distinguish from the real thing. When an AI writes a poem that moves us, or composes a symphony that stirs our souls, we are forced to confront an uncomfortable truth: perhaps we, too, are just very advanced mirrors, reflecting the world back in new and unexpected ways.

Chapter 4: The Backlash—Why We Hate the Mirror

If AI is just another link in the chain of Common Derived Knowledge, why does it provoke such anger? The answer lies not in the technology itself, but in the hands that control it.

Artists and musicians were among the first to sound the alarm. They watched as their styles were mimicked, their works used to train models that could churn out endless variations, often without credit or compensation. To them, it felt like theft. Not the kind of theft that happens when someone copies a song note-for-note, but something more insidious: the theft of potential. If a machine could replicate their work in seconds, what was left for them?

The open-source community, too, has pushed back. Many developers contribute to projects like Linux or Python not for profit, but for the love of creation and the belief in shared knowledge. When companies like Microsoft or Google use their code to train proprietary AI models, it feels like a betrayal. The ethos of open source is built on transparency and collaboration; AI, as it exists today, is often neither.

But the anger runs deeper than economics. It touches on something existential. We like to believe that our creativity is uniquely human, that it comes from some ineffable spark within us. When a machine produces something beautiful, it challenges that belief. If a poem written by an algorithm can make us cry, what does that say about the poems we write ourselves?

The real issue, though, is not the AI. It’s the power structures behind it. Right now, a handful of corporations control the most advanced models. They decide what data to use, what biases to encode, who gets to benefit. The fear isn’t just that machines will replace us; it’s that they will replace us on someone else’s terms, with Big Tech controlling the ideas of the future because it owns and builds the large language models.

Epilogue: Finding a Balance

The debate over generative AI and copyright is, at its core, a debate about what it means to create. Are ideas property, or are they part of a shared human heritage? Can a machine ever be truly original, or is it just a very sophisticated parrot?

Perhaps the answer lies in rethinking how we value creativity. Instead of clinging to the myth of the lone genius, we could embrace the reality of Common Derived Knowledge, acknowledging that all creation is collaborative, even when it’s done by a machine. We could demand transparency from AI companies, ensuring that creators are credited and compensated when their work is used. We could treat AI not as a replacement for human artistry, but as a tool to amplify it.

Big Tech wants us to believe that one thing is certain: the mirror is here to stay. The question is whether we will smash it in fear, or learn to see ourselves in its reflections. Then again, certainty may be premature: AI currently costs far more to run than people pay for it, and it could prove to be a bubble like the dot-com bubble that burst in the early 2000s.

The Future in Your Pocket: The Story of One Device to Rule All