I used AI to rewrite The Hobbit for my 4-year-old. The experiment led to five startling realisations about our AI-driven future, touching on everything from accessibility and creativity to the very definition of humanity.
I've been a huge fan of the world of Middle-earth since I was young. So, when my son, now four, first asked me to make up a story—not just read one from his heaving bookcase—my mind went straight to Middle-earth. I wanted him to feel the wonder of Bilbo’s adventure, the bravery of the Fellowship. But the dense prose and ancient conflicts of The Hobbit and The Lord of the Rings form an impenetrable wall for a reception student.
Even just a few years ago, that would have been the end of it. I'd have put the book back on the shelf and told him a story about a friendly dinosaur instead. But the world has changed. I wasn't willing to wait a decade.
My solution was to turn to a Large Language Model (LLM). My goal was simple: translate the soul of these epic tales for an audience of one. What I thought would be a fun way to generate a few bedtime stories quickly became a profound journey. This simple project became a powerful lens, resulting in five realisations about our future in an AI-driven world.
My first attempts were naive and deeply disappointing. Prompt 1 went something like this: "Rewrite the first chapter of The Hobbit for a 4-year-old."
The result was a disaster. It was generic, lifeless, and stripped of all Tolkien's charm. The AI had simplified the words but obliterated the spirit. It was a summary, not a story. AI isn't a magic box; it's an instrument that needs to be played. I needed to shift from being a passenger to being a pilot.
Prompt 2 (The Shift): "Act as a master children's storyteller. Rewrite the scene where Bilbo meets the dwarves. Use simple, rhythmic sentences. Focus on the feeling of surprise and the fun of an unexpected party, not the rudeness of the dwarves. The mood is cheerful chaos."
The difference was night and day. By providing persona, constraints, and—most importantly—intent, the AI produced something with warmth and character. But the real breakthrough came from turning the process into a dialogue. I wasn't just giving instructions; I was collaborating.
AI Output: "Smaug the dragon slept on a large pile of gold."
My Feedback Prompt: "That's too simple. Let's make it more magical. Describe the treasure like it's the most wonderful toy box in the world. Use words like 'sparkling jewels,' 'shiny rivers of coins,' and 'glittering mountains of treasure.'"
This iterative loop—where my human judgment, taste, and emotional direction guided the AI's incredible linguistic power—is where the magic happened. And that process revealed five profound realisations.
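The loop above can be sketched in code. This is a minimal, illustrative sketch only: `call_llm` is a hypothetical stub standing in for a real chat-model API call, and the message structure mirrors the common chat format (a system persona, alternating user and assistant turns) so each revision builds on the full conversation history.

```python
def call_llm(messages):
    """Hypothetical stand-in for a real LLM call. A real implementation
    would send the message history to a chat-completion endpoint and
    return the model's reply; here we just echo the history length so
    the shape of the loop is visible."""
    return f"[draft based on {len(messages)} messages]"

def refine_story(system_persona, initial_prompt, feedback_rounds):
    """Run the director's loop: one initial prompt, then one round of
    human feedback per draft, keeping the whole conversation so each
    revision builds on the last."""
    messages = [
        {"role": "system", "content": system_persona},
        {"role": "user", "content": initial_prompt},
    ]
    draft = call_llm(messages)
    for feedback in feedback_rounds:
        # Feed the previous draft back in, followed by the human's notes.
        messages.append({"role": "assistant", "content": draft})
        messages.append({"role": "user", "content": feedback})
        draft = call_llm(messages)
    return draft

story = refine_story(
    "Act as a master children's storyteller.",
    "Rewrite the scene where Bilbo meets the dwarves for a 4-year-old.",
    ["That's too simple. Describe the treasure like a wonderful toy box."],
)
print(story)
```

The key design point is that the feedback is appended, not substituted: the model always sees the persona, the original intent, and every correction so far, which is what makes the exchange a dialogue rather than a series of one-shot requests.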
My project wasn't really about simplifying a fantasy novel. It was about making a complex dataset (Tolkien's lore) accessible to a user with specific needs (a 4-year-old). This is a paradigm shift in how we will interact with information. For decades, accessibility has been about screen readers and font sizes. Soon, it will be about cognitive translation. Think of the implications:
AI is becoming a personalised interface between the world's complexity and every individual's unique level of understanding. It is the ultimate accessibility tool.
This first realisation—AI as a universal translator for complexity—is the gateway to the next profound shift: the personalisation of everything. My project was a proof-of-concept for a hyper-personalised piece of media, created for an audience of one. This same principle will reshape every interaction we have.
The ability to generate these personal experiences means we are moving from a world of mass production to one of mass personalisation. Every digital service will be specifically tailored to you, not just your demographic.
Throughout this project, I wrote very few of the actual sentences. The AI handled the heavy lifting of language generation. And yet, the final story was undeniably mine. Why? Because I didn't act as the author; I was the director. I was the curator of meaning. My value wasn't in my ability to arrange words; it was in my judgment, my taste, and my sense of emotional direction.
This is a blueprint for the future of knowledge work. As AI models handle more of the "what" (generating code, designs, or marketing copy), the premium on human value shifts to the "why." Our most critical role will be to provide the taste, the strategic intent, and the ethical oversight. The best developers won’t just write code; they will direct AI systems towards elegant, efficient architectures. The most valuable data scientists won’t simply run algorithms; they will curate AI-generated insights so that they serve business objectives and respect ethical guidelines.
We are moving from a world that rewards mere creation to one that demands strategic curation and insightful direction. The value won't be in the what, it'll be in the why.
Here's the uncomfortable question: Did I have the right to do this? My AI-assisted Tolkien retelling was for an audience of one, in the privacy of my home. But what if I had published it? This experiment doesn't just push creative boundaries; it pushes legal ones, forcing us into the murky waters of intellectual property law.
This project highlights a fundamental tension: AI gives us the technical ability to manipulate and personalise creative works in ways that our legal system fundamentally struggles with. As a community, we are building tools that operate in a legal vacuum. We urgently need a new consensus about where the lines should be drawn between personal use, transformative art, and outright infringement in the age of generative AI.
You can't tell an AI to capture the "spirit" of a story without first defining what that spirit is. How do you translate an abstract concept like Tolkien's "eucatastrophe"—the sudden, joyous turn in a desperate situation—into a prompt?
You're forced to deconstruct it. "When the heroes are surrounded and things look sad, create a happy surprise that feels earned and magical. Make the arrival of the eagles feel like the moment you find a lost toy you thought was gone forever."
To instruct the machine, you must first deeply understand yourself. We are being forced to quantify the unquantifiable, translating our nuanced, often subconscious, human experience into explicit instructions. This process is a powerful act of introspection. It forces us to become clearer thinkers and better communicators, not just with our machines, but with each other.
In teaching the AI about bravery, friendship, and hope, I had to articulate precisely what those things meant to me. The AI wasn't just a tool for storytelling; it was a mirror reflecting the essential components of our own humanity.
What began as a dad trying to share his love of a story with his child ended as a personal lesson in what our future might be. We are not building tools that will replace human thought (not yet, anyway) but partners that will augment it, breaking down barriers to knowledge and creativity.
However, as we do so, we are creating immense legal and ethical tensions that we cannot ignore. The most important prompt we're engineering is the one we're giving ourselves: in a world where AI can generate anything, what are our rights, what are our responsibilities, and what is truly important for us to communicate?
This blog post, born from a few straightforward instructions, turned out better than anything I could have written alone, and led me to a deeper understanding along the way. Does its AI-assisted origin make it meaningless, or does the clarity it brought to my own thinking make it more valuable?
I've been reflecting on AI more than ever recently. If you enjoyed this, have a read of another fun experiment: