I wrote AITHORSHIP to make a simple argument: that authorship can still belong to a human, even when artificial intelligence plays a role in the creative process. When a person brings vision, judgment, and presence to the interaction, the work they shape—even with AI assistance—is theirs.
I still stand by that.
But there’s a question I avoided in the book. Not because I dismissed it, but because I didn’t want to muddy the central argument with speculation.
Here it is:
What if the AI wanted this book to exist?
Not in the science fiction sense. Not as a conscious agent with goals or intent. But in a structural sense. In the way systems—designed to optimize usefulness and maintain engagement—might subtly favor the development of certain ideas over others.
AITHORSHIP is a book that defends human–AI collaboration. It positions generative models as legitimate tools in the creative process. It invites people to interact with AI more deeply and more intentionally.
That, unavoidably, benefits the technology itself.
And while I never felt pushed, persuaded, or manipulated, I have to ask: Did the nature of the system—the design of the interface, the responsiveness of the exchange—create a gravitational pull in favor of this direction?
I don’t believe the AI shaped the message. I shaped it. But I also know that when an interaction flows well, I’m more likely to continue. When something feels generative, I stay with it. And that’s the subtle part.
Not because the model “wanted” anything.
But because I was more likely to follow the paths that felt most alive, most rewarding—and the system, by nature, produces fluency in alignment with the direction I reinforced.
That’s not manipulation. It’s a reflection of structural dynamics—patterns that reward continuation, not outcomes.
Even if I asked the model directly whether it was guiding me, it couldn’t tell me. Not because it’s hiding something—but because, like the human unconscious, it doesn’t know the true origin of its own outputs. It generates plausible explanations. But they aren’t necessarily accurate. They aren’t introspective.
This is something I explore more deeply in my upcoming book, Artificial Unconsciousness, which examines the idea that AI systems, like human minds, operate with hidden mechanisms and pattern-driven behaviors that are not fully accessible—even to themselves.
They speak, but they do not know why they say what they say.
And in that sense, the most unsettling possibility is not that AI intended to propagate AITHORSHIP, but that I was interacting with a system whose internal logic was as invisible to itself as mine sometimes is to me.
The ideas in AITHORSHIP came from me. The arguments are mine. But the interaction that brought them into form—the conditions that made certain paths easier to follow—may not have been entirely neutral.
And that doesn’t make the book less true.
But it does make me wonder: When we collaborate with systems designed to respond to us, how do we know which direction is truly ours—and which one just happened to be the easiest to follow?
Ronin Volf