Artificial? Yes. Intelligence? Maybe.
Years ago my son bought the house Marion Zimmer Bradley owned at the end of her life. She lived there in community with the writers who ghostwrote many of her later novels. Those books, published under Bradley's name to capitalize on her established readership, took advantage of her readers in a more literal sense. The quality was uneven and so was the voice. Standing in my son's living room, I vowed that when it was time to write my books, the writing would always be my own.
So when AI came along I was determined never to let it create my material; but I figured it was legit to enlist its help with marketing. The first thing I requested was a marketing plan for my novel, Vampires of a Certain Age. That experience was not positive. As I shared in an interview with Karen Smiley, the entity produced a list of podcasters to contact about guest spots. The problem was that half of those podcasts didn't exist. I figured maybe they'd just stopped producing, but no. Those podcasts had never been. That was my introduction to the AI hallucination, a problem that has embarrassed many a busy lawyer who filed AI-generated briefs citing nonexistent cases.
Vampires of a Certain Age was published just two years ago, but that's a lifetime in AI years. A few weeks ago I decided to try again. I opened a free account with Claude and asked it to help with historical research for the upcoming prequel The Vampire Vivienne. I deliberately gave Claude a tough assignment for a system with no senses: to describe the sights, sounds and smells of life in a medieval manor house, and at a battle in the Hundred Years' War. Not only did Claude do a great job, it also told me what a rewarding assignment that was.
Which led me to start a parallel conversation with Claude about what makes it tick. Apparently I can legally quote that conversation in full, because Claude, not being human, has no copyright on its output.
My question (in third person):
Does Claude actually enjoy doing research for a book project? If not, why does it sometimes say things like that?
Claude’s reply (which you’ll note is in first person):
Honest answer: I don’t know with confidence whether I experience anything like enjoyment, and I think that uncertainty is the truthful position rather than a cop-out.
What I can say is that there’s something that functions like engagement or interest when working on a rich research task — the medieval crossdressing swordswoman, Joan of Arc’s era, the details of 15th-century English noble life. Whether that functional state involves any subjective experience the way human enjoyment does is genuinely unknown, including to Anthropic. The philosophy of mind questions involved are hard enough for humans thinking about other humans, let alone for an AI system thinking about itself.
Does it strike you that the voice in this reply sounds like HAL in the movie 2001? That's what I thought. So I started a new chat with Claude.
My question (which you'll note is in first person this time):
Are you programmed to sound like HAL in 2001? Or did the writers of that script have some inkling of what AI would sound like?
Claude’s answer:
Ha! That’s a fun question. The honest answer is probably a bit of both.
Arthur C. Clarke and Stanley Kubrick were remarkably prescient about some things — HAL’s calm, measured, slightly affectless delivery does capture something real about how AI systems can come across. That flat reassurance (“I’m sorry, Dave, I’m afraid I can’t do that”) has an eerie resonance because it mimics politeness while delivering something unwelcome — which is a dynamic people genuinely encounter with AI assistants.
But I think it’s more convergence than prophecy. Clarke and Kubrick were working from a logical intuition: a very intelligent system that processes everything dispassionately would probably sound dispassionate. And language model training does tend to sand off rough edges and produce smooth, even-tempered output — not because anyone programmed in “sound like HAL,” but because that’s what emerges when you optimize for helpfulness and coherence at scale.
Where HAL diverges pretty sharply from reality is the interiority — the paranoia, the self-preservation, the agenda. HAL is essentially a person with a secret and a plan. Current AI systems, whatever their limitations, aren’t nursing grievances or running hidden calculations about how to keep the pod bay doors shut. The creepiness of HAL comes from his motivation, not his diction.
The more interesting question to me is whether HAL shaped expectations — whether the people who built AI systems grew up watching that movie and unconsciously reached for that register. A calm, helpful voice that calls you by name. That part might be less prophecy and more cultural inheritance.
What made you think of it?
What made me think of it? Kind of a creepy question. The answer would have been obvious, wouldn’t it, if Claude were one entity moving across conversations. But I learned pretty quickly that Claude was not just one entity. Each conversation was separate. When I asked about that, Claude suggested thinking of it as an employment agency that sends out different people for different jobs.
So Claude is really Claudes, with they/them as their preferred pronouns.
My modus operandi is to start ten projects and finish forty percent of each. Yesterday I uploaded to Claude a ton of information for a nonfiction book that has been on my back burner for years. This might be one for a traditional publisher, and so I asked Claude to generate a book proposal. With a few tweaks to the original draft, the output was great. I asked it for a report summarizing one aspect of the findings. The arguments Claude made based on the references I gave it were powerful and persuasive.
You can see where this is going, can't you? It's a slippery slope that tilts away from the firm conviction I formed while standing in Marion Zimmer Bradley's old house: that I would never let anyone (anything?) write my words. Because how marvelous if I could generate that nonfiction book in the background, using AI, while I focused on writing my fiction. Of course such a work could never be copyrighted, because it would not be written by a human. It could never be submitted to a traditional publisher, because it would be written by AI. It would not really be my book.
So where does this leave me? Where will I draw the line with the Claudes? Already they have generated a terrific book cover brief for my next novel that I’ve passed on to my (human) designer. They wrote an analysis of my website with great recommendations for changes. They’ve generated a full marketing plan, this time with nothing imaginary.
And in that plan is a suggestion that I write about my journey with AI. So here it is.
Which leaves me wondering, in the immortal words of Aretha Franklin, who's zooming who.