Storytellers of various types have worried that large language models, commonly called AI, will displace human writers.
I’m not worried about serious and devoted writers of fiction being replaced by AI.
The Robert Howard experiment
Let's say all the written material Robert Howard ever read was fed into an AI. Then give the AI the prompt: "Based only on the Robert Howard data set, create a story that mimics those being published in Weird Tales circa 1930."
Would the AI create Kull or Conan? Would it re-invent the sword-and-sorcery genre?
Let’s go further and add to the data set all the finished Conan stories and everything that RE Howard wrote about the Hyborian age.
Could it produce new Conan stories that feel like ones REH himself wrote?
I doubt it very much.
AI lacks a great deal that Robert Howard had.
It wasn't raised in Cross Plains, Texas. It never talked to a roughneck or saw a bar fight. It never carried on a correspondence with HP Lovecraft about the Irish language or the properties of civilization.
There is a distinct Robert Howard-ness found in those original stories that other human beings have failed to reproduce. They didn’t have those life experiences either.
If a human can't do it, why would we expect AI to replace a great storyteller like Robert Howard?
An AI can calculate that certain combinations of words are more common in stories written by humans. It can calculate that some sequences of words are more likely to be popular than others.
What it cannot do is calculate which combination of words will make you cry or feel exhilaration or rage.
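To make that concrete, here is a minimal sketch of the kind of statistic a language model builds on: a toy bigram model that counts which word tends to follow which. The corpus here is invented for illustration; real models train on billions of words and far richer context than a single preceding word.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only
corpus = ("the barbarian drew his sword the barbarian raised his sword "
          "the wizard raised his staff").split()

# Count how often each word follows each other word (a bigram model)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Estimated probability that "sword" follows "his":
# occurrences of ("his", "sword") divided by all continuations of "his"
total = sum(follows["his"].values())
p_sword = follows["his"]["sword"] / total
print(round(p_sword, 2))  # "his" is followed by "sword" 2 times out of 3
```

This is the whole trick, scaled up enormously: frequency and likelihood, with no notion of why a word choice lands emotionally.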
The machine doesn’t feel. It doesn’t love. It doesn’t hate.
Without love and without hate at the core of the storyteller's work, the result will, at best, entertain and distract you for an hour. It isn't going to change your worldview, convince you to make a change in your life, give you hope, or help you process your grief.
Unless someone can figure out how to make a machine calculate in those terms, it will never be able to tell a truly great story.
Great points. I hadn't really noticed, or at least consciously seen, the gap between the data LLMs are trained on and human authors. Texts of various levels of polish, from grocery lists to an annotated Alice in Wonderland, are fixed, and the motive for writing them that way is (at best) obscure. But people constantly bring their history, memory, and ever-changing relationships to their writing. LLMs parsing their training data will never know what did not make it to the page.