One common objection to generative AI art is that, however impressive the output, the AI has no lived experience behind it. That intrinsic property, the argument goes, is what still separates humans from computers. From Tolstoy’s What is Art?:
Art is a human activity, consisting in this, that one person consciously, by certain external signs, conveys to others feelings he has experienced, and other people are affected by these feelings and live them over in themselves.
People argue that AIs can’t achieve this sort of existence:
Most crucially, AIs have no comprehension of the essence of art: living.
AIs don’t know what it’s like to be a child, to grow up, to fall in love…to age, to lose your parents, to get sick, to face death.
This is what human expression is about. Art and creativity are bound to living.
I think we have this view because we currently have a many-to-one relationship with AI. We all interface with a monolith (e.g. ChatGPT), which unilaterally updates in the background with information fed by a team of VC-backed engineers. We invoke an ephemeral instance of this AI trained on the entirety of the Internet. It spins up, then spins back down, dissolving back into its unknowable slumber.

But is this the only container for AI?
In 2010, Ted Chiang published his novella The Lifecycle of Software Objects, which takes place in a world where software companies can design sentient digital creatures for humans to raise. Basically, what if your Neopets could think and grow older?
As Chiang describes in his story notes:
Science fiction is filled with artificial beings who, like Athena out of the head of Zeus, spring forth fully formed, but I don’t believe consciousness actually works that way. Based on our experience with human minds, it takes at least twenty years of steady effort to produce a useful person, and I see no reason that teaching an artificial being would go any faster.
Even without achieving the (un)holy grail of AI sentience, how might we begin to approximate a lived experience?
What if instead of a cloud-based entity, we each had access to our own individual AI vessel? Just like buying a Pokémon game cartridge, everyone’s AI would start off with the same blank slate. But through your individual choices, your AI would eventually branch off into a wholly unique instance.
Despite the supposed diversity achieved via prompt engineering, the current trends in AI art still kinda have a same-y look about them. They’re all based on the same training data, after all. And one of the primary criticisms of generative AI art is that it’s built on the stolen work of human artists. So what if, instead, you had a personal AI that was fed only on work you created?
You can also enter journal entries, and your AI observes how your output changes alongside the different moods and periods of your life (travel, breakups, and so on). You provide it an environment and opportunities to interact with your friends’ AIs, allowing for an exchange of second-degree inspiration (a la Pokémon trading).
Additionally, you get to decide whether the AI’s core should be updated to the latest version or whether you’re happy with its current progress (a la Pokémon DLC).
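To make the idea a little more concrete, here’s a purely hypothetical sketch of what such a companion’s interface might look like. Nothing here is a real API: the class, method names, and the premise that a model could learn usefully from one person’s small corpus are all assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Entry:
    """One piece of the owner's corpus: an artwork or a journal note."""
    created: date
    kind: str      # "artwork" | "journal"
    content: str

@dataclass
class CompanionAI:
    """Hypothetical personal AI that learns only from its owner's life.

    Every instance starts as the same blank slate, then diverges through
    what its owner feeds it, trades, and chooses to upgrade.
    """
    owner: str
    core_version: str = "1.0"  # the shared "cartridge" everyone starts with
    corpus: list[Entry] = field(default_factory=list)
    borrowed_styles: list[str] = field(default_factory=list)  # second-degree inspiration

    def feed_artwork(self, content: str, created: date) -> None:
        # Only the owner's own work ever enters the training corpus.
        self.corpus.append(Entry(created, "artwork", content))

    def add_journal_entry(self, content: str, created: date) -> None:
        # Mood/period context the AI can correlate with shifts in the work.
        self.corpus.append(Entry(created, "journal", content))

    def trade_with(self, other: "CompanionAI") -> None:
        # Pokémon-style exchange: each AI picks up a trace of the other's
        # owner without ingesting that owner's raw corpus.
        self.borrowed_styles.append(other.owner)
        other.borrowed_styles.append(self.owner)

    def apply_core_update(self, new_version: str, accept: bool) -> None:
        # The owner decides whether the underlying model gets upgraded
        # or stays frozen at its current progress.
        if accept:
            self.core_version = new_version

# Usage: two companions diverge from the same blank slate.
mine = CompanionAI(owner="me")
mine.feed_artwork("charcoal sketch of the harbor", date(2024, 3, 1))
mine.add_journal_entry("first week in a new city", date(2024, 3, 2))

yours = CompanionAI(owner="you")
mine.trade_with(yours)
mine.apply_core_update("2.0", accept=False)  # happy with its current progress
print(mine.core_version, mine.borrowed_styles)  # -> 1.0 ['you']
```

The point of the sketch is the shape of the interface, not the machine learning: the training loop is the owner’s life, and the only global dependency is the optional core update.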
Your AI companion would grow along with you over the years, evolving its style in parallel with your own. And if AI can be decoupled from the time-less, opinion-less broth of today’s datasets, it would at least plant the seeds for diversity in AI aesthetics.
This isn’t meant to settle the debate around what differentiates human creations from AI ones, but it’s an interesting route to explore: How might your relationship with AI change if you truly felt it was yours and yours alone?
(This assumes technology gets to a point where we don’t need terabytes of training data in order to extrapolate useful outputs.)