After AI
Taste, Timing, and the New Human Advantage
I recently saw a large Samoyed in Central Park strolling with its owner, less than a day after one of New York City’s biggest blizzards. These dogs are impossible to miss: their thick, white double coats envelop them from head to paw, giving them the look of miniature polar bears. Named for the Samoyedic peoples of Siberia, they were bred for brutal cold—hunters who moved in packs and, at times, pulled sleds across frozen ground.
This Samoyed, however, wore tiny black booties, struggling to stay balanced on the icy walkway. Its head darted from left to right in a panic. It looked utterly confused, examining the bleached tundra as if it were in a foreign land. You would think something ancestral would stir in weather like this—some genetic switch would flip, reminding the great Samoyed that it wasn’t meant to be pampered on Fifth Avenue, but covered in snow in northern Russia, racing across the tundra bootyless.
From my vantage point, whatever genetic inheritance this breed once carried seemed lost, left behind so the dog could perform its domestic duties.
So much of the discussion around AI imagines humans becoming much like this Samoyed: lost and rudderless, still possessing all the intrinsic traits needed to excel in a particular environment. Yet, thanks to endless cognitive offloading, short-form content exposure, and a general lack of expertise-building, we’ve become strangers in the very environment we were built for. Tugged along on a leash by our AI overlords.
It can be especially depressing listening to software engineers hidden behind the walls of Big Tech companies. On the most recent Ezra Klein podcast, Anthropic co-founder Jack Clark suggested that Anthropic’s goal was to have AI agents not just take over white-collar jobs, but to have mid-level managers oversee those AI agents until they could “close the loop” on all human-centric tasks inside the company, until we have AI agents watching AI agents:
“So you’re using AI systems you don’t totally understand, monitoring AI systems you don’t totally understand. How confident are you that you’re going to understand that?” Ezra asked. “This is happening,” Jack responded. “We take this really, really seriously.”
Well, ok then.
Naturally, we are left wondering: what will the humans be doing once AI agents “take over”? These agents are supposedly ready to achieve specific goals by perceiving their environment and using tools without constant human supervision. The optimistic take is that we all level up. With AI agents, we won’t be writers; we’ll be editors managing a team of AI writers. We won’t be coders; we’ll be “vibe coding,” managing a fleet of AI coding agents.
The Samoyed has made itself useful in places like New York City by leaning into its other well-known trait: a friendly, affable disposition. Samoyeds make terrible guard dogs but remain playful into old age. They are strong but also cute, and get along with other breeds. New dispositions command a premium.
I see the same happening here with AI. In a world where everyone is a mid-tier software engineer, a half-decent writer, a competent production manager, a junior-level lawyer, and an ordinary creative designer, having the right taste and understanding of one’s audience at the right moment will mean everything. Even now, when I write prompts to ChatGPT or Codex, I find myself thinking differently, trying to articulate exactly what the project needs, the exact features of the context, and the precise goal. I don’t focus on the process, but on how the final product should look and what it should produce.
As AI systems increasingly execute tasks, human value will shift from production to orientation: the capacity to judge, shape, and time meaning within lived contexts. My guess is that a certain set of skills will matter the most: recognizing taste and judgement, understanding the value of craftsmanship, and targeting the Kairotic moment.
Recognizing Taste and Judgement
In a world where information and workflows can be offloaded, the ability to judge appropriate “fit” will matter: the ability to exercise aesthetic judgement, to see how certain creative choices meet the moment.
What AI-generated slide deck will best reach my audience? What melody, created by AI, will work best in this song?
How taste develops has long been a matter of debate. Immanuel Kant argued that the beauty of an object is perceived universally, regardless of cultural or social differences. This isn’t to say we all perform equally when it comes to aesthetic judgement, but that we all share the capacity to develop it, given our common human faculties. That might be true, but it underplays the social and cultural construction of aesthetic judgement. According to Pierre Bourdieu, aesthetic tastes form as an expression of social distinction, reflecting one’s class and education. Taste is shaped by training, comparison, socialization, and cognitive mechanisms—not just a mysterious faculty. You’ll judge “quality” differently depending on your hometown or cultural status, for example. Moreover, those with enough economic and cultural capital can cultivate the heightened taste Kant believed we all access automatically.
Today, we certainly aren’t alone with our “faculties,” nor are we shaped by culture and society organically. Our aesthetic judgements are nudged by a web of algorithmic architecture that we navigate daily. Philosophers now wonder whether developing taste is an individual act, carefully cultivated through autonomous choices, or something that emerges subconsciously thanks to these manipulative platforms.
Even with all this outside influence, recent studies suggest that humans still judge art by sensing its level of “human expression,” and are disappointed when art reads as uncanny or merely decorative. Perhaps this is why so many are beginning to view AI art with skepticism.
If we agree with Kant that everyone has the capacity for aesthetic judgement, then AI’s flattening of production actually heightens the importance of human taste, spotlighting our ability to argue for why this configuration of form, context, and meaning is worth our shared attention. These faculties are more important than ever in a world of AI slop. Everyone may be set to become a “creator,” thanks to AI software like Codex and Claude Code, but AI is essentially trained on, and therefore amplifies, the existing digital content from institutions that sustain and legitimize a hierarchy of culture and “taste.” Pushing the boundaries of art will require more than LLMs and algorithms; it will require humans looking to drive art forward.
Plus, seeing aesthetic judgement as something not static but socially and culturally constructed should tell us one very important thing:
It can’t be offloaded to AI.
It’s emergent from our embodied experiences as humans. A bot could cultivate “taste” through a series of zeros and ones, but that’s very different from living within an actual community, interacting with other people. Nor can AI agents risk social position through aesthetic claims, revealing something of themselves in the process. We take those risks, shaping the boundaries of culture as we do, and risk should remain the gold standard of artistic value. Those who offload their creative judgement to AI may transact in volume, but will trade in a weaker currency.
We’ll all get better at assessing taste once more AI-generated content arrives, but a few will orient themselves around that skill, developing the confidence to spot AI’s inauthenticity the way we instinctively size up wall art from HomeGoods. We know it’s efficient, but we also know there’s a lot better stuff out there.
Craftsmanship
The medieval workshop was well respected in its community for its attention to craftsmanship and quality. Craftsmen gathered there, honed their skills, and became pillars of the community. Richard Sennett argues that this raised the craftsman’s position within the social order, cultivating quality standards and ethical codes within the workshop and drawing careful attention to habitual forms of coordination and creativity. The craftsman cultivated, and was respected for, a refined, embodied sense of what fits here and now, honed through many encounters with slightly different constraints. The blacksmith and the armorer plied different trades, but both worked through a particular process, one that reinforced the idea that the hand influences the eye. That vision was forged through and with the body.
By the time the Renaissance came, forcing a separation between art and craft, we began to see a breakdown in the mentor/apprentice relationship within the trades. The workshop became inferior, a place reserved for a lower class of society. Process still matters, but how much, and for how long?
When AI creations become ubiquitous, I imagine a return to the “workshop,” and a need to think like a craftsman. Aesthetic taste will be fine-tuned toward content that signals quality over quantity. We’ll want more of our things to have gone through some embodied process before they reach their final form. You’ll want not just to develop refined aesthetic judgement, but also to attend more carefully to how repetitive, embodied practices shape the creative process—and why that shaping matters.
That isn’t to say the use of AI will disappear, but it will have to be managed through a process that retains the value of coordination, repetition, and imagination. The craftsman knows how these three things flow together. Copying and pasting the final AI “product” won’t cut it. Sennett argues that repetitious performances are not boring when we create like craftsmen; instead, “we are alert rather than bored because we have developed the skill of anticipation.”
AI can repeat and perhaps learn, but not exactly like a craftsman, whose technical understanding of things develops with the help of human imagination. AI doesn’t have hands or the ability to touch. Repetitive performances improve our ability to anticipate, to take notice. Repetition in human craft arguably transforms the self, and I doubt that goes away.
The Kairotic Moment
When it comes to interpersonal communication, timing will matter most of all. We’ll have the information, but we’ll need to assess the timing. We can think of time as quantitative, what the Greeks called “chronos,” or as qualitative, timing communication for the right moment, which the Greeks called “kairos.”
It’s an ancient term, but I think it’s worth returning to in the AI agent era: people are going to find their place in AI workflows by getting to the “right moment.” That means gauging the temperature, knowing one’s audience, and responding accurately given the context, something AI agents won’t fully grasp since they’re bound to their digital environments. Humans can, in an instant, adjust their approach by asking important questions:
What are my audience’s needs and wants in the moment? What relevant message can I deliver to them on this occasion? What tone, structure, and volume offer the best delivery?
Pairing non-verbal with verbal communication will be key. Can you make someone react and sense a particular emotion, at a particular moment, through your verbal and non-verbal skills? If AI agents are doing most of the work, who is going to upsell that work in the real world to other people?
There was a famous series of studies on non-verbal communication by Albert Mehrabian in the late 1960s, whose results became known as the “7%-38%-55%” rule. The findings were as follows:
7 percent of spoken communication is comprehended from the words that are actually spoken (content).
38 percent is comprehended in the way in which the words are spoken (tone of voice).
55 percent is comprehended from facial expressions and body language (how you look).
If we keep offloading our work, focusing on screens, and socializing less, one could see how those who are empowered by the “7-38-55” rule will become modern-day magicians, wielding communication like some sort of superpower. AI will not disrupt the fact that we must connect with each other, and some do it better than others.
Developing taste, recognizing the value of process (much like the craftsman), and finding the right communicative moment. These areas feel timeless to me. They seem like intrinsic tools, ones that can help us feel at home in our “element.” We all have different elements, of course, that place where comfort meets skill. Where we home in on a process, and everything suddenly feels natural. It’s a shame to see those in their element no longer able to perform. But we don’t have to choose that route just yet; we aren’t the Samoyed lost among the snow drifts.


