…Your value lies in your ability to leverage those work primitives, [the things that AI can produce], in ways that reflect your taste, in ways that reflect your expertise.
This can sometimes feel like it’s a nod to gray hairs, right? It’s a nod to the ability to have deep expertise. But I have seen people who were just getting started in their careers who have taste. They have found a particular corner that they are passionate about, built experience in it, developed taste in that area, and they insist on it. That is a recipe for rapid career growth whether you’re starting out or whether you’re experienced, because AI is not good at it. AI is really, really not good at it.
Part of why it’s not good at it is that AI isn’t embodied. AI doesn’t develop the kind of deep, nuanced, metabolized expertise that we get from living in society for decades before we become adults and go to work. There’s no substitute for that. And so that is part of how we shape taste as creatures and that is part of what we bring that AI has trouble mimicking.
So if you’ve ever looked at the work product that AI gives you and thought, “It feels hollow. It feels artificial. I can’t put my finger on it, but it just doesn’t feel right,” that’s your taste speaking.
Nate Jones is one of my favorite people talking about AI. I included another of his videos in Overview of the GenAI Landscape.
For over a year and a half, I’ve remained optimistic about LLMs as “thinking partners”, while simultaneously holding the paradoxical opinion that they will not replace thinking. They can be a powerful “tool for thought”, like a good note-taking system or a great human collaborator, but they will rarely, if ever, get the output of such thinking across the finish line. This has been reflected in my experience. I can never get a model to write like I write, to sound how I want to sound, despite repeated attempts with different prompting techniques.
Even my attempts to get an LLM to find the through-lines within my body of work (largely, if not best, represented by this blog) fail in annoying ways. The LLM will typically say things like, “The themes are ‘Finding meaning through tragedy, everyday life, art, and the work of building software systems.’” (I’ve even improved that sentence by rewriting it from memory.) Sure, but where are the actual through-lines? What thinkers have subtly (or not-so-subtly) influenced this corpus? What deeper themes hide under the surface? What philosophical or literary styles are borrowed? LLMs, despite having the entirety of those potential sources at their fingertips, can’t find a single one unless I prompt them with a specific example.
Oddly, just yesterday I had my first experience where ChatGPT produced some writing that I would consider publishing. It was after a longer session, during which I first asked a question because I was trying to recall a specific idea from psychoanalytic theories that I knew would be in the model. It gave me a couple of options, and I focused on the one I was thinking of, first asking for more detail, then asking it to hypothetically apply it to the situation in which I was attempting to reference it. Then I remembered a piece of writing that I keep in my “canon”, uploaded it, and prompted:
I think the attached piece [redacted] might have some overlap with the [redacted] ideas we have been discussing. Briefly summarize how the ideas in the piece align, then briefly summarize a contrary view (they don’t align and are talking about completely different things). Then rewrite the piece, making it much shorter but using some [redacted] concepts and language to convey the same meaning.
It did well with that prompt, so I continued down the path that these things were indeed connected. It offered to write “a succinct manifesto/quip”. My genuine thought was that trying to boil it down to some aphorisms might expose either the validity of the connection I was making, or its stupidity. After some additional back-and-forth, during which I switched modes to become more of an editor of the model’s outputs, it landed on a nice little blog-sized thing that really worked for my original purpose: it drew on concepts from my original query and the context-specific piece I uploaded, and produced a new spin on it all in language I could almost present as my own without shame. It may be coming to this blog; I just have to figure out how to surround it with appropriate stipulations.