Becoming attuned: early lessons in the Human–AI partnership.

I recently took a side quest in my "AI Mastery Project" to build a Project Management system in Notion with ChatGPT's help. It taught me so much about promptcraft that I turned the postmortem into a blog post.

However, the experience sparked several insights about another subject that warrants a deeper examination: the Human–AI partnership.  

After wrestling with this subject, I believe I'll still be writing about it long after I've mastered promptcraft. If we assume AI will be a permanent part of our future at work, then we'll have to get skilled at directing, and working with, non-human partners.

This is a reality-ripping change, and we're just scratching the surface of its implications.  

My promptcraft sources have helped me get better, fast, at constructing precise, detailed one-off prompts. I'm getting comfortable there.

But my relationships with ChatGPT and Claude aren't one-offs. I'm in deep. They help me organize, compose, edit, workshop, iterate, and prototype. I suspect one reason the partnership succeeds is that they are, in their way, learning about me.

With them, I never feel like I'm working "alone." As the human half, it's important to tune in to the partnership and be sensitive to what makes it successful. The relationship feels familiar: I kick things off with a brief; I steer with feedback; and I'm learning to adapt my feedback style to suit how my teammate "thinks," which is constantly evolving.

It’s kind of like being a Creative Director. And definitely kind of not. 

Congratulations — I'm the Creative Director to an Alien.

The Notion project made me realize that my prompting style was heavily influenced by my managing style, which leans on human empathy. That's not always the right kind of empathy when you're managing what Ethan Mollick has called an "alien intelligence."

With ChatGPT, I need to "feel" when it's the right time to be precise in my prompts, and the right time to vibe like a human. Often, I won’t know which is which until I get into a situation where the prompts aren’t working.  

The collaboration we built over multiple days felt genuinely different from single-prompt interactions. There was continuity, learning, and adaptation. ChatGPT wasn't just responding to individual requests; I discovered it was building a model of how I work and what I need.  

One example: I kicked off the project with a rambling, stream-of-consciousness prompt heavy on background context. Later, when I asked ChatGPT to optimize the prompt, the new version was about a tenth of its original length. It was efficient.

But here's the kicker: ChatGPT understood my original prompt, rambling and all. I believe it was able to infer intent because it had the context, thanks to several weeks of working together on all parts of my sabbatical vision.  

Another example: after ChatGPT answered a simple prompt with an incomprehensible diagram, I kicked off my feedback with "That's really interesting, but..." Later, I wondered, why did I say that? It was a habit I carried over from managing humans: start tough feedback with a compliment, so as not to crush a fragile ego. ChatGPT definitely doesn't need that.  

Personality vs. Efficiency  

Efficiency is straightforward. But to build a relationship, you have to bring a little humanity.  

ChatGPT's refined prompts are often more efficient than mine, but sometimes they lose the personality that helps define my part of the relationship.  

At one point, I asked ChatGPT to help clean up my cluttered Notion sidebar. My original prompt was long, messy, and full of context about how my brain works. ChatGPT’s optimized version was crisp and efficient, but it stripped out all that human texture. 

At first, I thought of that messiness as clutter. Later, I realized it wasn’t noise; it was signal. Those messy, human details actually helped the AI understand me in profound ways. 

How ChatGPT is "Attuned" to Me  

When we'd finished building the system, I asked ChatGPT what the process had taught it about me and how it might apply those insights to future collaborations. Here's what it said:  

By working closely with Carl across this multi-day build, I started to pick up on his creative rhythm, decision-making style, and what helps him feel confident and excited about a system:  

• Carl thinks in systems — but needs to see them visually. Tables, sidebars, color-coded categories, and dashboard-style layouts unlocked clarity in a way walls of text never could.  

• He wants tools that support reflection, not just execution. Every tracker we designed included a field for takeaways, insights, or learnings. Carl builds to learn.  

• He's balancing multiple roles at once — learner, teacher, builder, writer — and the system needed to flex between all of them.  

• He cares about beauty and language. Naming matters to Carl. Page titles, tags, even emojis had to feel intentional and evoke a bit of delight.  

• He learns best through guided doing. Carl didn't want a prefab workspace — he wanted me to walk beside him while he made decisions, clicked the buttons, and figured it out as we went.  

The biggest lesson? Great AI collaboration isn't about generating answers. It's about attunement. It's about building trust through responsiveness, clarity, and flow—and co-creating something that doesn't just hold tasks, but reflects the person using it.  

Surprisingly, I felt these insights were pretty spot-on. ChatGPT picked up things about me and my style that humans often miss. I'm looking forward to seeing how this carries through to future projects.  

I also want to zero in on one word it chose: attunement. This struck a chord because when it comes to the Human–AI partnership, attunement is a better word than my previous favorite, empathy. We're talking about a partner that doesn't experience the world the way we do, whether that's emotions, the thinking process, or even its "state of being." The inner workings of LLMs are largely a mystery even to the people who build them — but there's clearly a ghost in the machine.  

The Broader Implications: Partnership, Not Just Prompting  

The Human–AI relationship is only going to become more symbiotic as we grow into true collaborators and partners.

I joined a recent Zoom on the state of AI filmmaking, and one presenter said, "Today is as primitive as the technology will ever be. This is a one-way ride."  

AI is already one of my primary partners. (Especially now that I'm self-employed.) As with any successful partnership, you must understand each other's communication and working styles. Strengths, superpowers, shortcomings, quirks: all of it.

As a human, you need to understand promptcraft, but you also need to become attuned to each AI. That attunement can only help the partnership.  

It also goes both ways.  

The Human–AI partnership will grow by leaps and bounds as AIs get more attuned to their humans – understanding their voice, likes and dislikes, sensibility, quirks, creativity, everything. I imagine that, before long, an AI partner who knows me inside and out will better understand the creative intent behind a prompt, so it can optimize its output specifically for me.

This will be a huge help to creatives who use AI. As any creative perfectionist knows, it can be a challenge to prompt AI to deliver the output you have in mind. Your prompts may be precise, but if you're working in Midjourney or Runway, the output is still like Forrest Gump's box of chocolates: you never know what you're gonna get.  

If AIs get more attuned to us, and their output does get closer to what the human is hoping for, that may finally lead to the productivity gains every company is currently promising, but few are seeing.   

As I continue to explore, I'm going to stay closely attuned to attunement. Understanding how to collaborate effectively with AI—how to build trust, establish working rhythms, and leverage each other's strengths—feels like it will become far more important than any specific prompting technique. I don’t have a road map for that yet. Like so much in the world of AI, it feels like we’re all developing it as we go along.  

But if I could offer one piece of advice, I feel pretty good about this: don’t treat AI like a machine, and don’t treat each prompt as a one-off transaction. It’s an alien intelligence, and it’s evolving. Remember: today is as primitive as it will ever be. And the more you attune to how it ticks — and allow it to attune to you — the more you'll grow together. 
