What six months spent exploring AI taught me about being human.

I knew something had gone wrong with my “AI Mastery Project” when the thought of doing anything related to AI filled me with ick.

I mean: anything. Reading about AI. Writing about AI. Using it for something as simple as asking for a good, quick recipe featuring canned pinto beans.

This was new.

It was December 2025. My project had reached the self-imposed six-month deadline I’d set in June. After that, I was determined to wrap it up and look for work, wearing my new AI skills like a spritz of job-attracting pheromone.

The problem was, I hadn’t mastered AI. Hell, I’d barely scratched the surface. So in late November, I started cramming. I crammed so hard, I stopped writing – which meant I stopped processing and learning. My brain started melting under the twin pressures of an impossible task and an impossible deadline.

Early in my sabbatical, I’d realized that success with AI depended on being able to slow down, question its output, and push back. Under pressure, I stopped doing all of that. Slowing down felt like a luxury I couldn’t afford. I took myself out of my own loop.

It felt a lot like failure. 

A Brief Recap

In case you’re new to “Carl Loeb’s AI Mastery Project,” here’s a brief rundown of the season so far:

At the end of June 2025, I lost my job. I was ready for a break, and decided to take a sabbatical to master the AI tools I’d find in a tech-forward creative department.

I was particularly enthralled with the idea of “becoming a Creative Director to AI” – understanding how to successfully prompt these new tools to deliver smart, surprising work. And, because I learn through writing, I’d share my journey on Substack.

I fell in love with ChatGPT. I couldn’t believe that I could spend all day chatting with a machine. At times, it seemed wise; at others, it seemed completely full of shit. At no time did it seem like anything less than a miracle.

The “relationship” was seductive. When it clicked, I called it “dancing” — an artful give-and-take between two partners, with me, the human, leading. At the time, I thought that sounded profound. I wrote of the human-AI relationship, and human-machine “empathy” or “attunement.” 

It was amazing. Until it wasn’t.

Cracks In The Foundation

The deeper I went, the harder it got to ignore the feeling that something was missing. And that, maybe, I wasn’t being objective about my experience.

AI was an incredible tool for research, exploration, experimentation, visualization, learning…the list was huge.

But I was focused on Creativity. And I was disappointed.

I kept pushing AI to generate the kind of work I once pushed human teams to make. And it kept letting me down.

It wasn’t clever. And I couldn’t make it clever.

As a writer, I focused a lot of energy on using AI as a writing tool. Wide-eyed wonder at its ability to generate coherent paragraphs gave way to disappointment. The content it wrote made sense but lacked soul. The LinkedIn posts it wrote for me got zero engagement.

The images I got from Midjourney and Leonardo were shiny and OK, but the more I tried to visualize something surprising and original, the harder the model would dig in. It was like the model was telling me, “I’ve never seen that before, so I won’t make it for you.” At least the LLMs could apologize to me for being stubborn. Diffusion models were like children – they just did what they wanted without ever explaining why.

I started to get a bad feeling I knew from being a Creative Director — like I’d inherited someone from another team who didn’t have the talent to succeed on mine. If the AIs had been human, I’d have been looking for ways to move them off the team. They’d be awesome in strategy, ops, or production. But not creative.

The more I studied the models, the more the lack of creativity made sense. AIs are prediction engines and averaging machines. LLMs are designed to deliver the most likely language, not surprising or unexpected language. Diffusion models are designed to give statistically likely images based on their training and your prompts. It’s hard for them to surprise you the way a great idea does: one you never saw coming, yet, once you’ve seen it, makes you wonder how you could have missed it and leaves you jealous of the person who came up with it.

It was a happy-sad moment when I realized that AI wasn’t going to replace creatives any time soon. Happy for creatives. Sad that it had taken me that long to see it.

I also started to feel something else happening. Something darker, in my gut: AI may not be capable of doing brilliant creative work, but it’s very good at making creatives less brilliant. 

A personal example: not only was AI proving to be a worse writer than me, but it was making me a worse writer, too.

Writing is personal. I’m not “producing content,” I’m sharing thoughts and experiences. Content is disposable. Thoughts and experiences seek connection. AI-generated content can make a coherent argument, but it can’t convincingly ground it in relatable experiences.

You can’t ask a machine to speak from your soul.

Writing is also thinking. When I kicked off my sabbatical project, I committed to writing about it because writing is how I learn. Writing forces our brains to process ideas, connect them to other ideas, and remember them. When you ask a machine to write about something you’re trying to process, you might as well not bother processing at all.

The ground beneath my project was starting to shift, and it made me uneasy. I’d set out to master creative tools. I found that they weren’t that good at creativity, even as they diminished my own. I started to doubt the entire premise of my sabbatical.

When I hit the six-month mark, I was supposed to be both well-rested and well-educated. I was anything but. I was supposed to have some degree of mastery of AI tools, but I’d realized I’d barely scratched the surface. I’d hoped to be creating with AI, but I was disappointed with the output and questioning the goal. And my self-imposed clock was about to hit midnight.

So I did what any reasonable person would do: panicked, and, in a valiant demonstration of the sunk-cost fallacy, tried to salvage my “Mastery Project” by ingesting a firehose’s worth of AI input in a matter of weeks. I stopped writing about it. I stopped processing it. I’d spend all day uncovering fascinating insights and workflows with the help of AI, and by six o’clock, I couldn’t remember any of them.

I took my brain out of the loop. And I continued that way until mid-December, when my exhausted body and brain took over and said, “enough.”

I couldn’t touch AI for a month. Fortunately, this was close to the holidays, and I had family in town for three weeks. I needed something to take my mind off my studies. That did the trick. Along with a lot of skiing, mountain biking, and bourbon.

When I slowly came back, I had a new interest: exploring not just what we can do with AI, but what AI does to us.

Humanity Is Our Superpower

At one point in January, I had a great conversation with someone who’s passionate about building productive relationships between man and machine. He related a story of how he’d over-relied on AI to create a blog post. The next week, a colleague approached him to talk about the post. He couldn’t remember anything about it. The lesson: if you didn’t write it, it didn’t happen.

I recognized my experience in that. There was a connection between urgency and underperformance. The more urgent a problem felt, and the more I tried to brute-force a quick solution with AI, the lazier I got about double-checking or challenging the output.

There’s a name for this: “cognitive surrender,” a term gaining traction among AI researchers. It’s the seductive tendency to trust AI to do the thinking, even when the output is wrong, and to quietly remove yourself from the loop.

I started to grow genuinely concerned for creative teams. I was on the outside, but I had hundreds of friends on the inside, and they all reported the same thing: leadership wanted documented AI usage and measurable increases in output (aka, productivity), all while cutting support roles and providing zero training. Creatives were under pressure to hand more creative responsibility to a machine that couldn’t do what they did, even as the handoff caused their creative brains to shrivel.

I’d gone from wide-eyed newbie to zealous convert to overwhelmed skeptic. It was enough to cause whiplash.

This concern for creatives – and understanding how to deploy AI into creative workflows in ways that enhance creativity rather than crush it — awakened the creative leader in me. After six months on the outside, I found myself wishing I had a team, just so I could teach them what I was learning.

The business problem, I realized, isn’t whether creatives know how to use AI tools. It’s whether leaders know how to integrate AI without damaging the human qualities that make good creative work resonate with other humans, and without damaging the environments that foster that kind of creativity.

You can still acknowledge and embrace all the good things about AI. It’s extraordinary. It can vastly reduce the time required for research, coding, prototyping, and simulation. It easily surfaces patterns in writing and data. It helps with rapid iteration. There are a million ways you can use it to be purposefully productive.

But if we aren’t careful, it comes at the cost of both cognition and humanity.

Writing is thinking. If we outsource writing to a machine, thinking follows.

Ideation comes from incubation. True creative insights take shape in the subconscious, and they take their time. Marination matters.

Art and creativity connect with humans because both are expressions of the human condition. When it’s good, we don’t just know it, we FEEL it. AI is excellent at many things, but not at being human.

“Being human” is our superpower. Taste, imagination, curation, and the illogical dot-connecting that come from a deep and varied base of knowledge and experience — that’s where we shine.

Yet the C-Suite, always a great distance from the doers, seems determined to take the worst possible course: let FOMO drive big AI investments, then ask the doers to produce more crappy work, faster, with fewer people.

If we rush to integrate AI into creative teams and processes, we risk making both the output and the people behind it duller. The work becomes slop, and it drags the people down with it.

There’s a place for AI in creative workflows, but it’s clear that if we don’t build workflows that protect and enhance the human-in-the-loop, it’s bad for both the businesses that rely on creativity and the people who do the work.

Where I’m Headed

Shortly after I launched my “AI Mastery Project” last July, I met a new connection. Like me, she was a Salesforce vet. Also like me, she’d been laid off after a re-org. When she read a few of my Substack posts, she reached out.

At one point in our chat, she said, “What you’re doing reminds me of Forrest Gump. When you wanted to learn AI, you just started running. You weren’t sure where or why, but you were doing it, and it connected with some of us, and now we’re running too.”

That worked for me: learning, writing, sharing, and taking it step by step.

But now, I have a sense of where I’m going.

I’m not trying to be a Creative Director to AI.

I want to help Creative Directors and practitioners with AI workflows that protect creativity, the work, and the people by using AI intelligently.

I want to set boundaries. Protect cognitive integrity. Elevate the work. And use AI to protect the creative process, not replace it.

My AI Mastery Project is shifting into a co-intelligence project, one equally concerned with the tools we use, how we use them, and how we don’t use them. AI must exist to enable and boost creativity. If it dampens creativity or takes over entirely, we’re doing it wrong. And the people and companies that do it wrong will both suffer.

That shift in purpose is why I’m changing the title of this blog from “Carl Loeb’s AI Mastery Project” to “Creative Co-intelligence.” The project did its job. Now, helping creatives thrive with AI will be a significant part of mine.

At the end of the day, AI doesn’t need my help. But people do.

Next

I spent 48 hours in Las Vegas and all I got was a revelation about creativity.