We're firing 76 beefs, 34 chickens, okay?
In their excellent book “Vibe Coding”, Gene Kim and Steve Yegge use a running analogy of becoming the head chef of a kitchen full of AI Agents.
You’re responsible for planning the menu while also making sure the Agents don’t set the kitchen on fire. Somehow, real kitchens can operate with multiple people (who aren’t the head chef) who all contribute to the same dish that goes out the door to the customer. The head chef is also, seemingly paradoxically, ultimately responsible for every dish that the customer eats.
How do kitchens scale while maintaining quality beyond the head chef preparing everything themselves? Systems.
The most important page in the book comes towards the end of the first section, where they describe two different head chefs. One adopts the French “Kitchen Brigade” system, with defined roles for each chef, and patiently works out the bugs in the process. The other captures a moment where their chefs prepare a dish that goes horribly wrong, complains loudly about it on social media, and gives up on the new, frustrating practice.
The chef who sticks with the kitchen brigade system becomes much more successful and can scale their restaurant to Michelin-star quality. The chef who doesn’t gets left behind and can’t scale to a workload beyond themselves.
The Bear
I also just so happened to watch the second season of the TV show, “The Bear” during the time I read the Vibe Coding book.
It was an appropriate pairing, to say the least.
An emotional arc of the show involves a younger brother finding a note from his deceased older brother that says, “I love you dude. Let it rip.”
As an older brother, this hits me hard. Here’s what it taught me about Enterprise B2B SaaS Sales. Just kidding.
Learning Culture
I think we all need this note from an older brother or mentor figure when we’re about to embark on something new. Something a little dangerous. Something a little scary.
Trying anything new feels like a threat to the way of life you’re used to, a way of life that depends on how you and others view yourself. You’re afraid to fail, and that’s okay, because fear of failure is evolutionarily adaptive. It is how we are hard-wired, but we have to change how the software reacts.
You are allowed to fail, as long as you learn from it.
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.
We need to create a psychologically safe space to experiment with any aspect of our lives, but especially with work, since it takes up such a large portion of our days and is the most competitive aspect of our lives. People like to throw around the word “innovation”, but don’t want to invest in the failure that is required to get there.
We need a “sandbox environment” where we feel safe to play around with new concepts and tools before any real consequences are felt. But eventually, real consequences will inevitably happen.
Performance Culture
As big of a Vibe Coding advocate as I am, I still felt a little bit of trepidation this week when I started to make real Vibe-Coded PRs against real repositories that really go out to production. It’s all fun and games until you’re about to change prod with code that you didn’t directly write.
I do “deploy to prod” with the app that I’m building on livestream, Roxas, but it currently has zero users and no SLA.
As a Software Engineer, there is something universally unsettling about delegating the act of writing code to someone or something outside of yourself. It’s almost like giving up the title of “Engineer” and moving into a management role.
I argue that Software Engineering Managers have been Vibe Coding this whole time. They define requirements around an issue: the inputs, outputs, and transformation logic in between. The manager gives guidance, elucidates constraints, enforces enterprise policies, encourages industry best practices, and gets engineers unstuck and unblocked by dependencies. That is a lot of communication and no code. They do not write a single line of code themselves, yet they are responsible for the outputs of an entire team of engineers. They are responsible for every dish the customer eats.
Naysayers counter that you can hold a human accountable if they screw up. I would say that you do not, if you claim to have a “blameless” engineering culture. (What’s the opposite of blameless? Blameful? Imagine a hiring manager saying to you on your first day, “we like to blame each other here”. Would anyone want to work there?)

Blameless postmortems and RCAs are foundational to modern high-performing engineering teams. They assume a certain level of mutual respect and focus on the problems with the company’s systems, rather than the company’s people. Engineers finding the root cause of issues ask the “Five Whys” to try to understand all of the contributing factors leading up to a failure, whether it was an erroneous change or, sometimes, the lack of one. They also look into what additional safeguards their systems need to prevent such a failure in the future.
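As a sketch, here is what a Five Whys chain might look like for a hypothetical incident. The outage and every answer below are invented for illustration; the point is that the final “why” lands on a gap in the system, not on a person.

```python
# A hypothetical "Five Whys" chain for an invented outage, modeled as
# (question, answer) pairs. The deepest answer is the system fix.
five_whys = [
    ("Why did checkout fail?", "The payments service returned 500s."),
    ("Why did it return 500s?", "A config change pointed it at the wrong queue."),
    ("Why was the config wrong?", "It was edited by hand in production."),
    ("Why was it edited by hand?", "There is no reviewed config pipeline."),
    ("Why is there no pipeline?", "It was never prioritized; that is the system fix."),
]

# The actionable outcome is the deepest answer, not a name to blame.
root_cause = five_whys[-1][1]
```

Stopping at the first or second “why” would have ended with “someone edited a config”, which blames a person; asking all five surfaces the missing safeguard.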
The focus is and should always be on improving systems. That can also include improving training of people, because how you onboard people and train them on your systems, is a system too.
Ultimately, if someone is beyond training and systems safeguards, and has had second, third, fourth, and fifth chances to improve, then yes, you have to let go of people who cannot do the job and are costing you significantly more money than they are making you.
I say “significantly” because we need to separate out interns and junior engineers, who are also a short-term net negative for your company. However, they are the lifeblood of the industry. How do we make senior engineers? We have to invest in junior engineers. That investment takes years to start showing a positive ROI.
Naysayers are throwing out AI Agent software engineers when they don’t produce an ROI in weeks to months. Are we forgetting how patient we are with ramping up junior engineers?
AI Agent Learning Culture
Let’s step back from management mumbo-jumbo because I know that most engineers have no interest in managing other people and just want to write their code.
When you’re ramping up on a new programming language, are you using that new language right away to solve Leetcode problems?
That may be a great way to practice learning a new language, but you wouldn’t rely on it to get you your next job. You would default to your best, most practiced programming language. That language may be Java, but the job’s expected day to day language would be Kotlin. You pass the interview and start the job. How are you supposed to learn a new language without failing, at least a little bit at first?
The same thing is happening with Vibe Coding with AI Agents. People (read: the entire industry) expect instant positive ROI on this new “programming language” of English (or really any other written language).
Yet they forget the last time they tried to learn a new programming language. Were they instantly more productive with it because it was “newer”? One of the most widely adopted, relatively new (13 years old) systems programming languages, Rust, is notoriously difficult to ramp up on, and only recently has AWS begun preferring it by default for their performance-intensive projects (which, at their scale, is likely all of them).
This shows that when something works, it is worthwhile for people to invest in learning how to use it. Even though it may take a while for it to be proven out, those who take the risk are rewarded with an edge in the marketplace.
AI-skeptical engineers are not giving the same chance to Vibe Coding with AI Agents. AI Agents can do the job, though, and they are getting better at it every day.
AI Agent Performance Culture
If AI Agents right now are “only good on small, new, toy projects or problems”, then what are large, complex systems but many, many small projects and solutions to problems, glued together?
How does a human fit all of that context in their head at one time? Hint: they don’t.
We break problems down one at a time, each into the simplest possible problem definition such that we can start to write code against it.
How do we write code? One line, one character at a time. A nondeterministic system trained on a very small portion of human-written code.
How does an AI write code? One line, one token at a time. A nondeterministic system trained on nearly all publicly-available human-written code.
Just like junior engineers or a new programming language, it will take some time to see widespread objectively positive ROI, but it’s not appropriate to call it a “complete waste of time” yet. You’d be calling all of your junior engineers that right now, given the objectively short timeframe that we’ve had widespread availability of AI coding agents (~6 months).
“But Mike, I’m afraid to use it at my job because I don’t want to look dumb / break prod / get laid off / and want to get a promotion”.
You’re going to look dumb, at least at first. Your reviewers are going to find vibe coded slop that you missed. They would have found your regular slop anyways on PRs that you previously didn’t proofread yourself.
This isn’t to advocate to completely rely on your reviewers, but encouraging the opposite. Do as much due diligence as you can on your AI-generated PR code. Tell your Agent to run tests, explain the tests, and explain how the code it wrote works with the existing code. We have the resources of Agents that never tire of you asking questions and challenging them. Now we need to be resourceful and do our homework around the code it generates. I’m writing this as much of a reminder for me as it is a message for you.
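Some of that homework can even be scripted. Here is a minimal sketch of a pre-review gate you might run on an AI-generated branch before opening the PR; the specific commands (`pytest`, `ruff`) are assumptions, so substitute your project’s real test and lint commands.

```python
import subprocess

# A minimal pre-review gate for AI-generated changes. The commands
# below are assumptions -- swap in your project's real test and lint
# commands before relying on this.
CHECKS = [
    ["pytest", "-q"],        # run the tests the Agent claims are passing
    ["ruff", "check", "."],  # lint for the obvious slop
]

def run_checks(checks):
    """Run each check command; return the commands that failed."""
    failures = []
    for cmd in checks:
        try:
            result = subprocess.run(cmd, capture_output=True)
        except FileNotFoundError:
            failures.append(" ".join(cmd) + " (not installed)")
            continue
        if result.returncode != 0:
            failures.append(" ".join(cmd))
    return failures
```

A gate like this catches the mechanical slop; it does not replace reading the diff line by line yourself.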
The positive thing is that managers and companies generally are encouraging the use of AI tools and are okay with a little bit of waste product as we figure this new thing out. If you’re afraid of your manager micro-managing your performance and you’re not free to experiment with revolutionary new tooling for fear of losing your job, I’d take that as a signal.
Enterprise Vibe Code
I get it: it’s anxiety-inducing to basically start over as a junior engineer again with a tool that can generate an entire working solution that you don’t entirely understand.
Yet here you are with a big fancy title like, “Senior Principal Staff Chief Galaxy-Brain Ninja Pirate Architect” and people expect high levels of quality and volumes of code out of you. You risk decreasing those as you work through the learning curve and growing pains of becoming a Head Chef of a kitchen of AI Agents.

I think there’s a social pressure to use AI tools perfectly too. The pejorative of “slop” is in the running for Word of the Year as a “thought-terminating cliché”. I think we need to reframe the usage of it in a professional software engineering context.
When we ship “slop”, it’s hitting “send” on a first draft. It could be potentially embarrassing, but at least the velocity of shipping is high. We are unafraid to put our work out into the world. That’s the start. We “let it rip”.
This pairs nicely with a concept from Alex Hormozi for creating: “More > Better > Different”. You need to put the reps in with “more” before trying to figure out how to do something “better”, and repeat those many, many times before you try to do something “different”. If you do lots of reps of one thing over time, you’re naturally going to get better at it. You lose all of that experience when you change to something different.
We’re losing all of our reps of manual software engineering when we move to Vibe Code Engineering.

Just like we refine our own first drafts of code/content/thoughts with repetitions and iterations, we need to build systems that automatically and autonomously refine our Agents’ AI-generated “slop” so that it better fits what we want it to do (and not do!).
I believe we need to reclaim the term “AI slop” as Software Engineers because it just means that we’re learning. All we need to do is figure out how to get our junior commis chef to chop vegetables in a consistent way. Then after that, we will have to get the saucier to make a consistent savory steak sauce, etc.
My opinion, as a Senior DevOps Engineer whose entire career is building systems to prevent breaking code changes, is that we need to invest even more time and effort into these systems to prevent any one Agent from bringing the whole system down. Just like we do with humans.
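One concrete shape such a safeguard can take is a blast-radius check: refuse to auto-merge any change set, human- or Agent-authored, that touches protected paths or too many files at once. The paths and the threshold below are hypothetical; tune them to your repository.

```python
# A sketch of a blast-radius guard for proposed change sets. The
# protected paths and file limit are hypothetical examples.
PROTECTED_PATHS = ("infra/", "db/migrations/")
MAX_CHANGED_FILES = 25

def blast_radius_ok(changed_files):
    """Return (ok, reason) for a proposed change set."""
    if len(changed_files) > MAX_CHANGED_FILES:
        return False, f"{len(changed_files)} files changed (limit {MAX_CHANGED_FILES})"
    for path in changed_files:
        if path.startswith(PROTECTED_PATHS):
            return False, f"protected path touched: {path}"
    return True, "within blast radius"
```

The same check applies to every contributor, which is the point: the system, not the author, decides what is safe to merge without extra scrutiny.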
The first step is to be willing to risk looking a bit silly while we produce slop, but confident in our ability to eventually figure these things out, because we’ve done it over and over again before.
I love you dude. Let it rip.

