Stop Waiting for Perfect AI
'LLM BS tolerance' might be the most important career skill you're not building.

Your relationship with AI today will determine your career in five years. The question isn't whether AI will change work; it's whether you'll develop the emotional resilience to change with it. The challenge? We expect AI to be perfect like a computer, but it behaves imperfectly like a human.
Many of us feel a sense of relief whenever the AI gets something wrong, reinforcing the feeling that we are still irreplaceable.
However, we need to keep in mind that AI today is as bad as it's ever going to be, and it's already quite useful quite often. At the current rate of progress, we can't even imagine how good it might get, or how soon.
At the same time, we expect an LLM not to make any mistakes (it needs to be "computer good"), while we show empathy toward humans and tolerate their mistakes.
Working with an AI is a lot like working with somebody else, which can be challenging because we commonly feel that others can't do the job as well as we do.
Imagine you use an LLM to help draft a message to your skip-level manager. The LLM has little context about the project, so it first asks you to provide more. Even with that context, it misses the main idea you want to communicate, so you prompt again to include it. Finally the main idea is there, but the message still doesn't sound like you, so you spend an extra couple of minutes making fixes. Writing it yourself would probably have been quicker and less frustrating.
Similarly, a small code change from the AI can break many other things, forcing you to supervise every change and propose different approaches. Making the change yourself might have been easier all along.
This is why I believe "LLM BS tolerance" is one of the key skills in today's world. You need to be able to tolerate the LLM's BS at times to continue the journey of getting better at using AI.
This frustration isn't new - it's the same resistance people felt when new technologies first disrupted familiar processes. There's actually a name for this psychological phenomenon.
The Betty Crocker effect
As industrialization hit full swing, new products such as cake mixes came onto the market. Initially, you only needed to add water to make a cake, but many customers felt it was too easy. So the manufacturers changed the formula: you also had to crack an egg. This simple step saw much better traction - cracking an egg gave bakers a greater sense of contributing to the cake. This is called the Betty Crocker effect.
Cake mixes redefined what making a homemade cake means - decoration became the part of the process where bakers could express themselves.
Similarly, AI will no doubt redefine what working (and living) means, and we will have to find our own "cake decoration" to get fulfillment from. For many, tweaking the output of AI even a little can serve as that decoration.
The allocation economy
But here's where it gets interesting: this shift toward AI collaboration isn't just changing how we feel about our work - it's fundamentally changing what we're paid to do.
As we continue welcoming "intelligence on demand" into our lives, the value of our work will shift from providing our own intelligence as knowledge workers to knowing how to deploy intelligence in the best way.
This could look like: identify and frame a problem and provide the needed context -> use AI -> apply human taste as the "icing".
In effect, we become AI managers. Deploying intelligence was formerly limited mainly to managers, but now anybody can do it thanks to AI. We are no longer limited by our individual experience or education; instead, we can leverage the collective wisdom of humanity.
In the allocation economy, you're compensated not based on what you know, but on your ability to deploy intelligence.
"Why struggle with imperfect AI when I can do it right the first time?" or "I'll wait until it actually works properly." Some people would rather stick to what they know works reliably.
But here's what waiting costs you: while you're perfecting your old skills, others are building their AI collaboration muscles. When AI does get better (and it will, soon), they'll have months or years of experience working with it. You'll be starting from zero while they're already fluent in AI management.
It's like refusing to use email in 1995 because it was clunky and unreliable. Sure, fax machines worked fine, but the people who developed email tolerance early had a massive advantage when it became essential.
The "LLM BS tolerance" you build today isn't just about tolerating current limitations. You're learning how to communicate with AI, how to provide context effectively, and how to refine outputs. These skills will compound dramatically as the technology improves.
Building your tolerance
So how do you actually build this tolerance? The only real way to get better at using AI is to use it regularly, imperfectly, without overthinking it.
Pick one routine task this week - maybe drafting meeting notes, brainstorming ideas, or writing a first draft of something. Use AI for it, even knowing you could probably do it faster yourself. When it frustrates you (and it will), resist the urge to abandon it.
This is about trading certainty for potential. Yes, doing it the old way guarantees you'll get exactly what you expect. But pushing through the awkward back-and-forth with AI - accepting the imperfect output, the extra prompts, the need to refine - that's how you build the tolerance that will compound as the technology improves.
The goal isn't perfect results today. It's developing your collaboration muscles with an imperfect but rapidly improving partner.
Because in five years, when AI has gotten dramatically better, you'll already know how to dance with it.