Would you rather take a single $100 bet where you’re 99.99% likely to win $100, or take ten $100 bets where each has a 75% chance of winning? Most people will choose the safe bet even though, statistically, you should pick the ten bets: on average you’ll net about $500, with only about an 8% chance of coming out worse than the $100 you’d gain from the first bet.
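
If you want to check that arithmetic, here’s a quick Python sanity check (a sketch, assuming each win pays $100 in profit and each lost bet forfeits its $100 stake):

    from math import comb

    # Expected net profit of the ten-bet option.
    p, n, stake = 0.75, 10, 100
    expected_net = n * (p * stake - (1 - p) * stake)
    print(expected_net)  # 500.0

    # With k wins the net is 100*k - 100*(10 - k), which beats the
    # sure $100 from the single bet only when k >= 6.
    p_worse = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6))
    print(round(p_worse, 3))  # 0.078 -- about an 8% chance of doing worse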

Businesses, which are made up of people, also overwhelmingly choose the safer first bet. Investors, customers, and especially managers worry that any failed bet signals future failures rather than being just a calculated risk. They can be quick to decide the bet-takers “don’t know what they’re doing” instead of seeing a sensible risk that just didn’t pan out. Simply put, their goal is minimizing complaints, and thus personal risk, not maximizing profits.

AI, however, lives in the world of the second bet—a world where quick, mostly-correct answers beat slow, guaranteed answers. A recent article I read highlights that financial technology, software, and banking lead in AI integration. This makes perfect sense: finance and banking already understand risk and percentages and are comfortable with bets that, if taken repeatedly, pay off.

Software development is iterative by nature. Any programmer claiming perfect code the first time around is either incredibly slow or outright lying[1]. With so many variables and potential pitfalls, software thrives on trial and error. The faster you iterate, the better.

The Current Way Things Work: Costly Signaling Theory

I have long suspected most business rituals—project plans, PowerPoints, endless meetings—were fluff. I often thought, “If people spent as much time working on projects as they do on status presentations, we’d already be finished.” I believed it was all a massive waste of time.

Turns out, I was completely wrong. It’s actually worse than a waste of time: it’s destructive. These rituals aren’t harmless fluff—they actively hinder progress[2].

I’ve written before about how perceptions of risk shape behavior. Management mistakenly believes that elaborate charts and detailed plans reduce risk, but Costly Signaling Theory explains what’s really happening. Businesses, and people, make extravagant displays to prove their wealth, resources, or competence. These displays are credible precisely because they’re costly—creating fancy PowerPoints requires genuine time, effort, and skill. A financial advisor’s opulent office signals success and stability, but if you think about it, wouldn’t you prefer an advisor who invests money efficiently instead of wasting it on marble flooring? If you already knew an advisor was competent and honest, you’d choose the most frugal one, who spends only on what the business needs to operate, rather than one who reduces your potential gains to pay for the office.

COVID-19 clearly illustrated how the perception of risk affects actions. When travel shut down, many companies switched to remote work almost overnight, something that for most businesses would usually require months or even years of planning. But necessity is truly the mother of invention, and companies often managed this feat in days or weeks. Once COVID retreated, so did the urgency, and thus the willingness to make rapid change.

Bottom line: most businesses, and almost all managers, care more about avoiding complaints than about maximizing efficiency or profits.

AI, though, is inherently imperfect. Its strength lies in speed, flexibility, and versatility, often surpassing human experts. However, if you need 100% accuracy—or even 95%—general-purpose AI isn’t the right tool.[3]

New Thinking Is Needed Before AI Can Be Fully Effective

As a software developer and executive, I’ve been involved with hundreds of projects and countless instances of executive frustration at imperfections—which, in software, are inevitable.

Once, a CEO reprimanded me for a tiny issue. Imagine your bank statement correctly listing every transaction, but once in every 10,000 entries, the running total was off by a cent—though the final total was correct. It’s something to fix, sure, but hardly critical.[4]
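
I never learned the root cause in that particular system, but penny-level drift in a running total is a classic symptom of doing money math in binary floating point, which can’t represent most decimal amounts exactly. A minimal Python illustration (the amounts here are made up, not from the actual system):

    from decimal import Decimal

    # Binary floats cannot represent most decimal fractions exactly,
    # so a running total accumulated in floats drifts from the true value.
    running = sum([0.10] * 3)
    print(running)            # 0.30000000000000004
    print(running == 0.30)    # False

    # Keeping money in Decimal (or integer cents) avoids the drift.
    exact = sum(Decimal("0.10") for _ in range(3))
    print(exact)              # 0.30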

This reaction isn’t rare. Employees typically respond in two ways:

  1. Quietly fix issues without telling management (since severe reactions discourage transparency).
  2. Become overly cautious, wasting days or even months trying to reach an unreachable perfection.

The issue hadn’t even drawn an actual complaint, nor was one likely; it was just something found during internal testing that could potentially be noticed. It’s amazing anything useful ever gets done under these conditions.

Herein lies the rub: AI isn’t reliable enough to avoid complaints. Recently, an AI chatbot on Apple’s website gave me incorrect instructions about managing developer credentials for the mirror mirror project. Either the site had changed or the AI hallucinated, but the instructions were definitely wrong. I use AI extensively, and it often makes mistakes. Usually, this doesn’t bother me at all[5] because I understand its limitations. It’s fast enough that being right 75% of the time still makes it extremely useful. If I expected 99.9% accuracy, as most people seem to, I would be pretty unhappy with AI, but because my mindset is prepared for the imperfection, it’s not a problem.

Embracing Risk

“Faced with the choice between changing one’s mind and proving there is no need to do so, almost everyone gets busy with the proof.” – Galbraith’s Law

Companies that shift their mindset to embrace rapid but imperfect progress stand to gain tremendously. Those unwilling (which is most companies) will eventually be replaced. The good news for these slow-moving organizations is that this takes time—often decades, as shown by the Productivity Paradox. The bad news is that replacement, while gradual, is inevitable.

What Can I Do?

Typically, I’d end by giving some practical advice. I could trot out the usual suggestions: educate executives about the risk-based mindset needed to profit most from AI, run small-scale experiments, and so on. But realistically, pushing hard for this massive change in most organizations is likely career-ending rather than career-enhancing. Managers quickly realize they aren’t ready for this shift, and backing such changes could threaten their positions. Naturally, they resist.

If you’re fortunate enough to be in a company that’s genuinely forward-thinking and comfortable with calculated risks, you have a real chance. For most people, though, your best move is to understand AI deeply and leverage it for your own productivity. If retirement is near, continuing business-as-usual might work—for now. If not, jumping ship to a more agile, risk-tolerant organization might be your best choice.

Better yet, take your deep industry knowledge and your forward-thinking vision, and launch your own competitor based on this new approach. If change is inevitable, why shouldn’t you be the one driving it?


  1. It is so rare for code to work perfectly on the first try that if I have a piece of code that seems to work perfectly the first time, I get very worried that I’m missing something in my testing. More often than not, I’m proven correct.
  2. To be clear, I’m not saying you shouldn’t plan or give status updates, just that when the plans and status reports become the focus instead of the project itself, the project suffers.
  3. There are exceptions to this. For instance, AIs highly trained on specific areas, such as cancer detection in X-rays and CAT scans, can rival the best human experts. But while these AIs excel in their particular areas, they are useless for all others. The narrower the AI’s focus, the more reliable it is, but the less useful it is outside its expertise.
  4. Transparency is deeply important to me, so I openly share issues—even knowing some people may overreact or use what I reveal against me—because it builds trust.
  5. Well… ok… actually it does bother me sometimes and even pisses me off when it sends me in loops, but not often.
