Modern computing technologies such as Quantum Computing and Artificial Intelligence (AI) differ from traditional computers in a fundamental way: imperfection is part of how they operate. This contrasts with traditional computing, where computers are expected to handle trillions of transactions flawlessly. As a consequence, not just computer science, but everything that relies on it, which is… well… pretty much everything, will need to be rethought.

Fast but Inaccurate

Anyone who follows AI knows that from time to time it ‘hallucinates’, making up ‘facts’ that aren’t true. In one widely shared example, lawyers filed a ChatGPT-written brief with the court, only to discover that ChatGPT had made up the citations. I have found hallucinations, and other errors, to be so commonplace with AI that I won’t let it write a line of prose or of code without careful review. Frankly, it gets a lot wrong.

So, why use something that’s unreliable? Because it’s very, very fast. It’s also mostly right. As I mentioned in my article about AI processing of school transcripts, as long as the error rate is low enough that correcting the output takes less time than creating the output from scratch, using the AI is worthwhile.
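That break-even logic can be written down as a simple comparison. The sketch below is a back-of-envelope model, and the times in the example are illustrative assumptions, not measurements:

```python
# Back-of-envelope model: using AI is worthwhile when generating the
# output and then correcting it takes less time than creating the
# output from scratch. All numbers below are illustrative assumptions.

def ai_is_worthwhile(time_to_write_from_scratch: float,
                     time_to_generate: float,
                     time_to_review_and_fix: float) -> bool:
    """True when AI generation plus correction beats working by hand."""
    return time_to_generate + time_to_review_and_fix < time_to_write_from_scratch

# Example: a task that takes 60 minutes by hand, 1 minute for the AI
# to draft, and 15 minutes to review and correct.
print(ai_is_worthwhile(60, 1, 15))  # True: the AI saves time here
```

The same function explains why I skip AI for prose: when the review-and-fix term grows large enough, the comparison flips.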

In my experience, AI for code, while not perfect, is well past the point of being quicker to correct than to write myself. Because I can correct it quickly, my coding is at least 10X faster than before. On the flip side, I find the prose it writes needs a lot more correction to meet my style, so I choose not to use much AI for my writing.

True Speed = Raw Speed – Error Correction

Speed means speed to the finished result, not an interim result. This goes back to the old 80/20 rule: the first 80% of a project is completed in the first 20% of the time. After rapid initial progress, the project looks mostly complete, but finishing it takes much longer than getting the first 80% done.

Take writing a book, for example. My experience is that at first it comes together quickly, in a matter of months with concerted effort. But the first draft is never the quality of writing you want, and so you start the laborious process of editing. Depending on the quality of the initial pass, the quality level you are looking to achieve, and how good your editing process is, it can take many months or even years to get to a completed product. You might have a promising initial draft in two months, but not have a publishing-ready book for a year or more.

Quantum Computers: Amazing but Inaccurate Speed

My friend Peter Mancini is a Quantum Computing expert. Quantum Computers use advanced physics to exploit properties of individual atoms and parts of atoms to solve mathematical problems not easily solved by today’s computers.

Peter says that Quantum Computers are not widely used because they are exceptionally error-prone right now. He puts the error rate per calculation at between 1% and 0.1%, or between one in a hundred and one in a thousand. This might not sound like much, but it would amount to between five and fifty typos in this post. Bad, but probably not catastrophic in a blog post. Quite catastrophic, however, in a computer program, where even a single zero or one being incorrect can make the entire thing fail.
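Why programs fare so much worse than prose follows from basic probability: if each operation independently fails with probability p, the chance that n operations all come out right is (1 − p)^n. A quick sketch, using Peter's quoted rates (the 10,000-operation program size is an illustrative assumption):

```python
# Probability that a computation of n operations completes with no
# errors, given a per-operation error rate p. Assumes errors are
# independent, which is a simplification.
def error_free_probability(p: float, n: int) -> float:
    return (1 - p) ** n

# At the quoted error rates, even a modest 10,000-operation program
# almost never runs completely clean:
for p in (0.01, 0.001):  # 1% and 0.1% per calculation
    print(f"p = {p}: chance of a clean run = {error_free_probability(p, 10_000):.2e}")
```

At the 0.1% rate the odds of a clean 10,000-operation run are well under one in ten thousand, and at 1% they are effectively zero.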

In contrast, modern memory chips, called Dual Inline Memory Modules (DIMMs), have error rates measured in errors per year. Given that a standard DIMM such as the DDR4-2133 can handle over 2 billion operations per second, a typical error rate of 32% of DIMMs experiencing a correctable error per year amounts to roughly one error in every 10,000 trillion calculations, making DIMMs more than ten trillion times more reliable than quantum computers.
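A back-of-envelope version of that arithmetic is below. Treating "32% of DIMMs see a correctable error per year" as roughly 0.32 errors per DIMM-year is a simplifying assumption, and the exact figure moves by an order of magnitude or two depending on how that statistic is interpreted, but any reasonable reading comfortably supports the "more than ten trillion times more reliable" conclusion:

```python
# Rough reliability comparison: a DDR4-2133 DIMM vs. a quantum computer.
# Simplifying assumption: "32% of DIMMs see a correctable error per
# year" is treated as about 0.32 errors per DIMM-year.
SECONDS_PER_YEAR = 365 * 24 * 3600
ops_per_second = 2.133e9        # DDR4-2133: over 2 billion operations/s
errors_per_year = 0.32

ops_per_error_dimm = ops_per_second * SECONDS_PER_YEAR / errors_per_year
ops_per_error_quantum = 1_000   # best case above: one error per thousand

print(f"DIMM: roughly 1 error per {ops_per_error_dimm:.1e} operations")
print(f"Reliability ratio vs. quantum: about {ops_per_error_dimm / ops_per_error_quantum:.0e}x")
```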

Peter tells me that there are three main ways to deal with error:

1. Prevent error in the first place.

2. Add redundancy and error checking to detect and correct the errors.

3. Correct errors by using external data (a.k.a. fact-checking).

Preventing error is a major reason enormous sums of money are spent to build quantum computers. Some are built deep in caves to avoid electromagnetic interference, or on shock absorbers to protect them from any movement. In short, they are supremely delicate machines, where even a slight variation in environment can cause errors.

In its simplest form, error correction means doing the same calculation multiple times: if every run produces the same result, you assume the result is correct; if the runs disagree, you assume an error. Advanced error correction is mathematically intense, and often does not require the entire calculation to be rerun.
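In code, the simplest version of this repetition scheme is majority voting. Here is a minimal sketch, with a deliberately flaky function standing in for error-prone hardware (the function and its numbers are invented for illustration):

```python
from collections import Counter

def majority_vote(calculate, runs=5):
    """Run a noisy calculation several times and keep the most common result."""
    results = [calculate() for _ in range(runs)]
    return Counter(results).most_common(1)[0][0]

# Stand-in for error-prone hardware: returns a wrong answer on every
# third call and the right answer (42) otherwise.
calls = 0
def flaky_add():
    global calls
    calls += 1
    return 41 if calls % 3 == 0 else 42

print(majority_vote(flaky_add, runs=5))  # majority of 5 runs recovers 42
```

The cost is obvious: five runs buy one trusted answer, which is exactly the raw-speed-for-accuracy trade described above.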

Because even with these herculean efforts error rates in quantum computing remain high, IBM, a major creator of quantum chips, recently announced that while it had a new chip with 1,000 quantum bits, it would now focus on smaller chips with better error correction. In short, IBM has decided that to increase overall speed it needs to reduce raw speed and increase error correction.

The last method mentioned, checking with external data, has more limited usefulness. After all, if getting the data from outside sources were fast and efficient, there would be no need for a quantum computer in the first place. Where it can be useful is in cases where Quantum Computing can greatly narrow down the list of possible candidates and classical computing can verify them.
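That division of labor, a fast but imprecise stage proposing candidates and a slower exact stage confirming them, can be sketched entirely in classical code. The filter and check conditions below are invented for illustration; in practice the first stage is where a quantum computer would sit:

```python
# Illustrative filter-then-verify pipeline: a cheap, imprecise stage
# proposes candidates; an expensive, exact stage confirms them.
# Both stages are classical here; the filter stands in for the
# role a quantum computer would play.

def imprecise_filter(candidates):
    """Fast stage: discards most non-answers, may keep false positives."""
    return [x for x in candidates if x % 7 == 0]   # cheap necessary condition

def exact_check(x):
    """Slow stage: the authoritative (expensive) test."""
    return x % 7 == 0 and x % 13 == 0              # full condition

shortlist = imprecise_filter(range(1, 1_000))
answers = [x for x in shortlist if exact_check(x)]
print(answers)  # the multiples of 91 below 1,000
```

The expensive check runs only on the shortlist, so the pipeline is fast overall even though the first stage alone cannot be trusted.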

Learning to Live in an Era of Inaccuracy

“One might draw the general lesson that new technologies must often wait for a new generation of managers, whose decisions are not biased by the investments of the past, to take control.” – Paul David

I’ve heard many people say that AI is useless because it is so often incorrect. I know of many companies that won’t let staff use AI for this same reason. This makes perfect sense to me; these companies are successful and will keep doing things the way they have been doing them. Most will explore the new territory of AI, but implement it only cautiously. In truth, however, they are ill-equipped to use the new technology. All of their management systems, including such mundane things as time accounting, are based around the old technologies.

AI’s propensity to ‘hallucinate’ or produce inaccuracies is a well-known issue. Despite this, its speed and overall accuracy make it a valuable tool when used with proper oversight. The same applies to Quantum Computing, where error rates are currently high, but the potential for solving complex problems is unmatched. The challenge lies in developing robust error correction methods and learning to live with a certain level of inaccuracy.

As we adapt to these changes, our approach to technology must evolve. We need to embrace the imperfections of AI and QC while leveraging their strengths. This means rethinking our management systems, training new generations of tech-savvy leaders, and creating a culture that values innovation alongside accuracy. By doing so, we can fully realize the benefits of these transformative technologies.


Discover more from Lowry On Leadership


One response to “Into the Era of Imperfection – Artificial Intelligence (AI) and Quantum Computing (QC) are powerful, but not as accurate as we expect computers to be.”
