Expert story

Blame the AI

Explore how our high expectations of AI influence its effectiveness in business, and learn to view AI as a valuable tool rather than a scapegoat.

As Artificial Intelligence (AI) becomes increasingly integral to our lives and business operations, many people are quick to blame it for shortcomings, often overlooking the influence of human behaviour and perception. This raises crucial questions: Why do we hold AI to such high standards? What impact does this scrutiny have on businesses? And how can we foster a more balanced view of AI as a valuable tool rather than a scapegoat?

In this blog post, Bram will explore these questions and examine how our expectations of AI shape its effectiveness in our organisations.


We do things better ourselves

A recent article by Frederik Anseel in De Tijd (a Belgian newspaper focusing on financial and economic news) offers an interesting perspective on the use and possibilities of Artificial Intelligence (AI): “We are blinded by the dazzling technological possibilities of AI. But the limitations may have nothing to do with technology, and everything to do with the brain. Human psychology will set limits to what we can leave to AI.”

The article makes an intriguing observation: despite how often human behaviour falls short, we tend to be more forgiving of human errors than of those made by AI. It highlights several key points, including the example of self-driving cars. While human drivers are responsible for over 500 traffic fatalities each year in Belgium, we do not ban humans from driving. Yet a single fatal accident involving a self-driving car sparks public outrage and calls to ban the technology altogether. This difference in judgement stems from a fundamental belief: “What we do ourselves, we do better.” A very interesting read on this subject is the research of César A. Hidalgo, in particular the book How Humans Judge Machines (a free digital copy is available for download).

Even more fascinating is the paradox revealed by Anseel's research: people tend to perform better when assisted by AI, but once they realise the advice comes from a machine, their performance declines.

The article also cites research showing that workers respond positively to digital algorithms when they believe these were implemented to help them. When algorithms are perceived as tools for control, however, the result is an increase in burnout. This raises an important question for businesses that rely on AI: how do we ensure AI is viewed as an assistant rather than an overseer?

Our experience at Customaite

At Customaite, we have seen firsthand that our users tend to be far less forgiving of errors made by our AI-powered software than of their own mistakes or those of their colleagues. Even a minor error, such as missing a small piece of information, can lead users to reject the system entirely, much like the reaction to self-driving cars, and to remain reluctant to use it despite its undeniable benefits.

This mirrors what the article in De Tijd pointed out: we expect AI to be flawless, and when it is not, it quickly loses our trust. With human colleagues, however, there is room for empathy and understanding. AI, in contrast, is judged purely on its performance. When it falls short of that expectation, the trust gap widens.

This behaviour is reinforced by the reserved attitude users initially adopt towards Customaite. When we introduce Customaite to a new organisation, we often start with a trust deficit, rooted in the users' assumption that the software is there to control or replace them.

Bridging the trust gap with Human-Assisted AI

At Customaite, we have always adopted a human-centred approach to AI. Our mission is clear: AI is here to assist humans, not replace them. This collaborative approach—what we call "Human-Assisted AI"—ensures that AI helps users where it excels, and that humans step in to guide or correct the system when its output is wrong or incomplete. By positioning AI as a tool that supports rather than controls, we aim to foster trust and reliability in our software.

We focus on fostering a collaborative relationship between our AI and its users. By reinforcing that AI is there to assist, and by building tools that allow users to review and easily correct the system’s decisions, we can maintain user trust. Transparency is key: we do not hide the fact that AI, like any system, is prone to occasional errors. What matters is how quickly and easily those errors can be identified and fixed.
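As a purely illustrative sketch of what such a review-and-correct workflow could look like, the snippet below lets the AI fill in the fields it is confident about and routes the rest to a human reviewer. The field names, the confidence threshold, and helpers such as extract_declaration_fields and ask_human_to_review are assumptions made for the example; they are not Customaite's actual product or API.

```python
# Illustrative sketch of a "Human-Assisted AI" review loop (hypothetical, not Customaite's API).
# The AI proposes a value for every field; a human only reviews the fields
# where the model's own confidence is low.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off below which a human reviews the field


@dataclass
class FieldPrediction:
    name: str          # e.g. "commodity_code"
    value: str         # value proposed by the AI
    confidence: float  # the model's confidence in that value (0.0 to 1.0)


def extract_declaration_fields(document_text: str) -> list[FieldPrediction]:
    """Stand-in for the AI extraction step; a real system would call a model here."""
    return [
        FieldPrediction("commodity_code", "8471 30 00", 0.97),
        FieldPrediction("country_of_origin", "CN", 0.62),  # low confidence: needs review
    ]


def ask_human_to_review(field: FieldPrediction) -> str:
    """Stand-in for the UI step where a user confirms or corrects a proposed value."""
    answer = input(f"{field.name}: AI suggests '{field.value}'. Enter to accept, or type a correction: ")
    return answer.strip() or field.value


def human_assisted_extraction(document_text: str) -> dict[str, str]:
    """Let the AI handle what it is confident about; let humans guide the rest."""
    result = {}
    for field in extract_declaration_fields(document_text):
        if field.confidence >= CONFIDENCE_THRESHOLD:
            result[field.name] = field.value
        else:
            result[field.name] = ask_human_to_review(field)
    return result


if __name__ == "__main__":
    print(human_assisted_extraction("...customs document text..."))
```

The specific threshold matters less than the shape of the workflow: every AI decision stays visible and correctable, which is what keeps the system an assistant rather than an overseer.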

We have also seen that when users perceive AI as a partner rather than a supervisor, they are more willing to embrace the technology. This is in line with the research findings mentioned earlier: workers respond positively to AI when they believe it is implemented to help them, not monitor them.

Lessons learned

The road to widespread adoption of AI in critical sectors like customs declarations requires more than just technological improvements. It requires understanding the psychological and emotional responses of users. Humans naturally judge machines more harshly than each other, but by emphasising collaboration and transparency, we can bridge the trust gap. At Customaite, our Human-Assisted AI approach ensures that AI is seen not as a replacement but as an empowering tool, enabling our users to perform at their best—even when the technology is not perfect.

Bram Vanschoenwinkel

Bram is an AI expert who started his career as a researcher at the Computational Modelling Lab at the University of Brussels and obtained a Ph.D. in Science (Machine Learning, Data Mining). Over the past 10 years, Bram led a team of 15 data scientists offering actionable business insights from data. He is currently Chief Product at Customaite.

