We don't need another chatbot!

and yes, I'm an AI practitioner.

when people think about AI, most picture a chatbot or its equivalent. i’ve built more than five chatbots in the last 1.5 years since AI became popular, all deployed and live to hundreds to thousands of users within an MNC (multi-national corporation). and honestly, the world DOES NOT NEED another chatbot. i’m not saying that the algorithmic developments behind large language models (the category of model powering most AI products) aren’t amazing, or that we should halt them. i’m saying that the world is too fixated on just ONE version of what the interface can look like. on just ONE way that humans can interact with it.

when i was speaking to some venture capitalists about my startup’s product, one of the questions i got was - “why isn’t it designed with a chatbot interface?” well, because that’s boring. that’s so 2024. we shouldn’t build AI products where users need to explicitly request what they need every time. what users need for their daily use case should be embedded in the product workflow. how AI is engineered within the product should be designed around user behaviour and psychology, seamless and inter-connected across the product’s features. to use AI primarily as a retrieval tool with nicely worded output is an insult to the technology. there are so many other machine learning design principles that can (and should) be integrated.

that’s the challenge for the next few years. firstly, how do we design truly smart solutions? secondly, how do we reduce adoption friction?

a good example is the elevator, introduced in the 1850s. at first, elevators had operators: you told a human which floor you wanted, and they handled the machine. when fully automated elevators were introduced in the early 1900s, the interface was simple:

  • buttons were labeled.
  • safety mechanisms were in place.

from a design and engineering standpoint, the system was ready. automated elevators were actually safer and more efficient, but people refused to use them.

not because the technology didn’t work or the interface was confusing, but because removing the human broke trust.

the important part is this: the problem wasn’t poor design. the interface had already abstracted away the complexity. what remained was psychological friction - the time it takes for users to trust a new interaction model and internalize that it works.

we can (and should) design intelligent, workflow-embedded interfaces where users don’t need to prompt, query, or “talk” to the system. but even when the interface is right, adoption won’t be instant. users still need repetition, reassurance, and proof that the system behaves predictably.

that’s the next challenge.

designing smarter systems and products, and, through them, earning trust over time, such that the intelligence is invisible, reliable, and eventually taken for granted.

Stay KLAR

KLAR - from the German for “clear.”
On content, AI, and the systems behind real results. Without hype, hacks, or shortcuts.
