Why AI Support & ChatBots “Suck” (Until They Don’t). From “Hate” to 50+% Autonomy [Adam 3.0]


“All AI support apps and chatbots suck.” That’s what our own ThemeREX Support team said loudly when we first tried using one. And let’s be honest, they had a point. We’ve all been stuck talking to bots that don’t listen, don’t understand, and definitely don’t help.

What started as a tiny experiment turned into something bigger: a deep dive into why people hate AI support, what actually makes it fail, and how we could change that.

The result? Adam 3.0 – an AI agent that now performs with 50% autonomy and has even made our support team view chatbots and AI support from a different angle.

1. Why and how did the experiment with AI support start?

Here at ThemeREX, we design, build, and sell WordPress themes and website templates – both on ThemeForest and on our own site. And like most theme authors, we live in the world of constant customer support. Every month, our team handles between 2,000 and 3,000 tickets. Some are quick, like “Where do I find this option?” or “How do I import the demo?”. But others can pull you deep into server setups, caching conflicts, or 3rd-party plugin compatibility problems.

Our support agents work hard, but we could feel the pressure building:

  1. The industry’s “standard” response time of 12 – 24 hours suddenly felt ancient.
  2. Users expect instant replies, not tomorrow, not later today, but right now.
  3. And we could see it in the feedback: even when we solved issues perfectly, customers often mentioned one thing – speed.

We wanted to change that. We wanted to be faster, smarter, and more consistent. Naturally, the idea came up: what if AI could help?

So, we started testing a few third-party AI chatbots. And… it was awful. They misunderstood questions, gave generic answers, hallucinated, and turned already-frustrated users into even more frustrated ones.

That’s when we realized – if we ever wanted AI to truly assist our customers, we’d have to build it ourselves.

That’s why we assembled an in-house AI team to conduct R&D and come up with our own AI solution.

2. Why do we hate chatbots? And how can this attitude evolve?

Before we dive deeper into our journey, let’s be 100% honest for a moment and talk about the core reasons why people (including us) hate AI chatbots. Because this story isn’t just about technology. It’s about perception, emotion, and how those slowly change over time.

  1. Some chatbots are really bad.
    They’re slow, they’re clumsy, and they trap you in endless circles of questions that never actually help. We’ve all experienced that frustration. In most cases, it’s not even the bot’s fault. It’s the result of a poor architecture, a shallow knowledge base, or no real understanding of context.
  2. AI doesn’t know everything – especially in complex products.
    When you sell high-tech products like WordPress themes, hosting-related setups, or plugin integrations, you deal with issues that may require real technical intuition. Some AI bots simply can’t go that deep; they can only repeat what’s been pre-written for them.
  3. The “AI will replace humans” mindset.
    Too many businesses see chatbots as a way to replace human agents entirely. And that’s where things go wrong. AI can assist, guide, or pre-qualify, but replacing human empathy and flexibility is still a mistake. For now!
  4. No real data = no real understanding.
    Many AI bot owners never measure the real impact of their bots: how many issues they actually solve, how often they frustrate customers, or what the long-term perception looks like. Without data, you can’t truly know if your AI is helping or harming your brand.
  5. The result: distrust.
    As a result, customers today are naturally skeptical. They don’t trust bots. Even when the AI gets the job done, people hesitate to believe it. There’s a built-in doubt that says, “It answered fast… but does it really solve my problem?”

And that leads us to something fascinating we discovered during our own journey. Even with Adam 3.0, when our AI resolved some tickets in under a minute, a portion of customers still left 3-out-of-5 ratings. Why? Because there was no human moment – no sense of empathy, no feeling of connection.

It reminded us of a line from Inglourious Basterds by Quentin Tarantino:

“You don’t like them. You don’t really know why you don’t like them; all you know is you find them repulsive.”

In the early days, we felt exactly the same about AI support. But (as we soon learned) that feeling can evolve.

3. Adam 1.0 - The First Step (and First Disappointment)

In 2023, when the idea of building our own AI support agent finally took shape, two camps formed.

The Tech Support team camp became the client – the one setting expectations and demanding results.

The AI Tech team camp became the supplier – responsible for designing, training, and delivering what would later be known as Adam 1.0.

We started with a simple, almost naive concept:

“Let’s create a universal documentation for all our themes, feed it to AI, connect it to our support system, add a prompt for AI, provide a Transfer to Live Operator button for corner cases, and add a feedback form for customers.”

It sounded reasonable at that time. So we began experimenting with the best language models available: GPT-4, Claude, and Gemini.

  • Claude impressed us with how well it could read and understand documentation.
  • GPT-4 produced human-like, friendly, well-structured replies.
  • Gemini, at that time, struggled with both, often hallucinating and mixing up product details.

After dozens of internal tests, prompt refinements, and documentation tweaks, Adam 1.0 was born.

Simplified schematic of Adam 1.0:

He could greet users politely, pull data from the docs, and even try to solve common questions. On paper, it all looked promising.

But in reality, it wasn’t.

The numbers told the truth:

  • 21% autonomy – only one in five tickets was fully solved by AI.
  • 15 min average response time – slower than expected.
  • 11% first-reply resolution – customers had to go back and forth with the bot.
  • 79% of tickets transferred to a live operator, 35% of them citing the reason “Don’t want to be assisted by AI.”

And behind those numbers was a wave of frustration. Customers felt misunderstood. The support inbox filled with complaints like “Please, give me a real person!”

Screenshot from a real ticket in 2023:

Our Support team (the “client” in this internal project) absorbed all that negativity. Instead of getting relief, they got extra work cleaning up after the AI. Soon, “Adam” became an inside joke and a synonym for “Why did we even try this?”

We were standing at a crossroads. Was this simply the natural first failure (you must spoil before you spin) or a clear sign that AI support just didn’t belong in our workflow?


Fortunately, we chose the first path. Instead of giving up, we decided to re-engineer Adam 1.0 – keeping the lessons, leaving the mistakes, and setting the foundation for what would later become Adam 2.0.

4. Adam 2.0 - Re-engineering Hope

After the failure of Adam 1.0, our AI team knew there wouldn’t be a second chance. The Support department had lost patience, and another disappointment would have buried the entire project.

Then, in 2024, GPT-4o was released – faster, smarter, and finally capable of handling context in a way that made sense for customer support. That moment gave us a spark of hope and a new direction.

We realized that the real problem wasn’t the model itself – it was the data and our oversimplified approach. The quality, structure, and organization of the knowledge base mattered more than anything else. So, we rebuilt everything from scratch.

From One to Many

Instead of one massive “universal” documentation file, we created seven specialized documentation files, each dedicated to a specific product type. Adam could now switch to the relevant dataset depending on which theme type the customer was using – moving from general to very specific information.

Introducing “Agents” – Smart Knowledge Blocks

Each of the 7 docs was split into smaller logical units we called “agents” – focused knowledge segments like “Agent Errors”, “Agent Customization”, “Agent Installation”, “Agent Plugins”, and so on.

Why? Because that’s how real reasoning works. Imagine someone asks you, “What’s the capital of the USA?”

Your brain doesn’t scan every fact and area of your knowledge. It instantly narrows the context to “geography”, then recalls that the USA is a country in North America and its capital is Washington, D.C.

We wanted Adam to think the same way – locate the right knowledge category or categories first, then pull the precise answer.
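
To make that concrete, here is a minimal sketch of the “narrow the context first” step: pick only the relevant agents, then load their Q&A entries. The file layout and keyword heuristic below are simplified placeholders, not the production routing logic:

```python
import json

# Each "agent" is a focused knowledge segment stored as its own Q&A file.
# Paths and keywords are illustrative placeholders.
AGENT_FILES = {
    "Agent Errors": "kb/agent_errors.json",
    "Agent Customization": "kb/agent_customization.json",
    "Agent Installation": "kb/agent_installation.json",
    "Agent Plugins": "kb/agent_plugins.json",
}

KEYWORDS = {
    "Agent Errors": ["error", "fatal", "warning", "500"],
    "Agent Customization": ["color", "font", "header", "layout"],
    "Agent Installation": ["install", "import", "demo", "activate"],
    "Agent Plugins": ["plugin", "elementor", "woocommerce", "update"],
}

def pick_agents(ticket_text: str) -> list[str]:
    """Narrow the context: keep only the knowledge segments relevant to the ticket."""
    text = ticket_text.lower()
    hits = [name for name, words in KEYWORDS.items() if any(w in text for w in words)]
    return hits or list(AGENT_FILES)  # fall back to all segments if nothing matches

def load_context(ticket_text: str) -> list[dict]:
    """Load Q&A entries only from the selected agents, keeping the prompt small."""
    context = []
    for name in pick_agents(ticket_text):
        with open(AGENT_FILES[name], encoding="utf-8") as f:
            context.extend(json.load(f))
    return context
```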

Teaching Adam in the Right Format

Every piece of documentation was then converted into a Q&A (FAQ) structure and stored in JSON.

Example:

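As a rough, hypothetical illustration of that Q&A format (the field names are assumptions, not the actual schema), a single stored entry might look like this:

```python
import json

# Hypothetical knowledge-base entry in the Q&A (FAQ) format described above.
# Field names are illustrative; the real schema is not published here.
entry = {
    "agent": "Agent Installation",
    "question": "How do I import the demo content?",
    "answer": "Install and activate the required plugins, then use the theme's "
              "demo import tool to load the demo you need.",
    "related_products": ["Elementra"],
    "tags": ["demo import", "installation"],
}

print(json.dumps(entry, indent=2, ensure_ascii=False))
```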

This turned out to be one of the most effective data formats for AI – simple, structured, and easily retrievable. It gave the model clarity on both intent and context, leading to more accurate and confident replies.


Reading What Customers See

And then came the game-changer: in May 2024, OpenAI finally added the ability for the API to read screenshots.

For the first time, Adam could actually “see” what the user was showing – plugin settings, console errors, admin panels – and respond accordingly.
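
In practice, attaching a customer’s screenshot to a model request looks roughly like the sketch below. It uses the OpenAI Python SDK; the model name, prompts, and the kb_excerpt retrieval step are placeholders rather than the exact production pipeline:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_screenshot(question: str, screenshot_url: str, kb_excerpt: str) -> str:
    """Ask a vision-capable model to answer using the docs plus the customer's screenshot."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a WordPress theme support assistant. "
                        "Answer using only the documentation excerpt provided."},
            {"role": "user", "content": [
                {"type": "text",
                 "text": f"Documentation:\n{kb_excerpt}\n\nQuestion: {question}"},
                {"type": "image_url", "image_url": {"url": screenshot_url}},
            ]},
        ],
    )
    return response.choices[0].message.content
```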

So, how did AI chatbot Adam 2.0 work?

In short, simplified terms, the new version operated like this:

  • Structured, product-type-specific documentation.
  • Contextual “agents” that activate only when relevant to avoid hallucinations.
  • Support for reading screenshots directly from customer messages.

The Results

No miracles – but finally real progress:

  • Autonomy: 21% → 33%
  • Response time: 15 min → 5 min
  • First-reply resolution: 11% → 32%
  • Hate level (“Don’t want to talk to AI”): 35% → 23%

For the first time, customers started showing patience, and sometimes even appreciation. And our Support team reported something we hadn’t heard in years:

“There’s finally less load. We can focus on quality assurance, analysis, and user feedback instead of answering the same questions all day.”

Adam 2.0 wasn’t perfect; some customers still did not like him. But it was proof that AI support could work if built with the right structure, empathy, and ideas.

Proof that Adam 2.0 was not perfect:

5. Adam 3.0 - The Breakthrough (50% Autonomy Era)

By the time we moved to the next stage, the AI team had earned something rare (for them): trust and freedom. After the relative success of Adam 2.0, they were no longer seen as “those who broke support,” but as a partner that could actually make it better. With that came more room for R&D, experiments, and bold ideas.

AI across the entire company

What started as a small support project grew into something much bigger. The AI team built a corporate AI panel and integrated AI tools into every department – from development and presale to marketing, content management, documentation, and of course, support.


AI functionality was also integrated into the company’s bestsellers and other WordPress themes with AI integration.

Adam was no longer a side experiment; he became part of the ThemeREX ecosystem.

A new engine, a new brain

The heart of Adam was rewritten using the OpenAI Agents API – a framework that allowed multiple skills, roles, and memory layers inside one intelligent system. This change turned Adam from a simple responder into a modular assistant capable of reasoning, context retrieval, and cooperation between different “agents.”
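
As a simplified illustration of that modular design, a triage agent that hands tickets off to specialist agents can be sketched roughly like this. It uses the openai-agents Python SDK as an assumed stand-in; the real agent definitions, tools, and memory layers are more elaborate:

```python
from agents import Agent, Runner  # openai-agents SDK, used here as an assumed stand-in

# Specialist agents: each one owns a narrow slice of the knowledge base.
installation_agent = Agent(
    name="Installation",
    instructions="Help customers install themes, import demos, and activate plugins.",
)
errors_agent = Agent(
    name="Errors",
    instructions="Diagnose PHP errors, white screens, and plugin conflicts.",
)

# Triage agent: classifies the ticket and hands it off to the right specialist.
triage_agent = Agent(
    name="Adam Triage",
    instructions="Read the support ticket and hand it off to the matching specialist.",
    handoffs=[installation_agent, errors_agent],
)

result = Runner.run_sync(triage_agent, "I get a 500 error right after importing the demo.")
print(result.final_output)
```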

Then came another turning point: in August 2025, GPT-5 was released. It was smarter, cheaper, and equipped with a much larger context window, which meant Adam could finally process an entire ticket conversation, along with a broader set of documentation and metadata. The difference was immediate – Adam began to understand users instead of just replying to them.

The birth of auto-documentation

But the biggest leap wasn’t in the model – it was in the logic. Our AI team introduced what we now call auto-documentation.

Here’s how it works:

  1. A customer starts chatting with Adam.
  2. Adam doesn’t yet know the answer → the ticket is forwarded to a human operator.
  3. The operator replies and solves the issue.
  4. Once the case is marked as “successfully resolved”, Adam analyzes the entire dialogue and adds that new knowledge into the right documentation section and relevant “agent.”

Over time, this created a self-learning system – Adam’s knowledge base grew naturally with every solved case (no matter whether it was solved by him or by a live operator).
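
A simplified sketch of that loop: once a ticket is marked resolved, the dialogue is distilled into a new Q&A entry and appended to the relevant agent’s knowledge file. The prompt, schema, and file layout below are illustrative assumptions, not the actual implementation:

```python
import json
from openai import OpenAI

client = OpenAI()

def learn_from_ticket(dialogue: str, agent_file: str) -> dict:
    """Turn a resolved support dialogue into a Q&A entry and store it in the knowledge file."""
    completion = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Summarize this resolved support ticket as JSON with "
                        "'question' and 'answer' fields, written as reusable documentation."},
            {"role": "user", "content": dialogue},
        ],
    )
    entry = json.loads(completion.choices[0].message.content)

    # Append the new entry to the relevant agent's Q&A file.
    with open(agent_file, "r+", encoding="utf-8") as f:
        knowledge = json.load(f)
        knowledge.append(entry)
        f.seek(0)
        json.dump(knowledge, f, indent=2, ensure_ascii=False)
        f.truncate()
    return entry
```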


Humans and AI – finally on the same team

The most unexpected outcome wasn’t technical – it was cultural. The Support and AI teams, who once argued endlessly, started to collaborate.

Operators suggested prompt tweaks, small improvements here and there, and the AI team implemented them.

Slowly, Adam 3.0 became something more than a tool – he became a real member of the ThemeREX Support team.

The results (August 2025)

The numbers spoke for themselves:

  • Autonomy: 21% → 33% → 51%
  • Response time: 15 min → 5 min → 2 min
  • First-reply resolution: 11% → 32% → 45%
  • “Don’t want AI” feedback: 35% → 23% → 9%

It still wasn’t even close to 100% (and probably will never be), but for us, it was a true breakthrough.

The gap between humans and AI support didn’t just shrink; it turned into collaboration. And that’s when we realized: the future of support isn’t about replacing people – it’s about building a team where live operators and AI work together.

What if AI resolved the frequent, urgent questions, and live operators handled the cases that truly require a human?

6. Adam 1.0 → 2.0 → 3.0 - The numbers that flip one’s opinion

If there’s one thing that can change opinions faster than words, it’s numbers.

And the numbers behind Adam’s evolution did exactly that.

When we look back at how far the system has come, it’s easy to see why our entire perception of AI support changed.

Here’s what truly made the difference:

  • Autonomy: 50+% of all support tickets that come to Adam 3.0 are completely handled by him and resolved successfully.
  • Response time: 2 min – customers now get answers instantly, not overnight.
  • First-reply resolution: 45% – nearly half of cases are solved on the very first AI response.
  • “Don’t want AI” feedback: 9% – skepticism is fading as trust grows.

Screenshot of Adam’s reply and feedback from the customer (Oct 2025):

But the most impressive results for ThemeREX aren’t in percentages – they’re in hours.

Adam 3.0 now saves an average of 573 support working hours every month. That’s hundreds of hours redirected toward improving our products, testing new features, and giving more attention to complex customer requests.

Live support operators now respond 2 to 3 times faster with average reply times of around 6 hours instead of 12–24.

And since Adam reduces repetitive workloads, the entire team can focus on what humans do best – empathy, analysis of AI performance, and quality improvement.

The financial impact is equally clear: Adam 3.0 cuts down support costs significantly, allowing us to reinvest those funds into building modern products like Elementra and introducing new features that shape the future of our themes.

Finally, and perhaps most importantly, our customers’ attitude toward AI support started to change.

What once caused frustration and mistrust now inspires curiosity and even appreciation. It’s safe to say: the numbers didn’t just prove Adam works – they made us believe in AI support again.

| Version | Model  | Avg Response | AI Autonomy | Notes                          |
|---------|--------|--------------|-------------|--------------------------------|
| 1.0     | GPT-4  | 15 min       | 21%         | Manual KB                      |
| 2.0     | GPT-4o | 5 min        | 33%         | Image reading, agent routing   |
| 3.0     | GPT-5  | 2 min        | 51%         | Full KB integration, dashboard |
Note: Avg Response and AI Autonomy reflect evolution across versions.

7. What have we learned (and are happy to share)?

After everything that was built, tested, broken, and rebuilt, we’ve learned one simple truth:

“AI Support can’t be a goal in itself.”

It’s not a checkbox to tick or a shiny feature to brag about. It’s a system that only works when it becomes part of the company’s DNA.

Not just a chatbot

An AI bot with a few knowledge files isn’t a solution. Real AI support must be a carefully designed architecture, fully integrated into your company’s workflow, support system, documentation, development, analytics, and even product management.

The format of knowledge matters as much as the knowledge itself

We learned that how information is stored and structured directly affects how well AI can use it.

Clean, modular, well-formatted data is the difference between a bot that “knows” and a bot that “understands”.

AI cannot, and should not, handle everything

No matter how good the model is, some cases will always require humans: custom setups, licensing, hosting, and unique configurations. There will always be exceptions. Customers must have a clear, easy path to reach a live person via chat, ticket, or email.

Some people will never like AI – and that’s okay

There will always be users who prefer a human touch, even if it takes longer. The goal of AI support isn’t to force automation on everyone. It’s to offer help that’s faster, smarter, but optional.

AI never stops evolving

Every new model release, every added feature (like reading screenshots, larger context windows, or improved reasoning) can be a game-changer. A modern AI support system should embrace these updates, not ignore them. Growth comes from iteration.


Balance is everything

We don’t believe AI will ever replace human support completely. But we’ve proven for ourselves that combining human expertise with AI’s speed and scalability creates the best of both worlds: faster responses, higher satisfaction, reduced costs, and more time for what truly matters: improving the product and caring for customers.

8. Proven Results from the Dashboard

  • 51% tickets resolved by AI
  • 2-minute average AI response time
  • 4.12 / 5.00 customer satisfaction for AI responses
  • $0.10 average cost per AI ticket vs $4.50 for human tickets
  • ≈ 0 complaints about AI speed vs 39 for human operators
  • ≈ 50% faster service overall, department-wide

9. FAQs

1. What is a chatbot?

A chatbot is a computer program that simulates conversation with users through text or voice. In customer support, it answers basic questions, guides users through steps, or connects them to human agents. Most chatbots rely on predefined scripts, which is why they often feel repetitive or unhelpful.

2. What is AI support?

AI support uses artificial intelligence to understand customer questions, find relevant answers, and assist human operators. Unlike traditional chatbots, modern AI systems like Adam 3.0 analyze real documentation, ticket history, and even screenshots to respond accurately and in context.

3. Why do people hate chatbots and AI support?

Because many early systems were poorly built. They gave generic, robotic replies, misunderstood questions, and trapped users in endless loops. Over time, this created frustration and distrust. People wanted help, not prewritten answers – and AI wasn’t ready to meet that expectation yet.

4. Are there good AI support chatbots?

Yes – but only when they’re built with real data, structure, and empathy. Tools like Adam 3.0 show that AI can now assist effectively if it’s trained on high-quality knowledge, tested with real tickets, and continuously improved alongside human agents.

5. What makes a good AI support system?

A good AI support system should be:

  • Context-aware – able to understand what the user is asking.
  • Data-structured – connected to clean, organized documentation.
  • Transparent – allowing easy transfer to human agents.
  • Human-assisted – designed to collaborate, not replace people.
  • Self-improving – learning from every solved case.

6. Can AI completely replace human support?

No. Even with advanced systems like Adam 3.0, complex or emotional cases still need human understanding. The best results come when AI handles frequent questions, and humans focus on empathy, analysis, and quality.

7. Why does AI need structured data to work well?

Because AI learns from patterns. If documentation is messy or inconsistent, the model can’t connect the right information. Clean, modular, and Q&A-formatted data helps AI reason logically instead of guessing – turning “knowledge” into real understanding.

8. How did Adam 3.0 achieve 50% autonomy?

Through better architecture. Adam 3.0 uses structured documentation, visual context (it can read screenshots), and an auto-documentation system that learns from resolved cases. Each new support ticket makes the system smarter and faster.

9. What is the main lesson from the Adam 1.0 → 3.0 journey?

That technology alone isn’t enough. The success of AI support depends on collaboration between humans and AI – building trust, refining knowledge, and improving step by step. Failure was part of the process that made the system truly useful.

10. What is ThemeREX?

ThemeREX is a WordPress development studio that creates professional website themes and templates for creative businesses, agencies, and freelancers. The company sells products on ThemeForest and its own site, ThemeRex.net.

11. Where is ThemeREX Support located?

ThemeREX Support operates remotely with a distributed international team. It provides ticket-based assistance for all verified customers via the official support portal.

12. Does ThemeREX use AI for support?


Yes. ThemeREX developed its own AI support agent, Adam 3.0, to help customers resolve issues faster. Adam works alongside human agents, handles more than half of all tickets autonomously, and continuously improves using real solved cases.

13. How does Adam work behind the scenes?

  1. A customer submits a support ticket via ThemeREX Support.
  2. They choose ‘Talk to Adam Ingram’ or ‘Connect with a Live Operator’.
  3. Adam reads the message, analyzes any screenshots, and retrieves the relevant documentation.
  4. He delivers a complete response – usually within 120 seconds.
  5. If human review is required, Adam transfers the ticket automatically through the AI router, balancing workloads across the team (see the sketch after this list).
  6. All conversations and SLA data are tracked in the dashboard.
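
Step 5 can be illustrated with a toy sketch: when a ticket needs human review, the router simply hands it to the least-loaded operator. The names and data structures here are illustrative assumptions; the real router and SLA tracking are more involved:

```python
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    open_tickets: int = 0

@dataclass
class Router:
    operators: list[Operator] = field(default_factory=list)

    def transfer(self, ticket_id: str) -> Operator:
        """Hand a ticket that Adam could not solve to the least-loaded operator."""
        target = min(self.operators, key=lambda op: op.open_tickets)
        target.open_tickets += 1
        print(f"Ticket {ticket_id} transferred to {target.name}")
        return target

# Example: three (hypothetical) operators, one escalated ticket.
router = Router([Operator("Alice"), Operator("Bob"), Operator("Chris")])
router.transfer("TRX-10234")
```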

10. Key Takeaways

1. AI support doesn’t fail because of technology – it fails because of design.

Most chatbots collapse under poor data, unclear goals, and a lack of empathy. The real breakthrough begins when AI is treated as part of the company’s system, not a side experiment.

2. Structure beats scale.


Seven well-organized documentation sets in JSON format did more for accuracy than any model upgrade. Clean, modular data is the foundation of any reliable AI support system.

3. Collaboration, not replacement, is the future.


Adam 3.0 proved that AI works best when it supports people – not when it tries to replace them. Human intuition and AI efficiency together create balance and speed that neither could reach alone.

4. Speed isn’t everything – trust is.


Even instant replies mean little if users don’t believe the bot understood them. True progress happens when users start trusting AI as a reliable teammate.

5. Every mistake teaches the model.


From Adam 1.0’s failure came the logic of Adam 2.0, and from there, the self-learning engine of Adam 3.0. Growth in AI support is not linear; it’s iterative, data-driven, and humbling.

6. The journey never ends.


AI evolves with every new model, every ticket, every user. Support teams that keep refining and retraining their systems will stay ahead – not because they automate, but because they learn faster.
