
News sites are awash with reports of hallucinations produced by the current crop of large language model (LLM) Artificial Intelligence (AI) platforms. Systems like Google's Bard, Microsoft's Bing Chat, and OpenAI's ChatGPT occasionally respond confidently with answers that are simply wrong. Nothing in the model's training data justifies the incorrect response, and researchers and engineers often cannot say why the machine got it wrong.

Machine learning (ML) systems are probabilistic: they operate on probabilities. For any input, the system produces an output (or none at all), along with a value, usually between zero and one hundred, indicating its confidence in that result. The ideal situation is always a correct result coupled with high confidence; an incorrect result delivered with high confidence is the worst possible situation.
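
To make that concrete, here is a minimal Python sketch of how an application might act on a prediction and its confidence score. The predict() stub and the 85-point threshold are illustrative assumptions for this example, not any particular product's API:

```python
CONFIDENCE_THRESHOLD = 85  # assumed business threshold; tune per use case

def predict(document: str) -> tuple[str, float]:
    """Stand-in for a real ML model: returns (label, confidence 0-100)."""
    return ("invoice", 92.4)  # hard-coded for illustration

def route(document: str) -> dict:
    """Accept confident predictions; flag low-confidence ones for review."""
    label, confidence = predict(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"label": label, "confidence": confidence, "needs_review": False}
    # Low confidence: keep the model's guess, but send it to a person.
    return {"label": label, "confidence": confidence, "needs_review": True}

print(route("scanned_page.pdf"))
# {'label': 'invoice', 'confidence': 92.4, 'needs_review': False}
```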

Ultimately, can you trust the machine? In the case of LLMs, maybe not right now. However, we all use ML AI systems for tasks more mundane than, say, composing poems about the estate tax. Although they share characteristics with ChatGPT and Bard, trusting the results of applications like intelligent document processing (IDP) is both practical and possible.


Three concepts of AI system performance are essential for building trust: Transparency, Explainability, and Feedback. Let's take a deeper look at each.

1. Transparency
First, every AI system has an objective: a set of goals the system is designed to achieve. For an IDP AI, the objective is specific: understand the general form of the document (classify) and find a set of specific business-related values associated with that document type (extract).
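
As a hedged illustration of that two-step objective, the sketch below classifies a document and then extracts the fields defined for its type. The stub functions and field names are assumptions made for the example, not a real IDP API:

```python
# Fields to extract for each document type (illustrative only).
FIELDS_BY_TYPE = {
    "invoice": ["invoice_number", "invoice_date", "total_amount"],
    "purchase_order": ["po_number", "vendor_name", "order_date"],
}

def classify(document: str) -> tuple[str, float]:
    """Stand-in classifier: returns (document type, confidence 0-100)."""
    return ("invoice", 96.0)

def extract(document: str, field: str) -> tuple[str, float]:
    """Stand-in extractor: returns (field value, confidence 0-100)."""
    return ("<value>", 90.0)

def process(document: str) -> dict:
    doc_type, type_conf = classify(document)      # step 1: classify
    fields = {
        f: dict(zip(("value", "confidence"), extract(document, f)))
        for f in FIELDS_BY_TYPE.get(doc_type, [])  # step 2: extract
    }
    return {"type": doc_type, "type_confidence": type_conf, "fields": fields}
```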

Transparency means that we, as the users and administrators of the system, can understand how the system works, what training data it uses, and how that data set was designed to lead the system to a successful conclusion. Without transparency, we cannot understand the system or its limitations, and interactions with it can be frustrating. With transparency comes trust.

2. Explainability
The ability to describe, in detail, exactly how an AI system arrives at its decisions is known as explainability. Explainability is notably missing from LLMs and other complex AI systems: engineers cannot trace the path through the trained network that leads to any specific result.

With IDP ML systems, it is practical to understand why the system made a particular decision and how it arrived at that decision; they are explainable. If I can explain it, I can trust it.
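
Illustrative only: one way an explainable IDP system can justify an extraction is by recording the evidence behind each value. The record below is an assumption made for the sake of the example, not a specific product's output:

```python
# A hypothetical evidence record for one extracted field.
extraction_explanation = {
    "field": "total_amount",
    "value": "1,482.00",
    "confidence": 97.1,
    "evidence": {
        "page": 2,
        "anchor_text": "Total Due:",            # label the model matched on
        "bounding_box": [412, 688, 540, 706],   # where the value was found
        "similar_training_examples": 38,        # labeled samples that agree
    },
}
```

Given a record like this, a person can verify the decision by opening page 2, looking near "Total Due:", and confirming the value, which is exactly what makes the result explainable and therefore trustworthy.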

3. Feedback
Feedback is the process of analyzing low-confidence results, combining that analysis with explanations, and using the information to improve the system's performance. IDP AI models concentrate on identifying a document's type and then extracting useful information from it. If the type is wrong, the extracted data is most likely incorrect as well.

With processes in place to tell the system, manually or automatically, what it got right and what it got wrong, model performance continually improves, and trust in the system naturally increases. Trust comes with the proper application of feedback.
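
As a rough sketch of such a process, the snippet below captures human corrections and retrains once enough feedback accumulates. The helpers and the batch size of 100 are hypothetical, not a prescription:

```python
training_set: list[tuple[str, str]] = []  # (document, corrected label)
RETRAIN_BATCH = 100  # assumed number of corrections before retraining

def record_feedback(document: str, predicted: str, corrected: str) -> None:
    """Capture a human correction so the next model version can learn from it."""
    if corrected != predicted:  # the model got it wrong
        training_set.append((document, corrected))
    if len(training_set) >= RETRAIN_BATCH:
        retrain(training_set)   # hypothetical retraining entry point
        training_set.clear()

def retrain(examples: list[tuple[str, str]]) -> None:
    print(f"retraining on {len(examples)} corrected examples")
```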

In Conclusion
As an engineer implementing IDP ML systems, I consider it important to educate administrators and end users on the concepts of transparency, explainability, and feedback. I want everyone involved to trust the system. AI endeavors cannot succeed without that trust.

 
