Insights

Latest industry news and insights curated by the NewTide team.

What is “Explainability” and Why Human Oversight Alone Doesn’t Cut It in AI Systems

Published on June 13, 2025

As enterprises embrace AI across critical workflows in finance, logistics, supply, trading, and credit, the question isn’t just how well these systems perform. It’s whether we can trust them, and, as any Reagan-era Gen Xer would say, trust but verify. For trust to take root, leaders must grapple with a central tension: how to ensure both explainability and human oversight without creating bottlenecks or accepting the illusion of control.

For me, the easiest way to grasp “explainability” is to remember teachers taking points off a test because we didn’t “show our work” to explain how we arrived at our answer. It wasn’t enough to provide the answer; you had to prove you knew how to get there, demonstrating full understanding of the concepts being tested. Explainability applies this same requirement to the answers AI systems provide. Put simply, in critical situations we need them to “show us their work.”

A new MIT Sloan Management Review and Boston Consulting Group report, “AI Explainability: How to Avoid Rubber-Stamping Recommendations”, highlights this dilemma. Based on a survey of 1,221 global executives and insights from an expert panel of academics and practitioners, the article explores whether effective human oversight can reduce the need for AI explainability—or whether both are essential.

The Central Finding: Oversight and Explainability Are Complements, Not Substitutes

More than three-quarters of panelists disagreed with the idea that human oversight makes explainability less necessary. Instead, they argued that oversight and explainability are “mutually reinforcing pillars” of responsible AI governance.

Key insights:

  • Explainability builds trust: Without understanding why an AI system made a recommendation, human reviewers risk becoming passive validators, rubber stamps in a black-box process.
  • Oversight needs grounding: Oversight that lacks transparency can create a “dangerous illusion of control,” as MIT’s Elizabeth Renieris puts it.
  • Societal values matter: Beyond operational checks, explainability supports autonomy, fairness, and due process, especially in high-stakes domains like HR and credit, and particularly in our industry, where personal first-party data is used, as in most loyalty applications.

Practical Tensions: When Explainability Isn’t Feasible (or Useful)

Despite this strong consensus, several voices in the study—such as Katia Walsh (Apollo Global Management) and Teemu Roos (University of Helsinki)—highlighted the limits of explainability:

“AI reasoning systems have reached a level of sophistication and complexity that rivals the human brain.” – Katia Walsh

This view reflects a growing discomfort with what some call “explainability theater”—where visualizations or simplified rationales may look reassuring but fail to offer real insight or control. As AI models scale in complexity (especially deep learning systems), the challenge becomes: Can we make the important understandable, or only the understandable important?

Implications for Enterprise AI Deployment

For organizations deploying AI in production workflows, the debate isn’t academic—it’s operational. From pricing automation to fraud detection to decision support agents, explainability governs how teams review, override, or course-correct AI behavior.

NewTide’s perspective:

  1. Agent frameworks must expose reasoning steps: Whether through logs, intermediate states, or confidence scoring, AI agents must present their decision process in human-friendly formats.
  2. Employees in oversight roles should be trained for AI literacy: It’s not enough to know the domain—oversight reviewers need to understand model limitations, training data risks, and drift indicators.
  3. Don’t over-index on visualizations: Heat maps and saliency graphs are helpful—but only when grounded in decision logic and auditable inputs/outputs.
  4. Focus on context-aware controls: Not every decision needs the same level of explainability. Flag when outputs deviate from expected norms and escalate only when warranted.
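The four points above can be sketched as a minimal review loop. Everything here is an illustrative assumption, not a NewTide API: the `DecisionRecord` fields, the `review` function, and the thresholds are hypothetical names chosen to show how exposed reasoning steps (point 1) and context-aware escalation (point 4) might fit together.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """Hypothetical audit record: the agent exposes its inputs,
    intermediate reasoning steps, confidence, and final output."""
    inputs: dict
    reasoning_steps: list = field(default_factory=list)
    confidence: float = 0.0
    output: float = None

def review(record, expected_range, min_confidence=0.7):
    """Context-aware control: escalate only when warranted,
    rather than demanding the same scrutiny for every decision."""
    low, high = expected_range
    if record.output is None:
        return "escalate: no output produced"
    if not (low <= record.output <= high):
        return "escalate: output outside expected norms"
    if record.confidence < min_confidence:
        return "escalate: low confidence"
    return "auto-approve"

# Example: a pricing agent recommends a value with its work shown.
record = DecisionRecord(
    inputs={"sku": "diesel-2", "region": "SE"},
    reasoning_steps=["pulled 30-day demand curve",
                     "applied competitor-price adjustment"],
    confidence=0.82,
    output=4.19,
)
print(review(record, expected_range=(3.50, 4.50)))  # auto-approve
```

The point of the sketch is the shape, not the numbers: reasoning steps travel with the decision so a reviewer can inspect them, while routine in-range, high-confidence outputs pass without manual review.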

Further Reading 🔍

This debate is evolving. Here are additional resources that broaden or challenge the study’s findings:

Complementary Perspectives:

  • NIST AI Risk Management Framework (2023) – Offers a structured approach to building trustworthy AI, including explainability and human-in-the-loop processes.
  • DARPA’s XAI Program Overview – A defense-focused effort to make deep learning models more interpretable in mission-critical environments.

Critical Voices:

  • “The Mythos of Model Interpretability” – Zachary C. Lipton. A foundational critique arguing that many explainability efforts are ad hoc and lack rigor.
  • “Why We Need to Rethink Explainable AI” – Harvard Data Science Review. Suggests shifting from “explanation” to “communication” and tailoring it to context and user capability.

Blog Articles

The Battle at the Frontier of AI

How Gavin Baker’s insights point directly to the future we’re building at NewTide AI A Conversation Worth Your Time Every so often, a discussion surfaces that does more than explain technology—it provides a framework that helps leaders see where the world is actually...

Enterprise General Intelligence: The Real Frontier Beyond AGI

The wave of AI mania that has been spawned by OpenAI's launch of ChatGPT in 2022 often seems to pivot around one tantalizing question: when will artificial general intelligence (AGI) arrive? In Silicon Valley, the narrative has focused on whether large language models...

What’s beyond AI Agents?

I typically don’t like analogies because almost all the time, they are a watered-down version of whatever concept someone is trying to explain. But if there was ever a good analogy for AI Agents, it’s digital workers. Pull out your org chart. Each role on the...

Follow along on LinkedIn

Stay in the loop on the latest in fuels, convenience, and enterprise AI. Follow us on LinkedIn for insights, updates, and a peek behind the scenes at NewTide.