The Applied AI Thoughts for Realization Blog Post 3

The Impact Layers — Why AI Models Alone Don’t Deliver Value

Introduction

This is the third post in the Applied AI Thoughts for Realization series.

In the first post, Why AI Feels Overwhelming, we tackled the problem of “AI Fatigue” and the trap of Tactical Thinking—chasing the latest tools without a plan. We argued for a shift to Structural Thinking, focusing on the architecture of problems rather than just the features of models.

In the second post, A Simple Mental Model — 4 Pillars, we established the horizontal dimension of our mental model. We categorized the AI landscape into 4 Domain Pillars—Consumer, Enterprise, Science, and Physical AI—and used the “Engine vs. Vehicle” analogy to show why a “Sports Car” strategy (Consumer) fails when you need a “Cargo Train” solution (Enterprise).

With an understanding of the 4 Pillars of the AI ecosystem and how the application of AI varies across them, it is critical to understand what is required to deliver value in each pillar. This brings us to the vertical dimension of our framework: the Impact Layers, which play a critical role in delivering value within each pillar.

Why Is a Model Not Everything in an AI Solution?

There is often a disconnect when we try to adopt AI. On one hand, we see headlines about models acing exams and writing code, which makes it feel like a model is all you need to roll out an AI-driven solution. On the other hand, when enterprises or consumers start deploying AI-based solutions, they quickly realize it takes much more than just having a model. Along the way they run into challenges like:

  1. The solution is not responsive and impacts user experience.
  2. Does not produce consistent or reliable results.
  3. It’s not quite fast enough, or it produces results that need a second look.
  4. Integration with existing systems is complex and time-consuming.
  5. The cost of running the solution at scale becomes prohibitively expensive.
  6. It works in demos but fails on real-world, messy data.
  7. Users don’t trust the output enough to act on it without verification.
  8. Models occasionally “hallucinate” or make confident errors.

The question then arises: why is there such a gap between the intelligence we hear and read about in the model and the actual capability we would like to experience?

The answer lies in understanding that a “Model” is not the final “Product” or “Solution”.

A model is just raw potential, like a powerful engine sitting on a factory floor. To turn that potential into actual value, it depends on several layers of translation. It needs to be hosted on hardware, connected to tools, wrapped in an interface, and integrated into a workflow.

If any one of those layers is weak, the entire experience fails.

The “Iceberg” Theory of AI

A helpful way to visualize this is the Iceberg Theory of AI.

When you interact with an AI application—whether it’s a chatbot, a recommendation engine, or a robot—you are only seeing the tip of the iceberg.

  • Above the Water (Visible): The Application Layer. This is the user interface, the buttons, and the response time. This is what we judge.
  • Below the Water (Invisible): The massive infrastructure that supports that tip. This includes the Agents (logic), the Models (intelligence), and the Hardware (compute).

Most of the hype focuses on the “Model” layer deep underwater. But most of the failure happens in the layers between the model and the user. To understand why an AI project succeeds or fails, we need to look below the surface and examine the 5 Layers of AI Impact.

The 5 Layers of AI Impact

Progress in AI doesn’t happen all at once. It moves up this stack, layer by layer.

Layer 1: Hardware (The Foundation)

This is the physical reality of AI. It includes the GPUs (chips) that train models, the data centers that host them, and the edge devices (phones, robots) that run them.

  • Why it matters: Hardware dictates feasibility. You might have a brilliant AI model, but if it requires $10,000 of compute per hour to run, it cannot be a consumer product. If it takes 5 seconds to respond, it cannot be a self-driving car.
  • The Constraint: Cost, Energy, and Latency.

Layer 2: Models (The Intelligence)

This is what we typically call “AI.” It includes Large Language Models (LLMs), diffusion models (images), and predictive models. This layer provides the raw reasoning and pattern-matching capability.

  • Why it matters: Models dictate potential. A smarter model can solve harder problems.
  • The Constraint: Context Window (memory), Hallucination (accuracy), and Reasoning capability.

Layer 3: Agents & Tools (The Orchestration)

This is the bridge between thought and action. A model can only output text; an Agent can use that text to call a tool—like searching the web, querying a database, or clicking a button.

  • Why it matters: Agents dictate utility. Without this layer, AI is just a chatbot. With this layer, AI becomes a coworker that can book flights, write code to disk, or control a robot arm.
  • The Constraint: Reliability. If an agent gets confused and clicks the wrong button, it causes chaos.

Layer 4: Applications (The Interface)

This is the software layer where the human meets the machine. It includes the UI/UX, the workflow integration, and the “vibe” of the product.

  • Why it matters: Applications dictate adoption. A powerful agent wrapped in a confusing interface will be ignored. The best AI applications often hide the AI completely (e.g., Netflix recommendations).
  • The Constraint: Friction and Trust. Users must feel in control.

Layer 5: Impact (The Value)

This is the final result. It is not software; it is the change in the real world. Does this tool save time? Does it cure a disease? Does it increase revenue?

  • Why it matters: Impact dictates sustainability. If an AI project doesn’t generate real value (ROI or societal good), it will eventually be shut down, no matter how cool the technology is.
  • The Constraint: Human Behavior and Economics. Just because a tool exists doesn’t mean people will change their habits to use it.

The Bottleneck Theory: Why Progress is Non-Linear

The most important thing to understand about these layers is that they must work in concert.

We cannot simply “upgrade” one layer and expect the whole system to improve. In fact, the system is always limited by its weakest link.

  • Historical Example: In the 1960s, AT&T invented the Picturephone. It was a brilliant Layer 4 (Application) idea. But Layer 1 (Network Bandwidth) wasn’t ready. The product failed spectacularly.
  • Current Example: Today, we have incredible Layer 3 (Agent) concepts—AI employees that can do everything. But often, Layer 2 (Model Reliability) isn’t quite there yet; the models still hallucinate occasionally. As a result, the “AI Employee” fails to be reliable enough for critical work.

This interdependence creates a “hurdle for adoption.” You might have the budget and the desire, but if one layer in the stack is immature, your project will stall.

Guidance: The Incremental Approach

So, how do you build when the stack isn’t perfect? You adopt an Incremental Approach.

Instead of trying to build the “Ultimate AI System” that relies on every layer being perfect, you build for the layers that are ready today.

A sample scenario for how to approach an incremental build:

  1. Start with “Human-in-the-Loop” (Layer 3 Lite): Don’t try to build fully autonomous agents yet. Build “Copilots” where the AI drafts the work, and a human reviews it. This mitigates the Layer 2 (Accuracy) risk.
  2. Focus on “Low-Risk” Applications (Layer 4 Safety): Deploy AI in internal brainstorming or draft generation before putting it in front of customers.
  3. Scale as Layers Mature: As models get cheaper (Layer 1 improves) and smarter (Layer 2 improves), you gradually remove the human guardrails.
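The "Human-in-the-Loop" pattern in step 1 can be sketched in a few lines of Python. This is a minimal illustration, not a production design: `generate_draft` is a hypothetical stand-in for a real model call, and the boolean approval flag stands in for an actual review interface.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False

def generate_draft(task: str) -> Draft:
    """Stand-in for a model call that produces a first draft.
    In a real system this would invoke an LLM."""
    return Draft(content=f"[AI draft for: {task}]")

def human_review(draft: Draft, reviewer_ok: bool) -> Draft:
    """The human gate: nothing ships until a reviewer approves it."""
    draft.approved = reviewer_ok
    return draft

def copilot_workflow(task: str, reviewer_ok: bool) -> str:
    draft = generate_draft(task)
    draft = human_review(draft, reviewer_ok)
    if draft.approved:
        return draft.content  # ship the approved AI output
    return ""                 # blocked: human rejected the draft
```

The key point is structural: the model's output never reaches the outside world without passing through the human gate, which is exactly what mitigates the Layer 2 accuracy risk.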

Advantages:

  • Immediate Value: You get ROI now, rather than waiting 5 years for “AGI.”
  • Learning: Your organization learns how to work with AI data and workflows.
  • Safety: You avoid catastrophic failures by keeping humans involved.

Disadvantages:

  • Maintenance: You have to constantly update your system as the underlying layers change.
  • Process Change: It requires changing how people work (training them to use Copilots), which is often harder than just installing software.

By respecting the bottleneck, you build systems that actually work, rather than science fiction that breaks on day one.

Summary

In this post, we explored the vertical dimension of AI execution: the 5 Layers of Impact. We saw how a seemingly simple AI application is actually supported by a complex stack of Hardware, Models, Agents, and Interfaces, and why the “weakest link” in this chain often determines success. But understanding the pillars and layers is only half the picture. In the next post, “How Pillars and Layers Work Together,” we will merge the horizontal Pillars and vertical Layers into a unified perspective. This approach will allow you to predict the behavior, timeline, and constraints of any AI project by understanding how the technical layers interact differently across each domain pillar.

Author’s Note: AI-assisted writing tools were used to support the creation of this post. All concepts, perspectives, and the underlying thought process originate from me; the AI served only as a drafting and refinement aid.

Previous Post : The Applied AI Thoughts for Realization Blog Post 2

Understanding How AI Thinks (and Where It Doesn’t) – Part 1: Are LLMs Really Understanding?

From a DeepSeek article to understanding the cognitive concepts of semantics vs. reasoning in AI


Introduction

This first part captures the beginning of my thought journey. What started as reading an article about DeepSeek’s long-text technique slowly turned into a more fundamental question about what we really mean when we say an AI system “understands.”

A Simple Article That Led to a Big Question

I recently read an article about a research study that questioned a technique used by DeepSeek to help AI models read very long texts. The idea sounded impressive: compress large amounts of text so an AI can process more information at once.

But the researchers found something surprising.

The AI seemed to perform well not because it truly understood the text, but because it relied on patterns it had seen before. When those patterns were disrupted, the model struggled badly.

Even though I already had a working understanding of how LLMs work and Transformer architectures function, something about this finding triggered my interest to learn deeper. If these models were struggling the moment patterns broke down, what exactly were they doing when we say they “understand” text?

This thought triggered a deeper line of questioning in my mind — not about DeepSeek specifically, but about how we interpret progress in GenAI as a whole.

That curiosity naturally led me to ask:

Are modern AI systems really understanding, or are they just very good at guessing?

Once that question formed, it became clear that I needed to first separate two ideas that are often mixed together: semantic understanding and cognitive capability.

Semantic Understanding (Knowing What Something Means)

The first concept I needed clarity on was semantic understanding — a term frequently used but rarely unpacked.

Semantic understanding simply means understanding the meaning.

In everyday language:

It answers the question: “What does this mean?”

Large Language Models (LLMs) are exceptionally strong in this area.

They can:

  • Read a paragraph and explain it
  • Summarize documents
  • Translate languages
  • Recognize relationships between ideas

For instance, when an AI explains a legal document or summarizes a report, it is exercising semantic understanding. In many ways, this mirrors how humans comprehend words and sentences.

However, as I reflected on the DeepSeek article, an important limitation became obvious.

Semantic understanding stops at meaning.

It explains what is being said, but it does not decide what should happen next.

That realization naturally pushed me toward the next question: if understanding meaning is not enough, what role does reasoning actually play?

Reasoning Models (Thinking Better, Not Acting Better)

At this point, my attention shifted to reasoning models, often marketed as “thinking” AI.

These models are designed to show their work. They break problems into steps, apply logic, and produce more structured explanations.

On the surface, this feels like a major leap forward — and in many ways, it is.

But when I looked more carefully, I noticed that reasoning models still revolve around a single question:

“What is the best response to this input?”

Even with better logic, they still do not:

  • Choose goals (which is critical for decision-making — without goals, outputs remain just well-organized facts)
  • Take responsibility for outcomes
  • Act independently in the world

So while reasoning models think better, they don’t actually decide.

This insight clarified something important for me: reasoning improves semantic structure, but it still operates within the same boundary.

That naturally led to the next question — if neither understanding nor reasoning decides action, then what does?

Part 1 Conclusion: A Boundary Becomes Visible

By the end of this first part, one boundary had become very clear to me.

Understanding meaning and reasoning about it — even in sophisticated ways — does not automatically lead to decision-making or action. Something else is required.

In the next part, I will share my learning about the missing layer: cognitive capability, and why AI agents represent an important architectural shift rather than just a smarter model.

The ERP Awakening: From System of Record to System of Intelligence

The Foundation of Stability

For the last 30 years, the enterprise software industry has focused on one massive engineering achievement: Stability.

Enterprises have implemented SAP, Oracle, and Microsoft Dynamics to serve as the bedrock of their operations. They optimized for the “System of Record”—an immutable, reliable vault where every transaction is stamped, stored, and secured. In this regard, the strategy succeeded. The foundation is solid.

The Challenge: Data Rich, Insight Constrained

However, a vault is designed to keep things in, not necessarily to let insights out.

Today, the modern ERP operates like a massive, well-organized reference library. It contains all the answers—”Why is margin down?”, “Which supplier is late?”—but finding them requires users to walk the aisles, pull specific files (T-Codes), and decode complex rows of data. This architecture creates three distinct layers of operational friction:

  1. The Insight Latency: Business leaders cannot ask questions directly. They often rely on technical intermediaries to build reports, leading to a “time-to-insight” gap of days or weeks.
  2. The Productivity Burden: Skilled professionals spend hours on high-volume, manual tasks—drafting standard emails, visually verifying invoices against purchase orders when there is an exception, or creating requisition forms.
  3. The Execution Variance: Critical workflows can experience delays due to minor “micro-stops”—like a pricing discrepancy of a few cents—that require manual human intervention to clear.

While the enterprise possesses the data, it often lacks the agility to act on it instantly.

Moving from System of Record to System of Intelligence

If the modern ERP is a comprehensive library, the operational bottleneck lies in the absence of a guide. Users are currently forced to act as their own researchers—navigating complex schemas and table structures just to retrieve basic facts.

Hence the strategic value of Generative AI lies not in replacing the library (the ERP), but in providing an intelligent Librarian to navigate it. By layering cognition over storage of records, enterprises can transition from a passive System of Record to an active System of Intelligence.

The “Three Stages” of Change

To make this transition actionable, organizations should view the evolution from a System of Record to a System of Intelligence not as a single leap, but as three distinct stages of maturity. Each stage builds trust and capability, moving from passive insight to active orchestration.

Stage 1: Synthesizing Intelligence (The Conversational Analyst)

  • Key Objective: To democratize access to complex ERP data, enabling “self-service” analytics without technical dependency.
  • Strategic Rationale: The primary bottleneck in most enterprises is “Insight Latency.” Business users face a barrier to entry—they do not know the technical schema required to query the ERP. The first step is to remove this friction by allowing natural language interrogation of the data.
  • Execution Strategy: Enterprises implement Text-to-SQL layers that act as a “universal translator.” Instead of navigating menus, users query the database using natural language. The system translates the intent into a precise SQL or OData query.
  • Tangible Impact:
    • Use Case: A Regional CFO needs to understand a sudden variance in APAC logistics costs. Instead of commissioning a BI report (3-day lag), they ask the system directly and receive a visual breakdown of freight surcharges in seconds.
    • Outcome: Zero time-to-insight for ad-hoc queries.
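As a rough sketch of the "universal translator" idea, the routing step can be illustrated with hand-written rules standing in for the LLM. In a real Text-to-SQL layer the model would be prompted with the ERP schema and its output validated before execution; the table and column names below (`logistics_costs`, `deliveries`) are purely hypothetical.

```python
def text_to_sql(question: str) -> str:
    """Toy natural-language-to-SQL router.

    A production system would prompt an LLM with the database schema and
    validate the generated SQL; these keyword rules merely stand in for
    the model so the end-to-end flow is visible.
    """
    q = question.lower()
    if "logistics cost" in q and "apac" in q:
        # e.g. "Why did APAC logistics costs spike last month?"
        return ("SELECT cost_type, SUM(amount) AS total "
                "FROM logistics_costs WHERE region = 'APAC' "
                "GROUP BY cost_type ORDER BY total DESC")
    if "supplier" in q and "late" in q:
        return ("SELECT supplier_id, expected_date, actual_date "
                "FROM deliveries WHERE actual_date > expected_date")
    raise ValueError("No translation rule for this question")
```

The translated query is then executed read-only against the ERP, which is what keeps Stage 1 low risk.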

Stage 2: Augmenting Operations (The Generative Assistant)

  • Key Objective: To standardize communication and documentation while significantly increasing workforce velocity.
  • Strategic Rationale: Once users have insight, they must act on it. Often, this action involves creating content—emails, contracts, or summaries. This stage focuses on removing the “Blank Page” fatigue that drains high-value human talent on low-value drafting tasks.
  • Execution Strategy: This involves Content Generation through Context Injection. The architecture feeds specific transaction data (such as open Purchase Orders or vendor contracts) into the LLM prompt, instructing it to draft content based on that specific reality for human review.
  • Tangible Impact:
    • Use Case: A procurement team needs to send dunning emails to 50 suppliers regarding late shipments. The Assistant auto-drafts 50 unique emails, each referencing the specific PO number, delay duration, and relevant penalty clauses from the master contract.
    • Outcome: Massive productivity gains and strict legal/policy compliance in external communications.
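The context injection step can be sketched as a prompt builder that grounds the draft in real transaction data. The field names (`po_number`, `penalty_clause`, and so on) are illustrative rather than a real ERP schema, and the actual LLM call is omitted.

```python
def build_dunning_prompt(po: dict, contract: dict) -> str:
    """Context injection: ground the LLM prompt in actual transaction
    data so the draft references real PO numbers and penalty clauses.
    Field names are illustrative, not a real ERP schema."""
    return (
        "Draft a polite but firm dunning email to the supplier.\n"
        f"PO number: {po['po_number']}\n"
        f"Days late: {po['days_late']}\n"
        f"Penalty clause: {contract['penalty_clause']}\n"
        "The email must cite the PO number and the applicable clause, "
        "and request a revised delivery date."
    )

prompt = build_dunning_prompt(
    {"po_number": "PO-4711", "days_late": 12},
    {"penalty_clause": "0.5% of order value per week of delay"},
)
# `prompt` would then be sent to the LLM; a human reviews the draft before it is sent.
```

Because every fact in the prompt comes from the ERP record rather than the model's memory, the drafted email stays anchored to the specific contract and cannot drift into invented terms.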

Stage 3: Autonomous Orchestration (The Process Agent)

  • Key Objective: To achieve “Zero-Touch” processing for routine variances, freeing human capital for complex problem-solving.
  • Strategic Rationale: Speed is often lost to minor details. Traditionally, any error—no matter how small—halts the process for human review. This stage shifts the paradigm to “Management by Exception,” where the system autonomously resolves routine problems, leaving only complex strategic decisions for human experts.
  • Execution Strategy: Deploying Agentic Automation. Autonomous agents are granted write-access to specific API endpoints and governed by strict policy logic (e.g., “If variance < $5, then approve”).
  • Tangible Impact:
    • Use Case: The Accounts Payable close is stalled by hundreds of “micro-variances” where invoice totals differ from POs by cents due to rounding errors. The Orchestrator scans, verifies the tolerance policy, and posts the clearing documents automatically.
    • Outcome: A faster financial close and a shift of human effort from data entry to strategic relationship management.
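The policy logic that gates such an agent can be sketched in a few lines. The $5 tolerance and the status strings are assumptions for illustration; a real deployment would read tolerances from the ERP's configuration and post clearing documents through governed API endpoints.

```python
from decimal import Decimal

TOLERANCE = Decimal("5.00")  # assumed policy: auto-clear variances under $5

def resolve_invoice(invoice_total: str, po_total: str) -> str:
    """Management by exception: clear micro-variances automatically and
    escalate everything else to a human expert.

    Amounts are parsed as Decimal to avoid float rounding errors in
    financial arithmetic.
    """
    variance = abs(Decimal(invoice_total) - Decimal(po_total))
    if variance < TOLERANCE:
        return "AUTO_CLEARED"        # within tolerance: post clearing document
    return "ESCALATE_TO_HUMAN"       # outside policy: needs expert review
```

Using `Decimal` rather than floats matters here: the whole point of the use case is cent-level variances, and binary floating point cannot represent many of them exactly.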

The Engineering Challenge: Building Trust

While this transition unlocks immense potential, it forces IT departments to confront a fundamentally new maintenance paradigm: the shift from managing deterministic code to governing probabilistic behaviors.

In traditional systems, if a report generates a wrong number, it is usually a bug in the code that can be traced, patched, and redeployed. In the era of AI, systems face Probabilistic outcomes. A model might generate a slightly different answer depending on context.

This requires new “safety rails”:

  • Glass Box UI: Systems must always show the user where the answer came from (citations).
  • Human-in-the-Loop: For high-stakes actions (like paying a vendor), the AI should draft the proposal, but a human must execute the final approval.
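A glass-box response can be as simple as refusing to return an answer without its sources attached. This is a minimal sketch of the idea; a real implementation would carry structured citations (document IDs, table rows) through the retrieval pipeline rather than plain strings.

```python
def answer_with_citations(answer: str, sources: list[str]) -> str:
    """Glass-box output: every answer carries the records it was derived
    from, so users can verify it before acting on it."""
    if not sources:
        raise ValueError("Refusing to return an uncited answer")
    return f"{answer}\n[Sources: {'; '.join(sources)}]"

out = answer_with_citations(
    "Freight surcharges drove the APAC variance.",
    ["INV-1001", "PO-2002"],  # hypothetical source document IDs
)
```

Making citations mandatory at the code level, rather than optional UI decoration, is what turns "show your sources" from a guideline into a guarantee.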

The Path Forward

The journey to a GenAI-augmented ERP is an architectural evolution, not a “rip-and-replace” project. To manage risk and ensure successful adoption, enterprises should align their implementation roadmap with the three-stage maturity model defined above.

By starting with Stage 1 (Insight), organizations can validate data accuracy and build user trust in a safe, read-only environment. Once confidence is established, they can advance to Stage 2 (Creation), introducing productivity gains while maintaining human oversight. Finally, only after proving stability, should they progress to Stage 3 (Action) for autonomous processing. This measured evolution ensures that capability grows alongside governance, minimizing operational risk while maximizing business value.

At 1CloudHub, we work closely with enterprise customers to help them navigate this path to maturity through our consulting services, solutions, and products that accelerate the pace of augmenting ERP systems with GenAI.

Coming Up – Navigating Day 1 Challenges

In the next post, the focus will shift to the foundation. Before building these intelligent layers, enterprises need to ensure their data is ready to support them. The discussion will cover practical strategies for Data Hygiene and how to start small with “Sidecar” pilots.

Author’s Note: AI-assisted writing tools were used to support the creation of this post. All concepts, perspectives, and the underlying thought process originate from me; the AI served only as a drafting and refinement aid.

  • Coming Up: Post 2 – Navigating Day 1 Challenges: The Practical Reality of Implementation.