The Applied AI Thoughts for Realization Blog Post 2

A Simple Mental Model — How I Break the AI World into 4 Pillars

Introduction

In my previous post, I shared the need to shift from Tactical Thinking (chasing tools) to Structural Thinking (understanding the landscape) in order to make sense of the AI world. In this post, we will build the foundation of that structure.

When we talk about “Applied AI,” it is easy to get fixated on the “AI” part—the models, the algorithms, the neural networks. But in the real world, when we try to adopt AI, the model is often just one part of the equation.

Applied AI is not just about Models; it is a system.

To make AI work, you need more than just intelligence. You need data pipelines, user interfaces, safety guardrails, integration logic, and hardware infrastructure. You need to consider the human who uses it and the environment where it operates. When you look at the full picture, you realize that “AI” is just one ingredient in a complex recipe. And just like in cooking, the same ingredient (AI) produces a completely different result depending on what else you mix it with and how you serve it.

The Core Concept: Why “AI” Is Not One Thing

The biggest mistake organizations and individuals make is treating AI as a monolithic wave—assuming that the same rules, timelines, and strategies apply everywhere. They ask generic questions like “When will AI replace jobs?” or “Is AI safe?”

These questions do not have a simple, straightforward answer, because adopting AI is never about just one thing called “AI.”

The Analogy: The Engine vs. The Vehicle

Consider an AI model (like GPT-4 or Claude) as a high-performance engine. An engine is a sophisticated core component, yet it provides no transportation utility on its own. To function effectively, it requires a chassis, wheels, a steering system, and an operator. It must be integrated into a complete vehicle.

Imagine attempting to solve every transportation challenge with a single strategy: “Install a high-performance sports car engine.”

  • On a racetrack (Consumer): This approach works perfectly; speed is the primary objective.
  • Plowing a field (Enterprise/Industrial): A high-revving engine is ineffective; the requirement is torque, traction, and sustained power under load.
  • Transporting cargo across an ocean (Logistics): Raw speed is irrelevant compared to fuel efficiency, durability, and massive scale.
  • Exploring the surface of Mars (Frontier/Science): A standard combustion engine will fail instantly due to environmental constraints; the need is for rugged autonomy and specialized engineering.

This is exactly how Applied AI works. The “Engine” (the intelligence) might be similar across different use cases, but the “Vehicle” (the application) must be radically different depending on the terrain. Sometimes even the engine itself has to be modified for a given use case.

This principle applies directly when rolling out AI-driven applications. Different applications require fundamentally different architectures, not just different features. Continuing with the vehicle analogy, the section below maps the four AI pillars to different vehicle types:

  • Consumer AI (The Sports Car): Optimized for high velocity, agility, and individual engagement. The priority is reducing user friction and maximizing experience.
  • Enterprise AI (The Cargo Train): Engineered for massive scale, unwavering reliability, and strict governance. The priority is secure, consistent throughput on defined rails.
  • Science AI (The Deep-Sea Submersible): Purpose-built for extreme precision in unexplored environments. The priority is navigating high-complexity domains to extract novel insights rather than speed.
  • Physical AI (The Industrial Rover): Designed for real-world interaction where the cost of failure is physical. The priority is safety, sensor integration, and navigating dynamic, unstructured environments.

If you try to apply “Sports Car” thinking to a “Cargo Train” problem, you will crash. This is why we need to break the AI landscape into 4 Pillars.

The 4 Pillars of Applied AI

Now that we have explored the vehicle analogy, it is clear why AI cannot be treated as a single entity when adopting and applying it. The architecture, stack, and strategy must vary based on fundamentally different challenges: speed vs. reliability, user delight vs. regulatory compliance, and digital outputs vs. physical safety. We can categorize these adoption patterns into four distinct pillars.

Pillar 1: Consumer AI

This is the AI that touches our daily lives. It is fast, personal, and often creative.

  • The Goal: Enhance individual productivity, creativity, or entertainment.
  • The Constraint: User Experience (UX) and Latency. If it takes 10 seconds to reply, users walk away. If it’s hard to use, they ignore it.
  • The “Vehicle Analogy”: The Sports Car. It’s about speed, style, and the driver’s feeling.
  • Real-World Examples:
    • ChatGPT / Claude: Chatbots that help you write emails or plan trips.
    • Midjourney: Tools that generate art from text.
    • Siri / Alexa: Voice assistants that manage your home.

Pillar 2: Enterprise AI

This is the AI that powers businesses and organizations. It is serious, governed, and integrated.

  • The Goal: Automate processes, analyze data, and augment knowledge work at scale.
  • The Constraint: Accuracy, Security, and Integration. A chatbot that hallucinates a discount code is annoying; a financial AI that hallucinates a revenue number is a lawsuit. It must connect securely to internal data.
  • The “Vehicle Analogy”: The Cargo Train. It carries a heavy load, runs on fixed rails (processes), and reliability is more important than 0-60 mph speed.
  • Real-World Examples:
    • Customer Support Bots: Systems that handle thousands of refund requests automatically.
    • Code Copilots: Tools that help developers write secure code faster.
    • Legal Document Analysis: AI that reviews contracts for risks.

Pillar 3: Science & STEM AI

This is the AI that pushes the boundaries of human knowledge. It is precise, computationally expensive, and transformational.

  • The Goal: Accelerate discovery in biology, physics, chemistry, and math.
  • The Constraint: Precision and Complexity. “Good enough” isn’t acceptable here. The AI must model the laws of physics or biology accurately.
  • The “Vehicle Analogy”: The Deep-Sea Submersible or Space Rover. It goes where humans physically cannot, exploring the unknown depths of data.
  • Real-World Examples:
    • AlphaFold: AI that predicts protein structures, revolutionizing biology.
    • Weather Forecasting Models: AI that predicts extreme weather events with higher accuracy than traditional physics models.
    • Material Science Discovery: AI finding new battery materials.

Pillar 4: Physical AI

This is the AI that leaves the screen and enters the real world. It is the hardest pillar because the real world is messy and unforgiving.

  • The Goal: Interact with physical objects, navigate environments, and perform manual tasks.
  • The Constraint: Safety and Physics. If a chatbot makes a mistake, you get bad text. If a robot makes a mistake, it breaks something or hurts someone.
  • The “Vehicle Analogy”: The Industrial Robot or Autonomous Truck. It must be rugged, aware of its surroundings, and fail-safe.
  • Real-World Examples:
    • Waymo / Tesla FSD: Autonomous vehicles navigating traffic.
    • Warehouse Robots: Amazon’s robots moving packages.
    • Humanoid Robots: Emerging robots designed to fold laundry or work in factories.

Why This Distinction Matters

You might ask, “Why not just categorize AI by what it does—like Text AI vs. Image AI?”

Categorizing by modality (text, image, video) tells you what the tool is, but it doesn’t tell you how to manage it. A text model used to write a poem (Consumer) behaves completely differently from a text model used to summarize a medical record (Enterprise).

By categorizing by Pillar, you gain a clearer understanding of what to expect. You can immediately identify the constraints, timelines, and success metrics that apply to your specific AI project.

1. Different Speeds of Innovation

  • Consumer AI moves at the speed of software. New apps launch weekly.
  • Physical AI moves at the speed of hardware and safety regulation. It takes years to certify a robot or a self-driving car.
  • Mistake to Avoid: Don’t get frustrated that your warehouse robots aren’t improving as fast as ChatGPT. They are in a different pillar with different friction.

2. Different Measures of Success

  • Consumer AI is measured by engagement and delight.
  • Enterprise AI is measured by ROI, accuracy, and cost-savings.
  • Science AI is measured by breakthroughs and new knowledge.
  • Mistake to Avoid: Don’t judge a scientific model by its user interface, or an enterprise tool by how “fun” it is to chat with.

3. Different Risk Profiles

  • If a Consumer image generator makes a weird picture, it’s a meme.
  • If an Enterprise legal bot hallucinates a clause, it’s a liability.
  • If a Physical robot fails, it’s a safety hazard.

When you know which pillar a project belongs to, you can immediately anticipate:

  • What constraints will dominate (speed? safety? accuracy?)
  • What stakeholders will be involved (users? regulators? scientists?)
  • What timeline is realistic (weeks? months? years?)
  • What failure modes to expect (bad UX? compliance issues? physical harm?)

Instead of discovering these answers the hard way—through trial and error—the pillar framework lets you predict them upfront. This is the “predictive power” of structural thinking: you’re not just reacting to problems; you’re anticipating them before they occur.
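
To make this predictive power concrete, here is a minimal Python sketch. It is an illustrative encoding invented for this post (the pillar names come from the framework; every field value is a rough summary, not a prescription):

  # A hypothetical encoding of the 4-pillar framework as a lookup table.
  # The pillar names come from this post; the field values summarize the
  # constraints discussed above and are illustrative, not exhaustive.
  PILLARS = {
      "consumer": {
          "dominant_constraints": ["latency", "user experience"],
          "stakeholders": ["end users", "product teams"],
          "realistic_timeline": "weeks",
          "typical_failure_mode": "bad UX and churn",
      },
      "enterprise": {
          "dominant_constraints": ["accuracy", "security", "integration"],
          "stakeholders": ["employees", "IT", "compliance"],
          "realistic_timeline": "months",
          "typical_failure_mode": "compliance issues",
      },
      "science": {
          "dominant_constraints": ["precision", "compute cost"],
          "stakeholders": ["researchers", "domain experts"],
          "realistic_timeline": "months to years",
          "typical_failure_mode": "invalid results",
      },
      "physical": {
          "dominant_constraints": ["safety", "physics"],
          "stakeholders": ["operators", "regulators"],
          "realistic_timeline": "years",
          "typical_failure_mode": "physical harm",
      },
  }

  def anticipate(pillar: str) -> dict:
      """Return the expectations profile for a project in the given pillar."""
      return PILLARS[pillar]

  print(anticipate("enterprise")["dominant_constraints"])
  # -> ['accuracy', 'security', 'integration']

The point is not the code itself but its shape: once the pillar is identified, the rest of the row follows.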

When you understand which pillar you are operating in, you stop applying the wrong rules to the game. You stop trying to drive a tractor like a Ferrari.

This clarity transforms how you approach any AI initiative. Rather than asking the vague question “How do we adopt AI?”, you can now ask the precise question: “Which pillar does this project belong to, and what does that tell us about how to execute it?”

For example, if your company wants to build an internal knowledge assistant for employees, you know immediately that you are in the Enterprise pillar. This means:

  • You will need to prioritize data security and access controls from day one
  • The AI must integrate with your existing identity management and document systems
  • Hallucinations are not just annoying—they could spread misinformation across your organization
  • Your success metric is not “how engaging is the chat” but “how much time did we save” and “how accurate are the answers”
  • You should expect a 3-6 month rollout, not a weekend prototype

Contrast this with building a creative writing assistant for novelists, which sits in the Consumer pillar. There:

  • Speed and personality matter more than perfect accuracy
  • Users expect a delightful, intuitive interface
  • Your success metric is user retention and satisfaction
  • You can iterate weekly based on user feedback

The same underlying language model could power both applications, but the vehicles you build around that engine are completely different. The pillar framework gives you this insight before you write a single line of code or sign a single vendor contract.
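
To make the “same engine, different vehicle” point concrete, here is a hedged Python sketch; every field name and value below is invented for illustration, but it shows how the Enterprise and Consumer wrappers around the same model diverge:

  from dataclasses import dataclass, field

  @dataclass
  class VehicleConfig:
      """The 'vehicle' built around the model engine; all values illustrative."""
      temperature: float          # creativity vs. determinism
      require_citations: bool     # ground answers in retrieved documents
      access_control: bool        # enforce identity and document permissions
      log_for_audit: bool         # keep an audit trail for compliance
      success_metrics: list[str] = field(default_factory=list)

  # Enterprise knowledge assistant: accuracy, security, auditability first.
  enterprise_assistant = VehicleConfig(
      temperature=0.1,
      require_citations=True,
      access_control=True,
      log_for_audit=True,
      success_metrics=["time saved", "answer accuracy"],
  )

  # Consumer creative-writing assistant: speed and personality first.
  consumer_assistant = VehicleConfig(
      temperature=0.9,
      require_citations=False,
      access_control=False,
      log_for_audit=False,
      success_metrics=["retention", "satisfaction"],
  )

  # The same underlying model could power both; only the vehicle differs.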

Summary

In this post, we established the first fundamental layer of structural thinking: the 4 Pillars of Applied AI—Consumer, Enterprise, Science, and Physical. We explored how the same underlying AI “engine” produces radically different outcomes depending on the “vehicle” it powers. Most importantly, we learned that knowing which pillar your project belongs to allows you to predict its constraints, stakeholders, timelines, and failure modes before you begin.

But identifying the right pillar is only half the story. Even within a single pillar, AI projects succeed or fail based on how well the underlying layers—from hardware to models to applications—work together. In the next post, “The Impact Layers — How AI Progress Actually Happens,” we will dive beneath the surface to explore the 5-layer stack that determines whether AI potential translates into real-world value, and why even the smartest model can fail if a single layer is weak.

Author’s Note: AI-assisted writing tools were used to support the creation of this post. All concepts, perspectives, and the underlying thought process originate from me; the AI served only as a drafting and refinement aid.

The Applied AI Thoughts for Realization Blog Post 1

Why AI Feels Overwhelming — And Why That’s the Wrong Way to Look at It

Introduction

Whether you follow AI-related developments closely or are in the early stages of understanding the AI landscape, it has become very hard to track what is happening in the space and to work out how new technologies, tools, techniques, and solutions can be applied to your own domain and use case ideas. For those who follow the AI space closely, every morning it feels like the landscape of Artificial Intelligence has shifted overnight. You wake up to a barrage of headlines: a new Large Language Model (LLM) that crushes previous benchmarks, a new image generator that renders reality perfectly, or a new agentic tool that promises to automate your entire workflow.

For engineers, leaders, and decision-makers, this constant acceleration often triggers a mix of excitement and anxiety. There is a pervasive fear of falling behind—the sense that if you don’t master this specific tool released today, you will be obsolete tomorrow. This is “AI Fatigue,” and it is the natural result of trying to drink from a firehose without a cup.

The objective of this blog series, Applied AI Thoughts for Realization, is to help readers put down the firehose and step back. The goal is not to cover the latest news or review the newest tools. Instead, it is to provide you with a structured mental model—a way to organize the chaos into a coherent map.

Over the course of this series, I will try to avoid the hype cycles and focus on a first-principles approach to understanding the AI landscape. I will help you explore how to categorize AI into distinct “Domain Pillars” based on where it is applied, and how to understand the dependencies and progress within those pillars through specific “Impact Layers.”

By the end of this series, you won’t just have more information; you will have a mental model framework. When you encounter any news about new AI developments or innovations—whether it’s a breakthrough in consumer gadgets, an enterprise platform launch, or a scientific research milestone—you will be able to instantly map it to its specific domain pillar and identify which layer it operates within. This clarity will help you understand not just what the announcement is, but where it fits in the broader landscape, why it matters in that context, and whether it’s relevant to your work.

The Problem This Series Helps to Solve: The Trap of Tactical Thinking

Imagine you decide to build a house. You walk into a massive hardware store, credit card in hand.

On Monday, you buy a power drill because the salesperson says it’s the fastest one ever made. On Tuesday, you see a new type of saw that uses lasers, so you buy that too. On Wednesday, you hear about a revolutionary type of hammer, so you rush back to the store.

By the end of the week, your garage is full of cutting-edge tools. You are exhausted from researching specs and comparing brands. But when you look at your empty lot, you realize a painful truth: You haven’t laid a single brick. You have a collection of tools, but you don’t have a blueprint.

This is exactly where most of us are with Artificial Intelligence today. We are stuck in Tactical Thinking.

We treat AI as a shopping list of features and vendors. We obsess over the “tools”:

  • “Did you see the context window on that new model?”
  • “Is OpenAI better than Google for coding?”
  • “Should we use RAG or fine-tuning?”

While these questions aren’t irrelevant, asking them first is a trap. When you focus solely on the tools, you become reactive. You are constantly pivoting based on the latest press release. You judge AI progress by how fast the “drill” spins (model benchmarks), rather than whether it can actually help you build the “house” (solve a specific problem).

This tactical approach leads to two major issues:

  1. Paralysis: You are afraid to commit to a solution because something better might come out next week.
  2. Misalignment: You try to force a tool into a job it wasn’t meant for—like trying to frame a house with that laser saw just because it was expensive.

To escape this cycle, we need to stop looking at the tools and start looking at the architecture.

The Shift: From Tools to Structure

The antidote to tactical paralysis is Structural Thinking.

If tactical thinking asks “What tool should I use?”, structural thinking asks “Where does this problem live, and what are the constraints of that environment?”

When you shift your mindset from tools to structure, you stop chasing every new announcement. You realize that AI is not a single, monolithic wave washing over everything in the same way. Instead, it is a set of capabilities that behaves radically differently depending on the context.

Why Structure Matters for Scalability and Flexibility

The biggest advantage of structural thinking is that it future-proofs your strategy.

In the tactical world, your strategy is brittle. If you build your entire workflow around a specific vendor’s model, and that vendor changes their pricing or a competitor releases a better model next month, your strategy breaks. You are constantly rebuilding.

In the structural world, your strategy is flexible. You define the architecture of your solution—the data flows, the safety guardrails, the user interaction patterns—independent of the specific engine powering it.

  • If a new, faster model comes out? You simply swap it in as a component.
  • If a regulation changes? You adjust your governance layer without tearing down the whole application.

Structural thinking allows you to build systems that last, rather than prototypes that expire. It moves you from being a consumer of technology to an architect of solutions. It forces you to acknowledge that a “good” AI system for writing a marketing email is fundamentally different from a “good” AI system for controlling a robotic arm—not just because the tools are different, but because the structure of the problem (risk, speed, cost, accuracy) is different.
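
As a minimal sketch of what this looks like in code, assuming nothing about any particular vendor (the backend classes below are hypothetical placeholders), the application depends on a narrow interface, and engines can be swapped without tearing down the structure:

  from typing import Protocol

  class TextModel(Protocol):
      # The narrow interface the application depends on; any engine
      # implementing it can be swapped in without changing the structure.
      def generate(self, prompt: str) -> str: ...

  class VendorAModel:
      # Hypothetical backend; in practice this would wrap a vendor SDK.
      def generate(self, prompt: str) -> str:
          return f"[vendor A] response to: {prompt}"

  class VendorBModel:
      # A drop-in replacement: same interface, different engine.
      def generate(self, prompt: str) -> str:
          return f"[vendor B] response to: {prompt}"

  def answer_with_guardrails(model: TextModel, prompt: str) -> str:
      # The governance layer lives in the structure, not in the engine.
      if "forbidden" in prompt.lower():
          return "Request blocked by policy."
      return model.generate(prompt)

  # Swapping the engine is a one-line change:
  print(answer_with_guardrails(VendorAModel(), "draft a marketing email"))
  print(answer_with_guardrails(VendorBModel(), "draft a marketing email"))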

The Solution: A Preview of the Framework

To navigate this landscape effectively, we need a map. Over years of working with AI across various domains, I have developed a mental model that breaks the AI world down into two distinct dimensions. Think of it as a coordinate system for understanding any AI development.

Dimension 1: The 4 Domain Pillars (Where AI Applies)

First, we must recognize that “AI” is not a single thing. It is a set of technologies applied in radically different environments. We divide the landscape into four vertical pillars:

  1. Consumer AI: The AI we use in our daily lives (chatbots, image generators).
  2. Enterprise AI: The AI that powers businesses (automation, data analysis).
  3. Science & STEM AI: The AI that accelerates discovery (drug discovery, material science).
  4. Physical AI: The AI that interacts with the real world (robotics, autonomous systems).

Dimension 2: The 5 Impact Layers (How AI Progresses)

Within each pillar, progress doesn’t happen in a vacuum. It moves through layers of maturity, from the raw silicon to the final societal change:

  1. Hardware: The chips and infrastructure.
  2. Models: The algorithms and intelligence.
  3. Agents & Tools: The orchestration that makes models useful.
  4. Applications: The interfaces we actually touch.
  5. Impact: The real-world value and behavioral change created.

The Power of the Grid

When you combine these, you get a grid. You can place any news story, any tool, or any project onto this grid. Suddenly, the chaos disappears. You aren’t just looking at “AI”; you are looking at “Layer 2 (Models) within Pillar 3 (Science).” The sketch below illustrates one way to apply the framework.

This framework allows you to ignore the noise that doesn’t affect your specific coordinates and focus deeply on the areas that do.
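
As a toy illustration of placing a development on the grid, consider this hedged Python sketch; the keyword lists are naive heuristics invented for this post, not a real classifier:

  # Toy coordinate system: map any AI announcement to (pillar, layer).
  # The keyword lists are naive, hypothetical heuristics for illustration.
  PILLAR_KEYWORDS = {
      "Consumer": ["chatbot", "image generator", "assistant"],
      "Enterprise": ["automation", "analytics", "workflow"],
      "Science & STEM": ["protein", "material", "physics"],
      "Physical": ["robot", "autonomous", "sensor"],
  }
  LAYER_KEYWORDS = {
      "Hardware": ["chip", "gpu", "datacenter"],
      "Models": ["model", "benchmark", "weights"],
      "Agents & Tools": ["agent", "orchestration", "tool use"],
      "Applications": ["app", "interface", "product"],
      "Impact": ["adoption", "productivity", "jobs"],
  }

  def place_on_grid(headline: str) -> tuple[str, str]:
      """Map a headline to rough (pillar, layer) coordinates."""
      text = headline.lower()
      pillar = next((name for name, words in PILLAR_KEYWORDS.items()
                     if any(word in text for word in words)), "Unclassified")
      layer = next((name for name, words in LAYER_KEYWORDS.items()
                    if any(word in text for word in words)), "Unclassified")
      return pillar, layer

  print(place_on_grid("New model predicts protein structures"))
  # -> ('Science & STEM', 'Models')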

What to Expect from This Series

This blog post is just the starting point. Over the coming weeks, we will unpack this framework piece by piece, giving you the tools to apply it to your own work.

Here is the roadmap for the series:

  • Part 1: Foundational Thinking. We will dive deeper into the mental models. We’ll explore the 4 Domain Pillars in detail to understand their unique characteristics, and we’ll break down the 5 Layers to see how innovation actually flows from hardware to impact.

  • Part 2: Pillar-by-Pillar Deep Dives. We will dedicate specific articles to each of the four domain pillars—Consumer, Enterprise, Science, and Physical AI. We will analyze the specific trends, constraints, and opportunities within each domain.

  • Part 3: Applying the Framework. Finally, we will turn theory into practice. We will discuss how to use this framework to make better decisions, whether you are evaluating a new vendor, planning an internal AI project, or simply trying to stay ahead of the curve.

By the end of this journey, you will have a clear, reusable lens through which to view the AI landscape—one that turns information overload into actionable insight.

Author’s Note: AI-assisted writing tools were used to support the creation of this post. All concepts, perspectives, and the underlying thought process originate from me; the AI served only as a drafting and refinement aid.

Data Privacy vs. Safety of Society: Which Is More Important?

Recently there have been a lot of debates in the area of data privacy, especially with regard to companies taking the moral high ground in protecting customer data from exposure to government agencies. I am not just talking about the ongoing Apple fiasco; there have been many such cases in the recent past. Before sharing my view, I would like to offer my thoughts on the origins (as I perceive them) of this problem.

Let’s start with the Internet (which, people say, was short for inter-networking). It all started with the advent of computers in the 1950s, followed by the evolution of closed-network computing in the 1960s and 70s, with only government and educational institutions using the networks to exchange information. Client-server computing was the approach, with thin dummy clients used as interaction terminals and all computation and data storage centralized at the servers.

With the invention of TCP/IP in 1978, the world of connectivity and applications built on the technology exploded, and with it the number of users. Closed networks owned by governments, defence companies, and educational institutions evolved into commercial networks offered by ISPs and other service providers.

Along the way, other advances were happening fast in key areas like computing power (the Moore’s Law effect), memory, and storage, enabling the manufacture of personal computers that were smaller and more affordable.

The killer combination of personal computers and networking technologies like TCP/IP and Ethernet enabled fast growth in the usage of computers and the internet, which became the backbone of all computational needs.

Internet users grew from a few thousand in the early days of networking to a few million in the 1970s and a few hundred million through the 1980s and 90s. The turn of the century saw a major boom, and the user base stood at around 3.5 billion at the end of 2015 (approximately 45+% of the world population).

This phenomenal increase in people using an open network has made the world deeply connected and has led to an information revolution. At the same time, some people have taken advantage of that openness and used it for sinister ends.

Currently almost half the globe is using the internet, but not all users are aware of the dangers involved, which is why they fall prey to various types of attacks: getting a virus on their computer, having their banking accounts hacked, being trapped in a phishing fraud, and the list goes on. My objective in this blog is not to talk about the privacy impact on digitally illiterate internet users who get hacked and have personal data stolen, but about the people who use the same internet to hide what they are doing and harm societies, and why giving importance to the privacy of such people will bring harm to society.

As highlighted earlier, the deadly combination of computing power in small devices and access to an open internet provides a powerful weapon for people who want to cause harm to societies around the world. With such capabilities, people can communicate with others across the world to plan and execute sinister acts without being noticed. People can now tap into the computational capabilities of smartphones, tablets, or laptops to secure their data and communications without much difficulty. Compare this with a decade ago, when the computational power of a PC was roughly equivalent to what a feature phone has today.

People started developing software and services to protect information and communications, and these became handy tools for evil-minded people. Sharing and access to information became easy due to the pervasiveness of the internet, and policing such a network has become a nightmare. One can use just a smartphone to access information, communicate, do commerce, and more; there is no need to own a PC or laptop to communicate and share information. All that is needed is a device in the palm of the hand: a mobile phone or a tablet.

To tap into the growth of the internet and the pervasiveness of internet-enabled devices, companies started building various types of services to offer internet users, the key ones being communication and social media services. These enabled people to create virtual communities, to the extent that the entire online business model and valuation of such companies came to be based on the size of their user base. Facebook, for example, has almost 1.6 billion active users, which is more than the population of China or India, and the combined user base of the top 10 such service providers exceeds the total world population. Looked at through a consumer lens this is very good news, as users have plenty of choices. But it has also started creating a lot of social challenges. Again, I do not want to pass judgement, as there are contradictory views on this subject.

One of the key reasons people use such services in large numbers is the full freedom to remain anonymous. Even though these are virtual services, their reach has made them a medium through which people can easily influence others. Getting one’s message out to a huge mass of people, irrespective of stature or background, has become very easy. At the same time, it has become very hard to monitor and moderate such communications, for various reasons, one of them being the protection given to users by the service providers in the name of privacy.

There are definitely benefits consumers get from such services; the challenge is when people take advantage of the anonymity to cause harm to individuals or to society. We have heard stories at a personal level where people impersonate others and try to spoil their reputation; we have heard about people spreading rumors that caused major riots, kids preyed on by strangers, and more. Since we do not have a structure and mechanism to monitor such acts of crime, protecting against these ill effects has become very difficult. Even agencies with money and resources are only able to monitor specific leads.

In recent years the most worrying trend has been acts of terrorism perpetrated by people taking advantage of the privacy protection offered by communication and social services companies. I do not want to list examples of terrorist incidents from around the world in the last few years, but the investigations have certainly provided good insights into how people have used technology and services to carry out their acts of violence. People have effectively used internet services to propagate messages, recruit people, organize and plan resources, and communicate with each other. Security agencies in various countries were unable to monitor such activities because the communications involved were protected by the relevant service providers.

The sad part of all this is how technology and service providers are falling over each other to defend their stand on privacy. My question is: how would they answer to the thousands of people who have lost their lives in terror acts that their technology and services played a major role in enabling?

There are many questions we have not yet answered that must be addressed to meet this challenge. Below are a few of them:

  1. How do we prevent people from taking advantage of the anonymity provided by technology and service providers?
  2. How can social media and communication service providers self-regulate, moderate, and monitor their user base to spot people using it for sinister means?
  3. What kinds of structures and regulatory policies can governments work out so that citizens have faith in the process when citizen data is accessed?
  4. How do we control activities performed across borders?

Below are some of my thoughts on balancing privacy against the safety of society:

  1. People have to recognize the internet usage policies of the country from which they access the internet. As long as citizens follow them, they need not worry about persecution or any impact on their privacy.
  2. People need to be careful when sharing information on the internet. They need to realize it is an open world out there, and they have no control over how the information they share will be used. The key is to be aware of what they share and with whom.
  3. People need to realize that governments have a duty to protect their citizens. As described above, with so much stacked against them, it is not fair to blame governments for monitoring or accessing citizen data for intelligence gathering, proactive action, or post-incident investigations.
  4. Governments need to put in place dedicated agencies responsible for monitoring citizens’ internet activities, with proper processes and security mechanisms in place. These agencies need to be regulated by an independent body with representation from people and organizations trusted by citizens.
  5. Companies providing social and communication services are obliged to ensure that people using their services comply with the policies and guidelines of the country from which the service is used. This may kill the viability of some of these businesses, but that is no justification for allowing their services to be used to harm individuals and communities in any country.

Given the strong feelings around privacy, freedom of expression, anonymity, and so on, my thoughts above may not be welcomed by the majority of internet citizens. They are purely based on my belief that if a government or agency can save my life, and doing so involves accessing my personal data, I am okay with that compromise on privacy. Whether my internet activity is monitored, I am body-scanned or pat-checked at the airport, or my bags are scanned, I always feel it is being done to save my life!

Big Data Tools: One Size Does Not Fit All When It Comes to Enterprises

We have heard plenty about Big Data in the last few years: the technologies, the benefits, the big ideas. Now we are moving into the era of application, i.e., how to realize the real benefits. Yes, the “Interprises” (internet companies) are at the forefront of using the various technologies and the data effectively, since most of the current tools originated inside those companies. The success of the current toolset in internet companies comes from the nature of the data they handle, which is predominantly unstructured.

But when it comes to traditional enterprises, the challenge arises from the nature of their data and the operations they perform on it: enterprise data is mostly structured, and the operations performed on it are mostly relational. That does not mean enterprises cannot benefit from Big Data technologies; the benefits a traditional enterprise gets from the current toolset simply will not match what an internet company sees. Hence enterprises need a different approach to using these technologies and tools. One key aspect will be the interoperability of Big Data toolsets like Hadoop and NoSQL databases with the traditional enterprise data management tools used for data warehousing, ETL, and BI functions. It is critical that enterprises have clear principles for achieving this.

Connected Devices Juggernaut

Google’s purchase price of $3+ billion for a startup with a connected-device solution is big, considering that the solution has yet to prove its worth. Looking at the bigger picture, connected devices and the information they will generate are going to be a major area of focus in the coming years. The path to success for companies providing products and solutions in this space is not going to be smooth, because of the dependencies and challenges in the existing environment and ecosystem. Current solutions in the market are still closed-loop and proprietary, including the Nest solution Google spent its money on. As has been proven again and again, an open platform and ecosystem are critical for mass adoption of such technologies. Below is a list of challenges that I see need to be addressed for the connected-devices story to really take off.

1. Connectivity limitations: Solutions should make use of pervasive wireless technologies like mobile networks rather than fixed wireless technologies like WiFi or short-range technologies like ZigBee.

2. Identity and authentication: There needs to be a programmable identity module framework that enables seamless connection to networks.

3. Data privacy and security: This has been a hot topic for current internet services and will remain a strong barrier for data generated by connected devices. Without a trusted model and framework for data sharing, it is hard to convince people to use such devices and share their data.

4. Data management (collection, management, and sharing): Connected devices are expected to generate huge amounts of data. Solutions and services need to be in place to collect, manage, and share that data.

5. Standards: Connected devices can be deployed by enterprises, individuals, or government agencies for various purposes, and categorizing and organizing the resulting data in a standards-based way is critical. There is some standards activity by certain organizations in certain technology areas, but we need a single body to frame end-to-end standards for connected devices.

6. Cost and revenue: One of the key barriers for connected devices is going to be cost, which in turn is shaped by the revenue model and the prospects for mass adoption of the solution.

There is going to be great potential in connected devices. Imagine a fuel sensor alerting a user that they are low on gas, with the user immediately getting a phone alert containing a Google Maps link to the nearest petrol pump along with directions. To build on this use case, imagine a back-end application that receives constant updates on fuel levels and decides whether to send the notification based on the time of day, the pump’s distance from the current location, and so on. The key now is for organizations to quickly address the various barriers that would hinder the adoption of connected devices.
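
Here is a hedged Python sketch of the back-end decision described above, with made-up thresholds and a stubbed distance lookup; nothing in it reflects a real service:

  from datetime import datetime

  LOW_FUEL_THRESHOLD = 0.15   # hypothetical: alert below 15% of tank
  MAX_DETOUR_KM = 5.0         # hypothetical: only suggest nearby pumps

  def nearest_pump_km(lat: float, lon: float) -> float:
      # Stub: a real system would query a maps/places service here.
      return 2.3

  def should_notify(fuel_level: float, lat: float, lon: float,
                    now: datetime) -> bool:
      """Decide whether to push a low-fuel alert with directions."""
      if fuel_level > LOW_FUEL_THRESHOLD:
          return False        # tank is not low enough to bother the user
      if not 6 <= now.hour <= 22:
          return False        # avoid notifications in the middle of the night
      return nearest_pump_km(lat, lon) <= MAX_DETOUR_KM

  print(should_notify(0.10, 12.97, 77.59, datetime.now()))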

Big Data – Expectations for This Year

There has been a great push around big data technologies and applications in the past few years. Since most of these technologies evolved from the web and social media world, there have been challenges in applying them in the enterprise space, for a few key reasons. Enterprise data is mostly structured, and since most current big data technologies focus on handling unstructured data, it has been difficult for enterprises to build a justifiable business case for them. One use case enterprises tend to look at is gaining customer insights by combining the structured data they own with their customers’ social media data; this too has its challenges, since enterprises do not have access to their customers’ social media identities. Hence, enterprises looking at big data technologies today are mostly pursuing cost savings rather than revenue generation. I am looking forward this year to companies coming out with technologies that enterprises can use for direct business benefit, rather than just as an infrastructure component to reduce costs.

One more area where I see a lot of activity is stream computing. A lot of momentum was generated last year by technologies like Storm, Spark, and Splunk, and I am looking forward to seeing how they are applied in enterprises. These technologies have great potential to help enterprises in the realm of real-time decision making.

Are We Ready for Mobile Payments?

In the past few months and weeks we have increasingly heard of concrete steps to bring mobile payments to mainstream retail, with chains like McDonald’s and Starbucks working with PayPal and Square, respectively, to launch mobile payments across their outlets. The promising thing is that we are now seeing serious big players getting involved, not just small SMB merchants. This should give mobile payment services the necessary push, since these companies will help do the required marketing and customer education.

The key challenge now is the standardization of the various technologies. I hope the lack of standardization does not fragment the market, with individual players pushing their own technologies and approaches; that would be a major hindrance to the adoption of these technologies.

McD/PayPal initiative in the news: http://www.pcmag.com/article2/0,2817,2408640,00.asp

Starbucks/Square relationship: http://www.economist.com/blogs/babbage/2012/08/retail-payments