Where AI Meets Entropy
October 2025

Over the past six months, I worked on a project called Cartograph, building AI systems for real-world businesses like restaurants, gyms, contractors, and retailers. What I saw up close was the gap between how AI assumes the world works and how it actually does. The further you move from the digital edge toward the physical economy, the more structure gives way to entropy.

AI assumes the world is orderly: clean data, consistent inputs, structured logs. But real businesses aren't like that. They're messy, fragmented, and human. And that's what makes them interesting. Spend an afternoon inside a neighborhood restaurant or gym and you'll see where AI meets entropy firsthand — systems duct-taped together, manual entries fixing broken imports, data decaying as it moves from one tool to the next. What looks smooth from the outside is, underneath, a web of workarounds held together by habit and intuition.

That's where the illusion breaks. AI, as it's often imagined, presupposes structure. But in the real economy, structure is an aspiration, not a given.

The idea behind Cartograph began there: if AI was ever going to reach the real world, it had to learn to operate inside entropy, not above it. When we started building, I expected a landscape of structured APIs, interoperable data, and clean schemas. I thought we'd connect a few systems, unify their data, and create something close to a real-time operational map. What I found was closer to anthropology than engineering — CSV exports with corrupted totals, data fields repurposed for things they were never meant to hold, owners editing numbers by hand at 11 p.m. because "that's just how we do it."

It was chaotic, but also strangely consistent — every system worked just well enough to keep the business running. And that was the first real lesson: data is behavior. And behavior is inconsistent.

Every business encodes its own logic — not just in how it names things, but in how it acts. A restaurant voiding a ticket might be fixing an error, refunding a meal, or comping a loyal customer. A contractor who stops logging hours might be on vacation or gone for good. A gym marking a "freeze" could mean anything from a pause in billing to a lost member. The numbers never tell the story; they only reflect it — imperfectly.
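
To make that ambiguity concrete, here is a toy sketch of how a single void event might be represented: one raw record, several candidate intents, and priors that context can shift. The event names, priors, and context rule below are invented for illustration; none of this is Cartograph's actual schema.

```python
# A toy representation of an ambiguous POS event. Names, priors, and the
# context rule are invented for illustration; this is not a real schema.
from dataclasses import dataclass

# Candidate readings of a single "void" event, with rough priors.
VOID_INTENTS = {
    "error_correction": 0.60,  # cashier fixing a mis-keyed item
    "refund": 0.25,            # customer sent a meal back
    "comp": 0.15,              # owner comping a regular
}

@dataclass
class RawEvent:
    kind: str      # e.g. "void", "freeze", "discount"
    amount: float
    hour: int      # hour of day the event was logged

def interpret_void(event: RawEvent) -> dict[str, float]:
    """Reweight the priors with whatever context is available."""
    weights = dict(VOID_INTENTS)
    if 11 <= event.hour <= 14:
        # Voids during the lunch rush are more often keying errors.
        weights["error_correction"] *= 1.5
    total = sum(weights.values())
    return {intent: w / total for intent, w in weights.items()}

print(interpret_void(RawEvent(kind="void", amount=14.50, hour=12)))
```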

Over time, I began to see entropy as the default state of small-business systems. Entropy isn't just messiness. It's the slow drift from clarity to confusion as information passes through people and software. Every handoff, export, and workaround erodes meaning. Truth fragments into formats; structure becomes improvisation. Integrations break. Context disappears. Schema drift sets in. Meaning erodes in motion. The POS logs "discounts" five different ways depending on who's working the shift. The network goes down, and the restaurant records zero sales for two hours. The manager exports a report, edits it manually, and re-uploads it somewhere else. Every fix adds friction, every patch adds noise, and somewhere between the first transaction and the final report, truth gets lost.
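
Two of those failure modes are easy to sketch in code: collapsing the five spellings of "discount" into one canonical tag, and treating a flat-zero stretch of sales as missing data rather than a real signal. The alias list and thresholds below are invented; in practice the drift has to be discovered, not hardcoded.

```python
# Two of the entropy patterns above, in miniature. The alias list and the
# outage threshold are made up; real drift is discovered, not hardcoded.

# Five spellings of the same "discount", depending on who worked the shift.
DISCOUNT_ALIASES = {"disc", "discount", "disc 10%", "promo", "mgr comp"}

def normalize_reason(raw: str) -> str:
    """Collapse shift-to-shift label drift into one canonical tag."""
    return "discount" if raw.strip().lower() in DISCOUNT_ALIASES else raw

def flag_outage_hours(hourly_sales: list[float], min_gap: int = 2) -> list[int]:
    """Hours of flat zero are usually a dead network, not a dead room."""
    gaps, run = [], []
    for hour, total in enumerate(hourly_sales):
        if total == 0:
            run.append(hour)
        else:
            if len(run) >= min_gap:
                gaps.extend(run)  # treat as missing data, not real zeros
            run = []
    if len(run) >= min_gap:
        gaps.extend(run)
    return gaps
```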

That's what makes building intelligence systems so complex: meaning is gone before the data ever reaches a model.

With Cartograph, we learned the only way forward was to build in the mess, not around it. We stopped trying to force order and started listening for rhythm — not cleaning data, but interpreting it. Instead of treating every discrepancy as an error, we began to see it as a clue. The real work wasn't building perfect integrations; it was building systems that could infer intent. We started to design for uncertainty — what I came to think of as probabilistic interfaces: systems that could live in ambiguity, improve with exposure, and make better guesses over time. The product became less like a dashboard and more like a listener — something that learned how the business behaved before it tried to tell it what to do.
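
Here is a minimal sketch of what a probabilistic interface could look like, under my own assumptions rather than Cartograph's actual design: answers carry a confidence score, low confidence turns into a question instead of an action, and every correction feeds back into the next guess.

```python
# One possible reading of "probabilistic interface": every answer carries a
# confidence, low confidence becomes a question rather than an action, and
# corrections feed back into the estimate. All names here are hypothetical.
from collections import Counter

class IntentGuesser:
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.seen: dict[str, Counter] = {}  # event kind -> confirmed intents

    def guess(self, kind: str, prior: dict[str, float]):
        observed = self.seen.get(kind, Counter())
        n = sum(observed.values())
        # Blend the generic prior with what this business has taught us.
        scores = {i: prior.get(i, 0.0) + observed[i] / (n + 1)
                  for i in set(prior) | set(observed)}
        total = sum(scores.values()) or 1.0
        best = max(scores, key=scores.get)
        confidence = scores[best] / total
        # Below the threshold, surface a question instead of acting.
        return (best, confidence) if confidence >= self.threshold else None

    def confirm(self, kind: str, intent: str) -> None:
        """An owner's correction is a training signal, not a bug report."""
        self.seen.setdefault(kind, Counter())[intent] += 1
```

With generic priors and no history, the guesser abstains; a handful of confirmations push one intent past the threshold and it starts answering on its own. The point is the shape, not the math: confidence-gated answers plus a feedback channel.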

That shift changed everything. Once we stopped expecting order, the patterns began to emerge. We built feedback loops that learned from usage, not configuration. We found signals hidden inside the noise itself.
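
A small example of "usage, not configuration": instead of asking an owner to type in their business hours, infer them from when transactions actually happen. The activity threshold below is invented for illustration.

```python
# "Usage, not configuration" in miniature: rather than asking the owner to
# fill in business hours, infer them from when transactions actually occur.
# Purely illustrative; the activity threshold is made up.
from collections import Counter
from datetime import datetime

def infer_open_hours(timestamps: list[datetime], min_share: float = 0.02) -> set[int]:
    """Treat hours that carry a meaningful share of activity as open hours."""
    by_hour = Counter(ts.hour for ts in timestamps)
    total = sum(by_hour.values()) or 1
    return {hour for hour, count in by_hour.items() if count / total >= min_share}
```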

But as the system got smarter, we realized its limits weren't technical at all — they were human.

Underneath all that complexity was something simpler: trust. Business owners don't trust automation that doesn't think like they do. They've been burned by software before — tools that overpromised, underdelivered, or made their lives more complicated. When you walk into a business and talk about AI, their first instinct isn't excitement. It's doubt.

The only way to earn trust was to reflect their own logic back to them — to show, not tell. Instead of slides, we'd connect their systems, surface insights from their own data, and let the results speak. When the system pointed out something they already suspected — an overdue invoice, a rising expense, an underperforming location — you could see it land. That's when trust was built.

That experience changed how I think about intelligence. The hardest technical challenge wasn't scale; it was interpretation. The hardest human challenge wasn't adoption; it was trust. Real intelligence doesn't live in pristine datasets. It lives in the real world — full of noise, exceptions, and context. And that's what makes it so interesting.

The deeper I got into Cartograph, the more I realized what we were really building wasn't a product — it was a reconciliation engine. Every day, it tried to make sense of conflicting truths: what the data said, what the people believed, and what was actually happening. When those three things aligned, it felt like magic. Most days, they didn't. And that's what made it worth building.

What looks like a data problem is often a coordination problem. The owner, the accountant, and the employee all touch the same numbers but interpret them differently. I saw it happen countless times. A restaurant's accountant would "correct" payroll data to match reported tips, the manager would adjust hours to fit the schedule, and the owner would modify totals again to align with cash flow. Three versions of truth, all technically correct in isolation — and totally misaligned in practice. The same dynamic scales up across industries — the entropy of alignment. Fixing that isn't a technical challenge; it's a human one. The systems that win will be those that coordinate intent as much as information.
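
That three-way payroll drift can be sketched as a reconciliation check: accept a consensus when the sources agree within tolerance, and surface the conflict to a human when they don't. The source names, figures, and tolerance below are hypothetical.

```python
# A toy version of that three-way disagreement: three sources, three
# "technically correct" numbers, one question of whether they cohere.
# Source names, figures, and the tolerance are invented for illustration.

def reconcile(reports: dict[str, float], tolerance: float = 0.05) -> dict:
    """Accept a consensus when sources agree within tolerance; otherwise
    surface the disagreement instead of silently averaging it away."""
    center = sum(reports.values()) / len(reports)
    drift = {src: abs(v - center) / center for src, v in reports.items()}
    if all(d <= tolerance for d in drift.values()):
        return {"status": "aligned", "value": round(center, 2)}
    return {"status": "conflict", "drift": drift}  # a question for a human

weekly_tips = {"accountant": 4180.0, "manager": 4425.0, "owner": 3990.0}
print(reconcile(weekly_tips))  # -> conflict: the manager's number drifts ~5.4%
```

The useful part is the refusal: a conflict gets routed to a person instead of averaged into a fiction.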

What I saw at Cartograph is what the entire AI industry will soon face. The world isn't full of APIs. It's full of improvisations — small local scripts, half-maintained systems, and people who know exactly how things work even if the tools don't. The same entropy that lives inside a restaurant's POS lives inside hospitals, logistics networks, and government databases.

AI won't fail because it isn't intelligent enough. It'll fail because its inputs aren't coherent enough.

The next frontier of AI isn't reasoning. It's reconstruction — learning how to rebuild context from partial information, how to make sense of incomplete worlds. Intelligence won't come from bigger models; it'll come from better coordination — between systems, between teams, and between the humans who depend on them.

Every real system decays. Data drifts. Incentives misalign. Context erodes.

The only sustainable intelligence is one that can continuously rebuild coherence from the noise — one that can see the structure in the disorder and act accordingly.

That's the system behind the system: not an architecture of perfect logic, but one of adaptive coordination — the kind that listens before it acts and learns before it decides.

That's what building Cartograph taught me. Intelligence, in the end, is less about knowing everything and more about learning how to make sense of what's already there.