Sanjay Krishna Anbalagan
A portfolio in three acts: build · write · research
2026
SR. ENGINEER  ·  AWS
PHD CS  ·  UMASS LOWELL

I make
complex systems
legible.

Ten years on a single problem — making the internal state of complex systems readable to whoever needs to understand them. First humans. Now AI.

live trace · footprint v1
causal · auditable · AI-readable
every decision → traceable · every answer → citable
$ cat ./manifesto.md § 01 ↘ permalink

Most enterprise AI ships hallucinating, opaque, and untestable.
It doesn't have to.

A working thesis, restated continuously through code and writing.

Read the argument

I build the developer abstractions that make production generative-AI applications explainable by construction. Not after the fact. Not by adding a logging layer. By making the system itself self-describing.

If a backend can produce a causal trace of every decision it made, then a model can reason over it, an engineer can debug it, and a regulator can audit it. The same artifact serves all three readers.

— Explainability isn't a feature. It's a property of the substrate.
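The one-artifact-three-readers claim can be made concrete with a minimal sketch — a hypothetical trace record in plain TypeScript. The field names (step, cause, decision, evidence) are illustrative, not FootPrint's actual schema:

```typescript
// One causal trace entry: the same artifact a model reasons over,
// an engineer debugs, and a regulator audits.
// Field names are illustrative, not a real FootPrint schema.
interface TraceEntry {
  step: string;          // which decision point fired
  cause: string | null;  // the step that led here (null at the root)
  decision: string;      // what the step chose
  evidence: string;      // why — the citable part
}

const trace: TraceEntry[] = [
  { step: "validate-order", cause: null, decision: "accept", evidence: "total within limit" },
  { step: "apply-discount", cause: "validate-order", decision: "10% off", evidence: "gold-tier rule" },
];

// Walking the cause chain answers "why?" for any reader, human or model.
const why = (step: string): string[] => {
  const e = trace.find(t => t.step === step);
  return e ? [e.evidence, ...(e.cause !== null ? why(e.cause) : [])] : [];
};
// why("apply-discount") → ["gold-tier rule", "total within limit"]
```

A debugger, an audit report, and a cheap follow-up model call would all read the same array — nothing is re-instrumented per audience.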

$ whoami --story § 02 ↘ permalink

I've been here since the baby days.

A decade shipping apps on top of language models — from the era when an LLM forgot its own middle sentence, to the era when it runs a tool graph like a boss. FootPrint and agentfootprint are the shape of what that apprenticeship taught me.

2020  ·  Highlight era
What the app was doing: wrapping the world in tags
The lesson — why I built FootPrint and agentfootprint

I raised these apps the way you raise a kid. In 2020 the LLM was a baby — I had to wrap every important fact in <highlight> tags and pray it didn't wander off mid-sentence. In 2022 I was teaching it to count — think step by step, one rung at a time. In 2023 I was reviewing its homework — three candidate plans, score each, pick the best. By 2024 I wasn't parenting anymore. I was onboarding a junior employee: here's the tool belt, here's the graph of when to use what, emit a trace so your work can be audited.

Every prompt hack we invented to scaffold the model — highlight tags, step-by-step, tree-of-thought, ReAct, routing — became built-in instinct in the next generation of weights. The prompts died. The shape they were compensating for did not. That shape is a graph of decisions, each with a cause. The model grew into it. The apps still have to carry it.

FootPrint is that graph made honest — directed, causal, inspectable. agentfootprint is what happens when you let the graph host an LLM as one of its operators. Both are the condensed form of what a decade of raising these babies taught me: if you can't trace the reasoning, you haven't built the system yet.

Read the full essay on Medium ↗

$ ls ./work/ § 03 ↘ permalink

Work — shipped.

02 projects · open source · TypeScript
In[1] — intent
No. 01 / Open source · npm

FootPrint ·

the flowchart pattern for backend code

TypeScript · ★ 7 · ⑂ 2 · v1
$ npm install footprintjs
Out[1] — artifact

Business logic becomes a directed graph that produces causal traces an LLM can reason over. Self-explainable systems by construction — no telemetry retrofit required.

See the spec
  • 7 flow patterns · transactional state · PII redaction
  • Auto-generated tool descriptions for agents
  • 6 modular libraries — memory · builder · scope · engine · runner · contract
  • Parallel fork/join · streaming · patch-based state
  • Time-travel replay across the entire execution graph
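The flowchart pattern itself fits in a few lines of plain TypeScript — a toy graph walker under assumed names (FlowNode, execute), not the footprintjs API:

```typescript
// The flowchart pattern, sketched: business logic as a directed graph
// whose execution emits a causal trace as a side effect of running.
// All names here are illustrative, not the footprintjs API.
type Ctx = Record<string, unknown>;

interface FlowNode {
  id: string;
  run: (ctx: Ctx) => Ctx;            // pure step: state in, state out
  next: (ctx: Ctx) => string | null; // edge choice is explicit, hence traceable
}

interface Step { node: string; cause: string | null; state: Ctx }

// Walk the graph, recording every transition with its cause.
function execute(nodes: FlowNode[], start: string, ctx: Ctx): Step[] {
  const byId = new Map(nodes.map((n): [string, FlowNode] => [n.id, n]));
  const steps: Step[] = [];
  let current: string | null = start;
  let cause: string | null = null;
  while (current) {
    const node = byId.get(current)!;
    ctx = node.run(ctx);
    steps.push({ node: current, cause, state: { ...ctx } });
    cause = current;
    current = node.next(ctx);
  }
  return steps;
}

// A two-node flow: check a payment, then route on the result.
const flow: FlowNode[] = [
  { id: "check",   run: c => ({ ...c, ok: (c.amount as number) < 500 }), next: c => (c.ok ? "approve" : "reject") },
  { id: "approve", run: c => ({ ...c, status: "approved" }), next: () => null },
  { id: "reject",  run: c => ({ ...c, status: "rejected" }), next: () => null },
];

const trace = execute(flow, "check", { amount: 120 });
// trace records check → approve, each step with its cause and state snapshot.
```

Because the engine owns the walk, the trace is not a logging layer bolted on afterward — it falls out of the graph structure, which is the "explainable by construction" claim in miniature.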

Read on GitHub ↗  ·  npm ↗

In[2] — intent
No. 02 / Open source · npm

agentfootprint ·

context engineering, abstracted

TypeScript · MIT · built on FootPrint
$ npm install agentfootprint
Out[2] — artifact

PyTorch's autograd abstracted gradients. React abstracted the DOM. agentfootprint is that move applied to context engineering — declare what content lands in which slot of an LLM call and when. The framework owns the iteration loop, so the typed-event stream and replayable checkpoints come for free.

See the spec
  • One mental model — 3 slots (system · messages · tools) × 4 triggers (always · rule · on-tool-return · llm-activated) × 1 Injection primitive. Every named pattern (Skills, RAG, Reflexion, ToT) reduces to this.
  • The trace is a cache of the agent's thinking — Causal Memory persists decision evidence as JSON, so audit answers, cheap-model follow-ups, and SFT/DPO training trajectories all read from one recording.
  • 2 primitives + 4 compositions — LLMCall, Agent; Sequence, Parallel, Conditional, Loop
  • 6 providers (Anthropic · OpenAI · Bedrock · Ollama · Browser · Mock) · MCP · 47 typed events across 13 domains; pause/resume on a different server hours later via JSON-serializable checkpoints
  • Mocks first, prod second — build the entire agent against in-memory mocks at $0 API cost, swap one boundary at a time for production
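The 3-slot × 4-trigger × 1-primitive model can be sketched in plain TypeScript — the type and function names below are illustrative stand-ins, not the agentfootprint API:

```typescript
// Sketch of the slot × trigger mental model. The names (Injection,
// Slot, Trigger, resolve) are illustrative, not the agentfootprint API.
type Slot = "system" | "messages" | "tools";
type Trigger = "always" | "rule" | "on-tool-return" | "llm-activated";

interface LoopState { turn: number; lastToolReturned: boolean }

interface Injection {
  slot: Slot;
  trigger: Trigger;
  applies: (state: LoopState) => boolean; // fires this iteration?
  content: string;
}

// Named patterns reduce to injections: a system persona (always),
// RAG docs on the first turn (rule), a summary after a tool call.
const injections: Injection[] = [
  { slot: "system",   trigger: "always",         applies: () => true,                 content: "You are a support agent." },
  { slot: "messages", trigger: "rule",           applies: s => s.turn === 0,          content: "[retrieved docs]" },
  { slot: "messages", trigger: "on-tool-return", applies: s => s.lastToolReturned,    content: "[tool result summary]" },
];

// The framework owns the loop: per iteration, resolve which content
// lands in which slot of the next LLM call.
function resolve(state: LoopState): Record<Slot, string[]> {
  const out: Record<Slot, string[]> = { system: [], messages: [], tools: [] };
  for (const inj of injections) {
    if (inj.applies(state)) out[inj.slot].push(inj.content);
  }
  return out;
}

const firstTurn = resolve({ turn: 0, lastToolReturned: false });
// firstTurn.system → ["You are a support agent."]
// firstTurn.messages → ["[retrieved docs]"]
```

Declaring injections instead of hand-assembling prompts is the React-for-context move the paragraph above describes: you state what belongs where and when, and the loop does the placement.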

Read on GitHub ↗  ·  npm ↗

$ cat ./writing/*.md § 04 ↘ permalink

Writing — in public.

Enterprise Gen AI Application · 320+ subscribers
Subscribe to the newsletter
$ cat ./research/*.bib § 05 ↘ permalink

Research

Peer-reviewed · Springer · HCI International
2025

Bridging UI Design and Chatbot Interactions

HCI International 2025 · Springer Proceedings
Read the abstract

Applying form-based interaction principles — Submit/Reset → STAY/SWITCH — to the design of conversational agents. A bridge between fifty years of GUI affordances and the new conversational surface.

2026

Visible Reasoning

Accepted · HCI International 2026 · Springer Proceedings
Read the abstract

A framework for deterministic LLM agent transparency — proposed as a third paradigm distinct from chain-of-thought and LLM-as-judge. The academic backbone of agentfootprint.