The Best Time, Quietly
We are not living through "faster tools". We are living through a compression of effort.
Things that used to take teams and months now take one person and one night. Not because the problems are easier—but because execution has been externalized.
Code, translation, formatting, orchestration, publishing: they are no longer the bottleneck.
Judgment is.
What Actually Changed
- Programmers spend more time directing and reviewing code than writing it
- Workflows that took weeks collapse into overnight runs
- Knowledge is cheap; intent and follow-through are not
- The gap between idea and artifact is disappearing
This isn't about "AI being smart". It's about friction being removed.
Once you manually run something end-to-end and turn it into a workflow, it stops feeling impressive—it just feels obvious.
That's the real shift.
Humans vs Agents
Agents coordinate better than humans, not because they're smarter, but because they have zero communication overhead.
- No negotiation
- No ego
- No ambiguity about ownership
- No fatigue
In long, multi-day, multi-step tasks, this difference compounds brutally.
This is why long-running agents matter more than clever prompts.
History Rhymes (Again)
Renaissance Technologies didn't win because of "better math". They won because they industrialized computation earlier than others.
Peter Brown—one of Hinton's early PhD students—wasn't hired to do research. He was hired to turn math and computing into a system.
That pattern is repeating, just at a much lower entry cost.
This time, individuals get to play.
Why 2026 Is Not a Prediction
2026 doesn't feel like the future. It feels like a state you can already opt into.
- Long-horizon agent workflows
- One-person systems with organizational leverage
- Execution that runs while you sleep
Shannon exists because I needed that reality to be reliable, not magical.
Shannon is a production-grade multi-agent platform, built with Rust, Go, and Python for deterministic execution, budget enforcement, and enterprise-grade observability.
Choose Belief, Then Iterate
This era doesn't reward skepticism first. It rewards trying, then compounding.
Believe just enough to act. Move in small steps. Let the workflow accumulate value.
That's it.
2025: A Short Year-In-Review
2025 wasn't one breakthrough—it was normalization.
What became normal:
- Reasoning models as defaults (o1, o3, DeepSeek R1, Claude with extended thinking)
- Coding agents on the command line (Claude Code, Codex CLI, Gemini CLI)
- Long-running agentic tasks measured in hours or days
- Computer use and browser automation agents
- AI-native IDEs replacing traditional editors (Cursor, Windsurf)
- MCP as the protocol for tool integration
- Real-time voice conversations with models
- Context windows hitting 1M+ tokens
- Image and video generation reaching production quality
- $200/month AI subscriptions as baseline cost
- Chinese open-weight models matching frontier labs (DeepSeek R1 moment)
- Vibe coding, YOLO shipping, and normalized deviance
- Building dozens (or hundreds) of tools as an individual
What quietly shifted:
- Open-weight models caught up to closed models on many benchmarks
- Model providers raced to the bottom on pricing
- Data centers became politically unpopular
- "Conformance" mattered as much as raw capability
- Slop increased—but so did leverage
- Mobile devices became viable dev environments
2025 wasn't about who was "winning".
It was the year the floor rose.
What I Built This Year
Looking back at 2025, my work centered on understanding agent systems deeply enough to build production infrastructure around them.
I wrote an AI Agent book covering everything from ReAct loops to multi-agent orchestration patterns—the architectural knowledge that's becoming essential as agents move from demos to production systems.
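The core of a ReAct loop is simpler than it sounds: the model alternates Thought/Action steps with tool Observations until it commits to an answer. Here is a minimal sketch, with a scripted stub standing in for the LLM call and a toy one-tool registry (both are illustrative inventions, not any framework's API):

```python
def scripted_model(history):
    """Stand-in for an LLM call: picks the next step from the transcript so far."""
    if not any(line.startswith("Observation:") for line in history):
        return "Action: calculate 6 * 7"
    return "Final Answer: 42"

# Toy tool registry; eval is sandboxed only enough for this demo.
TOOLS = {
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def react_loop(question, model, tools, max_steps=5):
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = model(history)
        history.append(step)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            # Parse "Action: <tool> <argument>", run the tool, feed back the result.
            name, _, arg = step.removeprefix("Action:").strip().partition(" ")
            history.append(f"Observation: {tools[name](arg)}")
    raise RuntimeError("agent did not converge within max_steps")

print(react_loop("What is 6 * 7?", scripted_model, TOOLS))  # -> 42
```

Everything that makes production agents hard—retries, budgets, tool failures—lives around this loop, not inside it.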
I built Shannon, a multi-agent platform designed for the problems that matter in production: deterministic replay, budget enforcement, and security boundaries. Not because the world needed another agent framework, but because running agents reliably requires infrastructure that most frameworks don't provide.
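Budget enforcement, for instance, is conceptually just a meter in front of every model call that fails closed. A hedged sketch of the idea (illustrative only, not Shannon's actual API):

```python
class BudgetExceeded(RuntimeError):
    pass

class BudgetMeter:
    """Meters spend across model calls; aborts deterministically at the cap."""

    def __init__(self, max_usd):
        self.max_usd = max_usd
        self.spent_usd = 0.0

    def charge(self, tokens, usd_per_1k_tokens):
        cost = tokens / 1000 * usd_per_1k_tokens
        if self.spent_usd + cost > self.max_usd:
            # Refuse *before* spending, so a run never overshoots its budget.
            raise BudgetExceeded(
                f"would spend ${self.spent_usd + cost:.4f}, cap is ${self.max_usd:.2f}"
            )
        self.spent_usd += cost
        return cost

meter = BudgetMeter(max_usd=0.05)
meter.charge(tokens=10_000, usd_per_1k_tokens=0.003)  # $0.03, allowed
try:
    meter.charge(tokens=10_000, usd_per_1k_tokens=0.003)  # would total $0.06
except BudgetExceeded:
    pass  # the run stops here instead of silently overspending
```

The point is the check-before-spend ordering: an agent that discovers it is over budget after the call has already been made has no budget at all.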
And I started practicing AI quantitative trading—applying reinforcement learning to markets. Renaissance Technologies industrialized computation decades ago. Now that same leverage is available to individuals willing to combine domain knowledge with AI infrastructure.
The common thread: systems thinking applied to AI.
Understanding how agents fail, how costs compound, how workflows need to survive restarts and network failures. These are engineering problems, not research problems. And 2025 made that distinction clearer than ever.
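Surviving restarts, concretely, means checkpointing each completed step so a re-run skips finished work. A minimal sketch of that pattern (illustrative, not any specific framework's API):

```python
import json
import tempfile
from pathlib import Path

def run_workflow(steps, checkpoint_path):
    """Run (name, fn) steps in order, persisting results after each one."""
    ckpt = Path(checkpoint_path)
    done = json.loads(ckpt.read_text()) if ckpt.exists() else {}
    for name, fn in steps:
        if name in done:
            continue  # completed in a previous run; don't redo it
        done[name] = fn()
        ckpt.write_text(json.dumps(done))  # persist before moving on
    return done

# Demo: invoke the workflow twice against the same checkpoint file;
# the expensive step executes only once.
calls = {"expensive": 0}

def expensive():
    calls["expensive"] += 1
    return "result"

with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "ckpt.json"
    run_workflow([("expensive", expensive)], path)
    run_workflow([("expensive", expensive), ("second", lambda: "ok")], path)

print(calls["expensive"])  # the expensive step ran once despite two invocations
```

Real systems add idempotency keys and durable event logs on top, but the contract is the same: a crash mid-run costs you one step, not the whole workflow.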
Looking Forward
2026 will be less about new capabilities and more about deployment.
The question shifts from "can AI do this?" to "how do we run this reliably at scale?"
Infrastructure. Governance. Cost control. Observability.
The boring stuff that makes the exciting stuff work.
But there's another shift happening beneath the surface.
The Transformer architecture—attention mechanisms, static weights, context windows—has carried us remarkably far. But it's still fundamentally different from how biological brains work. No continuous learning. No dynamic weight updates during inference. No true memory formation.
The next frontier may look less like scaling and more like rethinking the architecture itself. From recurrent patterns to something closer to how neurons actually fire and adapt. TensorLogic is my early exploration in this direction—symbolic reasoning meets neural computation, with structures that can learn and update continuously.
I'm betting on both: the infrastructure to run today's agents reliably, and the research to build tomorrow's architectures differently.