What if logic and learning weren't two separate things? What if they were just different modes of the same underlying process?
This question is at the heart of Professor Pedro Domingos' paper "Tensor Logic: The Language of AI", which proposes a unified framework where symbolic reasoning and neural learning emerge from the same substrate: tensor equations.
I've been building an implementation of these ideas to explore how this paradigm works in practice. The result feels surprisingly brain-like—rigid when you need guarantees, flexible when you want it to learn from data.
The Core Insight
Traditional AI has always felt like two separate worlds:
Symbolic AI gives you rules, logic, and interpretability. You can trace every inference step. But it's brittle—it can't learn from data or handle uncertainty.
Neural AI learns from examples and handles ambiguity beautifully. But it's opaque, prone to hallucination, and can't guarantee logical consistency.
Tensor Logic dissolves this boundary by expressing both paradigms as tensor operations. A logical rule like Grandparent(x, z) ← Parent(x, y) ∧ Parent(y, z) becomes a matrix multiplication: G = P·P. Attention mechanisms become the same join operations that power logical inference. Everything runs on the same substrate.
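As a concrete illustration, here is a toy numpy sketch (my own encoding, not the paper's notation): represent Parent as a Boolean adjacency matrix, and the grandparent rule falls out of a single matrix product, since summing over the shared middle index performs the logical join.

```python
import numpy as np

# Toy domain: indices 0=Alice, 1=Bob, 2=Carol.
# parent[i, j] = 1 means "i is a parent of j".
parent = np.array([
    [0, 1, 0],   # Alice is Bob's parent
    [0, 0, 1],   # Bob is Carol's parent
    [0, 0, 0],
])

# Grandparent(x, z) <- Parent(x, y) AND Parent(y, z):
# the matrix product sums over the shared index y (the join),
# and thresholding recovers Boolean truth values.
grandparent = (parent @ parent) > 0

print(grandparent[0, 2])  # Alice is Carol's grandparent -> True
```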
Four Brain-Like Properties
1. Dual Processing: Fast and Slow Thinking
Like Kahneman's System 1 and System 2, the framework operates in two modes:
Boolean mode delivers exact, deterministic logic. No approximations, no hallucinations. When you ask "Is Alice Bob's grandmother?" you get a provable yes or no based on the facts in the system.
Continuous mode enables learning and uncertainty. Relations become probabilities, rules become soft constraints, and the entire system becomes differentiable—ready to learn from data via gradient descent.
The magic is that you can switch between these modes seamlessly, even mid-computation. Train in continuous mode to discover patterns, then lock into Boolean mode for production guarantees.
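The mode switch can be sketched in a few lines (again a toy encoding of my own): the same join runs on probabilities in continuous mode, and a threshold locks the result into Boolean mode.

```python
import numpy as np

# Hypothetical soft relation: entries are probabilities, not 0/1.
parent_soft = np.array([
    [0.0, 0.9, 0.1],
    [0.0, 0.0, 0.8],
    [0.0, 0.0, 0.0],
])

# Continuous mode: the join stays differentiable, so a downstream
# loss could push gradients back into parent_soft.
grandparent_soft = np.clip(parent_soft @ parent_soft, 0.0, 1.0)

# Boolean mode: threshold the very same tensor for hard guarantees.
grandparent_bool = grandparent_soft > 0.5

print(grandparent_soft[0, 2])  # ~0.72, a graded degree of truth
print(grandparent_bool[0, 2])  # True, a hard answer
```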
2. Routing and Gating: Cortical Columns
The brain doesn't process information in one monolithic blob—it routes signals through specialized circuits that compete and collaborate. Tensor Logic mirrors this through:
- Relation banks: Collections of learned transformations (like "parent", "colleague", "influences") that can be combined dynamically
- Attention heads: Parallel pathways that focus on different aspects of the input
- Learned gates: Small neural controllers that decide which relations to apply at each reasoning step
Instead of hard-coding grandparent = parent ∘ parent, the system can learn to compose multi-step relationships from examples, choosing the right chain of reasoning for each query.
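A minimal sketch of gated composition, with a hypothetical two-relation bank and fixed gate weights standing in for a trained controller (in practice the gate logits would come from a small network conditioned on the query):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical relation bank over 3 entities: two candidate relations.
parent = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
identity = np.eye(3)             # placeholder "do nothing" relation
bank = np.stack([parent, identity])

# Gate logits per reasoning step (fixed here for illustration).
gate_step1 = softmax(np.array([5.0, -5.0]))  # ~selects "parent"
gate_step2 = softmax(np.array([5.0, -5.0]))  # ~selects "parent" again

# Each step applies a soft mixture of relations from the bank;
# chaining two "parent" steps recovers the grandparent composition.
step1 = np.tensordot(gate_step1, bank, axes=1)       # (3, 3)
composed = np.tensordot(gate_step2, bank, axes=1) @ step1

print(composed[0, 2])  # close to 1: Alice -> Carol in two hops
```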
3. Bottom-Up Meets Top-Down: Bidirectional Flow
The brain constantly integrates two information flows:
Forward chaining (bottom-up): Start with facts, apply rules, propagate consequences. "Alice is Bob's parent, Bob is Carol's parent, therefore Alice is Carol's grandparent."
Backward chaining (top-down): Start with a goal, work backwards to find supporting evidence. "Is Alice Carol's grandparent? Let's check whether there's a y such that Parent(Alice, y) and Parent(y, Carol) hold."
Both flows use the same tensor operations, just in different directions. This bidirectional reasoning mirrors how the cortex combines feedforward perception with feedback predictions.
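In the toy numpy encoding used above, the two directions look like this: forward chaining materializes the whole derived relation, while backward chaining uses one-hot query vectors to touch only what the goal depends on.

```python
import numpy as np

parent = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])

# Forward chaining (bottom-up): start from all facts and propagate
# consequences, materializing the full grandparent relation.
grandparent_all = (parent @ parent) > 0

# Backward chaining (top-down): start from the goal
# Grandparent(Alice, Carol). One-hot vectors select the query
# entities, and the same join runs driven by the goal.
alice = np.array([1, 0, 0])
carol = np.array([0, 0, 1])
answer = (alice @ parent @ parent @ carol) > 0

print(bool(answer))  # True, and it agrees with the forward result
```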
4. Self-Modification: Learning to Notice New Patterns
Perhaps most brain-like: the system can expand its own vocabulary.
Through tensor factorization (specifically, a technique called RESCAL), the system discovers latent relationships hidden in the data. If you feed it facts about family trees, it might autonomously discover concepts like "sibling" or "ancestor" without being told they exist.
These invented predicates then become first-class citizens—available for future reasoning, composition, and learning. The system literally reshapes what it can think about.
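To make the factorization step concrete, here is a gradient-descent sketch of a RESCAL-style decomposition (real implementations often use alternating least squares; the data and hyperparameters below are invented for illustration). Each observed relation slice is approximated as A·R[k]·Aᵀ, giving shared entity embeddings A from which new latent relations can be read off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge base: one observed relation over 4 entities.
n = 4
parent = np.zeros((n, n))
parent[0, 1] = parent[0, 2] = parent[1, 3] = 1.0
X = parent[None]                     # shape (K, n, n), K = 1

# RESCAL factorizes each slice as X[k] ~ A @ R[k] @ A.T, where A
# embeds entities and R[k] embeds relations in an r-dim latent space.
r = 4
A = 0.5 * rng.standard_normal((n, r))
R = 0.5 * rng.standard_normal((1, r, r))

def loss():
    pred = np.einsum('ia,kab,jb->kij', A, R, A)
    return 0.5 * np.sum((pred - X) ** 2)

initial = loss()
lr = 0.02
for _ in range(5000):
    pred = np.einsum('ia,kab,jb->kij', A, R, A)
    err = pred - X
    # Gradients of the squared reconstruction error w.r.t. A and R.
    gA = (np.einsum('kij,kab,jb->ia', err, R, A)
          + np.einsum('kij,ia,kab->jb', err, A, R))
    gR = np.einsum('kij,ia,jb->kab', err, A, A)
    A -= lr * gA
    R -= lr * gR

# Rows of A are now learned entity embeddings; candidate invented
# predicates are new latent relations expressed in the same space.
```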
What This Enables
This architecture isn't just conceptually elegant—it solves real problems:
Hallucination-free knowledge systems: Train embeddings on messy data, then enforce logical constraints during inference. Get the learning capacity of neural networks with the reliability of databases.
Explainable deep learning: Express transformers and attention as explicit tensor equations. Every computation has a symbolic interpretation you can audit and debug.
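For instance, single-head attention can be written as a pair of explicit joins over shared indices, just like the logical rules above. This is a generic numpy sketch of standard scaled dot-product attention, not code from the repo:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
T, d = 5, 8                       # sequence length, model width
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

# Each einsum is a join on a shared index: scores join queries and
# keys on the feature index e; the output joins the (softmaxed)
# scores with values on the time index s.
Q = np.einsum('td,de->te', X, Wq)
K = np.einsum('td,de->te', X, Wk)
V = np.einsum('td,de->te', X, Wv)
scores = softmax(np.einsum('te,se->ts', Q, K) / np.sqrt(d))
out = np.einsum('ts,se->te', scores, V)
```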
Few-shot reasoning: Learn compositional rules from a handful of examples. Show the system five examples of grandparent relationships, and it learns the composition pattern and generalizes.
Hybrid search: Combine fuzzy similarity (vector embeddings) with hard constraints (graph structure) in a single query. "Find researchers similar to a given researcher who work at institutions connected to a given institution."
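One way such a query could be expressed (a toy sketch with made-up data): score candidates by embedding similarity, then mask with a hard Boolean constraint derived from the graph.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 6, 4
emb = rng.standard_normal((n, d))            # researcher embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# works_at[i, j] = 1 if researcher i works at institution j;
# connected[j, k] = 1 if institutions j and k are connected.
works_at = np.zeros((n, 3))
works_at[[0, 1, 2, 3, 4, 5], [0, 0, 1, 1, 2, 2]] = 1
connected = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]], dtype=float)

query_person, query_inst = 0, 0

# Soft score: embedding similarity to the query researcher...
similarity = emb @ emb[query_person]
# ...masked by a hard constraint: the candidate's institution must
# be connected to the query institution.
allowed = (works_at @ connected[:, query_inst]) > 0
scores = np.where(allowed, similarity, -np.inf)

best = int(np.argmax(scores[1:])) + 1   # exclude the query person
```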
The Path Forward
Current limitations mirror known challenges in neuroscience and AI:
Typed knowledge: Real brains distinguish entity types (people vs. places vs. concepts). Adding typed embedding spaces would enable more structured reasoning across heterogeneous domains.
Probabilistic inference: The brain handles uncertainty through probabilistic population codes. Integrating probabilistic graphical models as tensor programs would enable richer uncertainty quantification.
Perception: Logic alone isn't enough—we need to ground it in raw sensory data. Expressing CNNs and kernel machines as tensor equations would close this loop.
Abductive reasoning: Humans excel at generating hypotheses to explain observations. Extending the framework to support full abductive inference (not just forward/backward deduction) remains an open challenge.
Why This Matters
We're at an inflection point in AI. Pure neural scaling has delivered remarkable capabilities, but the cracks are showing—hallucination, brittleness, lack of interpretability, inability to guarantee correctness.
The solution isn't to abandon neural networks and return to symbolic AI. It's to recognize that both paradigms are projections of something deeper: computation itself, expressed through tensors and equations.
When you build AI this way, you get systems that:
- Learn from data like neural networks
- Reason with guarantees like logic engines
- Compose knowledge like humans
- Explain their thinking in mathematical terms
- Improve their own representational capacity
This is the architecture I believe scales—not just to larger models, but to more capable, trustworthy, and understandable intelligence.
Learn More
- Read the original paper: "Tensor Logic: The Language of AI" by Pedro Domingos
- Explore the open-source implementation on GitHub
- Technical details on how attention reduces to logical joins, predicate invention mechanics, and training strategies are all documented in the repo
I'm always happy to discuss these ideas further—feel free to reach out if you want to dive deeper into the concepts or implementation.