
Natarajan Vaidyanathan
Building adaptive brain-inspired intelligence
There’s a version of the current AI moment that looks, from a certain angle, a lot like a category error. We’ve built systems of remarkable capability yet genuinely surprising fragility. Sophisticated models reason across disciplines but stumble on the kind of problems a child’s brain solves without effort. The interesting question isn’t whether scale fixes this. It’s why we keep expecting it to.
I came to AI sideways, through computational neuroscience and control engineering - fields that do not have the luxury of infinite compute or clean loss functions. What those fields taught me, more than any specific technique, is that intelligence in the wild is almost never only about prediction performance. It’s about what you do at the edge of your model. How you behave when the world stops confirming your priors. And whether you can explain why, and learn from it.
That gap, between a system that interpolates well and one that knows when it’s extrapolating, is what this blog is about. Not as a critique of the current paradigm, but as a genuine question: what would it look like to build from the other direction? To start with uncertainty quantification, with structured state representations, with the kind of priors that biological systems evolved under real physical constraints - and see where that leads. The tools I keep returning to come from control theory, dynamical systems, and the parts of neuroscience that don’t often make it into ML papers. Whether that’s the right lens or just my lens, I’m still working out. But it tends to produce different questions. And right now, different questions feel more valuable than better answers to the same ones.



