Medicine, Investing, and the Limits of Deductive Reasoning
The Human Compulsion to Explain
Humans have a deep-seated need to explain phenomena through cause and effect. We cannot simply observe — we must construct causal narratives. The sun rises and sets, and across millennia we have proposed sun gods, geocentric models, and heliocentric models to explain it. Each framework felt definitive in its time.
Consider flight. We learn in school that airplanes fly because of lift generated by wing shape. But the actual fluid dynamics involved are far more complex than that classroom explanation suggests. And critically, it was not physicists who built the first airplane — it was engineers who iterated through failure after failure. Physics later provided the theoretical framework to explain what the engineers had already achieved empirically.
This distinction between theoretical understanding and practical mastery turns out to be central to both medicine and investing.
What Kostolany Knew About Doctors and Investors
André Kostolany, the legendary Hungarian-born speculator who spent decades navigating the volatile markets of Paris, Berlin, and New York, once made a striking observation about what kind of person makes a good investor. He argued that investors most resemble doctors — not engineers, not economists, not mathematicians.
His reasoning: both investors and doctors begin with diagnosis. Why is this market rising? Why is this patient deteriorating? From that diagnosis, they derive a course of action. When the initial assessment proves wrong, both must adapt — abandon the failing approach and find a new one. Neither medicine nor investing is a science in the strict sense. Both are forms of art.
Engineers and economists, Kostolany argued, think in purely mathematical terms; in his view, an engineer must never rely on intuition. But for an investor — and for a physician — intuition built from years of experience is not a weakness. It is indispensable.
The Medical Student's Mistake
When I entered medical school after studying life sciences, the first year felt manageable — it was essentially applied biology. But starting in the second year, when clinical medicine began in earnest, I struggled badly.
The problem was not the volume of material, though that was substantial. The problem was that I was approaching medicine with the wrong epistemological framework.
I studied textbooks cover to cover, learning the deductive chains: symptom A leads to finding B, which indicates treatment C. The logic appeared clean and sequential. I avoided the "question banks" — collections of clinical case scenarios — because studying from them felt like cheating. Real learning, I thought, meant mastering the underlying principles.
My grades told a different story. They declined steadily.
It was not until a friend sat me down before the licensing exam and said, "Just start solving cases," that things changed. Working through clinical scenarios — where a patient presents with symptoms X, Y, and Z and you must decide on the next step — I began to understand things that hours of textbook study had failed to teach me.
The insight was this: medicine is not a deductive science. It is an inductive discipline built on centuries of empirical observation. When a textbook presents a clean causal chain from pathophysiology to treatment, it is providing a pedagogical simplification of what is fundamentally pattern recognition refined through experience. The textbook makes inductive knowledge look deductive so that it can be taught in a classroom.
Why Experience Trumps Formulas in Complex Systems
The best mathematicians and physicists tend to do their most groundbreaking work when young. The best physicians are almost always experienced ones. This is not coincidental.
The human body is a complex system where textbook presentations are the exception rather than the rule. Patients present with atypical symptoms, comorbidities that complicate standard treatment algorithms, and responses to therapy that defy prediction. Navigating this complexity requires pattern recognition that can only be developed through extensive clinical experience.
This is why medical training is structured as an apprenticeship. After classroom instruction, students spend years shadowing attending physicians, then serving as interns and residents. The essential knowledge transfer happens not through lectures but through supervised practice — seeing patients, making decisions, observing outcomes, and adjusting.
Evidence-based medicine, the current paradigm, is explicitly statistical and inductive. Clinical trials and epidemiological studies generate probabilistic evidence, and treatment decisions are made based on the weight of that evidence. Some advocate for a future of "science-based medicine" where molecular and genetic understanding would enable purely deductive treatment. But even IBM Watson, with its enormous computational power, learned medicine not through first-principles biochemistry but through statistical analysis of physician decision-making on patient data.
Complex systems resist deductive mastery. Even supercomputers must learn inductively.
The Parallel to Investing
The parallels to investing are direct.
Economies, like human bodies, are complex adaptive systems. Political events, monetary policy, consumer psychology, commodity prices, natural disasters, institutional trust — the variables that influence markets are too numerous and too interrelated for any individual (or computer) to model deductively.
Economics studies these systems by reducing them to simplified models with tractable variables. This is valuable intellectual work, but it is to investing what biochemistry is to clinical medicine — necessary background, but not sufficient for practice. Kostolany put it bluntly: "Economic experts are like gladiators fighting blindfolded."
The investment masters — Graham, Buffett, Munger, Kostolany himself — all emphasized that their published principles were distillations of experience, not formulas to be mechanically applied. When Benjamin Graham wrote about price-to-earnings ratios and margin of safety, he was providing a pedagogical framework for thinking about value. He was not providing an algorithm.
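Graham's margin of safety can be stated as simple arithmetic, which makes the limits of formulaic thinking easy to see: the formula itself is trivial, while the hard part — estimating intrinsic value — is precisely what it does not supply. A minimal sketch, with hypothetical numbers:

```python
def margin_of_safety(price: float, intrinsic_value: float) -> float:
    """Fraction by which the estimated intrinsic value exceeds the price.

    A larger margin means a bigger cushion against errors in the
    intrinsic-value estimate itself.
    """
    return 1.0 - price / intrinsic_value

# Hypothetical: a stock priced at $60 against an estimated value of $100
# offers a 40% margin of safety.
cushion = margin_of_safety(price=60.0, intrinsic_value=100.0)  # 0.4
```

The calculation is one line; the judgment — whether $100 is a defensible estimate of intrinsic value — is the part Graham's case studies were meant to train.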
The real substance of their books is not in the summary chapters listing investment criteria. It is in the extended case studies and experience narratives that most readers skim. Those narratives are the equivalent of clinical cases — they build the inductive pattern recognition that enables good judgment in novel situations.
Formulas as Steroids
There is a medical analogy that captures the danger of formulaic thinking. Corticosteroids are powerful anti-inflammatory drugs used across dozens of conditions. They work so broadly that they can seem like a universal remedy. But administering steroids to a patient with an active infection suppresses the immune response and worsens the disease. The drug that heals in one context destroys in another.
Investment formulas carry the same risk. A strategy that produced excellent returns for a decade may fail catastrophically when conditions change. The low-PE, high-ROE stocks that defined value investing performed beautifully — until 2008, when their PE ratios shot to triple digits as earnings collapsed while prices fell. Investors who had memorized the formula without internalizing the reasoning behind it were caught unprepared.
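The arithmetic behind that reversal is worth making explicit. Because PE is a ratio, a collapse in the denominator can make a stock look "expensive" by the formula even as its price falls. A minimal sketch with hypothetical numbers:

```python
def pe_ratio(price: float, eps: float) -> float:
    """Price-to-earnings ratio; undefined when earnings are non-positive."""
    if eps <= 0:
        return float("inf")
    return price / eps

# Hypothetical "value" stock before the crisis: cheap by the screen.
pre_crisis = pe_ratio(price=100.0, eps=10.0)  # PE = 10

# A 2008-style scenario: the price falls 40%, but earnings collapse 95%.
# The same stock now screens as wildly expensive at PE = 120.
crisis = pe_ratio(price=60.0, eps=0.5)
```

A mechanical low-PE screen would have ejected such a stock at the bottom — or refused to buy it — without ever asking whether the earnings collapse was temporary, which is the question the formula cannot answer.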
The Core Lesson
Whether in medicine or in markets, expertise in complex systems cannot be reduced to rules. It requires inductive learning through extensive exposure to diverse cases, the humility to recognize when your mental model is wrong, and the flexibility to adapt.
The textbook gives you the vocabulary. Experience gives you judgment. And judgment, in complex domains, is what separates competence from mastery.
As one senior physician once put it: "Medicine is a history of mistakes. What seemed certain turned out to be wrong, again and again. Becoming a good physician requires patience and the humility to accept that you might be wrong. It is fundamentally different from physics or mathematics."
The same could be said of any endeavor that involves making decisions under uncertainty with incomplete information — which is to say, most of the decisions that matter.