This is a big deal, thanks for writing about it so thoroughly. You and your readers would likely appreciate Roger's Bacon's post from yesterday, Epistemic Hell (https://www.theseedsofscience.pub/p/epistemic-hell). Both of you are zooming in on the void at the heart of the field, the absence of a clear paradigm with clearly-defined entities, relationships and contexts that can guide engagement.
In Kuhn's words, "In the absence of a paradigm or some candidate for paradigm, all of the facts that could possibly pertain to the development of a given science are likely to seem equally relevant. As a result, early fact-gathering is a far more nearly random activity than the one that subsequent scientific development makes familiar. Furthermore, in the absence of a reason for seeking some particular form of more recondite information, early fact-gathering is usually restricted to the wealth of data that lie ready to hand."
For me, this speaks to the strong need to look for data outside what lies "ready to hand." And the place to look for that data, I believe, is at the heart of the matter: right in the thick of subjective experience itself. How can we observe subjective experience in a way that enables us to bridge the subjectivity barrier and come away with data that can reliably reveal patterns upon skillful analysis? (I don't think surveys are the way to go.)
I've been working on this for quite some time, and will be ready in a few weeks to release the first volume presenting an innovative methodology that provides the beginning for such disciplined, scientific observation of subjective experience. I won't say more about it here -- I'm laying it out gradually in my 'stack. But I will say that it does point to a significant possibility for a new paradigm in the form of a field dimension of conscious experience.
Keep doing what you're doing, Ethan. I appreciate your essays!
So many paradigms are based on metaphors. But a metaphor is not a physical model; it is a literary device that replaces a complex domain (psychology/brains) with a familiar one (computer science/computers), and it hides more differences between the two domains than it reveals similarities. The human brain is so complex and so unique that any single model or metaphor will ultimately fail. For a review of metaphors used to describe cognition, see https://tomrearick.substack.com/p/metaphors-we-think-by.
As a reformed cognitive psychologist, I couldn’t agree more. I was deeply frustrated by the lack of any broad theoretical claims and agreed-upon questions. Instead, the field consists of mini-islands of specialized areas, each with its own set of methods and largely atheoretical concerns (does x affect y?).
But neural networks have changed everything for me and—I would argue—should for much of the field. Here, at last, is a broad theoretical framework for modeling psychological behaviors. You may not agree with me that the mind is an autoregressive next-token generator like large language models, but at least it’s a general theoretical claim.
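(For concreteness, here is a minimal, purely illustrative sketch of what "autoregressive next-token generation" means: predict a distribution over the next token from the context so far, sample one, append it, and repeat. The model() function below is a hypothetical stand-in for any trained predictor, not a real LLM API.)

```python
import random

def generate(model, prompt_tokens, n_steps):
    """Autoregressive generation: predict, sample, append, repeat."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        # model() is assumed to return {token: probability} for the next token,
        # conditioned on everything generated so far.
        probs = model(tokens)
        next_token = random.choices(list(probs.keys()), weights=list(probs.values()))[0]
        tokens.append(next_token)
    return tokens
```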
This is a great point. I'm of two minds about this.
On the one hand, LLMs are really encouraging because they can actually model behavior, in the sense that they can produce the behavior, not just abstractly describe it. And they mimic some behaviors so well that it's hard not to think that they might be a good model for at least part of the mind.
That said, it's clear that LLMs aren't a good model for everything. They don't learn like we do, so they're not a good model for human learning and memory. They don't get depressed or anxious, so they're not a good model for mental health. They don't seem to have anything analogous to motivation, so they're not a good model of why people procrastinate. They don't even use language quite like we do; as far as I know, LLMs don't go through a phase of using regular forms for irregular words, such as "goed" for "went" or "taked" for "took", but human toddlers do. So they seem like a step in the right direction but also somewhat short of a full paradigm, at least for now.
Yes, the current models are limited, essentially to language, with other modalities and cognitive processes not accounted for, so this is hardly a full model of psychology/cognition. But it is still very exciting to have at least one aspect of cognition actually up and running. Also, now that we’ve cracked the code of one module, others may follow suit using the same underlying math.
Great article, very interesting read.
From my perspective, the ideas we come up with in science about who we are are often influenced by broader cultural and technological shifts.
What you call “introspection” was deeply influenced by the occult craze of the 19th century, the obsession with revealing the hidden. Much as occultists tried to build maps of hidden planes of reality, the early psychologists aimed to uncover the hidden world within.
Behaviorism was influenced by the idea of man as an evolved creature; without evolution, behaviorism likely would never have existed.
And the same is true for cognitive science: we began to believe we were like machines after creating the computer.
We can go back even further in history: after the creation of the automaton came the idea that life is mechanical in nature, and after the discovery of electricity, people thought they could bring the long-dead back to life with jolts of electricity.
By that view, we might be a long way from gaining a new paradigm. Or maybe not; maybe what’s needed isn’t a huge technological advancement but a scientific one, like evolution. But judging by this pattern, the next explosive technology is AI, which, I believe (I’m not an expert on AI), is modeled on cognitive and behaviorist models of the human being (I think the AI is even punished and rewarded somehow to drive learning. Again, not an expert!)
Curious to hear your thoughts on this perspective.
You're definitely right that new ideas in science are often part of existing trends. But at the same time, a new paradigm is often a very old idea that's dredged up from the murky depths. For example, heliocentrism and atomism had both been considered by the Ancient Greeks, and were re-considered during the scientific revolution. Other times, the new idea is kind of strange and comes from an unexpected direction, like evolution. So my bet is actually that the new paradigm will come from a new application of old ideas; or possibly it will be based on an unusual insight from a distant field, like how the idea of evolution came from unexpected findings in animal husbandry.
many sharp insights in this article
I did my philosophy MA thesis on a defence of Kuhn’s incommensurability thesis, so I’m very steeped in his work. Still, I learned so much from your discussion that it really made my day (and more). In fact, as a Kuhnian I feel that your suggestion of the process by which fields go from containing many “proto-paradigms” to finding a dominant paradigm is definitely a promising research area. (One tiny historical quibble about ancient astronomy: the theory was never that, say, angels MADE the planets go in circles, but that their metaphysical nature, being composed of aether, was to move naturally in circles.)
This is great to hear, I'm never sure how much I'm adding to the conversation when it comes to Kuhn. I will probably write more about this in the future, thanks for the encouragement!
This is a good shift. Thank you for taking the time to write about this hidden treasure. Interesting to become aware of something we know, yet allow it to disappear into the blind spot of trust. The paradigm plays such a massive role in the work that I do in guiding people to entertain new perspectives. If you want things to change, you must entertain changing the way you look at things. This will always be a struggle for those tethered to a paradigm they are not open to shifting. Make sense? Good shift, Ethan, thanks.
The most obvious answer—dismissed not for lack of evidence but for failure to conform to entrenched biases—is parapsychology. Not the spooky, strawman version sold for cheap thrills, but the deep study of consciousness as a distributed field. A psychology of collective intelligences. The hypothesis is simple: consciousness is not confined to individual brains, but emerges through networks of entangled awareness, connected via an as-yet-unscienced medium that underlies physical reality.
And if that’s true, then what we’re in isn’t just a civilization—it’s a parapsychological ecosystem. A living, co-sensing domain of interconnected minds, broadcasting through physical vessels like antennas. Ideas, moods, and meaning don’t just propagate memetically—they flow through this unacknowledged substrate, influencing minds and shaping collective behavior in ways we’ve only begun to intuit.
There’s already empirical smoke. The placebo effect, for instance, is not a glitch—it’s the signal. It demonstrates how belief can ripple into biochemistry. That’s not just psychosomatic, it’s evidence of internal collective coherence: individuals as nested networks, pattern-complete systems capable of altering their state through intention. Individual humans are collectives telling themselves stories of unity.
Scale that up: the global mental health crisis is not an isolated epidemic. It’s the broadcast symptom of a decohering planetary noosphere. Civilization itself—understood as a kind of emergent mind—is showing signs of collective psychic dissonance. We’re fraying at the level of shared meaning. The question isn’t why are we here, but what does it mean that we feel this lost, together?
Physics has plenty of room for this to be real. Quantum coherence, nonlocality, hidden variables—these aren’t metaphors, they’re underdeveloped conceptual frameworks with unrealized implications. But research into these domains is unfunded, under-credentialed, and often culturally blacklisted. Not because it lacks rigor, but because it threatens the brittle supremacy of materialist orthodoxy. Fringe science is often just the frontier of suppressed curiosity.
And why does that happen? Because science is downstream of culture. And culture doesn’t know how to metabolize what it can’t monetize. If consciousness is fundamental, if the mind is nonlocal, if the self is porous and emergent, then the models we’ve built for governance, economics, education, and identity collapse.
So instead, we laugh at the weird stuff, fund the safe stuff, and write off the obvious answer with the most evidence as unserious. But that’s not skepticism. That’s fear disguised as objectivity. Meanwhile, the parapsychological ecosystem keeps humming—broadcasting distress, waiting for someone to tune in.
for funsies re "mind is a motherboard", in Flow by Csikszentmihalyi:
"At this point in our scientific knowledge we are on the verge of being able to estimate how much information the central nervous system is capable of processing. It seems we can manage at most seven bits of information—such as differentiated sounds, or visual stimuli, or recognizable nuances of emotion or thought—at any one time, and that the shortest time it takes to discriminate between one set of bits and another is about !/,. of a second.
By using these figures one concludes that it is possible to process at most 126 bits of information per second, or 7,560 per minute, or almost half a million per hour. Over a lifetime of seventy years, and counting sixteen hours of waking time each day, this amounts to about 185 billion bits of information. It is out of this total that everything in our life must come—every thought, memory, feeling, or action. It seems like a huge amount, but in reality it does not go that far.
So the 185 billion events to be enjoyed over our mortal days might be either an overestimate or an underestimate. If we consider the amount of data the brain could theoretically process, the number might be too low; but if we look at how people actually use their minds, it is definitely much too high. In any case, an individual can experience only so much.
Therefore, the information we allow into consciousness becomes extremely important; it is, in fact, what determines the content and the quality of life."
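(For what it's worth, the arithmetic in that passage holds up under its own assumptions: 7 bits per discrimination, 18 discriminations per second, 16 waking hours a day, 70 years. A quick back-of-the-envelope check, purely to verify the quoted figures:)

```python
# Back-of-the-envelope check of Csikszentmihalyi's figures
# (assumptions from the quote: 7 bits at a time, 18 discriminations per second,
# 16 waking hours per day, 365-day years, a 70-year lifespan).
bits_per_second = 7 * 18                        # 126 bits/s
bits_per_minute = bits_per_second * 60          # 7,560 bits/min
bits_per_hour = bits_per_minute * 60            # 453,600 -- "almost half a million per hour"
bits_per_lifetime = bits_per_hour * 16 * 365 * 70
print(f"{bits_per_lifetime:,}")                 # 185,431,680,000, i.e. about 185 billion
```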
also,
J. S. Mill - "No great improvements in the lot of mankind are possible, until a great change takes place in the fundamental constitution of their modes of thought.”
also,
VibE sHiFt
It’s had a lot of big bad ideas
“Hardline behaviorists argued that because behavior is the only thing that can be observed, behavior is the only part of psychology that can be studied scientifically. You can’t observe a thought, so you can’t possibly measure it or run studies on it. Behavior only.”
I don’t think Skinner or his students said this.
Watson certainly said as much, and described introspection as "unscientific". See e.g.:
"No one has ever touched a soul, or has seen one in a test tube, or has in any way come into relationship with it as he has with the other objects of his daily experience. ... It was the boast of Wundt’s students, in 1879, when the first psychological laboratory was established, that psychology had at last become a science without a soul. For fifty years we have kept this pseudo-science, exactly as Wundt laid it down. All that Wundt and his students really accomplished was to substitute for the word 'soul' the word 'consciousness.'"
Watson’s behaviorism and Skinner’s are different.
Would love to read what you make of LENS therapy (Low Energy Neural Stimulation).
I do wonder whether the advent of mass AI use will trigger a revolution in the psychology field, with the capacity to study huge volumes of human interaction data.
What a bizarre piece. Psychology + Medicine is called Psychiatry.