Thanks for the response. I think you represented my view well. You've also convinced me on the point about 'provisos' not being needed in true hard sciences.
I think we agree on many of the same problems. My greatest annoyance is that psychology treats tendencies, biases, and effects (which are glorified averages) as if they were the basic units of the mind we were looking for. It's not uncommon to hear psychologists explain behavior in terms of these findings (e.g. Y happened because of the X Effect), which is nonsensical and confuses explanans with explanandum.
But if these are not the *units* we are looking for, what are they?
My answer is something like 'the reasoning process itself.' I think if we could peer into someone's mind and see what they were reasoning about, and the reasoning process they used to think about it, we would see reasoning unroll deterministically, and then behavior unroll deterministically from there.
But this view would force us to figure out what context is relevant to reasoning. To which the answer is 'all possible things,' which is an even larger set than 'every substance in the universe.' (The view also forces us to ask which reasoning process is used, but I believe that is more predictable and limited: e.g. heuristics, pattern matching, mental simulation, etc.)
Because of the impossibility of codifying the entire set of all possible relevant things, and also our inability to know exactly what someone is considering, how they are considering it, and the sets of relations they have in mind, we'll never get true prediction in the way chemistry has.
But, to your point: just as evolution cannot predict the exact evolutionary path of a beetle but can predict niches and what might be needed to fill them, we may get general principles that allow certain looser forms of prediction (predictions that may apply to all intelligent systems, or all animals, or animals plus plants, or maybe just humans).
This has been less a question and more a babbling, aimless comment, so maybe I should try to end with a question.
If reasoning itself is not the 'unit', then what? Is there some other layer of analysis you expect to be as deterministic as reason? Or, if you are trying to predict reasoning, then shouldn't you be interested in neuroscience rather than psychology?
Strong endorsement of this comment. At the risk of simply repeating what you said: psychology often operates by talking about relationships and averages between variables, things like tendencies, biases, and "effects". When asked to explain something, psychologists tend to appeal to these relationships and averages. But explaining a tendency in terms of the tendency is fully circular! Same for explaining a bias in terms of the bias, and so on.
I think the only way to move past this is to understand the "units" of thought. In my opinion, you can only explain something in terms of the process behind it. Like in Catan: because a board game has units, you can take a question like "why do players without early access to brick tend to lose?" and explain this tendency by working through the units step by step toward a necessary conclusion, instead of just labeling it "yes, that's early-game brick deficiency failure syndrome, or EGBDFS" (see the toy sketch below).
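To make that concrete, here is a minimal sketch of what "working through the units" means. The rules and numbers are invented for illustration (nothing like Catan's real economy), but the shape of the explanation is the point: the losing tendency is derived from the mechanics rather than named after itself.

```python
# Toy model with invented rules, to show a unit-level explanation of a tendency.
import random

def play(brick_chance):
    """One simplified game: roads cost brick, expansions need roads."""
    brick, roads, points = 0, 0, 0
    for _turn in range(30):
        if random.random() < brick_chance:  # unit 1: per-turn resource income
            brick += 1
        if brick >= 1:                      # unit 2: a road costs one brick
            brick -= 1
            roads += 1
        if roads >= 2:                      # unit 3: expanding takes two roads
            roads -= 2
            points += 1                     # unit 4: each expansion scores
    return points

trials = 10_000
rich = sum(play(0.9) for _ in range(trials)) / trials
poor = sum(play(0.4) for _ in range(trials)) / trials
print(f"avg points with early brick access: {rich:.1f}")
print(f"avg points when brick-starved:      {poor:.1f}")
```

The gap in average points isn't an "effect" we discovered and labeled; it's a consequence we can trace through the rules, one unit at a time.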
That said, there are not many ideas about what the units in psychology could be. They don't seem to be much like the units behind the "reasoning" that happens in digital computers, or at least, that angle hasn't gotten cognitive psychology very far. I'm not convinced that neurons are the right units either, and I don't know if there's a distinction between psychology and neuroscience; I think the eventual paradigm might contain both.
Maybe we’ve gone wrong from the very beginning, by starting with the word "predict." To fall back to my favorite example, early chemists didn't worry too much about prediction, except in the sense of how different outcomes of an experiment would favor one model over another. The successful work tended to focus on just that: models of ordinary chemical phenomena. So maybe we shouldn't try to predict things, and should just look for the units directly.
As a computer geek with a background in computer math and logic, I want to follow up on Colin Fisher's point: "So we’ll always be shooting at a moving target, paradigm or no. And that will forever keep us from being like chemistry."
This is a rough statement of Gödel's Incompleteness Theorem and its variants: Turing's Halting Problem and Kleene's Recursion Theorem. I'm going to cite the Wikipedia entry for the Halting Problem; although Gödel's result came first, Turing stated the idea in a very clear way:
https://en.wikipedia.org/wiki/Halting_problem
The basic idea is this: "people are changed by the awareness of their own psychology."
Let's state this in terms of the Halting Problem:
"My theory of psychology states that, given a particular human psychological state (the algorithm) and a particular environmental condition (the input), the human will "halt" - that is, settle on a particular response."
But humans, unlike chemical molecules, can do the following:
"OK, you smarty-pants psychologist, let me look up the theory you just published. OK, based on what you derive on page 36, you claim I am going to halt on this particular response. So you say. But me, being bloody minded, is going to do something entirely different. Not even the opposite - just different. So I may have halted, but I halted on a different value. So there!"
To which you reply:
"Taking Colin’s concerns as stated, if people are changed by the awareness of their own psychology, we can still ask how they are changed by the awareness of their own psychology. "
Unfortunately, the Incompleteness Theorem is a hall of mirrors. Yes, you can do that, but Colin's bloody-minded subject can trump that too. You are chasing your own tail.
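To spell out the loop (a minimal sketch with made-up names; the real argument is Turing's diagonalization): any published predictor can be handed to a subject who consults it and then does otherwise, and patching the predictor just restarts the chase one level up.

```python
# The bloody-minded subject as diagonalization. Hypothetical names;
# this mirrors the standard Halting Problem construction.

def psychologist(subject, situation):
    """Stand-in for the published theory: predicts the subject's response."""
    # Imagine page 36 of the theory: inspect the subject (the algorithm)
    # and the situation (the input), and return the predicted response.
    return "comply"  # any fixed, published prediction works for the argument

def bloody_minded_subject(situation):
    """Reads the published theory, then halts on a different value."""
    predicted = psychologist(bloody_minded_subject, situation)
    return predicted + ", but different"  # guaranteed to differ

print(psychologist(bloody_minded_subject, "lab visit"))  # -> comply
print(bloody_minded_subject("lab visit"))                # -> comply, but different
# Revising psychologist() to predict the new answer just re-runs the
# same construction against the revised theory, forever.
```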
And in the end Colin is right. Psychology is not like chemistry because we are self-aware. Chemistry can be described as a closed system. Your attempt to formalize this change/awareness is going to drown in the swamp of Kolmogorov Complexity.
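(Aside, for readers who haven't met the term: the Kolmogorov complexity of a string x, relative to a universal machine U, is the length of the shortest program that outputs it,

K_U(x) = min{ |p| : U(p) = x },

and this quantity is provably uncomputable. That is the swamp: there is no general procedure for measuring how compressible a description is, let alone a description that keeps updating on itself.)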
Basically, the Incompleteness Theorem and its further implications can mathematically prove that psychology is truly emergent and can never be reduced to chemistry-like laws. No way, nohow.
Give it up. Any attempt will just be adding epicycles to Skinner's Ptolemaic system.
The physics/chemistry paradigm has enough problems with complex systems, the three-body problem being the classic example (2 bodies = simple, 3+ = complex). If you want to explain the complex systems in human cognition, this type of decomposition into individual molecules just won't work.
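For flavor, a toy sketch (my own illustration: crude Euler integration, unit masses, G = 1, invented initial conditions) of why three bodies are qualitatively different: two runs that differ by one part in a billion in a single velocity drift apart as the simulation proceeds.

```python
# Two three-body systems differing by 1e-9 in one velocity, evolved side
# by side; the separation between them grows. Illustrative only.
import math

def step(bodies, dt=1e-3):
    """One Euler step for [x, y, vx, vy] bodies with unit masses, G = 1."""
    acc = []
    for i, (xi, yi, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (xj, yj, _, _) in enumerate(bodies):
            if i != j:
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy + 1e-6) ** 1.5  # softening avoids blow-up
                ax += dx / r3
                ay += dy / r3
        acc.append((ax, ay))
    for b, (ax, ay) in zip(bodies, acc):
        b[0] += b[2] * dt
        b[1] += b[3] * dt
        b[2] += ax * dt
        b[3] += ay * dt

def system(eps=0.0):
    return [[-1.0, 0.0, 0.1, 0.3],
            [1.0, 0.0, -0.2, 0.1 + eps],
            [0.0, 1.0, 0.1, -0.4]]

a, b = system(), system(eps=1e-9)
for t in range(1, 20001):
    step(a)
    step(b)
    if t % 5000 == 0:
        gap = math.dist(a[0][:2], b[0][:2])
        print(f"t = {t * 1e-3:4.1f}: body-0 separation {gap:.2e}")
```

With two bodies you get a closed-form ellipse; with three, no such formula exists, and tiny differences in the starting state compound.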
"The dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty and we must rise with the occasion. As our case is new, we must think anew and act anew. We must disenthrall ourselves, and then we shall save our country.” - Abraham Lincoln
The halting problem is a pretty weird example to start with, given that we do in fact know how computers work, and we can talk about what they're made of, even if we can't determine from a description of an arbitrary computer program and an input whether the program will finish running or continue to run forever.
I don't think we're looking for a theory that can take a psychological state and an environment and predict the outcome, though it would be nice if we happened to end up with one of those anyways.
The goal of finding a paradigm is to find a way to talk about the mind and model what it's made of. Chemistry and physics are mature because we have very satisfying ways to describe the nature of matter and energy, NOT because we can always predict the outcome of a chemical reaction. We can't! And not because we can always predict the outcome of a physical interaction. We can't! But what we do have is a practical ontological understanding of matter and space and so on, and I'd like to some day have the same for psychology. What kinds of units is the mind made out of? There's going to be an answer to this question.
Prediction is enormously overrated. But it keeps coming up, so maybe I need to write a separate post just about that.
Yes, but...
Let me try it a different way.
My background is in Machine Learning, especially Natural Language Processing. With a bachelor's in Physics, I know how computers work from the transistor NAND gates on up.
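(For anyone who hasn't seen it, "NAND gates on up" is literal: NAND alone is functionally complete, so every other gate is built from it. A minimal sketch:)

```python
# NAND is functionally complete: all other Boolean gates reduce to it.

def nand(a, b):
    return not (a and b)

def not_(a):          # NOT from one NAND
    return nand(a, a)

def and_(a, b):       # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):        # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):        # XOR from four NANDs
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Sanity check against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
print("all gates check out")
```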
So your statement that we in fact do know how computers work is equivalent to saying that we are well on our way to knowing the neurophysiology of the wetware in our head.
All well and good, but psychology is not neurophysiology. That is how far the paradigm of chemistry will take you: no farther.
In fact, we computer geeks are running into the exact same problem that psychology has faced, due to the simple fact of Moore's law. We could come up with simple algorithmic models of simple machine learning paradigms, but everything has gone out the window in the past few years simply due to scaling up.
We can build these complex Large Language Models, and we don't know how they work. We computer geeks are failing to come up with the equivalent of a psychological model of computer algorithms.
What kind of units is the mind made of? We know that: they're called neurons.
The ontological understanding of neurons does not help you understand the mind. The ontological understanding of computers does not help you to understand how an LLM works.
Yes, the Halting Problem is weird. My philosophy prof Howard Stein presented it at the end of Logic 101 my sophomore year, back in 1970. I thought about it all weekend, then went to his office and tried to show where it was wrong. What I was sketching out, although I did not know it at the time, was an argument based on algorithmic information theory, the work that Solomonoff, Kolmogorov, and Chaitin pioneered in the 1960s. I have been wrestling with it ever since.
Focus on chemistry and you end up with a glorified neurophysiology; that's all, and no more. You have to really grapple with the implications of the Incompleteness Theorem, and I promise you the result will be a completely different paradigm, one that will leave simple sciences like physics and chemistry in the dust.
By the way, LLMs are a dead end as an approach to Artificial General Intelligence. We have no theory of AGI because it is essentially the same problem as that of psychology: the Hard Problem of Consciousness.
The best approach I have seen to date is that of Lenore and Manuel Blum. Perhaps this is what you are looking for:
"AI Consciousness is Inevitable: A Theoretical Computer Science Perspective," Lenore Blum and Manuel Blum: https://arxiv.org/abs/2403.17101
Here is Lenore Blum lecturing on the theory, "Insights from the Conscious Turing Machine - a machine model for consciousness": https://www.youtube.com/watch?v=2kmz9DS6Fjg
And here is Manuel Blum lecturing on the initial development of the theory back in 2018, "Towards a Conscious AI": https://www.youtube.com/watch?v=AXKI2f1AxtM&t=205s