Thanks for the response. I think you represented my view well. You also have convinced me on the point about 'provisos' not being needed in true hard sciences.
I think we agree on many of the same problems. My greatest annoyance is that psychology treats tendencies, biases and effects (which are glorified averages) as if they were the basic units of the mind we were looking for. It's not uncommon to hear psychologists explain behavior in terms of these findings (e.g., Y happened because of the X Effect), which is nonsensical and confuses explanans and explanandum.
But if these are not the *units* we are looking for, what are?
My answer is something like 'the reasoning process itself.' I think if we could peer into someone's mind and see what they were reasoning about, and the reasoning process they were using to think about it, we would see reasoning unroll in a deterministic way, and then behavior unroll deterministically from there.
But this view would force us to figure out what context is relevant to reasoning. To which the answer is "all possible things," which is an even larger set than “every substance in the universe”. (The view also forces us to ask which reasoning process is used, but I believe that is more predictable and limited: e.g., heuristics, pattern matching, mental simulation, etc.)
Because of the impossibility of codifying the entire set of all possible relevant things, and also our inability to know exactly what someone is considering, how they are considering it, and the sets of relations they have in mind, we'll never get true prediction in the way chemistry has.
But, and to your point, perhaps like how evolution cannot predict the exact evolutionary path of a beetle but can predict niches and what might be needed to fill the niche, we may get general principles which help with certain looser forms of prediction. (Predictions which may apply to all intelligent systems, or animals, or animals+plants, or maybe just humans).
This has been less a question, and more a babbling and aimless comment, so maybe I should try to end with a question.
If reasoning itself is not the 'unit', then what? Is there some other layer of analysis you expect to be as deterministic as reason? Or, if you are trying to predict reasoning, then shouldn't you be interested in neuroscience rather than psychology?
Strong endorsement of this comment. At risk of simply repeating what you said: psychology often operates by talking about relationships and averages between variables, things like tendencies, biases and "effects". When asked to explain something, psychologists tend to appeal to these relationships and averages. But explaining a tendency in terms of the tendency is fully circular! Same thing for explaining a bias in terms of the bias, etc.
I think the only way to move past this is to understand the "units" of thought. In my opinion, you can only explain something in terms of the process behind it. Like in Catan: because a board game has units, you can take questions like, "why do players without early access to brick tend to lose?" and you can explain this tendency by working through the units step by step towards a necessary conclusion, instead of just labeling it "yes that's early-game brick deficiency failure syndrome, or EGBDFS".
That said, there are not many ideas about what the units in psychology could be. They don't seem to be much like the units behind the "reasoning" that happens in digital computers, or at least, that angle hasn't gotten cognitive psychology very far. I'm not convinced that neurons are the right units either, and I don't know if there's a clean distinction between psychology and neuroscience; I think the eventual paradigm might contain both.
Maybe we’ve gone wrong from the very beginning, by starting with the word "predict." To fall back to my favorite example, early chemists didn't worry too much about prediction, except in the sense of how different outcomes of an experiment would favor one model over another. The successful work tended to focus on just that: models of ordinary chemical phenomena. So maybe we shouldn't try to predict things, and should just look for the units directly.
I've thought about this for a few days before deciding to wade into this discussion. Three quick points first:
1. It's unclear to me what constitutes a paradigm, vs. proto-paradigm, vs. non-paradigm. It seems to me that the key difference is the number of people who adopt a given set of assumptions, beliefs, and practices. So paradigms are really popularity contests amongst scientists trying to study similar things.
2. The way I'm understanding the arguments, it seems like part of the quandary is whether psychology should be more akin to physics or engineering. Are we trying to better understand fundamental principles, or solve problems with our current state of knowledge? Should psychology be separated along this distinction?
3. Glorified averages are not all bad as group-level descriptions. To reduce healthcare burden, we need people to eat better and engage in more physical activity on average. To reduce greenhouse gas emissions, we need people to fly less on average. Our interventions may not work for certain individuals, perhaps even backfire, but on the whole, we're getting what we want. But I 100% agree that averages fail horribly at explaining individual behaviours.
Now, let me preface by saying I'm a behaviourist (it's more chic to call ourselves behaviour analysts nowadays) by training, and my own paradigm falls squarely within the purview of evolutionary thinking (not every behaviour analyst agrees). Here's what I think - behaviour is what an organism does, which includes thinking and feeling, which is very much guided by selection processes. Behaviours that contact favourable outcomes tend to be repeated. Behaviours that contact unfavourable outcomes tend to stop occurring. Within such a paradigm, context is central. Behaviours can only be understood within their contexts, which include the present conditions, historical conditions, the organism's phylogeny, and ontogeny.
I gather from what Jared is saying that context leak refers to influencing factors we have not accounted for. In theory, he may be right that "all possible things" can influence behaviours. But we know that some possible things are more possible than other possible things. For example, stimuli that are salient more readily capture our attention than stimuli that are less salient. We hear our names at the party. We also know that stimuli can become more salient due to our past history with them. Learn what a symbol represents and you suddenly start to spot it everywhere. From there, we can generate testable predictions and solve certain problems. The key is to identify which possible things matter more.
Ethan, I agree with you that we can make sense of complexity by studying its parts in isolation. We make choices under complexity. But we've managed to accomplish quite a fair bit in understanding some of the influencing factors. We know larger-later choices are discounted in favour of smaller-sooner choices, and we know discounting is steepest in the initial time period. We know how people discount probabilistic choices. We know how losses and gains affect choices asymmetrically. And more. To me, all these form part of the context under which choices are made. The more we understand, the better we are able to predict choices under complexity - when a choice produces an outcome that is delayed, probabilistic, framed as a loss, and so forth.
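To make the delay-discounting point concrete, here is a minimal sketch (my illustration, not a result from this thread) of the standard hyperbolic model, V = A / (1 + kD); the discount rate k = 0.05 is an arbitrary assumption chosen for the example. It shows why discounting is steepest early on: each added unit of delay costs less value than the one before.

```python
# Minimal sketch of hyperbolic delay discounting: V = A / (1 + k*D).
# k = 0.05 is an arbitrary illustrative discount rate, not an empirical estimate.

def discounted_value(amount: float, delay: float, k: float = 0.05) -> float:
    """Subjective value of `amount` received after `delay` time units."""
    return amount / (1 + k * delay)

amount = 100.0
for delay in [0, 10, 20, 50, 100, 200]:
    print(f"delay={delay:>3}: value={discounted_value(amount, delay):6.2f}")

# Value falls from 100.00 to 66.67 over the first 10 units of delay,
# but only from 16.67 to 9.09 between delays of 100 and 200: steepest early.
# A smaller-sooner option (50 now) beats a larger-later one
# (100 at delay 30, worth 100/2.5 = 40.00) under this k.
```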
Now, on to the question of: Where should we place our boundaries? It seems to me that depends on what we are trying to understand and what problems we are trying to solve. Does understanding brain structures help us understand why we do what we do? Yes. How about neurons and glia? Also yes. Neurotransmitters? Yes. DNA? Genes? Protein structures? Yes, yes, and yes. Zooming out, how about individual units of behaviour, or individuals themselves, or groups of individuals, or groups of groups of individuals? And does understanding animal behaviour help us? Many believe so. Or plant and fungi behaviour? Ethan might be the only one in this camp.
I think the reason psychology feels so broad and unfocused is that we are a loose collective of scientists trying to understand phenomena at multiple levels. Perhaps it's bad for the branding of psychology; I don't necessarily care. It's fine for certain people to spend their whole lives studying a specific emotion, or a specific condition under which choices are made, or how individuals interact in a system, or how groups interact with other groups. I think we can advance knowledge in many different ways, and solve many different problems. So my question is, do we need a single paradigm for psychology? Does it matter?
Thanks for your comment! Some reactions:
1. Paradigm vs proto-paradigm vs non-paradigm: I agree it's vague and ill-defined. I'm not sure it's actually a meaningful distinction in the way it's being used in this conversation. It may just be a way to express discontent. I perhaps could come up with a distinction that makes psychology pre-paradigmatic, but truth be told, I do think psychology has a paradigm.
2. That's not how I would put it. I think it's more about having sufficient causal explanations such that replicability and generalization become less mysterious.
3. I think the difference between the examples you mention and the "averages" in psychology is that the former have very specific causal mechanisms that we can manipulate directly. Whereas with something like Loss Aversion, it's a bit of a crapshoot whether a given experiment will find it.
Ultimately, the question I want to answer is this: why is it a crapshoot? Why are we so bad at knowing what will replicate and generalize? How come Behavioral Scientists have to test every intervention they do? Why aren't we/they better at predicting results of common interventions?
The simple answer is context. A word that basically just means, "Something is missing in our models." It's a bit of a non-answer, to be honest.
The longer answer is that the context is whatever our research subjects are thinking about, and how they are thinking about it. Our experiments try to get them to see things in a certain way, and they don't always obey. Presumably because they don't see our interventions in the way we see them. So how do we shift our boundaries to capture not just what researchers consider salient, but what the subjects consider salient? What methods can we use to observe and intervene on that subjectivity with greater consistency?
Maybe the answer is just that we need to get better at qual. I have a feeling I am more comfortable with that answer than Ethan would be. But I'm also not entirely satisfied with that answer for reasons I can't entirely articulate.
Ultimately, I want psychology to get to a point where we consider it relatively trivial to determine whether someone will be Loss Averse or not, because we will know how to see the choice as they see it. But I think our current approach of proclaiming "people tend to be Loss Averse" is leading us down paths that take us further away from being able to do that. Such statements reify the average and take us away from the subjectivity we need to understand in order to explain the variance.
Hey Jared, great to hear from you.
Sorry if I misrepresented some of your arguments. I do think you're correct - a lot of psychological findings are a crapshoot. And I also empathise with your frustration. I think it relates to the fact that psychology is so broad that it shelters some pretty poor science. While loss aversion may be a thing under specific conditions that we haven't fully fleshed out, some other biases and tendencies are complete garbage, yet they all fall under the same umbrella of behavioural economics, along with well-established findings such as delay discounting, of which we understand the parameters fairly well. So we have the good, the questionable, and the garbage all in one package. Unfortunately, this lends the garbage legitimacy.
Nonetheless, there is good science being done that's highly replicable. As that progresses, I'm certain we will get better at filling in the gaps in our models. I personally don't think it's possible to determine causally whether someone will act in a certain way, or think a certain thought, in response to certain stimuli. We don't know whether someone skipped their coffee that morning, or whether their baby kept them up at night, or whether they saw a cute kitty 10 seconds ago, or had a random stranger smile at them 30 minutes ago, or had a traumatic experience with someone who looks like you, and we may not be able to model these "noise events". But I do think we can get to a satisfactory level of predictability, if we keep doing good science and uncovering fundamental principles of behaviour.
Qual can yield us subjective and objective data (as can self-report measures). I don't think subjectivity is necessarily the right answer, unless you subscribe to the belief that truth is constructed. I don't, even if I think it's important to understand and account for. I think it takes us away from building a bedrock of fundamental principles governing behaviours.
Finally, and this perhaps reflects my paradigm, do you believe the thinking itself should be the object of study? I think that it's the context in which thinking occurs that leads us closer to those truths.
Yen
I don't think it's possible to fully predict what people will do in all situations. I agree with your point that very random things can influence them.
But people do display some consistency in how they understand situations. One of the basic units of cognition is pattern recognition. We see this in my area of research with expert decision-makers (e.g., firefighters, medics, law enforcement). These experts will consistently understand similar situations in terms of a prototype, which they identify through pattern matching. And once you understand how they have framed a given choice, it becomes clear why they do what they do. Behavior is predictable once you know how someone frames a choice. The challenge is in figuring out how they frame it in the first place. The problem is understanding pattern matching, framing, sensemaking, etc.
Take Loss Aversion as an example. I don't think it's a module that exists in the brain, nor a fundamental property of how we weigh trade-offs. I think it's just a manifestation of how people see a given choice, which is why the result is so finicky. But I think it is possible to understand how people come to frame a choice by better understanding what causes people to focus on some aspects of the world over others.
Ultimately, I don't think "truth" is constructed, but I am a constructivist in that I believe people don't all see things in the same way. Some people see the "invisible gorilla" in the famous experiment, and some don't. The world isn't given to us. Saliency isn't a property of the world. It's a property of our perceptual system, which is informed by our goals, values, familiarity, and our current understanding of the situation.
Because of this, for me, the unit of analysis that is most interesting is the "frame." Not the frame given by the experimenter, but the frame the subject generates for themselves in the moment. Some people see a choice in the way the experimenter wants them to, and some don't. So for me, the thing we are trying to control is not the objective context, but the frame of the participants.
(Sorry if this comment is a bit all over the place)
Thanks! I think I’ve grasped your position pretty well.
Again, I think because of my history, I tend to be more interested in what goes on in the context in which behaviour occurs. This includes both the present conditions and the ontogeny of an individual. Context used in this sense has a very clear meaning, instead of as a placeholder for “we don’t know what else is influencing behaviour”.
Expert decision makers tend to make similar choices/adopt similar frames in similar situations due to a similar history with similar situations. The context gives rise to similarities in the frames/choices, so in my view, understanding that context is key.
For loss aversion, we only have a rudimentary understanding of the contexts in which it does or does not present. But we have pretty compelling evidence that people tend to be more loss averse when the stakes are high. The difference in high vs. low stakes drives the difference in how people think and choose.
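For concreteness, here is a small sketch of the textbook Tversky-Kahneman value function that formalizes loss aversion; alpha = beta = 0.88 and lambda = 2.25 are their published median estimates, and everything else is my illustration. Note that with a constant lambda the basic model treats every stake the same, so the high- vs. low-stakes difference would have to show up empirically as lambda itself varying with magnitude.

```python
# Sketch of the Tversky & Kahneman (1992) prospect-theory value function.
# alpha = beta = 0.88 and lam = 2.25 are their published median estimates;
# the stakes below are arbitrary illustrative amounts.

def value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of a gain (x >= 0) or loss (x < 0) relative to a reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

for stake in [10, 100, 1000]:
    gain, loss = value(stake), value(-stake)
    print(f"stake {stake:>4}: v(+)={gain:8.2f}, v(-)={loss:9.2f}, loss/gain={-loss / gain:.2f}")

# With a constant lambda the loss/gain ratio is 2.25 at every stake;
# stake-dependent loss aversion would require lambda to grow with magnitude.
```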
Ultimately, if you want to control the frame of the participant, it's really by altering the context in which they engage in the framing, unless you're talking about directly intervening on neural activity with Neuralink 5.0 down the road. That's how I see it anyway.
As a computer geek with a background in computer math and logic, I want to follow up on Colin Fisher's point: "So we’ll always be shooting at a moving target, paradigm or no. And that will forever keep us from being like chemistry."
This is a rough statement of Gödel's Incompleteness Theorem and its variants, Turing's Halting Problem and Kleene's Recursion Theorem. I'm going to cite the Wikipedia entry for the Halting Problem. Although Gödel stated it first, Turing stated it in a very clear way:
https://en.wikipedia.org/wiki/Halting_problem
The basic idea is this: "people are changed by the awareness of their own psychology."
Let's state this in terms of the Halting Problem:
"My theory of psychology states that, given a particular human psychological state (the algorithm) and a particular environmental condition (the input), the human will "halt" - that is, settle on a particular response."
But humans, unlike chemical molecules, can do the following:
"OK, you smarty-pants psychologist, let me look up the theory you just published. OK, based on what you derive on page 36, you claim I am going to halt on this particular response. So you say. But me, being bloody minded, is going to do something entirely different. Not even the opposite - just different. So I may have halted, but I halted on a different value. So there!"
To which you reply:
"Taking Colin’s concerns as stated, if people are changed by the awareness of their own psychology, we can still ask how they are changed by the awareness of their own psychology. "
Unfortunately, the Incompleteness theorem is a hall of mirrors. Yes, you can do that, but Colin's bloody-minded subject can trump that too. You are chasing your own tail.
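The bloody-minded subject is just Turing's diagonal argument in costume. Here is a minimal sketch of that argument, with a hypothetical `halts` oracle standing in for the psychologist's published theory (no such total, correct function can exist, which is the whole point):

```python
# Minimal sketch of the halting-problem diagonalization.
# `halts` is a hypothetical oracle playing the role of the published theory;
# the sketch shows why no correct, total implementation of it can exist.

def halts(program, argument) -> bool:
    """Hypothetical: returns True iff program(argument) eventually halts."""
    raise NotImplementedError("no correct implementation can exist")

def bloody_minded(program):
    """Read the prediction about ourselves, then do the opposite."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop, so halt immediately

# Feed the defier its own description: bloody_minded(bloody_minded)
# halts if and only if the oracle says it doesn't. Contradiction either way,
# so the oracle - the perfect published predictor - cannot exist.
```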
And in the end Colin is right. Psychology is not like chemistry because we are self-aware. Chemistry can be described as a closed system. Your attempt to formalize this change/awareness is going to drown in the swamp of Kolmogorov Complexity.
Basically, the Incompleteness Theorem and its further implications can mathematically prove that psychology is truly emergent and can never be reduced to chemistry-like laws. No way, nohow.
Give it up. Any attempt will just be adding epicycles to Skinner's Ptolemaic system.
The physics/chemistry paradigm has enough problems with complex systems like the three-body problem (2 bodies = simple, 3+ = complex). If you want to explain the complex systems in human cognition, this type of decomposition into individual molecules just won't work.
"The dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty and we must rise with the occasion. As our case is new, we must think anew and act anew. We must disenthrall ourselves, and then we shall save our country.” - Abraham Lincoln
The halting problem is a pretty weird example to start with, given that we do in fact know how computers work, and we can talk about what they're made of, even if we can't determine from a description of an arbitrary computer program and an input whether the program will finish running or continue to run forever.
I don't think we're looking for a theory that can take a psychological state and an environment and predict the outcome, though it would be nice if we happened to end up with one of those anyways.
The goal of finding a paradigm is to find a way to talk about the mind and model what it's made of. Chemistry and physics are mature because we have very satisfying ways to describe the nature of matter and energy, NOT because we can always predict the outcome of a chemical reaction. We can't! And not because we can always predict the outcome of a physical interaction. We can't! But what we do have is a practical ontological understanding of matter and space and so on, and I'd like to someday have the same for psychology. What kinds of units is the mind made out of? There's going to be an answer to this question.
Prediction is enormously overrated. But it keeps coming up, so maybe I need to write a separate post just about that.
Yes, but...
Let me try it a different way.
My background is in Machine Learning, especially Natural Language Processing. With a bachelor's in Physics, I know how computers work from the transistor NAND gates on up.
So your statement that we in fact do know how computers work is equivalent to saying that we are well on our way to knowing the neurophysiology of the wetware in our head.
All well and good, but psychology is not neurophysiology. That is how far the paradigm of chemistry will take you: no farther.
In fact, we computer geeks are running into the exact same problem that psychology has faced, due to the simple fact of Moore's law. We could come up with simple algorithmic models of simple machine learning paradigms, but everything has gone out the window the past few years simply due to scaling up.
We can build these complex Large Language Models and we don't know how they work. We computer geeks are failing to come up with the equivalent of a psychological model of computer algorithms.
What kind of units is the mind made of? We know that: they're called neurons.
The ontological understanding of neurons does not help you understand the mind. The ontological understanding of computers does not help you to understand how an LLM works.
Yes, the Halting Problem is weird. My philosophy prof Howard Stein presented it at the end of Logic 101 my sophomore year back in 1970. I thought about it all weekend, then went to his office and tried to show where it was wrong. What I was sketching out, although I did not know it at the time, was an argument based on Computational Learning Theory - the work that Solomonoff, Chaitin, and Kolmogorov pioneered in the 1960s. I have been wrestling with it ever since.
Focus on chemistry and you end up with a glorified neurophysiology. That's all and no more. You have to really grapple with the implications of the Incompleteness theorem and I promise you that it will be a completely different paradigm that will leave simple sciences like physics and chemistry in the dust.
By the way, LLMs are a dead end as an approach to Artificial General Intelligence. We have no theory of AGI because it is essentially the same problem as that of psychology: the Hard Problem of Consciousness.
The best approach I have seen to date is that of Lenore and Manuel Blum. Perhaps this is what you are looking for:
https://arxiv.org/abs/2403.17101
AI Consciousness is Inevitable: A Theoretical Computer Science Perspective
Lenore Blum, Manuel Blum
Here is Lenore Blum lecturing on the theory
https://www.youtube.com/watch?v=2kmz9DS6Fjg
Lenore Blum - Insights from the Conscious Turing Machine - a machine model for consciousness
Here is Manuel Blum lecturing on the initial development of the theory back in 2018
https://www.youtube.com/watch?v=AXKI2f1AxtM&t=205s
Manuel Blum: Towards a Conscious AI