In this episode of the Modern Pain Podcast, host Mark Kargela discusses the philosophy of science as it pertains to pain research with philosophers Rani Lill Anjum and Elena Rocca. The conversation revolves around their book, which critiques traditional scientific approaches and emphasizes the importance of understanding philosophical biases in research. The trio explores how empirical methods often fail to address the complexity of individual pain experiences, the ethical dilemmas of excluding certain populations from research, and the significance of patient stories in clinical practice. This episode aims to give clinicians a deeper understanding of the philosophical underpinnings of the science they use in their own practices.
NEW TEXTBOOK - Philosophy of Science
CauseHealth Book - Rethinking Causality, Complexity and Evidence for the Unique Patient
Unpacking the Complexity of Evidence YouTube Playlist
*********************************************************************
📸 - Follow us on Instagram - https://www.instagram.com/modernpaincare/
🐦 - Follow us on Twitter - https://www.twitter.com/modernpaincare/
🎙️ - Listen to our Podcast - https://www.modernpaincare.com
____________________________________
Modern Pain Care is a company dedicated to spreading evidence-based and person-centered information about pain, prevention, and overall fitness and wellness
[00:01:34] Mark Kargela: Hello, friends, and welcome back to another episode of the Modern Pain Podcast. If you're anything like me, when you first started reading research, and for me well into my career, you never considered some of the ways we think about what counts as knowledge, the best way to gather it, and how we determine what is causing what in our practice.
Getting into the philosophy of science has been a huge help for me. It has helped me understand the limits of some research methods and philosophies when it comes to understanding the complexity of pain we see in our face-to-face encounters with people.
This week, we had two philosophers, Rani Lill Anjum and Elena Rocca, on to discuss these issues and how their newly released book gets into them to help us better recognize the challenges we face in science.
Rani spoke to their past work on the CauseHealth project and how frontline clinicians became very interested, as it spoke to the challenges they saw in some of the foundational science of their own practices.
[00:02:24] Rani Lill Anjum: First, from the CauseHealth project, we wrote this book, Rethinking Causality, Complexity and Evidence for the Health Sciences. And what was really interesting during that project was to see how practitioners in the health sciences got so engaged in quite deep philosophical discussions about the foundation for their own practice.
[00:02:48] Mark Kargela: We discussed empiricism and how a strict adherence to empirical methods brings challenges in science.
[00:02:53] Rani Lill Anjum: But these are ideas that the pure empiricists want to avoid. So instead of speculating about why something happens, they just want to see what actually happens. And what we're trying to do in the book is to say that this doesn't work in science. It doesn't work as a program to say that we're going to assume nothing.
[00:03:13] Mark Kargela: We touched on philosophical biases in research and how they often go unspoken in conferences.
[00:03:18] Elena Rocca: Your idea of which evidence you trust the most, what you value the most, or which error you are most afraid to make, this will not change because of new evidence, at least. And we call these philosophical biases. But what we can do is talk clearly about these ideas that go unspoken in the meetings where I am, at least.
[00:03:41] Mark Kargela: If you work with patients dealing with persistent pain, you likely already know that many of them are never represented in the research studies we're using to make clinical judgments about the care we are providing for them.
[00:03:51] Rani Lill Anjum: When we have science generated from data that are supposed to be kept clean, to get the same cause and same effect under the same or similar conditions, and they exclude most of the population, or at least half of the population, then the question is: how relevant are these results for the population that is excluded from the study?
[00:04:15] Mark Kargela: This episode is a great way to start better understanding the philosophy of science as it relates to the science you are using in your own practice. Hopefully it will get you thinking and put research in its best context within every unique N-equals-one encounter in your own practice.
Please consider subscribing wherever you're watching or listening to the podcast.
And thank you again so much for spending some of your valuable time on our podcast. Now on to the episode.
[00:04:38] Announcer: This is the Modern Pain Podcast with Mark Kargela.
[00:04:43] Mark Kargela: Rani, Elena, welcome to the podcast.
[00:04:46] Rani Lill Anjum: Thank you, Mark, for having us.
[00:04:48] Elena Rocca: Thank you very much, Mark.
[00:04:50] Mark Kargela: It's great to have you both. I've had the pleasure of speaking with Rani before this first time I've spoken with Elena and I've read a lot of the work of both of you. So I'm excited to, to have you here.
Could you both Rani, we'll maybe start with you as far as introduce yourself, where you're at and what your role is, what you do. And then we'll move it over to Elena.
[00:05:07] Rani Lill Anjum: My name is Rani Lill Anjum. I'm in Norway, at something that is even called the Norwegian University of Life Sciences.
I work here as a philosopher, one of maybe two or three philosophers. So this is a science-heavy university, but I teach introduction to philosophy to bachelor's students. Then I teach philosophy of science and research ethics to PhD students. And then I teach a course that I developed together with Elena for master's students in environmental sciences and natural resource management, which is basically philosophy of science, but we don't tell them that it's a philosophy course. They will notice eventually.
[00:05:54] Mark Kargela: Elena.
[00:05:54] Elena Rocca: Yes, I also work at a university, one very close to Rani's, called Oslo Metropolitan University, where I am an associate professor in pharmacy. I teach pharmacy students; I'm a bit involved in theory of science, but I mainly teach clinical skills. Before that, I was working with Rani on the project called CauseHealth. Maybe you have heard about that, maybe not, but it's there that we developed the idea of the course and then of the book. I have a background in pharmacy and then in molecular biology for my PhD. And I've become more and more interested in the theory and philosophy of science, especially in how we think about cause and effect in the clinic and in the use and misuse of medicines, because of my original background, but also in many other things, for example poisons in the environment.
[00:06:55] Mark Kargela: Awesome to have you both here. You spoke of the CauseHealth project; we've had some of the folks from CauseHealth on the podcast, and we had a masterclass. We'll link it all in the show notes, especially the CauseHealth book, which I think is still open access for folks to take a look at and read. It's an amazing resource.
One thing I recommend people do, and we spoke to this a little bit before we went on and recorded today: there's just a lack of understanding of what goes into research, what we consider knowledge, how we accumulate it, what kinds of methods we use, and what kinds of philosophies underpin that.
Obviously you teach whole semester courses that discuss this, so we'll try to distill some of it into a 45-minute podcast, but I'm wondering if you can speak a little bit about the book, what prompted it, and what your goals were with it.
[00:07:43] Rani Lill Anjum: First, from the CauseHealth project, we wrote this book, Rethinking Causality, Complexity and Evidence for the Health Sciences. And what was really interesting during that project was to see how practitioners in the health sciences got so engaged in quite deep philosophical discussions about the foundation for their own practice.
So we tried to look at some very basic underlying ideas in medicine and the health sciences that made people disagree. Or maybe they agreed but were talking past each other, because one would use concepts with different meanings when one had different backgrounds. For instance, what does it mean for something to be person-centered?
But also, there is this whole philosophy of science underlying clinical practice. So we got very inspired by the feedback we got in the CauseHealth project. So we thought, or actually it was Elena's idea, that we could do something also for our students who are not health scientists or medical students, but science students who are trying to learn about sustainability.
What is the sustainable solution? What is the sustainable system? What they experience is that they have teachers from different faculties and different departments, or even just different sections, who say opposite things. So someone says this is an important result, it shows that this is not sustainable, and then someone else says no, that's not a very good scientific result because it wasn't done with the right methods.
So you shouldn't trust that type of study, you can just dismiss it. Or one even had different types of goals, different sustainability goals to think about. So that's why we took this starting point: that there are philosophical assumptions in science, research, and practice that people are usually not aware of.
And we're trying to bring this to the front, to show people how it relates to and influences how science is done, but also how different research traditions and research cultures start from different philosophical assumptions. So we call them philosophical biases.
[00:10:08] Elena Rocca: I can just add that I see this all the time in my job, in my profession. Maybe now I'm biased because I've been thinking about this for so long with you, Rani, and after writing two books it's obvious that I see the issue more clearly than others. But almost every time I have discussions or go to seminars, I see this issue: when we're trying to use evidence, or science-based facts, for science-based decision making, there's always the question of when evidence is enough to justify decisions, to justify actions. What kind of importance do we give to what evidence? I can give an example from the last seminar I attended before the summer, a seminar with a lot of public health researchers and experts.
They were talking about screening programs, and a person showed data from the mammography screening program that we have in Norway at the moment, for women from 50 on. And so the question is: is that the right thing to do? Should we have this screening program?
Is it useful or is it not? It is of course an ongoing question, because we have a certain amount of resources and we have to use them well, for interventions that are effective. And of course, there is a well-known, well-understood mechanism by which, if a modified cell is seen with a mammography, it is removed, and that prevents cancer, at least for a certain period of time. So there's nothing mysterious about that; single women can experience this. But when you look at the data these researchers showed at the level of the entire population, there is no evidence whatsoever that mammography screening prolongs life at the population level, not even one day.
So if you compare populations with and without the mammography screening, there is not even one day, on average, of life prolongation. So there you have two types of evidence. They're quite clear, and they point in different directions. So if you're a decision maker, would you think that it's valuable to do the screening, for the population and for the single woman, or not?
In my opinion, this is a question of what evidence you value the most, what evidence you trust. It's also a question of what kind of consequence is the worst for you if you make a mistake. And these are all things that we call philosophical bias, because they are not going to be settled by any type of new evidence.
So your idea of which evidence you trust the most, what you value the most, or which error you're most afraid to make, this will not change because of new evidence, at least. And we call them philosophical biases. And I don't know what we can do about that practically, like how to choose the best, what should we go for?
This I don't know, but what we can do is talk clearly about these ideas that go unspoken in the meetings where I am, at least.
[00:13:46] Mark Kargela: I would definitely agree. I don't think a lot of everyday clinicians understand the philosophical bias that goes into it.
You spoke to some of the challenges with population research, where we're talking about big groups, means, and averages, and where sometimes the uniqueness of the people we see face to face in clinic gets lost within those means and averages. I'm wondering if you could speak a little bit to that philosophical bias of maybe the dominant paradigms of research and philosophy that we've traditionally operated under.
I mean, there are some newer ones, and it's not that we haven't had qualitative studies looking at narratives and themes and different things around unique people, but a lot of it has been more population-driven statistics. I'm wondering if you could speak a little bit to that bias that you see out there, maybe in that aspect.
[00:14:37] Rani Lill Anjum: Well, I can say a bit about, for instance, causal evidence. One might wonder if something works, and the typical way to think about whether something works is to check: is there a causal link between, generally, this intervention and, generally, the outcome that you expect? And that type of evidence has to be generated from a population.
You have to see how often you get the outcome you would expect when you have the intervention. But of course, when you have that type of population evidence, then in the clinic you have to think: does this work for my patient? And that is the single case. So you have a lot of general evidence, which works really well if you think that causality has something to do with a regularity or a correlation.
So maybe you want to see, for instance, a regularity between the intervention and the outcome. And in addition, you want to see a statistical difference-maker, so that if you compare people who get the intervention with people who don't, you should more often get the positive effect when they have the intervention.
So that's the difference-making concept. For these two philosophical concepts, statistical evidence is the best, and the comparison with a control, the randomized controlled trial, is the perfect type of study, if you assume this idea of causality. Of course, there are many things that make it difficult to conduct the perfect randomized controlled trial, one being that ideally you would have complete evidence.
You would have every instance, and then you would check for everyone whether or not they would get the outcome. Since that's not possible, what we try to do is have very big data sets. In addition, you have the problem, which is just a practical problem, not a philosophical problem, that you don't want to test on, for instance, risk populations.
You don't want to test it on children, or on people who use many different interventions at the same time, because then you don't know which of these medications or exercises actually did the causal work. So you have practical problems. But anyway, the problem is that when you have population-generated evidence, if this is the perfect type of evidence, you have to assume that every person who gets the intervention is a statistical average, someone for whom this population is representative.
And if the population is not representative, you might get a different outcome than the statistical average outcome. This is something that Elena and I have become more and more interested in. And Elena could also tell you, because now she has been working a lot with pharmacovigilance, drug safety, looking at unexpected effects. So this is something we have done a bit together when it comes to causality: if you see that the same cause gives the same effect under the same, normal, or average conditions, then you can make a prediction that this is what's going to happen in similar cases. But the problem is that everyone is different.
And this is what you meet in the clinic: you might meet people who are different from the people who were in the study, and you might even meet people who get an adverse effect, not at all what was tested. I mean, ethically, I'm quite interested in the fact that we think about ethics when we recruit people to a study.
So we don't want to harm people in the study. But of course, when you minimize the risk in the study, the treatment looks much safer than it might be if you gave it to everyone.
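Rani's point about statistical difference-makers hiding individual variation can be made concrete with a small simulation. This is an editor's sketch, not from the book or the trial literature; the participant types and their weights are invented purely for illustration:

```python
import random

random.seed(42)

# Toy trial: individuals differ. Some improve regardless of treatment
# ("always"), some only with it ("helped"), a few are harmed by it
# ("harmed"), and the rest see no effect either way ("never"). The trial
# reports only the average difference between arms, not who is who.
TYPES = ["always", "helped", "harmed", "never"]
WEIGHTS = [0.25, 0.15, 0.05, 0.55]

def improves(person_type, treated):
    if person_type == "always":
        return True
    if person_type == "helped":
        return treated
    if person_type == "harmed":
        return not treated
    return False  # "never"

def trial(n=10_000):
    rates = {}
    for treated in (True, False):  # treatment arm, then control arm
        people = random.choices(TYPES, weights=WEIGHTS, k=n)
        rates[treated] = sum(improves(t, treated) for t in people) / n
    return rates[True], rates[False]

p_treated, p_control = trial()
print(f"improvement rate, treated: {p_treated:.2f}")   # around 0.40
print(f"improvement rate, control: {p_control:.2f}")   # around 0.30
print(f"average difference-maker:  {p_treated - p_control:+.2f}")
```

With these invented weights, the trial reports a positive average effect even though one person in twenty is actively harmed by the treatment, which is exactly the diversity that the single statistic conceals.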
[00:18:31] Elena Rocca: I wanted to introduce a bit the idea of personalized medicine, because this is now what everyone is going for. And one could think that if we're going to personalize medicine, then we're going away from the statistical philosophical bias, or what we call frequentism. Maybe I have to explain that for just a second, a little bracket.
When we say frequentism, we talk about the tradition where we infer the probability of something happening by counting how many times the same thing happened under equal conditions in the past. And this is more or less what underpins the idea of clinical trials, the statistics, et cetera.
And when we talk about personalized medicine, I wanted to point out that we still have different philosophical biases within that too, because the idea of personalized medicine can be approached in different ways. One way, for instance: think of a patient, a woman who has had epilepsy for a long time and is now pregnant.
She is pregnant, maybe three months, and she has a series of parameters. So with personalized medicine, you would try to get all the parameters that are relevant from your previous knowledge and then plot them into an algorithm, or a prediction model, that is built with data from a lot of other similar women.
And you would say that because you use all the personal parameters of this woman, you tailor something that is personalized to her. But this is based on previous data: a frequentist idea. Another thing one can do, for instance, for which we have a big tradition here in Norway, is to really monitor what happens in the person.
For instance, suppose I now decide to keep treating this woman who is sick; she needs to be treated during pregnancy. And we have an idea of what happens with medicines in pregnancy. We know that pregnant women normally need much higher doses of certain medicines. But instead of just increasing the dose, what you can do is monitor how much medicine is in the blood of the woman.
How is the woman feeling? What secondary effects, what targeted effects does she have? And then you adjust the dose accordingly. That's based on the person herself, on what's happening in the person. But of course you still use a lot of knowledge. And this is what is difficult, especially in health science and medicine.
You need to use previous knowledge that you built in other cases for a new case, whether you use the statistics or your theoretical knowledge. You need to infer for a new case and predict. But yes, there are two different ways of thinking.
One is more based on statistics, frequentism, I would say. And the second, maybe one could argue, is more based on propensities: what characteristics this person has that can make her respond in a certain way to the medicine. For example, how much of the dose of the medicine is actually in the bloodstream, and how is this person feeling?
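The frequentist inference Elena describes, counting how many times the same thing happened under equal conditions in the past, can be sketched in a few lines. This is an editor's illustration only; the records below are invented:

```python
# Frequentist sketch: the probability of an outcome is approximated by its
# relative frequency among past cases judged to share "the same conditions".
past_cases = [  # invented records: (conditions_matched, outcome_occurred)
    (True, True), (True, False), (True, True), (False, True),
    (True, True), (False, False), (True, False), (True, True),
]

matched = [outcome for same, outcome in past_cases if same]
estimate = sum(matched) / len(matched)
print(f"P(outcome | same conditions) ~= {estimate:.2f}")  # 4 of 6 -> 0.67
```

The catch, as the conversation points out, is that deciding which past cases count as "the same conditions" is itself a judgment that the frequencies alone cannot settle.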
[00:22:08] Rani Lill Anjum: Yeah, because when you talk about propensities, individual propensities, you think about the properties of this person, the dispositions of the intervention, and how they interact. And once you talk about this, it's not necessary to think that in every single similar case, this leads to that. You would think about what happens in this unique case, and then the general claim will maybe sum up what happens in the individual cases. And that's the thing about statistical evidence: is the statistical evidence showing that you have a universal law, or is it just the average of all the very, very different individual responses and interactions?
Because if you say, for instance, that 30% of women will require this dosage, or that this is the best dosage for most people, there will always be some individuals for whom it doesn't fit. So then the question is: should we continue to talk about statistical averages, or should we be transparent about the big diversity that is lying behind the statistical data?
Because sometimes it seems like something is super safe, and therefore very unlikely to harm you, statistically. But if you think individually, there may be many good mechanistic reasons for saying you might be in a risk group. So for the dispositionalist, it's always really important to get causal evidence from the individual and understand how the treatment works, not just count how often it works without understanding for whom, under which conditions, and at what stages it could work.
[00:24:04] Mark Kargela: I love how you brought in dispositionalism. I think it gets a little bit to what you spoke to in the book about observation: empirical observation versus some of these things that might not be perfectly measurable or quantifiable in a human, things we can observe with our senses and put a number on to make our judgments, where there might be these dispositions that lie within people or within situations. I'm wondering if you can speak to the tussle between empirical, observable events and some of these dispositions that may not be easily observable in a very tightly controlled research setting, especially when it comes to some of our complex pain patients. People with complex pain often do not fit the standard of the population studies that underpin a lot of these frequentist statistics you speak of.
I'm wondering if you could speak a little bit to that challenge of balancing the empirically observable against the uniqueness of the person who might be in front of us in the clinic.
[00:25:17] Rani Lill Anjum: It's important to know that when we are within a scientific or research discipline with a lot of focus on data, on getting the data, on finding big data, this comes from an assumption that you don't have to accept.
Empiricism is the idea that science and research should be all about the observable facts, and that if we had all the facts, we would also know all the theories that are correct. But because we don't know all the facts, we never know the full theory, so it's better to just stick to the facts. So then, instead of saying that this happens in general because there is a mechanism or there is a law, one just says: okay, the only thing I can say is that when we did this, we observed it happened for so-and-so many in this population.
So you spend a lot of time just describing. And what's happened in medicine and healthcare, but also in many social sciences over the last decades, is that one started to do this type of purely empirical study and tried to put less emphasis on theory, because theories of mechanisms and explaining why something happens is really difficult.
For instance, if you want to say why people behave the way they do, why we develop the character traits that we have: is it because of our upbringing, or because of what happened in the Stone Age? Is it because of social constructs or historically situated explanations, or is it because of brain structure?
All of these theories give rise to different types of disciplines and research programs, but they all start from a non-empirical assumption, which is also what they are trying to prove. So you might try to prove that it has something to do with genes, or you might want to prove that it has something to do with brain structure.
But these are ideas that the pure empiricists want to avoid. So instead of speculating about why something happens, they just want to see what actually happens. And what we're trying to do in the book is to say that this doesn't work in science. It doesn't work as a program to say that we're going to assume nothing, that we're just going to get the facts.
This is also something that inspired me, from Elena, because she was talking about how, in her own research, disagreements in science weren't settled just by facts. So we started to look at everything that is not observable in science and research.
So we try to explain to people why this kind of positivist idea doesn't hold: the idea that science should stick to the facts, while politicians and philosophers and religion talk about what we ought to do and how things might be. I mean, one of the things that we argue in the book concerns, for instance, what the best method is.
It's not like you can do a study comparing different methods to see which one gets it right. Because the thing is, you use a method because you trust it to be scientific. So if you look for statistical data, you're going to get statistical data, and then you will trust the statistical data that you got, if you did it right.
And if you're looking for mechanisms, you're going to do experiments, and you will maybe find the mechanisms, and you will trust them. So it's not like you can test how good the method is. And that's the same with the tools that you use. For instance, you might think that pain is not observable: it's possible to experience it, but it's a very personal experience.
And when people say they are in pain, you don't know how much pain, or whether it's in their head or in their body. So ideally we would be able to make a scan and see where the pain is. If the pain could light up somewhere, we could have empirical evidence that it exists and not just listen to what people claim.
And there is a move now to try to make, I would say, maybe a bit fake, quantitative empirical results by telling people to report on a scale from one to ten, maybe, or one to five. And then you get numbers, and then you get the frequencies, and then you can... I don't know why this is seen as more empirically acceptable.
[00:29:49] Elena Rocca: While you talked, I took notes because I was inspired. I had two things to say, and one is exactly that, which is also a key point we want to express in this new book: this is entirely, or at least largely, discipline-specific.
So we cannot say that it is a universal thing that the frequentist bias, or in this case the empiricist bias, the bias that downplays mechanisms, is the dominant one. It's not so in all disciplines. For instance, I come from a PhD in, let's say, molecular biology, applied to understanding the function of some genes. And in that case, what you would do was the typical correlation experiment: you would knock out a gene from the genome of a mouse and see what happens in the mouse, the phenotype.
And that is a typical correlation, but it would not be enough for a PhD, it would not be enough to publish a paper. What we were constantly told was that we should find out why. You have to find the mechanism; otherwise you don't have enough to publish a paper. So in that case, you would want to know what happens and why, how this gene is essential for that particular trait.
So it is discipline-specific, and this is also something that we want to show with this book. Some have tried to explain where these philosophical biases, the basic assumptions someone holds, come from, and there are different theories. Are they innate, as someone has said? Do they come from your profession, from what you studied? We don't have an answer to that. We can just acknowledge the different theories, and acknowledge that they are most of the time discipline-specific.
And then the second thing I was inspired by you, Rani, to say: when you talked about empiricism and about things you cannot observe, one problem is also this. Many times, especially in medicine and the pharmaceutical sciences, which involve a lot of pain because there is a lot of medicalization of pain, when you want to measure something, we talked about statistics, you have to have a way to give a number to an observation: what we call operationalization. It's not that you don't observe things. It's not that you don't see, for instance, that the person changes their mood because of pain, or is unable to concentrate because of pain. But it's difficult to translate that into numbers.
And then it's difficult to get to the statistics, so it becomes unobservable, in a way, because you cannot put a number on it in this tradition. But I'm sure that many clinicians would say these things are perfectly observable, and especially the patients who experience them.
For instance, we have written and now submitted an article together with Christine Price, who is a chronic pain patient, in which she tells about her experience with a medicine that was used to treat her chronic pain. This was back in 2000. At that time, there was a medicine, an old class of antidepressant, that was also used as a painkiller.
And she said that she had a series of untargeted effects, what you call side effects, from this medicine, and she was very sure that they were linked to the medicine. But these were very difficult to capture. For instance, she had strange disturbances in her vision, but of the type that not even the optician could see or measure. And when you cannot measure it, it becomes invisible.
But it's not that it wasn't observed. It was observed by her, very well.
[00:33:59] Mark Kargela: I like how you're talking about the operationalization of things. One of the things that always drives me crazy clinically is that we take this massively complex experience of pain, which often has so many individualized factors. Obviously there are some standard, empirically researched factors that we know of too, but there are also factors very unique to the person: their life, their story, their narrative.
Tina Price is one person we've had on and spoken to a bit in the past. She's amazing and a great advocate for patients in the [00:34:34] space. You both do a great job in the book of pointing out some of the challenges of science and methodology and their philosophical assumptions. The assumption behind operationalization is the empirical one: I need some sort of quantifiable numbers so I can run frequentist statistics and use these data-driven approaches. Could you speak to that, either Elena or Rani? And Rani,
I know you've touched upon this a little in your discussions of dispositionalism, but what can help clinicians who are realizing that maybe we can't perfectly put this into a mean or average?
What are some other philosophical ideas, maybe dispositional ones, that can help a clinician when these things aren't perfectly observable? In the great examples Elena spoke to, Tina was obviously observing them.
They just weren't perfectly measurable, so in the science they sometimes become these [00:35:34] invisible, almost metaphysical phenomena. I'm wondering if you could speak to how clinicians can think a little differently, philosophically, about things that aren't perfectly quantifiable. That would be my roundabout way of asking the question.
[00:35:52] Elena Rocca: Well, it's about the philosophy behind it. What I learned from Rani, and what helped me when I was thinking about my own issues, which at the time were not clinical but fit the clinic as well, is this idea of causality and uniqueness. What Rani says is that it is the single instance of causality that is the truthmaker of the causal law. Is that right, Rani? This is what you say. The first time I heard it, it was a revelation, because for the first time I realized you can [00:36:34] think in two ways. Usually we think, or at least I did, in the second way: I know about a causal law, a generality.
In a way, that generality is what makes the single observation causal. For instance, and I'm sorry for all these drug examples, but this is what I know about, I know that ibuprofen causes stomach ache,
many times. Then I see a patient on ibuprofen with a stomach ache, and the generality makes the single case true, in a way, because I know about it. But, and now you have to correct me here, because this is very philosophical and not my main field, I want to say that causality happens in the single case;
it is what happens in that patient that makes the general law, or the generality, in a [00:37:34] way valid. Can you explain it a bit better than me? Because I don't feel I said it very well.
[00:37:41] Rani Lill Anjum: I would say that when we talk about causality in general terms, we are maybe saying something about a mechanism, but a mechanism will always be complex.
There will be different factors, different properties and different circumstances that are relevant and that interact. I'm thinking, for instance, of menstrual pain. I have known all along that some people struggle with extreme pain, maybe for two to three weeks every month, and the normal
response from the doctor is: okay, maybe take some painkillers and rest a bit. They will assume that the person has a very low pain threshold. And of [00:38:34] course, more recently it has become more and more known that this is a serious illness. I heard the same, and I think we talked about this yesterday, Elena, with pregnant women being nauseous. Some experience it throughout the whole pregnancy; they are so nauseous that they keep throwing up everything they eat and get very sick.
And they have not been taken seriously either. There is this idea that there should be a kind of average, normal response, and that this is what the science is about, and that some people are just overreacting or maximizing their experience to get more attention. I mean, this is the problem.
So even though something is observable, it's observable to the individual, but scientifically you have to be able to observe it [00:39:34] as a neutral observer from the outside. I remember Stephen Mumford and I talking about causality from the empiricist perspective, where one would say that the only thing we know about causality, from David Hume's perspective, is that when you look at the billiard table, ball A hits ball B, and then ball B starts rolling.
But we don't see that there is a power or a force. If you think instead of what you experience yourself, you will feel all the powers acting on you if someone pushes you, or you fall onto something, or a rock hits you. But the empiricist pretends that we are all on the outside of the world looking in, trying to get evidence that other people have the same type of experiences as us.
This is also sometimes called the problem of other minds: you [00:40:34] don't have access to other people's mental states, so we don't know if people feel the same as me when I say I'm really sad, or I'm in love, or whatever. This is also how we treat animals, because they cannot tell us anything.
We say: well, fish do all this wiggling when you catch them, but it's probably not pain, because they don't cry. Or even if they cry, it's just input and output. If we could see something in the brain, if we could measure something, it would be more reliable scientifically.
And I think when we are stuck in that type of idea about the world, we are not doing ourselves a favor, because we gain so much relevant and important knowledge when we get access to people's experiences. We talked in the CauseHealth book about how important patient stories are, and about hearing patients' experiences [00:41:34] leading up to their troubles.
Whether it's how they got ill, or what happened when they got what they thought was a bad reaction to a medicine, for instance. The stories are not people trying to lie to us or influence our decision. They are actually causal information, from the dispositionalist perspective, because they can tell us something about all the different elements that came together and produced this result.
So when we doubt that people are having these reactions, we are actually denying them a place in the evidence. It's also because we think of evidence as something that happens in a study. That's what I don't like about this term "anecdotal evidence", because it sounds like if you, Mark, tell me something you experienced, it's just an anecdote.
But if you answer a question after [00:42:34] being part of a study, then it's evidence. I don't understand the difference, because it comes from you anyway, and it's about your experience. So why aren't these anecdotal stories considered more important for evidence, for knowledge generation?
So this is maybe also what you were talking about, Elena, with our paper with Christina Price: her experiences, because they were so unique, weren't believed. They weren't believed to be real, because the evidence from the population said that this is not from the medication, so it cannot be a side effect.
We were missing out on information about side effects because people like her were not trusted. Today we know more, because more evidence was collected. So I just wonder: when does something become evidence? Is it when there [00:43:34] is a scientific community that says: okay, now it's evidence?
Now we have enough to start counting.
[00:43:40] Mark Kargela: That's great. There's a whole chapter in the book about the challenges of science as part of a community, and how our structured communities can sometimes decide what we rubber-stamp as data, what gets declared science, versus other communities that might have different viewpoints. I would definitely recommend that anybody listening check out the book. It's great, and it will get you thinking, which any good philosophical book does, about some of the foundations of science that we operate under implicitly, without really realizing what goes into them.
One of the fascinating parts, I think chapter four of your book, spoke to some of the gender issues in science and this concept of the reference man, and how some of these study populations were not very representative of a diverse world population. I'm wondering if you could speak a little to what you [00:44:34] were saying about those gender issues and challenges in science and research.
[00:44:39] Rani Lill Anjum: I was actually really shocked when I heard about the data gap in the medical sciences. I read the book Invisible Women by Caroline Criado Perez, where she exposes data bias in a world designed for men.
What she talks about in medicine is that, of course, we know that pregnant women have been excluded from a lot of medical trials, because of the danger of harming the fetus during pregnancy. But I wasn't aware, though of course Elena knew this already, that women's hormones interfere with chemical interventions, so that you get very different results from women, and maybe much more variation in results from women than you would with men.
What I also didn't know was that even when it [00:45:34] comes to animal models, one would prefer to use male mice, for instance, and even at the cell level they would use cells from males. And I'm involved in a project looking at health effects of radiation from 5G.
Not that it's proven that there are health effects, but they're studying it. They were talking about how they wanted to do studies on old people, young people, workers, different types of populations. For the study on young people, they explained how people would be using these devices while walking around the city in high-radiation areas.
It looked like a very good study, but then they said that of course they couldn't include women who were on hormonal birth control, and they couldn't include women who were at a certain stage, I don't know the name for it, of their [00:46:34] menstrual cycle. So I looked up how many young people actually use hormonal birth control, and it was most of them.
And if you also exclude the people at that point in their menstrual cycle, it just means there couldn't be very many women in that study. When we have science generated from data that are supposed to be kept clean, to get the same cause and same effect under the same or similar conditions, and they exclude most of the population, or at least half of it, then the question is: how relevant are these results for the population that was excluded from the study?
The more I read about this data bias, the worse it gets, because I also didn't know until very recently that almost all of the behavioral science and psychology studies [00:47:34] were done on American students at the faculties where the studies were carried out.
Henrich and colleagues looked at these types of studies and found that almost all of the results we use were from American students, or maybe from the Netherlands; a couple of the studies were from there. They called these WEIRD samples.
WEIRD is an acronym for Western, Educated, Industrialized, Rich and Democratic populations. Henrich and colleagues asked: how many people globally do they represent? They found the number was 12 percent of the global population.
The more I hear about how narrow research actually is, about where we collect results, who benefits from the results, and who pays the cost of the results, the worse it looks. They now also talk about a certain type of study they call parachute research:
researchers who travel, for instance, to African countries, take the samples, go away, and then publish in American or Anglo-American journals. And the people the data were collected from never hear about the results or benefit from them.
This is what feminist philosophers of science have said for decades: that science is all about the interests of the powerful people, globally and in general. Think of which diseases we know anything about. We know very little, for instance, about tropical diseases, because who has them?
Well, [00:49:34] malaria, for instance, English men would get it when they went to Africa, so it was very important to cure. You see, it's about who decides where we should put our money. That's what Sandra Harding says: if those who are powerful in society also dominate science, then we have a democratic problem in science.
And I think that's really important to recognize, because we think of science as this ideal: objective, neutral, interest-free.
[00:50:05] Mark Kargela: Elena, I have to imagine you see a bit of that in pharmaceutical research with your pharmacology background. Obviously, working in the U.S. healthcare system, where it's very much about cost containment and maximizing fiscal responsibility, people often get left in the dust, unfortunately, which drives me crazy, but we could do a whole podcast series on that issue. I'm [00:50:34] wondering what your perspective is on some of the same things Rani mentioned when it comes to pharmaceutical challenges we see across the world, where research is done on very narrow populations that probably aren't representative of the breadth of humanity who might be served by these medications.
[00:50:52] Elena Rocca: Yeah. As Rani already said, maybe the mother of the issues in the pharmaceutical sciences lies in clinical trials. Of course, there are now many more methods arising which are more observational, and which try to include the populations that are often, or always, excluded.
For instance, pregnant women, but also the sick, or children. They are included in these observational studies because there you are allowed to do it: you don't give any intervention, you just observe what's [00:51:34] happening. One thing to say is that, of course, the countries with the best possibilities for these kinds of studies are the Scandinavian countries, or at least countries where the system is so well developed that you have registries, if we're talking about registry studies. And if we're talking instead about other types of study, cohort studies, where you
follow a population for a very long time, then you also need the economic capacity to do that. So we're back to the point where only a very small part of the world population is represented for now. But I hope that we're going in a better direction, at least for monitoring the safety of drugs.
We have big databases of observations made in the clinic. So I would really like to say, if clinicians are listening: there is an important flow of evidence going from the clinic to the science, and not only from the science to the clinic. Everyone who works every day with clinical observation can feed the science through systems that are global.
For instance, you can report your observations to a global database, and this is really precious evidence. Of course, these databases are still only well developed in certain countries and underdeveloped in others. So we say it's global, but it's really not representative of the global population. But there is a big effort
from the WHO to implement good structures in the rest of the world as well. In that sense, I hope we're going in a good direction. Another thing that provokes me a bit are these ready-made sentences that everyone repeats. For instance, I see master's students write in [00:53:34] their thesis introductions that pregnant women are excluded from clinical trials for ethical reasons.
Now, I am not an ethicist at all, but why is it unethical to include people in trials if they say they want to be included? Of course, there is the problem of hurting the fetus, but we know that in many conditions the fetus is hurt if the condition is not treated. The ethical choices cannot really be reduced to these ready-made sentences that everyone accepts:
oh, it's ethically problematic, so it's ethically okay to exclude people from science, from trials. It's not that simple. This leads me to another topic we discuss in the book, which is values in science, and a bit of conflict of interest. What is the right thing to do
depends, [00:54:34] of course, on who does the science, and a lot of philosophers talk about that. What we highlight in the book is the idea of asking: what if I make a mistake? What is the mistake I'm most worried about? When you think about it in those terms, you see the difference, for instance, between a clinician and a public health person.
If I go back to my initial example of mammography: a clinician would most often be worried about the patient missing important information and getting cancer at such an advanced stage that there is nothing to be done, while a public health person would be mostly worried about misallocating the resources, because these are resources that have to be allocated somewhere.
These differences are there, but I think it helps to find ways to talk about them. We [00:55:34] also call these philosophical biases about values; there are more technical names for them that I don't know in depth. I think it matters to find ways to communicate this across disciplines, because when I was with those public health people, I would still say: but how is that possible?
How do you interpret these numbers if we know that it works? And they would think: we don't know, but it has to be right. And I would think: it has to be wrong.
[00:56:04] Mark Kargela: The book is great, because it really brings up a lot of these issues where you can have some of these mental tussles yourself. You can see some of the conflicting
worldviews and the different perspectives that people may take as they look at data, be it a population health person or the clinician on the front line. You lay out a lot of those challenges so that people can read about them too. We'll definitely link the book in the show notes, and also the CauseHealth book that both Elena and Rani contributed to.
[00:56:34] Both of them are great books. I'm still getting through the philosophy book, and I'm really enjoying it so far. I could spend more hours chatting with you both, because these topics are always fascinating to me, but I want to respect your time. Thank you both for your time today and for the contributions you're making out there. We greatly appreciate it.
[00:56:55] Rani Lill Anjum: Well, thank you so much for interviewing us, for having us, and for helping us spread more philosophy to the right people.
[00:57:02] Elena Rocca: Very nice chat.
[00:57:03] Mark Kargela: Thank you both. This is one of those topics that came into my world probably ten years into my career,
and I wish students understood it before they enter clinical practice, because it would help them navigate this tug of war between big data and unique people a lot more smoothly. Not that it's ever completely smooth, because there will always be challenges and different variables.
But again, thank you all for listening. If you're watching this on YouTube, make sure you subscribe. If you're listening, we'd love it if you could subscribe, maybe even leave a review, and [00:57:34] share this with colleagues who are trying to have some of these philosophical discussions
and navigate them in their own world. We'll leave it there this week. We will talk to you all next week.
I am a philosopher at the Norwegian University of Life Sciences (NMBU), working in philosophy of science and medicine. I'm interested in how philosophy influences scientific methods and practice, and what we have called philosophical bias in science.