Stuart Ritchie is a Lecturer at King's College London, where he studies behavioural genetics in relation to personality and cognitive ability. In this conversation, we don't talk about any of that, though; instead, we focus on his book Science Fictions, about how science goes wrong, and the topics covered therein.
BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. New conversations every other Friday. You can find the podcast on all podcasting platforms (Spotify, Apple Podcasts, Google Podcasts, etc.).
Timestamps
0:00:41: Trying to replicate Bem (2011), 'Feeling the future'
0:09:58: Why write Science Fictions?
0:17:24: How to (get people to) adopt open science practices?
0:36:31: Stuart will pay you if you find errors in Science Fictions
0:46:44: Should scientific journals have an automatic way for reporting errors?
0:56:52: Górecki, Boulez, and cultural references
1:01:45: Scientific fraud: Stapel, Macchiarini, and Hwang
1:31:05: Will many small steps improve science sufficiently or do we need a revolution?
Podcast links
Website: https://bjks.buzzsprout.com/
Twitter: https://twitter.com/BjksPodcast
Guest's links
Book website: https://www.sciencefictions.org/
Google Scholar: https://scholar.google.de/citations?user=9TsCy3IAAAAJ
Twitter: https://twitter.com/stuartjritchie
Ben's links
Website: www.bjks.page/
Google Scholar: https://scholar.google.co.uk/citations?user=-nWNfvcAAAAJ
Twitter: https://twitter.com/bjks_tweets
References and further links
Bem, D. J. (2011). Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology.
Leung, A. K. Y., Kim, S., Polman, E., Ong, L. S., Qiu, L., Goncalo, J. A., & Sanchez-Burks, J. (2012). Embodied metaphors and creative “acts”. Psychological Science.
Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., ... & Vazire, S. (2019). Preregistration is hard, and worthwhile. Trends in Cognitive Sciences.
Quintana, D. S. (2020). A synthetic dataset primer for the biobehavioural sciences to promote reproducibility and hypothesis generation. eLife.
Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Failing the future: Three unsuccessful attempts to replicate Bem's 'Retroactive Facilitation of Recall' effect. PLoS ONE.
Ritchie, S. (2020). Science fictions: How fraud, bias, negligence, and hype undermine the search for truth. Metropolitan Books.
The Halloween challenge at Goldsmiths I helped out with: https://www.theguardian.com/science/2012/oct/31/halloween-challenge-psychics-scientific-trial
Stuart will pay you if you find errors in Science Fictions: https://www.sciencefictions.org/corrections
Pierre Boulez's notation for piano: https://www.youtube.com/watch?v=cD2SwVZBI80
-
[This is an automated transcript with many errors]
Benjamin James Kuper-Smith: [00:00:00] One thing that, uh, I think you hold the record on for the guests I've contacted so far: I think you've been the fastest with email. Is that, is that the case?

Stuart Ritchie: That's very unusual. You were just very, you were just very lucky. I think you were just extremely, extremely lucky that I was looking at my computer at the time.

Yeah. Other people will not have had a similar experience. Yeah.
Benjamin James Kuper-Smith: Okay. Okay. So that's, I was, I was gonna ask like how you managed to do that, but I guess it was just
Stuart Ritchie: Oh, pure luck. Pure luck,
Benjamin James Kuper-Smith: pure luck.
Stuart Ritchie: Yeah.
Benjamin James Kuper-Smith: Okay. Uh, well let's just pretend you're very good.
Stuart Ritchie: Yeah. I could have told you. Yeah, I'm totally on top of things.
Benjamin James Kuper-Smith: One important thing is to schedule when you do it.
Stuart Ritchie: Yeah. Not in this list.
Benjamin James Kuper-Smith: Okay. Um, so as I mentioned before we started recording, you know, I've read your book Science Fictions and that's mainly what I wanna talk about, and I'm not familiar with your main research. Interestingly, whilst reading Science Fictions, I realized I have heard of one of the studies.
Stuart Ritchie: Hmm.
Benjamin James Kuper-Smith: Which is, so for context, I did my [00:01:00] undergraduate degree at Goldsmiths College. And in about 2012, I believe, I think it was 2012, I was helping Chris French with his, ah, Halloween challenge, right? So he does anomalistic psychology, kind of trying to explain why people might think they had some sort of anomalous experience, um, and trying to explain it with psychological principles.

And he has this Halloween challenge where he invites someone in who says they're clairvoyant or something, and he says, okay, make your predictions, we'll test them in a kind of more rigorous way. I was just helping out with that, and as we were, um, you know, setting up the thing, he then mentioned, yeah, there's this, like, real problem right now with, um, publishing replications, because we've run this study trying to replicate Bem's results and the original journal didn't wanna publish it, et cetera, et cetera.

And, um, yeah, as I read the book, I found out you were the first author on that.
Stuart Ritchie: Yeah, I, um, I remember when I read the [00:02:00] original Daryl Bem paper, which was published in the Journal of Personality and Social Psychology, which is, like, a good journal where you would want to publish normally.

It would be quite good to have that on your CV. Um, and, uh, having read it, I went to, um, the Edinburgh Skeptics in the Pub, which used to be, uh, well, I think it still exists, but we used to go each week and there would be a talk and so on. And Richard Wiseman was there, and he eventually became one of the authors on that paper.

And I said, have you seen this new study where they claim that parapsychological phenomena are, are true? Um, and of course it wasn't the first one by any means. I mean, there'd been very many parapsychology papers published in mainstream journals, but this was the most recent one.
And he said, yeah, do you want to replicate it? And then we got in touch with Chris French, and we said, let's do this in three different places. Let's replicate the exact same [00:03:00] final experiment from the Bem paper, which was the one with the biggest effect size.

We thought we would do that one because we thought if it's got the biggest effect size, then maybe there's more chance we can find it, and so on. And, you know, we had more participants, so we had higher statistical power in our study than he would've had. So we should have been able to replicate it, you would think.
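[Editor's aside: the power intuition here, that a bigger assumed effect plus a larger combined sample means a better chance of detecting a real effect, can be checked with a quick back-of-the-envelope calculation. This is an illustrative Python sketch using a normal approximation for a two-sided one-sample test; the effect size d = 0.25 is a made-up placeholder, not a figure from Bem's paper.]

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d: float, n: int, alpha_z: float = 1.96) -> float:
    """Approximate power of a two-sided one-sample z-test for a
    true standardized effect size d with n participants."""
    shift = d * sqrt(n)  # expected z-statistic under the alternative
    # Probability the observed z lands beyond either critical value
    return normal_cdf(shift - alpha_z) + normal_cdf(-shift - alpha_z)

# Hypothetical numbers: a smallish effect and the replication's
# combined sample of 150 participants (50 per site, three sites).
print(round(approx_power(0.25, 150), 3))
```

With these placeholder numbers the power comes out well above the conventional 80% threshold, which is the sense in which the combined replication "should have been able" to find a real effect of that size.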
Uh, and so I did 50 participants in Edinburgh, Chris did 50 participants at Goldsmiths, and, um, Richard did 50 participants at Hertfordshire. And, um, we put 'em all together and we found absolutely nothing. So there's no psychic effect in our, uh, study specifically. The experiment was one where you had, um, people looking at a list of words that appeared on the screen, one at a time.
And then they did a test, just a memory test where you have to write down as many words as you can recall from the list. And then the computer showed half of the words again: it randomly selected half the words and showed them again, and then that was it. [00:04:00] And what Bem had claimed in his paper was that the words that the participants were about to see again (they didn't know that the computer was gonna show them those words again) were remembered better on the task than the ones that they were never gonna see again.

So it's like, I always say, it's like you study for an exam, and then you sit the exam, and then you kind of go home and study again afterwards. You open up your textbook again and study again after you've done the exam. And somehow that post-exam studying goes back in time to help you get a better grade on the exam.
Um, which is kind of mind-blowing, but the idea is that we all have these, like, low-level psychic abilities where we can kind of sense what's gonna happen in the future. And it's not as if we can make a clear prediction or, uh, contact, you know, contact the dead or any of the other things that psychics claim; it's just, like, a low-level psychic ability that we all have anyway.

That's what he claimed in the paper. But we didn't [00:05:00] find anything. We found that the words were remembered just the way you would expect if there was no such thing as psychic powers. And that's all fun, and there's a whole discussion to be had about
parapsychology and whether that's worth pursuing, and so on. I think that's an interesting question: do you pursue something that you think is a silly question? Even if those experiments had been replicable and had shown clear results, right?

There's no other explanation, really, other than that something psychic is going on. There's no way that information could have leaked or anything. I mean, it's all very, very straightforward. It's just, like, if people could remember the words that they were about to see, or pre-remember the words they were about to see, I don't know what word you wanna use to describe remembering something that's in the future.

Yeah. Um, then there isn't really another explanation; it's so straightforward that it has to kind of be evidence for psychic powers. Now, some people would say psychic powers are so unlikely that there must be something else going on, there must be some weird computer [00:06:00] glitch or something.
But I think that experiment is so straightforward that we would all have to just put our hands up and say, like, we can't explain this using normal means. So that's a whole other question. But what we found was, when we submitted the replication study to the same journal that published Daryl Bem's original paper, they told us that not only was the paper rejected, but that they wouldn't even consider it, because replication studies will just never be published by this journal. This journal just never considered replication studies, whether they're positive, negative, anywhere in between. They're just not interested. What they want is something new every time. And to me, this illustrated a big problem with science: the wanting something new every time.

And I think the wanting something new every time really is one of the fundamental problems that we have with the way that the structure of science, the structure of academia and so on, is right now, um, across loads of different fields, not just parapsychology or psychology in general.

[00:07:00] Um, but, you know, that early experience in my PhD, um, I guess the word you would use is it radicalized me, uh, somewhat, about the, uh, structure of the scientific incentive system.
Benjamin James Kuper-Smith: Yeah. So that was early on in, in your PhD?
Stuart Ritchie: Uh, it would've been, yeah, probably, like, mid-PhD or something, like the second year of a three-year PhD, I guess.
Benjamin James Kuper-Smith: What I find interesting is how these perspectives on replicability, reproducibility, open science, I feel like, change depending on when you had your education. Um, I mean, I think I've said this before on the podcast, um, that I applied for more PhD positions than I wanted to before I got this one.

And I always asked about, you know, how they felt about, like, open science and that kind of stuff, because it was something that was important to me. And this might be a coincidence, but I felt that if people were roughly mid-forties or older, you'd get something like, whoa, it's important, you know, whatever.

And if they were younger [00:08:00] than that, they'd say something like, yeah, I do that, we do that in this lab, it's important to me.
Stuart Ritchie: Yeah. I think it's when you made your career, right? It's the stuff you're used to. Yeah. People, uh, get very attached to their way of doing things and they don't want to change them.

And that's the case across all different contexts, not just science, but in every area. People get used to the way they generally do stuff. And, um, a lot of open science ideas are asking people to spend a lot more time, uh, setting up their experiments, like pre-registering them.

Yeah. Or, uh, once they've got the data, they've gotta put it all online and make it readable and all that sort of stuff. There's a lot more effort, in some, uh, ways of doing open science at least. And, um, so you're telling people to kind of upend the stuff that's very successfully got them a position in a university.

Maybe got 'em a professorship or something, and, you know, why would anyone want to do that? The answer actually is: because it makes the science better. But unfortunately, and that's one of the arguments I make, lots of the things scientists do, which you would think would be aimed at making the science better, are in fact aimed at things like making their CV look better.

Uh, or making themselves more famous or notorious or well known in their field, or respected, or whatever, rather than doing robust, rigorous, high-quality research.
Benjamin James Kuper-Smith: Yeah, I agree. I mean, I should always add to that one that of course there are people older than mid-forties who do very good open science. [inaudible]
Stuart Ritchie: Of course there are. Of course there are. But in general, I think that's, I mean, yeah,
Benjamin James Kuper-Smith: yeah, yeah. Having said that, I mean, I think it's kind of selecting, right? I mean, the people who are older, they've been selected because they were good, often using an old system.

Whereas I guess, as you said, if there had been an incentive system rewarding open science and these things, then the people who did that would've published less and...
Stuart Ritchie: Totally. Or they would've had to adapt, and, you know, maybe they've got it in them to adapt. It's just that they never had the system that pushed them in that direction.
Benjamin James Kuper-Smith: Yeah. Okay. So, I mean, this [00:10:00] is about the book. I have a very, very broad question, which is, uh, especially given how the incentive is to write as many publications as you can, et cetera, et cetera: why did you take the time to write this book? 'Cause that must have come at the cost of, let's say, at least a few papers.

Stuart Ritchie: It's a good question. Yeah, it, um, it certainly did slow me down dramatically, and, uh, doing lots of the kind of subsequent stuff about it, you know, the publicity and so on, slowed me down massively when the book came out. Um, I think, you know, in writing it, and then coming up to writing this book,
I had kind of just... I'd done pretty well in terms of publishing stuff. Like, you know, I've published lots of papers, and, uh, my h-index is looking good and all that sort of thing, if you care about that, which now I don't really. Like, I had kind of tried that, I'd seen that you can do that, and you can get lots of publications, and it's very good and it feels satisfying at the time. [00:11:00]

But I guess, in some kind of anhedonic way, I just don't really care that much anymore about burnishing my CV or h-index, and, um, I know that's not very, like, useful advice for other people. But, um, I certainly got to the point where I was like, okay, I've kind of done this now and I can see that this is possible.

Um, and I can see all the ways that it distorts science and makes science less good as well. You know, this big rush to have loads and loads of papers on every possible topic, and in every possible journal, well, as long as it's got a high impact factor, and all that.

So yeah, I think I had, um, I had just given up on that in some way. Uh, and, you know, I'm still interested in publishing stuff, but I'm happy to go much slower now, to publish stuff that's really good and really thoughtful and high quality, hopefully, and making a decent, you know, large-scale contribution, rather than just jumping at any opportunity to publish stuff.
Benjamin James Kuper-Smith: Okay. Is [00:12:00] it because you'd already, kind of, as you said, your h-index was looking good, so you had the opportunity, you had the freedom to not care about it? Was that...
Stuart Ritchie: Yeah, maybe, maybe. I certainly didn't feel any external pressure anymore. Uh, you know, I had, um, come to the end of a postdoc and stuff, but, um, I think I'm, uh, you know, in personality, just not very, um, I dunno, conformist, or something like that.

Like, once I've got into a situation, I think, well, you know, I now need to sort of... I have quite a contrarian way of dealing with things. And I think once I had, you know, got my current job and so on, I thought, does this make sense, the way we do stuff? Is this right?
And that's what kinda came into the book: questioning a lot of the way that science is currently done, and the incentives, and the implicit and explicit pushes that we get from our employers to publish more stuff, get more money, constantly be asking, you know, begging people for money and for grants [00:13:00] and so on.

And it's not good. It's not good. And, um, it really does, uh, shake your confidence in the whole process of doing scientific research, once you dig into, you know, the meta-science and some of the stuff I talk about in the book. Um, and it almost can be paralyzing in some sense. You can almost be sort of like, you know, I don't wanna do anything, you know, if I'm at risk of making a mistake, or I'm at risk of, you know, producing biased research in some way.

Maybe you don't wanna do anything. And I think that's kind of a bad reaction to this, because there are things you can do. But I have felt that on a few occasions, that sort of almost, like, panicky, kind of like, oh my God, is this, you know, is this contributing to the problems?

Or actually doing something about them?
Benjamin James Kuper-Smith: Right. I mean, apart from writing a book about it to, I don't know, raise awareness? I dunno exactly what your goal was per se with writing the book. But other than that, like, what do you do as an individual researcher?
Stuart Ritchie: Yeah. Well, um, the goal of writing the book [00:14:00] was to have the conversation.

And I think a huge part of this replication crisis thing is just getting the word out there. I meet people who are scientists who have never heard of a lot of these controversies. Um, you tell them about a very well-known case of scientific fraud; you think it's well known, but actually it's not at all, because it's known by the people in the bubble of, yeah, people who are interested in scientific fraud and the replication crisis and so on, but not necessarily by, you know, your average, uh, scientist who works in a lab or whatever. So you meet them and tell them about it and they're amazed, and so on. And I thought, well, that's a good opportunity. And then, actually, I think the general public should know about this stuff as well, given that in many cases it's their money being used to do a lot of this research.

Um, and they need to demand higher standards of scientists. I think they have a picture, which has been cultivated by scientists themselves, of this kind of objective truth-seeking process: that, uh, you know, the peer review process deals with issues of any mistakes or errors and so [00:15:00] on.

And you get these things which go out into the culture called peer-reviewed papers. And you can, you know, refer to peer-reviewed papers in news articles and so on, and that means, in some sense, that what you're saying is true, and it buttresses all your claims and so on. I think people need to know that that whole system is really leaky and has loads of issues.

Of, well, you know, the things I talk about in the book: fraud, bias, negligence, and hype. Um, and that the image we have of this pristine process of science is actually, in many ways, distorted by these problems, which I think not enough people know about.
And then, you know, you have something that came just after I finished writing the book: the COVID pandemic, which really does show that science can be, uh, very badly distorted by all sorts of different pressures operating on it, from within science, from outside, from politics, from society, and so on.

Um, and that's not to say that the [00:16:00] main principles of doing science are bad. I think science is the best thing that we've ever invented as a human species. But the kind of situation that we do it in, academia, even industry, whatever, is kind of suffused with problems that push it in the wrong direction.
And so, yes, to answer your question, the point of writing the book was to get the word out there to some extent. As an individual researcher, uh, I like to think that I am, you know, encouraging people in our department to do more open science stuff. Um, we've got an open science club, which I didn't found myself, but, you know, I've been supportive of it over the years.

We've got an open science kind of journal club thing in our department. Uh, I've been trying to push training for PhD students. So instead of just, like, the self-selecting thing where there's an open science club, and all the people who are interested in open science go to that, and everybody else just goes on with their normal day, um, you want to try and get supervisors of PhD students trained in these kinds of techniques, like preregistration and [00:17:00] open data and all the other things that we would, you know, consider to be open science, and the PhD students themselves trained in those issues as well.
So, starting off at the earliest career stages and kind of pushing towards, you know, um, kinda getting people educated in the stuff that will really make a difference, hopefully. Instead of... I think there's a huge problem at the moment, and this is the problem I had with, for instance, a thing a few years ago about, um, reviewers, uh, saying, I will not review this paper unless the data are open. And it's a very well-meaning thing.

But my worry is that if everyone who's interested in open science signs up to that, then you'll just get a set of people who are less interested in open science, maybe the less good people, the people who are, like, less interested in rigor and replicability and so on, reviewing the papers that don't have the open data.

So you'll have this weird selection process where there's a set of papers that are done by the open science people, and, you know, reviewed by them, uh, and a set of papers that are done and [00:18:00] reviewed by non-open-science people. And that doesn't sound like a good kind of structural change to the system that we have.
Benjamin James Kuper-Smith: Yeah, you're right. That's a tricky one. I mean, is a good approach to just kind of, uh, not nudge in the behavioral economics way, but in a more colloquial sense, to push people by, like, asking for smaller things? I mean, I remember, um, I once helped one of my supervisors with a peer review.

And, um, you know, the figures they had, at least some of them, were bar plots, just with, I mean, standard error of the mean or whatever, but it was just bar plots. But it was the kind of study where they could have very easily just added individual data points, like, you know, just to show what the mean is made of. And so we said, like, you know, this is really the kind of study where you could do this very easily.

Yeah, it's not, you know, if you have thousands of people it doesn't necessarily make sense, uh, but with that it was very possible. And then, you [00:19:00] know, they added it, and then sometime later we saw a talk by one of the researchers, and, you know, I think all of the plots had individual data points in them.

Um, I think not only from that study, but also from later ones.
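[Editor's aside: the change Ben describes, overlaying the raw observations on a bar chart so readers can see what the mean is made of, is only a few lines in most plotting libraries. A minimal matplotlib sketch with made-up data, not the plots from the study discussed here:]

```python
import random
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

random.seed(1)
# Made-up scores for two hypothetical conditions
groups = {
    "Control": [random.gauss(5, 1) for _ in range(20)],
    "Treatment": [random.gauss(6, 1) for _ in range(20)],
}

fig, ax = plt.subplots()
for i, (name, scores) in enumerate(groups.items()):
    # The bar shows the group mean...
    ax.bar(i, sum(scores) / len(scores), width=0.6, alpha=0.4)
    # ...and the jittered points show the observations behind it.
    xs = [i + random.uniform(-0.15, 0.15) for _ in scores]
    ax.scatter(xs, scores, s=12, color="black")

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups.keys()))
ax.set_ylabel("Score")
fig.savefig("barplot_with_points.png")
```

Jittering the x-positions is just a readability trick so overlapping points stay visible; the same idea works for strip plots, beeswarms, and so on.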
Stuart Ritchie: Nice. So you had induced the change in their general approach.
Benjamin James Kuper-Smith: Yeah. I mean, I'm not sure how convinced the person was per se about it, um,
Stuart Ritchie: but they realized it was a thing you have to do to please reviewers. Yeah.
Benjamin James Kuper-Smith: Yeah. It seemed a bit more like that than anything else, but,
Stuart Ritchie: but yeah, you're right.
I think getting people to do small things like that, um, has to be better than nothing, right? And this is what I often say to people who are worried by the whole, like, edifice of open science, and all this stuff that they apparently should be doing, and so on.

Maybe you don't need to do it all. Um, you know, just pre-registering your paper is better than doing nothing. Um, oh, just dropped a pen, sorry. Um, just doing preregistration is better than doing nothing. [00:20:00] Just putting your code online is better than doing nothing.

You know, um, you don't need to put all your data online. There are ways you can share stuff; you can embargo things. There's all sorts of different options. And I think getting people on board with open science will involve convincing them that it's not this horrible, you know, grueling process necessarily. It does add extra time, but it can be made easier. And I think that's one of the big things that the Center for Open Science are doing: they're making their website and so on, um, full of tools that help people do what they want to do as easily as possible.

Um, one of the big problems for implementing any kind of change is that there's all this inertia, right? And you want people to just easily be able to make the change you want. And at the moment, we don't encourage people to, you know, do these things. And they don't know. Like, people will come up to me and say, how do you pre-register something?

I've never encountered this before; it sounds really [00:21:00] difficult. And you can tell them: well, you can just make a Word document and post it online if you want. Exactly. It can be minimal. A preregistration can be as minimal as you want. It can be a one-sentence, you know, uh, plan.

I mean, that's not very useful. The better preregistrations will be the ones that are as detailed as possible, of course. Yeah. Like, the best one is where you simulate all the data beforehand, you have all the code written out, and so you've got everything ready, and you just collect the real data, plug it into the code, and press go.

Hardly anyone is gonna be able to have all that planned out in advance before they touch any of the data. Um, but you can do just short of that. You can do somewhere on the spectrum between writing one sentence and writing a huge big thing. So just, uh, encouraging people to realize that just making small changes in what they do, they don't need to be completely extreme, they don't need to be massively revolutionary, can improve the quality of their science, and certainly make the whole thing more transparent for someone who comes along and tries to work out what [00:22:00] is wrong with their paper, if they find an error, or just wants to do the same thing again.
Benjamin James Kuper-Smith: Yeah.
Yeah, I agree. I think that's one big thing there: it's not that you have to be a hundred percent perfect or not do it at all. Hmm. Um, there are gradations in the middle. That's one thing that I think isn't well known. For example, if you use the, uh, I keep forgetting whether it's the OSF or the COS, um, the open...

Stuart Ritchie: The Open Science Framework is the website, but it's run by the Center for Open Science.

Benjamin James Kuper-Smith: Okay. Yeah. Um, anyways, on the OSF, the, um, you know, preregistration forms that you can use, you don't have to do the really long one; you can do the AsPredicted one.
Stuart Ritchie: You can if you want, but you don't have to.
Benjamin James Kuper-Smith: Yeah. And also, I think, like, one thing that I'm also realizing now is that not all of your studies have to be a hundred percent of what you can do.

For example, I try to do all of those things; you know, I also spent a lot of time in my first year of the PhD learning that stuff. So we have this one study, for example, with, uh, prisoners, and it's about psychopathy. So we have psychopathy scores from a specific prison.
Stuart Ritchie: [00:23:00] Mm-hmm.
Benjamin James Kuper-Smith: And, um, you know, some of those are fairly high,
Stuart Ritchie: right?
Benjamin James Kuper-Smith: As you can imagine. And so if we were to release the entire dataset, then you would have, like, age, psychopathy score, and whatever, from a specific prison. So it'd probably be pretty easy to figure out who's who.
Stuart Ritchie: Well, you would have to, you would have...

Benjamin James Kuper-Smith: ...to do some kind of anonymization.

Yeah, exactly. So, yeah, I mean, some of those you can find in the news, right? Some of those crimes.
Stuart Ritchie: Yeah.
Benjamin James Kuper-Smith: Um, obviously you can literally find the name of the person, um, yes.
Stuart Ritchie: Yeah, that, yeah. And there are other situations where you wouldn't want to make the data public, like...
Benjamin James Kuper-Smith: Yeah,
Stuart Ritchie: you've created a new strain of, uh, coronavirus that infects people vastly more easily than the current one does.
Like, you can imagine the sort of situation where you don't wanna make that DNA sequence public. And there was a situation a few years ago with bird flu, where people did this and there was a big controversy about what they had put online, and I think they had to amend it.
'Cause they were worried that, you know, a terrorist or some kind of state actor would come along [00:24:00] and use the data in the wrong way. So yeah, there are some reasons, everything from identifying participants all the way to, you know, helping terrorists produce bioweapons. But in the vast majority of cases, where you've just run an experiment on some undergrads in your psychology department, say, or you've run some kind of experiment in another field that's not
overtly dangerous or political in some way, um, I think most people could do a hell of a lot more about making their data available.
Benjamin James Kuper-Smith: Yeah. And also, the kind of question that I have is, I know there is this idea of synthetic datasets,
Stuart Ritchie: right?
Benjamin James Kuper-Smith: Um, that Quintana's written about.
Mm-hmm. He's written this primer in eLife. And by the way, for those who don't know, I put the references for any paper we mention in the description, so you don't have to search for them; I'll put that one in there too. That's, for example, an approach I'm considering. But it seems like the main package or software tool is in R, and I'm not [00:25:00] very familiar with R. I'm not sure I'm gonna learn it just for this particular thing; I'm not sure it's really worth it.
So I
Stuart Ritchie: think... Sure. But if you were really desperate to do it, um, the tool is out there.
Benjamin James Kuper-Smith: Yeah. But the point being that the tool is there, but you don't have to use everything all the time. Just...
Stuart Ritchie: Right, absolutely.
And I think Michèle Nuijten has made this point before: you can get people into open science by saying you can do some of it. It's like a pick and mix. You can do some aspects of it, and they all will make your research better in some respect, even if it's not, you know, the flagship, most open, transparent paper ever.
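The synthetic-dataset idea from a few turns back (Quintana's eLife primer; the R tool he describes is, I believe, the synthpop package) boils down to fitting a model to the sensitive data and releasing draws from the model instead of the data itself. Here is a toy one-variable sketch of that parametric idea in Python, with invented scores, not Quintana's actual workflow:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Invented "sensitive" scores standing in for a real column.
real_scores = [28, 31, 19, 25, 33, 22, 27, 30, 24, 26]

# Fit a simple parametric model (here just the mean and SD) to the real data...
mu = statistics.mean(real_scores)
sd = statistics.stdev(real_scores)

# ...then release draws from the model instead of the data itself. No released
# value belongs to a real participant, but group-level statistics are roughly
# preserved, so others can still run and sanity-check the analysis code.
synthetic_scores = [random.gauss(mu, sd) for _ in real_scores]
```

A real synthetic dataset models the joint distribution of all variables rather than one column's mean and SD, but the privacy logic is the same: the released numbers come from the model, not from any participant.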
Benjamin James Kuper-Smith: Yeah. And maybe one last part on pre-registration: there's also, I think, this commentary, I dunno whether it's Brian Nosek or who exactly wrote it, with some co-authors. Mm-hmm. It's called something like Preregistration is hard, and worthwhile, the point being that it's a skill you have to learn.
Stuart Ritchie: Okay. Right. Totally. I,
Benjamin James Kuper-Smith: which is, you know, like the [00:26:00] first one I wrote wasn't as good as the second one I wrote.
Stuart Ritchie: Yeah, absolutely. You learn: oh, okay, I shouldn't say this particular thing, or I should be more clear about that, or this is something which is gonna come up when we get to look at the data and I really should prepare for it.
Yeah, completely. It's all that way. And it's not that everything has to be completely perfect each time. As long as you're transparent about what you've done, and you say, you know, this is how we wrote things up, and this is the stuff that we pre-registered and this is the stuff that we didn't, and so on.
As long as you do that, then that's fine. The principle of all of these things is to make the research more transparent.
Benjamin James Kuper-Smith: Okay, so here's a question that I just thought of. I mean, this is, I think, a fairly common problem, but...
Stuart Ritchie: Sure.
Benjamin James Kuper-Smith: So basically, what we're doing right now is making things harder. Let's take the case of scientific fraud, right? If we set up all these open science things, we're basically just creating more work for people who don't commit fraud, [00:27:00] right? In some sense it feels like, you know, if you are the kind of person who really worries about getting all of this stuff right, and tries to provide open code, open data, et cetera,
Stuart Ritchie: it's gonna take you a lot longer.
It's gonna be a lot of effort. Yeah.
Benjamin James Kuper-Smith: Yeah, and you're not the person who's probably gonna commit fraud, I would imagine, unless it becomes so much the norm that everyone has to do it.
Stuart Ritchie: Yeah, I mean, there have been some cases where people have put their data online and it's been fraudulent.
That has happened. It's rare, I think, and it takes quite a certain type of person to put a fraudulent dataset online, because you've gotta think that no one will ever look at it, or be so arrogant as to think that no one will ever notice what you've done.
But yeah, I think that has been done before. Clearly, though, it's gonna be a disincentive.
Benjamin James Kuper-Smith: My point is that we're making it harder for honest scientists to do their job right now. Or maybe being honest just is the harder path?
Stuart Ritchie: Being honest is, you know, putting in some more hard work. Making things open and transparent isn't [00:28:00] easy or straightforward.
But I don't think anyone ever advertised science as being easy and straightforward to do, right? Yeah, that's fair enough. The people who are doing easy and straightforward research are often the people whose research doesn't end up replicating. You had a lot of very straightforward social psychology priming studies in the early two thousands.
For instance, you know, you had tons of really straightforward stuff there, where you would just come up with some metaphor and then test it. And it's a lot of nonsense, of course. My favorite example being the experiment which was published in Psychological Science, I think in like...
Actually, I'm not even gonna attempt the date, 'cause I'm not a hundred percent sure, but maybe I'll put it
Benjamin James Kuper-Smith: in.
Stuart Ritchie: Yeah, you can add it. The idea is: thinking outside the box is a metaphor, right? So they got people to come into a room with a cardboard box in the middle of it, and they do [00:29:00] a creativity test either sitting inside the cardboard box or outside it.
And if they're inside the box, they have lower creativity scores than the people sitting outside the box. Now, that sounds like the sort of thing people came up with in the pub or something; they just came up with a silly idea and tested it. And by the way, the effect size is just gigantic.
And there's absolutely no way that it's real. I mean, maybe it's because you're in a
Benjamin James Kuper-Smith: cramped box.
Stuart Ritchie: Well, yeah, it's probably not because of the metaphor.
Benjamin James Kuper-Smith: Yeah.
Stuart Ritchie: But the idea I was trying to get across is that straightforward, fun stuff like that? It's probably crap.
You know, the good research takes a long time, takes theoretical input, takes, you know, a lot of twists and turns before you touch any of the data. And that's actually one of the great things about doing research this new way, with pre-registration.
And, you know, we've got a registered report in with my PhD student [00:30:00] right now, and I was saying to her, this is obviously how science should be done. We're having a conversation now with the reviewers about our plan. They like the general idea, but they've added some new twists to it and so on.
This is obviously how you should do research. Otherwise, you send in your full paper and all the reviewers can do is give you some post hoc suggestions: maybe you should have done this, maybe you should have done that. But that's all after the fact.
Benjamin James Kuper-Smith: Yeah. Yeah.
Stuart Ritchie: Doing it this way is clearly the best way to do things. It's harder, and it probably takes longer, because you can't just do your silly box experiment and send it in. But it really feels like you're doing proper science. I don't really wanna go back to the standard way of publishing now.
Benjamin James Kuper-Smith: Yeah, definitely. Um, just out of curiosity: I haven't written a registered report yet, though I've helped review one or two.
Stuart Ritchie: Mm.
Benjamin James Kuper-Smith: Um, in the one I can remember right now, I think we massively improved the paper. It would've been
Stuart Ritchie: Right, right.
Benjamin James Kuper-Smith: a complete waste of [00:31:00] time before, because they had so many problems with it.
Right. Right. And, um,
Stuart Ritchie: But that doesn't mean it has to be chucked out. It means it's possibly saveable, before they do the paper, before they actually collect the data. Yeah. If they hadn't had your input, they would've published a really rubbish paper somewhere.
Or not necessarily published it, but they would've written up the experiment and sent it off somewhere, and it would've been crap. So yeah, totally.
Benjamin James Kuper-Smith: So I have that perspective, but not the perspective of, you know, being the author.
I'm just curious: how's that been for you? I don't know, that's a very vague question, but I'm not entirely sure
Stuart Ritchie: what... Extremely positive. No, extremely positive. So far, for this one that we're working on right now, we've in many ways just felt really grateful that we did this. Because, oh, I'm so glad that this was pointed out:
we didn't think of that particular thing; we didn't think of this issue. [00:32:00] Not that it would've necessarily been a mistake, but you could totally imagine making a mistake and then someone pointing it out in the registered report process. And not just that: they had some better ideas, they had some things to add that made the paper much better, and we feel super grateful to them.
And you know how you always write in reviews, "we are really grateful for the reviewer's suggestion", blah, blah, blah. You write that every time, but we genuinely are grateful in this case. We're not just saying it to placate the reviewers; we genuinely are really impressed by how many good things they've added to the paper.
And as I say, obviously this is how you should be doing science.
Benjamin James Kuper-Smith: But from a practical perspective: for example, I've got about one and a half years left of my PhD.
Stuart Ritchie: Mm-hmm.
Benjamin James Kuper-Smith: Um, and I wonder whether it's almost too late in my PhD for me to, you know, submit this, get feedback... I don't know. I'm just trying to get at the problem of time.
Yeah. So how long has this taken you? The
Stuart Ritchie: problem is... yeah, it's an interesting problem, isn't it? Because basically, the way that PhDs are set up, with the ticking clock from the start, means that you can't really do research this way, or in many cases you won't be able to.
Well, I could have, [00:33:00] I could've
Benjamin James Kuper-Smith: done earlier, but
Stuart Ritchie: now, you could start off this way. Yeah. But it's clearly not what they were thinking about when they set up the whole idea of doing a PhD in three years and all of that; it's clearly not what was in people's minds. Um, I think you either start off this way, or you do the other stuff that falls short of doing a
registered report: doing full preregistration and so on, but not necessarily the full thing. And people should recognize that that's better than not having done those things and just writing up your paper without preregistering it. It's just that the registered report provides an extra level of belt-and-braces security on the robustness of the results, and in general on the quality of the paper, I think.
So it is unfortunate to have missed it. And I'm thinking back to loads of stuff I've done in the past where I wish I had run it past reviewers before I did the experiment, 'cause I'm sure they would've picked out a couple of things which only occurred to me after I'd done the analysis. And after you've done the analysis, it's like the old Ronald Fisher thing [00:34:00]: you show your experiment to the statistician after you've done it, and he can't necessarily tell you how to improve it, but he can tell you what it died of.
That's the classic quote, and it's so true. And that insight is fundamentally what's at the bottom of the registered report idea: after the fact, it's way too late.
Benjamin James Kuper-Smith: Yeah. Yeah. Um, one question about educational resources.
Uh, so one thing that I still find, maybe it exists and I'm just not aware of it, is that there should be one good open science book or something that people could just read. I, for example, had to read lots of different papers, this kind of messy search through how to do this stuff.
Is there any one good resource that guides you through it and says: this is pre-registration, this is this, this is that, and gives you a good starting point?
Stuart Ritchie: There are open science [00:35:00] reading lists. Some of them are probably a bit out of date by now. I'm trying to think of the best one.
I can't, really, off the top of my head. I can send you some potential examples, but yeah, there are reading lists created by people who have been part of organizations like the Society for the Improvement of Psychological Science, the SIPS group, for instance, who have collated stuff together.
But I think a good place to start is the Center for Open Science site; if you just put in "open science reading list", you'll find a whole bunch of stuff. I tried to include a lot of those ideas in my book as well, but my book is aimed at a different audience.
It's not a "here is how you should be doing it" type textbook thing, or I
Benjamin James Kuper-Smith: think... I think your book is really good for getting an awareness of what some of the problems are.
Stuart Ritchie: Mm.
Benjamin James Kuper-Smith: But then if you actually want to do science, I think you need an additional thing. Yeah.
Stuart Ritchie: No, totally, totally. You totally do.
But yeah, there are some reading lists out [00:36:00] there, including reading lists for some of the specific issues that you get in particular fields. For instance, there's one about measurement in psychology, a measurement reading list, and so on. So you can dig in on one specific topic and the kind of new thinking that's related to the open science movement, but isn't necessarily the core stuff that applies universally to science.
Benjamin James Kuper-Smith: Okay. Cool. So I guess we'll put one or two in the description.
Stuart Ritchie: Yeah, yeah.
Benjamin James Kuper-Smith: If you can send me some, yeah. Or people can just search for the specific thing
Stuart Ritchie: Yeah.
Benjamin James Kuper-Smith: that they're interested in. Okay, cool. Uh, maybe one slightly random question: I'm not sure I've seen a book, or a website for a book, where the author says, I'll pay you if you find errors in my manuscript.
Can you briefly talk about what that is and how it came about?
Stuart Ritchie: Yeah. Well,
Benjamin James Kuper-Smith: and how much this has cost you so far.
Stuart Ritchie: Well, yeah, thankfully not too much. So after I wrote the manuscript of the book, which was then released in hardback, I realized that I would have a chance to [00:37:00] do corrections, right?
When the book is reprinted, as a paperback for instance, you can make corrections; there are also different editions and so on. So it's not over once the book is published in its first edition; that would be silly. You do have the opportunity to make some changes.
And so I thought: since I spend the vast majority of the book criticizing other scientists, it's only fair if other people can criticize me. And obviously there are gonna be lots of areas of subjective disagreement, where it's actually just, we disagree on this and we can have a debate about it, and it's about interpretation and so on.
But there are still gonna be some things that are just wrong, just objectively untrue, that I've said because of an oversight: I didn't pay attention, I looked at the wrong number, I made a typo, whatever it is. And this was inspired by the idea of the bug bounty in computer science, where you pay people for finding bugs in your code.
Um, [00:38:00] various other people have done this for their scientific papers, but I don't think I've seen anyone do it for a popular book before.
Benjamin James Kuper-Smith: No, I
Stuart Ritchie: haven't seen it.
Benjamin James Kuper-Smith: Yeah.
Stuart Ritchie: And I got the idea that people can go on the website and send me a comment saying, look, on page 154, you got this wrong.
And people have; several people have. I split it into major errors, for which I pay 50 pounds, and minor errors, for which I pay five pounds. For minor errors, quite a few people have picked up things like me saying that a journalist writes for one newspaper when in fact they write for another, or getting a date wrong, or something like that.
But there has also been, I think, one major error, where I made a complete error of fact: I said that a study had not been done, and in fact it had. I was just wrong about that; I just didn't know. It was a screw-up on my part, so I'm happy to pay. And I give people the option to take the money themselves or to
give it to a charity of their choice; or if they can't choose one, I'll just give it to one of the malaria charities. And I [00:39:00] think no one has kept the money so far; I think they've all given it to charity.
Benjamin James Kuper-Smith: That was gonna be my next question, like...
Stuart Ritchie: well, it's like a peer-pressure thing now, you see: everyone else has given it to charity.
You put the
Benjamin James Kuper-Smith: name, right? You put, like, "this person's..." Yeah, I
Stuart Ritchie: put their name as well. Yeah.
Benjamin James Kuper-Smith: The money is given to this foundation.
Stuart Ritchie: Yes, exactly. So there have been quite a few, including a very embarrassing one in the first line of the book, where there's a questionable interpretation of a date.
Which I consider to be a minor error, but it's still majorly embarrassing because it's in the first line. Yeah.
Benjamin James Kuper-Smith: Yeah,
Stuart Ritchie: but it shouldn't be embarrassing, right? The point of this is that
Benjamin James Kuper-Smith: Yeah, of course. Yeah.
Stuart Ritchie: The point of this is that everyone clearly makes mistakes. I read a lot of books, and I review a lot of books for various newspapers and so on each year.
And I find errors in every single one. Not just disagreements, but objective errors, where they've put the wrong name down or said the wrong thing, and I happen to know that it's objectively wrong. And normally you have no recourse, right?
Normally you have no recourse to do anything about that. Sometimes there's a book review where someone says, this book is full of [00:40:00] errors, and that's part of the book review, but there's no formal thing set up where you can contact the author and say, I think you got something wrong on that particular page.
So I'm hoping that this will catch on, and more people will do it when their books come out, and they'll just say: look, hands up, I got this completely wrong, and we need to correct it.
Benjamin James Kuper-Smith: I'm always surprised... I mean, I'm also interested in writing, and I've been thinking about the publication business and that kind of thing.
And what really surprised me is that there isn't some easy way for people to report errors. So basically, what you have is that people have to write an email, is that correct? And say, like, on page...
Stuart Ritchie: It's on the contact form on the website, where they can send a, yeah, send a
question,
Benjamin James Kuper-Smith: but it's basically an email, right?
Where they tell you like where,
Stuart Ritchie: yeah, it goes to my email. Yeah.
Benjamin James Kuper-Smith: Yeah. But I'm always surprised that no major publisher, as far as I know, has put something up where people can just... oh, I guess then you could read the book for free. [00:41:00] But where you see the book as an ebook, and you can just click through, or search for certain words, and comment on it and say: this is wrong.
You know? I'm always surprised that that hasn't happened, because especially...
Yeah,
Stuart Ritchie: Yeah, but I think there are reasons. I mean, first of all, I did this off my own back. The publishers were pleased for me to do it; I talked to them about it first and they thought it was a good idea.
But they're not incentivized to do this, because it doesn't really look great for publishers to say: here, our books are riddled with errors, by the way, and we want you to correct them. I can see how that would look bad if you didn't have this mindset of, everyone makes mistakes and we wanna correct them.
Benjamin James Kuper-Smith: Yeah, yeah.
Stuart Ritchie: But not just that. This could be misused in some cases, or certain books would be really targeted for this kind of thing. You can see how it would become almost like one of those Wikipedia editing wars, where a controversial book comes out and everyone comments on it, and nobody knows which comments they should take seriously, and so on.
So you can imagine, and the [00:42:00] publishers
Benjamin James Kuper-Smith: just
Stuart Ritchie: ruined. And why would the publishers want to spend their resources on moderating that kind of thing? So I kind of feel like, for some of the books on culture-war topics, the number of comments would go through the roof.
Benjamin James Kuper-Smith: Yeah. But I guess your book, in terms of content and topic, is unique and lends itself to this kind of thing, as a cool additional feature rather
Stuart Ritchie: than... Yeah, exactly. It fits with the general ethos of the book, which is: we want to correct things and get things right.
And in fact, given this whole thing I talk about in the book about organized skepticism and so on, it would've been ironic not to do something like this, not to be really clear and open about the errors. Whereas for some books it might
be less relevant, or the authors simply wanna move on to the next thing rather than focusing on the response.
Benjamin James Kuper-Smith: Yeah. I mean, also, in some cases it just doesn't really matter, right? For example, let's be honest: I think one of the errors I saw was that early on you apparently said that Cass Sunstein [00:43:00] had won the Nobel Prize rather than Richard Thaler.
Stuart Ritchie: Yeah.
Benjamin James Kuper-Smith: And you, I mean, who cares?
Stuart Ritchie: Well,
Benjamin James Kuper-Smith: like, it's a mistake, and I'm sure they care, but it's not gonna change the story or anything
Stuart Ritchie: about this. Yeah. So they wrote a book together, and then one of them won a Nobel Prize, whereas I said the other one had won it.
And yeah, it was just an oversight, one of those moments where you write the wrong name down because you're not paying full attention. It doesn't matter, you're right. And there are some things which are so minor that I won't pay out: if someone finds a non-consequential typo, I'll just correct it. But I think things should be corrected.
I'm trying to push this general attitude of: when you make a mistake, you should correct it. I've seen authors who have had objective errors pointed out in their books just in the last few weeks. I won't name names, but there's an author right now who's got a controversial book out where there are objective errors [00:44:00] in it, and she is on Twitter saying: is this all you can find in my book?
And sure, it's a minor issue, but when you make a mistake, you should say, sorry about that, I made a mistake and I'll correct it, rather than: try harder next time, you're desperate to find mistakes in my book. And there may well be people who hate your guts, and that's why they're discovering errors in your book.
But if they are real errors, genuinely objective errors of fact that you've put in your book, it doesn't matter if the person pointing them out hates you; you should correct those errors, because what matters is getting things right. And it fits into this whole general attitude of science:
do we care about what's true, or do we care about finding new, exciting things and moving on to the next thing? If you boil it down, it's exactly the same issue as with the Daryl Bem thing, where they wouldn't publish our replication study. If you don't care about what's true, then you'll end up in a situation like that journal, where you never publish any replication studies.
If you care about [00:45:00] what's true, then you'll, in this case, correct the record by having a replication study published, which maybe shows that the original paper you published was not a hundred percent replicable. So I think it's the same issue at base, which is being intellectually humble enough to say: look, I know there are gonna be errors,
but there is a mechanism for correcting them. And this fits into the bigger picture. I've experienced a minor amount of this in a few things I've done over the years, trying to correct errors in scientific papers. But there are people out there whose whole job is scientific integrity, who are trying to correct errors in scientific papers, or who have it as a major side interest: contacting editors, contacting authors, contacting universities to try and get mistakes corrected.
And what they get back is bullshit. And I mean bullshit in the Harry Frankfurt sense, where there's truth, there's lies, and there's bullshit. Truth-tellers are people who care about getting things [00:46:00] right, and liars care about the truth too, because they want to flip it around and convince you of something other than the truth.
But bullshitters don't care about the truth. And if you don't care about the truth, and you're editing a journal or you've written a scientific paper, then your scientific paper is bullshit, by that definition. You're just writing it to impress people, or to make some sort of political point, or to make it look as if you're cool and smart and have a nice CV, or whatever.
In that case it's bullshit; it's not actually an attempt at the truth. So, in order to stop science from turning into bullshit, we need some mechanism to actually say: look, we care about what's true here, and we'll correct things if they're wrong.
Benjamin James Kuper-Smith: Actually, should journals have something like this?
I mean, for context: I talked to Joe Hilgard. Oh...
Stuart Ritchie: yeah. Oh wow. Yeah, he's had a long experience of...
Benjamin James Kuper-Smith: Yeah, exactly. And we talked largely about the Zhang affair, as he calls [00:47:00] it.
Stuart Ritchie: Yeah.
Benjamin James Kuper-Smith: Which is, you know, fantastic. I mean, not fantastic, it's very interesting to read and very frustrating.
It's fantastic in the sense that it's well done. But, you know, for example, the reason I contacted him was because I found something in a paper where I'm very certain that some stuff just went wrong. It seems more like negligence than anything else.
Stuart Ritchie: Okay.
Yeah.
Benjamin James Kuper-Smith: Um, but I'm pretty sure that the two figures can't both be correct. They seem mutually exclusive. And part of why I asked him to talk about this was also that I wanted to hear what he thought I should do next. Because it's not a super influential paper, but it is in a limited context.
I mean, I'm being intentionally vague here because I haven't contacted the authors or anything.
Stuart Ritchie: Yeah, no, no. You don't wanna name names.
Benjamin James Kuper-Smith: Not going into this part, um, just so you know. So, uh, I didn't know what to do about this, because it felt like I should definitely say something, because this is just stuff that's wrong.
Of course. Yeah. And it's one of the first [00:48:00] papers on a certain thing. And I basically just asked him what I should do, and he basically said, like, yeah, just ignore it. You know, he basically said, if you wanna stay in science and do your career,
Stuart Ritchie: it's just not worth the effort.
Well, he has, he has left academia, and it's a huge loss.
Benjamin James Kuper-Smith: 'Cause yeah, he actually left like two months after we talked or something.
Stuart Ritchie: Yeah. He's brilliant, and it's really sad to see it happen, but I can completely understand, because of the frustration. I've only been involved in some very minor cases personally, trying to get things corrected, and it really is a kind of random, or fairly arbitrary, process, where sometimes you'll get a correction and it'll be no problem.
Sometimes there'll be huge resistance and you'll get a correction. Sometimes you'll get resistance and nothing will happen. Sometimes you'll just hear absolutely nothing. Or in fact, in one case, we heard back from the authors very quickly, and they said, yeah, we're really sorry about this mistake.
We'd love to correct it. We're gonna correct it for you. And then they just never did. Nothing ever happened. [00:49:00] It's been four years.
Benjamin James Kuper-Smith: That's the best way to do it.
Stuart Ritchie: It's been four years, nothing happened. We contacted editors, they didn't know how to deal with it. Editors didn't care. Authors didn't care.
There's no correction. We found objective errors in the paper. No one gives a fuck. This paper's been cited like a hundred times, and no one cares. And that, in that case, is the opposite of how science should work, obviously, as anyone would say. Now, there might be some reasons: the people are very busy or something.
They haven't been able to consider it. But if you edit the journal, or you publish the journal or whatever, there needs to be something in place. This is why editors can put an editorial expression of concern on an article, right? They can say, we're concerned about this article.
Um, maybe the authors haven't been contacted yet, or haven't made a decision about what they want to do, but we just want you to know that we think there are some issues with this paper. That's why the editorial expression of concern exists as a thing, you know, as a thing that editors can do.
Um, and I think they should be using it a lot more.
Benjamin James Kuper-Smith: I didn't know this was a thing.
Stuart Ritchie: [00:50:00] Yeah, so as an editor, you can flag an article with an expression of concern. And in many cases, the expression of concern basically happens a few days or weeks before the retraction, right?
It's basically saying, someone's pointed out to me that there's major fraud in this article. We're doing the investigation right now, but for now, you should know that this article's under suspicion. And then it gets retracted. But I think it should be much more common for editors to say, you know, there are some serious concerns about this paper.
It may be the case that we remove this expression of concern in the future, because the person answers the question and it's completely fine, and there's some more complex issue why the data look a bit suspicious. And this happens sometimes. There's a guy on Twitter
I saw being in touch with some people who did a trial of a drug for COVID. He looked at the data, and the data looked weird in various respects. He contacted the authors, and they completely convinced him, beyond a shadow of a doubt, that the data are actually fine, and there was a [00:51:00] particular process regarding correlated variables that made the data look a bit weird in some respects, but they're not weird really.
They're completely consistent with the trial having happened. They were able to provide him with evidence that the trial happened, and so on. And he was totally convinced: actually, you know, I originally thought it was fraud, but it's not. That can happen. But there are many cases where the trial never existed in the first place.
The data had been made up. And editors can flag that, even if they haven't been able to get the authors to agree to retract it. And of course you don't, in the final instance, need to have the authors agree anyway. You can retract a paper without the authors' agreement.
Benjamin James Kuper-Smith: But, so, um, here's not exactly a suggestion then, but I'm just curious what you think of it.
So, one reason why I haven't done anything about this yet, I mean, there's a few reasons, but one of them is that I'd have to write this, you know. There's, as far as I'm aware, no real template for this. I'd have to write this email from scratch, think about how I address it,
how I talk to them. Um, in this case, [00:52:00] I think this was done a few, but not too many, years ago, so it's fairly recent still. But now the people are all at prestigious universities, professors, et cetera.
Stuart Ritchie: Right. And as an early career researcher, you don't want to be putting yourself into the situation where you're...
Benjamin James Kuper-Smith: I wonder, so here's the kind of thing I'm wondering, what you think of that. So, basically something like you have on your website, but a bit more systematized and automatic, where people can just do that about papers. I mean, I guess because it's work, that's why they haven't done it.
But would that make sense, or is that just...
Stuart Ritchie: Well, there are already a few mechanisms like that. So there's the website PubPeer, which is an anonymous website where you can post comments on any paper you want.
Benjamin James Kuper-Smith: Yeah, but I mean, something more like, so, um, just to maybe specify it a bit more.
Hmm. I don't want to, um... it seems like these were honest mistakes, and they could probably fix them, or, well, I don't think they'd do it, but [00:53:00] I'd like something where I could just do it automatically. Ideally, I'd have something where you have the website with the paper, and then you could click something like,
comment on this anonymously, and then I could just say, hey, this and this and this is the case with this paper, that doesn't make sense. And then I'd be done, you know, then I'd be done in like five minutes.
Stuart Ritchie: I mean, some journals do allow comment sections on their papers.
Like some of the Nature journals have a little comment section under each paper where you can do that. But the problem is, again, it gets abused in some cases. Papers that have perhaps politically inconvenient results, or controversial results in some way, get hundreds and hundreds of comments from, you know, either activists or people who are upset with the paper for whatever reason.
And then some papers that are really bad, but have politics that people agree with, don't get any comments at all, or just one or two comments, and no one pays any attention to them. Again, as I've said previously, it doesn't matter, like, if the comments are actually pointing out [00:54:00] genuine objective errors.
It doesn't matter why people have pointed 'em out. If they're objective errors, then you've gotta correct them. But I think the reason such a thing doesn't exist, or isn't more widespread, is that journals would be worried that certain papers would get tons and tons and tons of comments, and they would have to then devote resources to moderating those comments.
And you can see how, you know, they're not incentivized to do that at all, even if it would help us get towards better quality results in the long run.
Benjamin James Kuper-Smith: Yeah, I mean, especially in the case that I have, I think ultimately there's probably something slightly wrong with the data. Maybe they excluded trials and didn't say they did.
It seems like a fairly benign error, I think.
Stuart Ritchie: And even if, well, I mean, one option you have is to write a letter to the journal. Like, some journals publish actual, you know, commentary letters. Some of 'em have stupid arbitrary rules, like you can only publish this within one year of the original article being published.
It's stupid things like that, where it's like, oh, an error after a year, but it's...
Benjamin James Kuper-Smith: Incorrect.
Stuart Ritchie: There's a statute of limitations on scientific errors, apparently. But, [00:55:00] um, you know, some journals do have that option. And the nice thing there is that you get a publication out of it, 'cause everyone loves a publication. So you actually get, you know, something on your CV.
Benjamin James Kuper-Smith: Yeah, I mean, I'll see. I think I still want to contact the authors first, out of curiosity, and just see what they do.
Stuart Ritchie: That's probably the best.
Stuart Ritchie: Nick Brown, the, uh, fraud-busting researcher, and I were talking about this a while ago, with some other people,
about writing an article on, like, what are the steps you should take when you want to correct a mistake in a scientific paper. You know, start with the authors, escalate to the editor, maybe the action editor, then the main editor of the journal, then their university, their university's scientific integrity office...
You know, there's a whole, there's a whole...
Benjamin James Kuper-Smith: The prime minister.
Stuart Ritchie: Yeah. Well, you know, before you get to the prime minister, you could talk to the funding sources, maybe. Like, do they have any way of dealing with [00:56:00] errors? Not that we'd expect it would work in every, or indeed most, cases.
But, you know, people don't know what to do at all. And again, going back to the issue that we talked about before with open science, just making things slightly easier to do is gonna...
Benjamin James Kuper-Smith: Yeah, you know, I think that would be super useful. I mean, another one of the reasons I contacted Joe is also that, you know, I pointed out the errors to some people in the lab, gave a lab presentation to really show what I think was wrong with it.
Yeah. And everyone was like, yeah, this doesn't seem good, dunno what to do though.
Stuart Ritchie: Yeah, exactly. Exactly. Let's move on to the next thing.
Benjamin James Kuper-Smith: Yeah. Like, I know it seems important, but I dunno what you would do about that in this case.
Stuart Ritchie: Well, yeah, maybe this conversation has encouraged me to go and talk to Nick about writing that paper again.
Benjamin James Kuper-Smith: Hope so. Uh, okay. This is just a very random comment I wrote down, uh, I mean, this is right towards the end, but I didn't expect, in a book about [00:57:00] science, problems with science, let's say, to read about Górecki and Boulez. So congratulations on that.
Stuart Ritchie: Yeah, I thought it was a good example of someone who is obsessed with novelty, and attacking, you know, something which is boring.
I guess the way the analogy doesn't quite work is that the boring symphony is extremely popular, whereas in science, the boring stuff is not. But the point was that among his particular clique, among the kind of avant-garde musicians, that boring stuff was of no interest to anyone.
And he said it was shit at its premiere. And I thought it was a nice example of, like, scientists shouting at papers like mine, when I did the replication study: oh, this is just something dull, you know? We want something new, we want something that really pushes the boundaries, every single time.
And, you know, you can't reliably push the boundaries every single time. You've gotta stop and [00:58:00] consolidate at some point and make sure that you're getting things right. So that was the analogy I was trying to draw there: in art, you might be able to constantly push the boundaries, but in science you've gotta consolidate at some point.
Yeah,
Benjamin James Kuper-Smith: yeah. No, I thought it was also an interesting analogy because, I guess, not only the piece, but also the person, I would say. I mean, it's not even that modern anymore, right? They're both,
Stuart Ritchie: no,
Benjamin James Kuper-Smith: they're both dead, right? I think so.
Stuart Ritchie: Not sure. I think they are both dead, yes.
Benjamin James Kuper-Smith: Yeah. But Boulez has died anyway. Like, within that kind of scene, Pierre Boulez is, I'd assume, a much more famous and much more established and respected figure than, what's his name? Henryk? Uh, yeah.
Stuart Ritchie: Yeah. Um, well, I mean, I think his composing stuff is so difficult and avant-garde that people...
Benjamin James Kuper-Smith: Boulez?
Stuart Ritchie: Yeah. [00:59:00] That people don't really get it and don't really understand it. Whereas as a conductor, he was extremely well respected, yeah, and kind of everywhere. But, um,
Benjamin James Kuper-Smith: no, I mean, let's put it this way: I considered becoming a classical musician in my teenage years, and I
like a lot of 20th century music.
Stuart Ritchie: Hmm.
Benjamin James Kuper-Smith: Um, I will listen to a lot of it, but I have to say, with Boulez, that's something where I feel like, yeah, I have to educate myself for this first. Sometimes with the piano music, I think, like, yeah, it's kind of interesting to listen to.
Stuart Ritchie: Yeah. I always try and listen, I always try and keep an open mind for it, but I've never really clicked with his stuff. There's a scene in The Sopranos where they're trying to
portray one of the characters as a kind of very middle-class person. He's sitting, I think he's reading Robert Nozick or something like that, and he's listening to one of the piano pieces, um, [01:00:00] and it's like a classic example of the kind of up-themselves middle-class person.
Benjamin James Kuper-Smith: Yeah. But, uh, this is also why I didn't expect this, because I'm not sure how many people really understood that analogy perfectly, or got the context of it. That's something I was just curious about, with the publisher. Did they comment on that and say, like,
no scientists will know about these people? Well, Górecki they might, but...
Stuart Ritchie: I think actually there was a review that said, I didn't understand the Górecki thing at the end. There was a review that said that, and I was like, well, think a bit harder. But, um, I will say one thing I was quite grateful for: the editor took out, from the initial first draft, basically every stupid joke or cultural reference that I had made.
He left a few in, that one being one, but the book is way better for it. And it's a nice example of how writing works: you write initially, [01:01:00] and it's almost like you write for yourself, 'cause you're like, oh, I'm quite pleased that I managed to make that cultural reference there. And then the editor just comes along and wipes all that stuff out, just gets rid of all of it,
because it's annoying, and it dates the book, and people don't quite get it, and so on. So if you'd seen the first draft, there was a lot more stuff like that in there. That particular part was left in, but there was a lot more of just stuff like that, which I was upset about at the time when I saw it getting deleted.
But I'm very glad that it's gone now, and, you know, on second thoughts, it was a much better idea to get rid of all that stuff.
Benjamin James Kuper-Smith: Okay, good. Yeah, I mean, kind of by coincidence I got that one, but I probably wouldn't have got most of the others, uh, anyway. Um, in the book, you have these four main problems: fraud, bias, negligence, and hype.
Stuart Ritchie: Yeah.
Benjamin James Kuper-Smith: Um, for some reason, I'm not entirely sure why, but for some reason I find fraud really fascinating.
Stuart Ritchie: Hmm.
Benjamin James Kuper-Smith: There's something I find really fascinating about people putting in lots of [01:02:00] effort to pretend to be someone they're not. Um, so I'd like to talk a bit more about that part.
Stuart Ritchie: Hmm, sure.
Benjamin James Kuper-Smith: I think the three most interesting cases are, um, Stapel, Macchiarini, and Hwang. Not sure I'm pronouncing all those correctly, but, um, can you briefly, I don't know, maybe like a minute, however long you want, but briefly summarize each case? Then we can use that as a starting point.
Stuart Ritchie: The reason that the first of the four problems was fraud is that it's the most kind of lurid and exciting aspect of this. You know, I love to read stories about this, and it's the same part of your brain that makes you interested in true crime stuff
that makes you interested in scientific fraud. I think it's the same kind of, I can't believe someone did this, it's really scary to think that people are doing this. And everyone loves to watch a Netflix documentary about true crime or whatever. And this is the [01:03:00] science version of true crime.
Benjamin James Kuper-Smith: Yeah.
Stuart Ritchie: And, um, so yeah, the Stapel case happened at the same time as the Daryl Bem thing, when I was trying to replicate this parapsychology study. It was an example of a social psychology professor at Tilburg University in the Netherlands who had written loads of papers on all sorts of different topics.
And some of them had gotten into not just psychology journals; I mean, one of 'em got into Science, so a genuinely prestigious journal that anyone would want to get published in. Um, and after a great deal of stuff was published, it turned out that he had made it all up.
Like, he would regularly meet his colleagues in the coffee room, and they would say, oh, I'm kind of interested in this question, you know, X, Y, whatever variables. And he would say, ah, I actually ran a study on that just the other week, I'll bring you in the data [01:04:00] tomorrow.
And he would go home that night, open up his laptop, and sit at the kitchen table into the early hours of the morning, typing the numbers that he wanted into an Excel spreadsheet. Then he would come in and say, look, here's the data. And he would give it to his colleagues, and they would say, wow, these are amazing results.
He would sometimes give the data to his PhD students, and several of them wrote PhD dissertations entirely based on fraudulent data, which they didn't know about, that had been given to them by him. Um, but they started to become suspicious, because, like, why was he running the experiments and not getting his PhD students to do it?
Is he not a busy professor? Why is he not delegating the experiments to them? And the results just seemed a little bit too good to be true. It turned out, after investigation, that he admitted he had fraudulently made up the results in about 50-something papers, which have been retracted.
Um, and he is, I think, now the fifth or sixth most retracted researcher of all time that we know about. That's the crazy thing, right? [01:05:00] Yeah.
Benjamin James Kuper-Smith: He's number five?
Stuart Ritchie: I think he's six now. He's not number one, he's been knocked down. Yeah. No, exactly. There are way worse people.
Benjamin James Kuper-Smith: Might be close.
Stuart Ritchie: Uh, yeah, absolutely. And so that was an amazing example, and it happened around about the same time as the Bem thing. And that kind of caused psychologists to look inward and go, like, why did nobody realize for so many years that his research was so crap?
And it turned out some people had tried to replicate his research, but had never been able to get it published, because people don't like to publish replication studies, or didn't back then. And very few people had ever questioned it. They'd kind of nodded it along. You know, research showing that people who have a messy desk are more likely to be racist.
That was one of the studies that he did. Um, just, like, cool stuff like that. It gets a headline in the newspaper, it gets resources, you know, coming in from funders to the university, and...
Benjamin James Kuper-Smith: I mean, it really fits into the whole line of research that doesn't replicate, right?
Stuart Ritchie: Yeah, no, [01:06:00] it was exactly the way it sounds. Easily understandable, easy to make a headline, easy to do. Um, it turned out, even though it was easy to do, he didn't do it. He made up the studies. And so that was an amazing example of, you know, scientific fraud in psychology.
And there were several other examples that came out after that in the world of social psychology, as well as other areas. So that happened around about the same time as my experience with the Daryl Bem replication. Um, well, actually earlier, the Hwang case, which is a stem cell case, was a much bigger example of scientific fraud.
It was in South Korea, in about 2005, 2006, I think, that it all came out. This researcher, who was literally, like, mentioned on postage stamps in South Korea, he was so famous, unbelievably famous. And he had apparently, was beginning to, like, [01:07:00] clone human embryos with stem cells and, you know, regenerate organs.
He did clone a dog, that was actually a genuine thing. He cloned the first ever dog to be cloned, um, which was called Snuppy, I think was the dog's name. 'Cause it's, like, S-N-U...
Benjamin James Kuper-Smith: Snoopy, or what?
Stuart Ritchie: Well, maybe they pronounced it Snoopy, but it was like SNU, which is Seoul National University.
Benjamin James Kuper-Smith: Oh, okay.
Stuart Ritchie: So, yeah, um, something like that, I think. Anyway, he had this massive empire of fraud, where he had been giving bribes to politicians, and taking the money from research grants, and...
Benjamin James Kuper-Smith: From his grant, right?
Stuart Ritchie: Yeah, yeah. And giving his wife the money to buy a car, and chalking it up, saying this was lab apparatus that was needed.
And, you know, endless stuff. And because he was so popular, and seemingly, you know, making all [01:08:00] these medical breakthroughs that would help people with diseases of their organs and so on, people wouldn't believe it for a long time.
People were out protesting against the whistleblowers, and leaving endless angry comments online whenever the story ran, saying that he was being persecuted in some way. Um, I think he narrowly avoided prison time and ended up doing research at some much more
low-grade university after it all came out.
Benjamin James Kuper-Smith: That's a crazy statement, right?
Stuart Ritchie: Yeah, yeah.
Benjamin James Kuper-Smith: He avoided prison time and is now at a less prestigious university.
Stuart Ritchie: Yeah. He was continuing to publish research. Yeah. No, it speaks to a broader issue, which is, like, how do you punish people who commit scientific fraud?
In some less extreme cases, like in the US for instance, sometimes the punishment is you can't apply for a research grant for one year after the fraud's been discovered. It's like, is that really a deterrent? Is that really much of a punishment?
Benjamin James Kuper-Smith: They'll just take one year to write a better [01:09:00] grant.
Stuart Ritchie: Right, exactly. Some people lose their jobs, some people end up not publishing science anymore, but some people move on to a different university and continue publishing research there. Um, so that was an interesting example of, like, he was a massive big deal.
He was a huge, huge success story, and it turned out that he was just faking all the data. The pictures in his papers had been photoshopped, and so on. And there were never the stem cell breakthroughs that he had claimed. And then the third one you mentioned, or the other one you mentioned, is the case of Paolo Macchiarini, which is much more recent; in 2016 or so
it kind of all came out. He was a surgeon, well, he was recruited by the Karolinska Hospital, which is part of the Karolinska Institute, the university in Stockholm that gives out the Nobel Prize for physiology and medicine. So it's a big deal to get a phone call from there.
And you'd think they would have very high standards in the scientists that they recruit, but they don't, it turns [01:10:00] out, or at least they didn't in this case. They recruited a guy who claimed to be able to do a windpipe transplant, of an artificial windpipe.
So rather than taking the organ from a dead body, which is rare to get, and giving it to someone who's maybe got cancer or some kind of issue that's caused their windpipe to have to be removed, or damaged in some way, he claimed that you could build a windpipe in this special kind of almost incubator machine, which you would seed with the person's stem cells so that when you put the organ in, it wouldn't be rejected by the immune system.
And he had breakthrough after breakthrough after breakthrough on this, saying that the patients he had operated on and given these artificial tracheas were extremely healthy and recovering well and so on. And it turns out that the scientific papers just contained
falsehoods about the medical records of the patients, right? So [01:11:00] he claimed that they were recovering well, but in fact some of them were dead before the papers were even published, or recovering very, very poorly, with huge infections, needing to go back for further surgery.
Um, horrible, really tragic, harrowing stories of people with, you know, pus-filled wounds, holes in their chest pouring out all these fluids and so on, because he had just done such a terrible job of these operations. And that sort of thing is bad enough, but another side to the story was the way that the institutions covered up for him.
So not just the Karolinska Institute, which, after they had employed someone to check whether he was a scientific fraud and writing incorrect stuff in his papers... and by the way, it wasn't just these papers. There were other papers where he had falsified data on, like, rats that he had done experiments on.
Benjamin James Kuper-Smith: He was also the guy who had, like, a second wife or something?
Stuart Ritchie: That's right. Well, that's, like, the third [01:12:00] side of the story. But the second side is that the institutions covered up for him. The Karolinska Institute rejected the report of an independent researcher that they brought in to check whether he had committed fraud, who said, by the way, he has committed fraud.
They brought in this independent researcher who said he's definitely committed fraud, but they said, we've actually done our own investigation, which you can't see, and we think it's all fine. The Lancet, the journal that he published in, published this kind of crowing editorial saying he's been exonerated and hasn't committed scientific fraud, and, you know, people should be a lot less suspicious, blah, blah, blah.
Um, and a few months later, after loads more stuff came out, they had to climb down embarrassingly and say, actually, we have published a fraudulent paper by this guy. And, um, yeah, as you say, his personal life was full of weird conman-style stuff as well, where he was married to a woman with kids, but was also dating this NBC news producer at the time, who [01:13:00] he told that he would marry her, and that he was the Pope's doctor and the Pope would do their wedding, and that Barack Obama was coming, and that Elton John was gonna do the music.
Like obvious fraud, conman-type stuff, in his personal life and in his science. And unfortunately, you know, the personal life stuff obviously was extremely hurtful for the person who he was victimizing. But in the science, it actually caused people to die, you know? And, um, he got sacked, but then started working in Russia, continuing to do some research, but then I think his funding got spiked in Russia as well.
And I don't actually know where he is currently and what he's doing right now. He certainly hasn't been publishing stuff, I don't think, for a while, although he did go on publishing after all the allegations came out. So: three examples of where, you know, people have made up data, or heavily manipulated data, or done other sorts of nefarious stuff with data.
It's kind of the [01:14:00] opposite of what you would want to happen. In the case of the Macchiarini thing, you would want the institutions of science to kind of open up and say, look, we are investigating this guy, we're gonna be very careful, we think there's some serious problems here. But instead they defended him way after it was reasonable to do so.
Um, and you can see why, right? Nobody wants to believe that someone that they've published, or that they've employed, has committed scientific fraud, and may even have killed patients with the dodgy scientific experiments he was doing. You can see why they would have, you know, a sort of mindset against believing that.
Uh, so yeah, three stories there of kind of upsetting scenarios where it took way too long for the scientific fraud to be discovered. And I'm sure there's lessons there about how, and, you know, I say this in the book, it doesn't sound nice, but maybe scientists should have a little bit less trust for each other. The whole point of doing science is that you take nobody's word for it and that you, you know, are dispassionately looking through the [01:15:00] data at all times and so on. In a lot of these cases, people just took, uh, Diederik Stapel's data sets on faith, and believed them, you know, before they had actually checked anything about whether the data were actually realistic or not.
Benjamin James Kuper-Smith: Yeah. Um, by the way, what about Stapel? I mean, you know, I'm not really interested in the people's personal lives, but what I find interesting is that Macchiarini and Hwang both seemed to just be, how do you say, general fraudsters or whatever. Um, it seemed like science was just one branch of what they did. Is there anything like that with Stapel, or not? I dunno if anyone's heard about this.
Stuart Ritchie: You can read his, uh, memoir, autobiography, whatever you wanna call it, um, which he published after everything came out. And I think, um, no, you know, he was married and so on, and didn't seem to have any, um, [01:16:00] weird things like trying to marry a second person at the same time like Macchiarini did.
Um, so no, I don't think there's anything like that with him. But I suspect it's one of these situations where, if you find that someone has done something like fraud or plagiarism or whatever, you can check their other stuff and find that they've done it elsewhere. In the case of Hwang and Macchiarini, that bled out into their personal lives.
I'm not sure quite the same thing happened with Stapel, but it was certainly the case that it bled out into almost all the research that he ever touched.
Benjamin James Kuper-Smith: Yeah. Yeah. I mean, what I'm kind of interested in, and kind of why I asked that question, is: it seems, you know, we can do lots of things to prevent fraud, right?
I mean, you mentioned a lot of the techniques.
Stuart Ritchie: Hmm.
Benjamin James Kuper-Smith: Um, and some of the people who invented them. I mean, you mention them in the book, and some of the people we've already mentioned in our conversation. Um, you know, you can do the GRIM testing by, um, Nick Brown. And actually, I mean, I tried to do that for the paper that I found, but I was too stupid [01:17:00] to use it. But I used the same kind of logic and principles behind it, and I found something like that with it too. Okay.
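The GRIM test mentioned here checks whether a reported mean is even arithmetically possible: with n integer-valued responses (Likert items, counts), every achievable mean is an integer total divided by n. A minimal sketch of the idea, as an illustration only, not Brown and Heathers' actual tool (`grim_consistent` is a made-up name):

```python
def grim_consistent(reported_mean, n, decimals=2):
    """Return True if some integer total of n integer-valued responses,
    divided by n, rounds to the reported mean at the reported precision.
    Illustrative sketch of the GRIM idea only."""
    target = round(reported_mean, decimals)
    approx_total = int(round(reported_mean * n))
    # Scan nearby integer totals to sidestep floating-point edge cases.
    return any(
        round(total / n, decimals) == target
        for total in range(approx_total - 1, approx_total + 2)
    )

print(grim_consistent(3.48, 25))  # True: 87 / 25 = 3.48
print(grim_consistent(3.49, 25))  # False: no integer total works
```

Real implementations also have to worry about how the original authors rounded (half up versus Python's round-half-to-even) and about multi-item scales, which this sketch ignores.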
Stuart Ritchie: But,
Benjamin James Kuper-Smith: um, you know, you can do that, and you can have open science. There's all these things you can do. But I wonder, with people like Hwang and Stapel, it seems to me, like, is there anything you can do about people who so systematically and effortfully defraud a system?
I mean, it seems like, you know, that's just gonna happen, right? People are just gonna, yeah.
Stuart Ritchie: Yeah. I think we'll always have people like this around, but I think we can set up a system to deal with it more adequately. So, I mean, one thing is that we can have random or systematic checks with those kinds of things.
You're talking about the fraud-spotting algorithm type things when people submit articles to journals; that's one thing. I think just asking people to post their data online is gonna be helpful. I mean, it's not gonna catch every case, and as I say, there have been cases where people have posted data online and it turned out to be fraudulent, uh, which is amazing.[01:18:00]
Uh,
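Posting data enables exactly these kinds of cheap automated screens. As a toy illustration (a hypothetical helper, not any specific published tool): fabricated numbers often have last digits far from the roughly uniform spread seen in many genuinely measured quantities, so even a simple tally can prompt a closer look.

```python
from collections import Counter

def last_digit_counts(values):
    """Tally the final digit of each (integer-valued) observation.
    A heavily skewed tally proves nothing by itself, but it is a
    cheap prompt to inspect a posted data set more closely."""
    return Counter(str(abs(int(v)))[-1] for v in values)

print(last_digit_counts([17, 27, 37, 47, 57, 67]))  # Counter({'7': 6})
```

A real screen would compare the tally against an expected distribution and account for rounding conventions in the original data.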
Benjamin James Kuper-Smith: Although, just briefly, I think the first question I asked Hilgard was: could you do it? Could you fake a data set and publish it and get away with it?
Stuart Ritchie: Yeah. No, of course. There'll always be people like that. And the scary question, which I, you know, mention towards the end of the fraud chapter in the book, is that there must be people out there who we will just never find,
'cause they've successfully covered their tracks. I mean, we'll never know.
Benjamin James Kuper-Smith: We found those three, or, I mean, I didn't do anything. The reason those three people were found is that they were big, famous people with interesting lives, right?
Stuart Ritchie: Lots of attention, lots of attention on them, and so on. And
Benjamin James Kuper-Smith: Exactly.
There must be lots and lots of people who in a way are smarter, who just go for the successful route
Stuart Ritchie: Totally.
Benjamin James Kuper-Smith: And not the super successful route.
Stuart Ritchie: I totally think that that is not just possible, but probable: that we're missing out on loads of cases like that. Um, and
Benjamin James Kuper-Smith: if they're listening right now, congratulations.
Stuart Ritchie: Yeah. Well done. Yeah. Thanks, man. I hope you're not doing medical research. I hope you're doing some like, area of social psychology or something that, that's never gonna impact the real world.
Benjamin James Kuper-Smith: Yeah.
Stuart Ritchie: Um, [01:19:00] but, um, I think, you know, the system we have at the moment, um, where, for instance, peer reviewers are under endless time pressure to get stuff done.
They're not paid anything. They've themselves got loads of stuff to do, loads of research and teaching and whatever else; they're rushed off their feet. The more time they spend doing all this other stuff, the less time they've got to do peer review. And so, um, I think we've gotta change the way that we do peer review, to allow people to have more time to take it easy, to look at the data, to think about the paper.
Um, having fully replicable workflows and stuff is gonna help there as well. Just being totally transparent: if you've got nothing to hide, then you can share everything with the people that are reading your paper. Um, yeah, I think there are lots of things like that that you could institute right at the very start to make it less likely that people would get away [01:20:00] with it.
But I think there will always be smart fraudsters and so on. Um, but I think the answer to that is to do things like encouraging independent labs to replicate findings, and, you know, all these kind of collaborative efforts where people have come along and done a survey of replication studies, I think, have been really useful, because they've uncovered a few cases where, you know, maybe the original study was not particularly well done, or was p-hacked, or whatever it is.
So, uh, I think changing the general culture towards one where people are being transparent and are thinking all the time, you know, "I just wanna double-check this", um, will not eradicate fraud, but it'll be an atmosphere where fraud is less likely to kind of grow.
And another thing, which I've previously mentioned, is actually punishing people where it's found. Right? So not just "you can't apply for another grant for a year", but actually making really severe punishments. Um, and I'm not quite sure what [01:21:00] those would be within science, but in some cases where people have misappropriated actual, you know, money, they could be criminally prosecuted. If they've taken taxpayers' money and basically defrauded the taxpayer, then I'm not sure why that's different from someone who steals from a bank, or from someone's wallet in the street, or whatever.
Um, I'm not sure why, yeah.
Benjamin James Kuper-Smith: Which is just basic tax fraud or something.
Stuart Ritchie: Right? Exactly. Exactly. You're stealing from the taxpayer. So yeah, I think there's the kind of stick end, where you want harsher punishments, I think. And also, by the way, naming and shaming people and stuff can be very effective.
And then there's the carrot end, the kind of more friendly end, well, it's not friendly, but it's the more positive end: making science more open and transparent, in order to shine some light on all those corners where the fraud is happening, and make it less likely that people can get away with it.
Um, but it's not easy, and there's no actual quick fix to this question. You'll always have people who are looking to [01:22:00] defraud the system. Even in a pandemic, we've seen people committing what amounts to scientific fraud, um, in order to publicize one treatment or another, or criticize a treatment that they don't like.
Um, we've seen examples of that. So even in a case where the stuff that they're doing will directly lead to, you know, medical treatments, and maybe people's lives being at risk, they're still committing fraud. So this is not something where we can just appeal to people not to do it or whatever.
We have to change the way that the system works, um, to make it less likely.
Benjamin James Kuper-Smith: One thing I'm asking myself, and this is a more general point, is basically: how much effort do we want to put into fighting fraud? Um, maybe a different way of thinking about this: for example, I think that in Germany, right now, the kind of general administrative system is usually set up to make it really hard to commit fraud.
But also you have a lot more work for everyone else, basically. So the administrative system is a lot more [01:23:00] burdensome for everyone.
Stuart Ritchie: Mm-hmm.
Benjamin James Kuper-Smith: And, you know, that means you only have, let's say, a 1% chance of an error happening, um, but it's a lot, lot more work. In the UK, by contrast, I find in general it seemed to me people were more happy to say, let's say we have a 95% chance of no error happening, uh, or no fraud being committed or something.
So it's more likely that something bad's gonna happen, but you, like, halve the amount of effort that everyone has to put into the system. So I'm curious: how much do we actually want to try and get rid of fraud, and how much is it almost just a waste of time?
Stuart Ritchie: Yeah, it's an interesting question that, um, has come up recently in the context of the UK government buying COVID tests.
Um, for instance, so, like, at the start of the pandemic, the UK government bought loads and loads of COVID tests, including some from this particular company, which, it wasn't necessarily fraud, but, well, maybe it was, I don't know, it's hard to prove people's intentions. But there were a company where [01:24:00] no one had ever actually tested whether the tests that they had actually measured COVID. Turned out they didn't.
Uh, they were very low quality. Um, big claims were made for the accuracy of these tests, which were just not based on reality. And yet we paid millions and millions of pounds of taxpayers' money in that emergency situation. However, it's probably fine to be defrauded a little bit there, because you're casting the net widely and you're trying to get lots of different technologies in, to try and address this, you know, pressing emergency problem.
But I think that's a very specific situation, where we just needed to get things moving. Any minimal quality control would've caught the fact that there were no actual efficacy trials of these tests, and no one really knew whether they worked or not. Uh, so that's, you know, one example of where you might not care, because you're in such a rush and you just want to get the right thing done.
But I think when it comes to the scientific record, we can't let these things lie. There's a principle here, right? The scientific record is supposed to be a record of true things. If they're mistaken because [01:25:00] they came about through luck or chance or whatever, then that's fine.
But it's a record of all the things that we did, um, that we think are true, or at least of how the experiment was carried out. So we need to have much better ways of dealing with it. And I think it really does matter, um, if we care about science; not just because medical papers might get out into the literature and make doctors or surgeons do things differently and kill their patients, though obviously that's a big problem.
And that has happened in some fraud cases, I think. Um, or in cases like Andrew Wakefield's fraudulent paper, where, you know, it gave lots of resources, intellectual resources, to the anti-vaccine movement; it sparked off a huge scare. But I think there's a larger principle at stake, which is: we need the scientific record to be correct.
We need the scientific record to be an accurate description of what was done. And if we stop caring about whether it's right or not, then we get back into the realm of just being bullshitters again. Um, we, you know, we are [01:26:00] allowing a new generation of scientists to come along and read this literature that's full of results which may or may not be correct, and that's not good enough.
Benjamin James Kuper-Smith: So maybe an example. Um, I mean, in principle, of course, I agree: we don't want fraud, sure. But I wonder to what extent the effort of eliminating fraud actually, in the big picture, reduces how much good we're doing. Like, one very specific example: in the old lab we were in, in Hamburg, we wanted to do an online study.
And, um, for some reason it seems no one at our institute had actually done that. I mean, we were a more neuroscience-y institute,
Stuart Ritchie: Hmm.
Benjamin James Kuper-Smith: Um, where people use MRI rather than, you know, lots of behavioral testing. But so we were the first people who had to kind of justify to the, um, budget department or whatever why we're paying this money to this company called [01:27:00] Prolific that they'd never heard of.
Stuart Ritchie: Oh yeah.
Benjamin James Kuper-Smith: And, you know, this is the kind of thing where it would've taken us maybe two minutes, let's say five minutes, to just, you know, use our own credit card, send the money over, and then get it back afterwards. But because it wasn't, like, officially allowed or something, we had to first ask for it to be allowed for us to do that, basically.
Mm. And then, so basically what happened is we probably spent, like, a week's worth of effort trying to figure out how we can contact them, writing the whole thing and justifying why we're doing this. Then every time we want a new payment, we have to ask them again, and they have to put it in.
Then it takes another week for them to do it, et cetera, et cetera. Right? Sure. So it's this really lengthy process.
Stuart Ritchie: Yeah. Yeah.
Benjamin James Kuper-Smith: And at some point I went just like, I don't care if someone like defrauds the system for like a few thousand euros as long as I don't have to deal with this.
Stuart Ritchie: Yeah. Uh, what you're describing there is not necessarily, like, a good process of catching fraud.
What you're describing there is [01:28:00] bullshit institutional red tape, right? And no one's in support of that.
Benjamin James Kuper-Smith: Yeah.
Stuart Ritchie: No one cares about institutional red tape. Uh, no one wants there to be institutional red tape.
Benjamin James Kuper-Smith: But isn't it though a way to eliminate fraud, to say, okay, these research funds, we know exactly what you're using them for?
Stuart Ritchie: Hmm.
Benjamin James Kuper-Smith: Um, and, you know, I can see why they do it,
Stuart Ritchie: but that's an argument for streamlining that process, an argument for making that process easier. Uh, I don't know the specifics of how it works, so, you know, I can't think of a particular way of making it better, but that's an argument for doing that, rather than not having checks at all and not caring about fraud.
Because the problem is, these are very, like, tail-end cases, but every so often you will get someone coming along, like Macchiarini or Hwang or whatever, who will commit, you know, massive scientific fraud. And it could kill patients, it could pollute the scientific literature, and it could also make things look really bad for [01:29:00] scientists in general, and kind of reduce public trust in science. I mean, there's all sorts of reasons why we should care about catching scientific fraud, and why I find it weird that, in many cases, we seem to not have that many resources dedicated to dealing with it.
So, um, yeah, I agree with you that there are some kind of emergency cases where it might be, you know, fine to be defrauded a little bit, as long as it means that you keep moving. But I think those cases are quite few and far between.
Benjamin James Kuper-Smith: Uh, from, like, a cost-benefit analysis. In that case, I wonder whether the amount of, like, hours put into running the system ends up costing more than the occasional fraud.
Stuart Ritchie: Well, the problem is, at the moment there isn't really a system for catching fraud. And it takes people all this time, all these resources, and so on.
Whereas if we had better fraud prevention measures in the first place, we probably wouldn't so often need to get to the point where people are spending hours and hours and hours of unpaid time investigating [01:30:00] these papers; the ones with the weird data would've been caught out already, or whatever.
Um, but yeah, I agree that there will come a point where obsessively focusing on fraud will mean that we stop being able to do research as quickly or as efficiently as we did before. Um, but you'd have to set the bar quite high. I think we should care an awful lot about whether scientific fraud is in the literature, and, um, you would have to make quite a strong argument for a cost-benefit thing; you know, we'd have to somehow quantify the benefit of not having scientific fraud in the literature, which I think is really important.
Right.
Benjamin James Kuper-Smith: Yeah. And I guess, I mean, so this is kind of the last topic I want to open up. Um, I think a lot of these things can maybe also be addressed with fairly simple things, like it being fine for null results to be published, so then, like, the issue with a lot of failed [01:31:00] replications won't come up.
But this relates to the last point I wanted to make, which was: one thing I kept wondering whilst reading your book is that, you know, you talk a lot about these individual solutions, right? Um, like, you know, we've mentioned enough of them already.
Stuart Ritchie: Hmm.
Benjamin James Kuper-Smith: Um, so I'm not gonna repeat them now, but.
Do you think like a bunch of small things is gonna do the job or does it need like a, a revolution in that sense?
Stuart Ritchie: Um, well, it depends on what you mean by revolution. I think, if we move towards making science better, science will probably end up looking very different from how it looks right now.
Like, we maybe wouldn't have the same journal system; we wouldn't have the same system of putting your research online, publishing your research, doing your research, you know, if we're gonna go down the route of registered reports and so on. But revolution in the sense that we should just upend the system right now and completely replace it with new things, I think, is a bad idea, given that we're all scientists here and we want to test whether the things that we're doing will actually improve the quality of [01:32:00] research.
So I think there's a strong argument for having a long period of experimentation with new ways of doing research, new ways of publishing research, new ways of doing peer review, new ways of evaluating research, whatever it is, at whatever point along the line; where we do much more meta-scientific research, collect high-quality meta-scientific data, and really test, whether it's in randomized controlled trials or some other clever way of testing, um, new ways of doing science, and how robust, replicable, important, or whatever, each research finding is.
We do not want to end up in a situation where we institute a new system which has all these unknown knowns or whatever, uh, unknown unknowns.
Benjamin James Kuper-Smith: Yeah, unknown unknowns.
Stuart Ritchie: Which ruin science in, perhaps, not exactly the same way as it's being ruined right now, but in a different way entirely.
So, you know, as I say, we're all scientists. Let's do some research on what makes things better, rather than having a massive revolution and [01:33:00] just changing it to the way that we want it to go right now. Let's be humble about the fact that we need to test that too.