Eiko Fried is an Assistant Professor of Clinical Psychology at Leiden University. He recently published a target article in Psychological Inquiry about the lack of theory building in network and factor models, and how this impedes progress.
In this conversation, we talk about that article, problems with theories in psychology, Eiko's general approach to science, and much more.
BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. New conversations every other Friday. You can find the podcast on all podcasting platforms (Spotify, Apple Podcasts, Google Podcasts, etc.).
Timestamps
0:00:04: Eiko's photography
0:03:33: The Lancet Psychiatry profile about Eiko / being a generalist
0:15:42: Eiko's "No Committee"
0:26:33: Begin discussing Eiko's paper "Lack of theory..."
0:49:55: Theories don't have to be correct
0:53:02: Model comparison in network and factor models, and constraints of the scientific (publishing) industry
1:14:14: Useful fictions in science
1:22:03: Writing critiques without pointing fingers
1:25:03: Paul Meehl
1:28:09: Education in Psychology
Podcast links
Website: https://bjks.buzzsprout.com/
Twitter: https://twitter.com/BjksPodcast
Eiko's links
Website: https://eiko-fried.com/
Google Scholar: https://scholar.google.de/citations?user=DUK0qQoAAAAJ
Twitter: https://twitter.com/EikoFried
Ben's links
Website: www.bjks.page/
Google Scholar: https://scholar.google.co.uk/citations?user=-nWNfvcAAAAJ
Paul Meehl's lectures on YouTube: https://www.youtube.com/playlist?list=PLzRWx56_mpAT5yWRI-po-ybK9uAyZNX_z
References
Borsboom, D. (2013). Theoretical amnesia. Open Science Collaboration Blog.
Feyerabend, P. (1993). Against method.
Fried, E. I. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry.
Fried, E. I., Greene, A. L., & Eaton, N. R. (2021). The p factor is the sum of its parts, for now. World Psychiatry.
Frith, U. (2020). Fast lane to slow science. Trends in Cognitive Sciences.
Kellen, D., Davis-Stober, C., Dunn, J. C., & Kalish, M. (2020). The problem of coordination and the pursuit of structural constraints in psychology. PsyArXiv.
Kendler, K. S., Aggen, S. H., Werner, M., & Fried, E. I. (2020). A topography of 21 phobic fears: network analysis in an epidemiological sample of adult twins. Psychological Medicine.
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology.
Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry.
Meehl, P. E. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports.
Morgan, J. (2019). Eiko Fried: organising incoherence with models, networks, and systems. The Lancet Psychiatry.
Smaldino, P. E. (2017). Models are stupid, and we need more of them. Computational Social Psychology.
Yarkoni, T. (2020). Implicit realism impedes progress in psychology: Comment on Fried (2020). Psychological Inquiry.
-
[This is an automated transcript with many errors]
Benjamin James Kuper-Smith: [00:00:00] By the way, that was on your, uh, on your website or blog. And I was curious, are you, are you still taking photos or is that something that's from the
Eiko Fried: past? I wish, I wish Facebook keeps showing me these, like, you know, you took this picture seven years ago, here are 500 likes, and I'm like, I should do this again.
And, uh, no. My camera has been lying, uh, dormant for five years now, I think. Just too busy. Completely, or? Yeah, nearly completely. Sometimes I take it out if a friend of mine asks for a CV photograph or something, and sure, I can do you that favor. But, um, no, I haven't been traveling with it. It's a big backpack, and there's six or seven lenses in there and two cameras.
So it's also a bit of a commitment to, to bring, bring my camera in that, in that sense.
Benjamin James Kuper-Smith: Yeah. I'm especially asking because, not for a long part of my life, but for a few years I actually took quite a lot of photos. And one thing I [00:01:00] actually, I mean, this was mainly also because I was a student and I just had zero money.
Uh, but I always had one camera, like one or two lenses at most, which made it easy to, you know, it's like half of a backpack or something.
Eiko Fried: Yeah. I think the last trip I did was actually, sorry, uh, it was, um, Arizona. Between two conferences I had six weeks of time, and so I rented a car and did New Mexico, Arizona, Utah, Colorado, California, Nevada.
Benjamin James Kuper-Smith: Landscape then, or just,
Eiko Fried: Yeah, mostly.
Mostly landscape.
Benjamin James Kuper-Smith: Yeah. I mean, I've, I've never been to those areas, but it sounds like you get some pretty good landscapes.
Eiko Fried: It's amazing. I had no idea Arizona was that beautiful. I really did not know that. So brilliant.
Benjamin James Kuper-Smith: Yeah. But is that something like photography? Is that something you did seriously, or, I mean there's, I guess there's different levels of seriousness, right?
Eiko Fried: Yeah. I was not a professional for sure, but I, I did have a few exhibitions. I made some money on the side [00:02:00] while studying, uh, you know, doing some wedding photography or other things that pay fairly okay. I even got to travel a little because people invited me to, you know, they, they, uh, paid the plane ticket to fly somewhere to do the wedding photography.
Benjamin James Kuper-Smith: Oh, really? That's cool.
Eiko Fried: Yeah, that was nice,
Benjamin James Kuper-Smith: But it just doesn't, uh, you lost interest, or it just doesn't fit into the rest of your lifestyle?
Eiko Fried: My, my parents were, um, super supportive in that I could, you know, I played the cello and the piano and I had, you know, volleyball and table tennis and TaeKwonDo, but at some point you just need to decide what to do.
And the same goes for my hobbies. I, I, um, also used to sing quite a bit. Even after I left Berlin, we used to travel quite a bit and then give concerts, and I can't do that, and photography, and academia, and, you know, role playing and board games and being a passionate cook, all at the same time. That just doesn't pan out.
So I, uh, I dropped the photography at some point and I, I regret it. And maybe I'll pick it up again at some point.
Benjamin James Kuper-Smith: Yeah. The thing that you [00:03:00] have to decide on what to do is something that I'm still trying not to learn. That's something I'm still trying to, what's the term, uh, rebel against or whatever.
Not like in an active way, just doing it and realizing like, oh, this isn't working.
Eiko Fried: I, I would say my career is a good example that I haven't learned this lesson either. Given the sort of width of the projects we've been doing in the last three or four years. There's really no specialization anywhere in there.
It's quite all over the place.
Benjamin James Kuper-Smith: Actually, that does relate a little bit then to one of the questions I actually had for early on, which was: there was a profile written about you in The Lancet Psychiatry. Uh, actually, first question, how does that kind of thing happen? Did they just contact you and say, hey, Eiko, we wanna write about you, or,
Eiko Fried: Yeah, the, uh, journal reached out to me. Um, I think I met the editor as part of a meeting at the Wellcome Foundation in the UK.
Benjamin James Kuper-Smith: Mm-hmm.
Eiko Fried: Um, [00:04:00] I'm not sure if that's the reason why they reached out to me. I doubt it, but, but, um, maybe I, I don't know. They reached out to me and they had a professional, um, journalist, uh, do that, write that piece for them. So an external person. Um, so making sure that the journal itself has no sort of conflicts or anything like that.
And that was quite scary, to be honest, because I, I am not used to giving interviews in the first place, and if they happen, they are about my work and not about me as a person. And so in that interview, the journalist, who was very kind and super professional, asked, you know, about my life and the sort of things that happened before my career, and, and motivations for doing my work, and that's not something I commonly get to answer.
So that was a bit scary.
Benjamin James Kuper-Smith: Okay. When was that?
Eiko Fried: One and a half years ago, I think
Benjamin James Kuper-Smith: One and a half years ago, two maybe. Okay. So you are, you are trained now for doing this podcast?
Eiko Fried: Yes, I'm, uh, a perfectly prepared professional. These, these 45 minutes have, uh, given me all the training I require for [00:05:00] any further interviews.
Benjamin James Kuper-Smith: Yeah, but I mean, it did seem, I mean, did they just, uh, I dunno, is it a common thing to have these profiles written? Because I can't remember seeing many, really, but I also don't really read journals in that way, right? Like, I just read the papers, so I don't know.
Eiko Fried: No. I, I hold the, uh, the editor in chief at The Lancet Psychiatry in really high regard.
I think they've been doing some, some really progressive things in the journal, and yeah, featuring early career researchers in profiles, I think, is a very valuable thing to do. Obviously I, I greatly benefited from it, so I'm not gonna be the one being critical about that. But I, I think it's a nice idea, um, featuring sort of more, more younger folks and their work and their motivations for doing their work.
It's a different format and yeah, you're right, Ben. It's not commonly done as far as I'm aware.
Benjamin James Kuper-Smith: Okay, cool. So the, the quote, uh, rather, you said something that reminded me of the [00:06:00] quote, which is about this almost not deciding on a single thing to do, but doing multiple things.
Uh, the quote is: "His strength lies, he believes, in the breadth of his interests and not so much in the depth of his knowledge in any given area. He makes a point of saying that he's not an expert in any one thing, but he has a knack of translating different ideas and theories across disciplines."
Eiko Fried: That sounds nicer than I put it, uh, in the interview.
Benjamin James Kuper-Smith: Did you just say, "I dunno what I'm doing", or,
Eiko Fried: Uh, I think I said, um, because I think that the question was what my areas of, of research are, and I said I'm sort of a clinical psychologist by training, which means I have a research PhD in clinical psychology, but I, uh, I have no education in seeing patients.
I'm not a psychotherapist, which requires a very different education in most countries, at least in Europe. And I'm also a bit of a methodologist, because I worked in methods departments during my four years of postdoc. And so I think I, I said to the journalist that four days of the week I'm neither, [00:07:00] and there's this one day a week where I think I'm both, um, and I think he translated that very kindly into that
I'm a, I'm a generalist. There was a, there was a grant I applied for maybe two years ago, and that was the first grant I really felt comfortable applying for because the idea was that you hire five specialists from five areas and you sort of are the bridging node and you get to ensure that they can talk to each other through you or with you.
And, um, I think that's something I could do.
Benjamin James Kuper-Smith: Is that the grant that you then got, the ERC grant? Or is that,
Eiko Fried: No, I, I ended up not applying for this grant actually, in the end, uh, due to time constraints. But that was, these grants are overwhelming, and I usually feel unqualified to some degree, but this one I felt comfortable about, because that's something I can do.
I can bridge people's experiences and, and research areas. Of course not all of them, but between clinical psychology and methods, I think. Yeah, it's one of the reasons that the tutorial papers we've [00:08:00] been writing are quite well cited, I think: because we write them together with specialists from different areas and then we do some science translation. And it really helps, having struggled with statistics in my, in my PhD as a clinical psychologist, to sort of have an idea what education people have who read these papers, and where they come from and what they struggle with.
Like, obviously I don't know this for, for everybody, but I, I know the general concerns, and that helps a little, I think, in translating these things to more, um, non-statistical audiences.
Benjamin James Kuper-Smith: Yeah, definitely. I mean, I've, I've tried to learn some maths, and you have some books that, you know, it's this, like, introduction to maths or something, and the first sentence is, "let S be a set of blah, blah, blah".
And it's just like, no, no, this is not, this is not how you start for people who dunno what they're doing, right? This is,
Eiko Fried: I had the same, uh, I had the same with Bayesian introduction papers. There were quite a few sort of low-key "this is a simple introduction" papers, but I was lost in the second or third paragraph already.
So I, I, uh, think we can do [00:09:00] better. Um, and obviously there are great intro-to-Bayesian-statistics papers now at this stage, but in 2012, '13, '14, when I, when I was trying to understand, I found the sort of easy intro papers not very easy to read myself.
Benjamin James Kuper-Smith: Yeah, that just makes you feel even dumber. It's like, here's the easy introduction, and it's already way too hard. But say, being a generalist, is that something you planned on doing? 'Cause if I remember correctly, it said in your CV, for example, that your Bachelor's or Master's project was already on depression or something. Yeah. Yeah. So it sounds kind of fairly focused from that angle.
Yeah. I'm curious whether that kind of just happened or you wanted it to happen, or
Eiko Fried: I've never given that too much thought, honestly. I'm somebody who, uh, is chronically unworried about the future to some degree. Like, it'll work out. And yeah, that, that doesn't mean I don't worry, but generally, yeah, I don't worry too much about two years down the road.[00:10:00]
So I didn't really plan things too well, I admit. I was really interested in the theory of evolution, uh, throughout my studies, in biology, and I still think it's really relevant for psychology as well: thinking about where emotions come from, what purpose they might have served at some point. It's a bit of a difficult area to navigate, because not all the research is happening at the most thorough or apolitical level in, in evolutionary psychology.
So maybe we, we best don't get into this, but that's a sort of general interest that I've always had. And no, I think I try to pick up skills that I need to answer questions I have, and during my clinical PhD it became pretty obvious to me that, um, I want to work on sort of elements of psychopathology rather than these big diagnostic categories.
For example, I wanted to work on symptoms of depression, insomnia, suicidal ideation, sadness, rather than on depression as a big yes-no category that, that has been criticized so much by, [00:11:00] uh, by folks in our field. But that requires multivariate statistics, because, because all of a sudden you have 10 or 15 dependent variables and not one. And, uh, so I think the, the methods that I taught myself during my PhD were just necessary to answer my questions.
And then I went into a postdoc in Belgium, uh, in a statistics or a quantitative psychology group, uh, because I wanted to pick up those skills to sort of get clinical answers, um, for the data that I was working on. So it's mostly path dependence,
Benjamin James Kuper-Smith: So it's just kind of having a question and following that, rather than saying, I work in this area or in this field, or whatever.
Eiko Fried: Yeah, that's been the case for me at least. Yeah.
Benjamin James Kuper-Smith: How do you, so this is now, uh, almost something that I find tricky, because I, I seem to be doing it somewhat similarly, in that I have something I'm interested in and I have to figure out, you know, how to do that. Uh, and it often [00:12:00] leads to, so the stuff I do relates to, there's some competition research in there, some kind of behavioral testing, some game theory, um, lots of different things, uh, I mean, in this case also from quite different disciplines.
And it can often, uh, you know, feel like you have to learn everything. I'm not even sure what my question is, other than: help me.
Eiko Fried: Yeah, I feel, I feel you. So, I wrote a thank you note for receiving a, a thing not long ago, and I, I wanted to highlight how challenging it can be to be a generalist sometimes. There are degrees of generalism, but I, I think of myself as a sort of generalist, and I feel really dumb very often in conversations with my colleagues, because I work with specialists.
Often I go to conferences that are on psychometrics or clinical psychology or complexity science, and I'm [00:13:00] often the least qualified person. Uh, you know, many of my colleagues are psychotherapists, and I don't know clinical theories too well. Um, a lot of people at the complexity conferences are physicists or engineers, and I don't know math that well.
And so, yeah, I think there's contradictory advice on career paths. I have some colleagues who say, you know, make sure you have this one thing that you wanna work on, and reject all other invitations, and focus on this, and become a specialist. I don't know if that's my, my advice for folks. Um, certainly if you are not as privileged as I am in being able to put a lot of work, a lot of hours, into your work, I think I would do more focused work.
But I, I love this job. I have the time, I don't have a family here, and I really enjoy working on papers and writing and collaborating and, and talking to really smart people. And so, given the time I have, I'm happy to work on multiple projects at the same time across different areas. And I think the [00:14:00] trick there is to get help from people who know it better than you.
Um, in terms of, like, who does the stats, who does the, the theorizing: don't, don't hesitate to reach out and ask people more qualified than you. That's why I think we should all share our data if we can, because it's ridiculous to assume that you are the most qualified person to analyze your own data.
That, that's kind of a weird assumption. It's a big world. There are many, many really smart people out there who are probably better than you in statistics and, and digging into data, and, um, so
Benjamin James Kuper-Smith: Especially if you are really good at getting data, right? It's a complementary skill. I mean, of course there are people who can do both really well, but yeah, I never thought about it that way.
You know, I thought, like, you know, sharing data for the kind of obvious purposes of, if people want to use it, if they can check it, whatever, all these kinds of things. But yeah, I guess it's also true that, yeah, if you're really good at collecting data, there's probably someone who's better than you at analyzing it.
Eiko Fried: [00:15:00] Yeah. Mental health data, I mean, there are many experts out there who could find patterns in your data that you might be unable to find, right? Um, that might inform things down the road. Um, I think of this as a two-step process: in the first step, we sort of establish robust phenomena, and in the second step, we try to explain them.
And, um, it's not easy to always separate these very clearly, but we're far away from having established all the robust phenomena in clinical psychology that, that we ought to establish. And sharing data, and having people help you, or independently from you, look into these robust phenomena, trying to see if, you know, things replicate, if they can be established, before we then explain them.
Uh, I think that's super valuable.
Benjamin James Kuper-Smith: Mm-hmm. Uh, I'd like to, in a second, start talking about the paper, but before that, I just have one last question about, about your no committee. So when I sent you an email, as you know, for the listeners: I invited Eiko and said, would you like to be a guest?
And Eiko said [00:16:00] something to the effect of, uh, "I agree to too many interesting things, so I now have a no committee, which basically has to allow me to start something new", or something like that. And then a few days later you got back to me and said the committee allows me to do your podcast, or whatever.
Um, so I'm curious, uh, first, what a no committee is, how it works, who's in it? Um, yeah,
Eiko Fried: Well, I, um, I don't recall where I got the idea from. I'm pretty sure it was Twitter. Uh, apologies to the person who might have told me; I, I forgot who you are. Yeah, I, I read about this idea of involving peers in your decision making for some of the things that you're really enthusiastic about but are not sure if you have the time for. And it's not so much them making the decision, of course, you need to do that, but it's them helping you think through whether that's good for your career, what your goal is here.
Uh, yeah. And, um, I have a colleague, and together we are each other's no committee at this moment. Um, shout out to [00:17:00] Provi. And yeah, sometimes I run ideas by her, and then we, we, uh, deliberate whether that's a good idea or a bad idea. Oh, okay. Basically you play devil's advocate for each other, you know, do you have time for this?
What is it, mm-hmm, gonna bring you, um. And then for this, I decided that I, I don't wanna do it in my scientific time. I want to do this in my private, personal time, because I think it's valuable to, to think about these ideas and, and get your thoughts and feedback and, uh, engage with criticism. And I thought, rather than, you know, watching two Netflix episodes, I'll do this instead.
And, uh, it'll be a nice experience. A good decision in the end.
Benjamin James Kuper-Smith: Now I feel under pressure to, to compete with Netflix. Um, but I'll, I'll try my best. Um, okay. But so, I mean, the one obvious kind of, uh, meta joke you can have about a no committee is that, okay, so now you set up a committee to decide whether you're gonna have more meetings, that kind of thing.
Um, and so I, I mentioned this to my supervisor, Christoph Korn, and his first question was like, okay, how many people are in this [00:18:00] committee? But okay, so it's more like just, in a kind of slightly more formal way, having a friend.
Eiko Fried: It's just a WhatsApp message for me. Yeah, exactly.
I just call it a no committee because I, I thought it's a funny name, but in the end it's a WhatsApp message with a couple smileys and, and memes, and, uh, somebody, yeah, asking about your motivation to do something and, and what you get out of it. And in the end, I'm, I'm not always clear on these things, and motivations can be difficult to disentangle, and it's just somebody helping you with sort of figuring out whether that furthers the goals that you have for this particular thing you want to do.
And I was invited to give a, sorry, go ahead.
Benjamin James Kuper-Smith: Does it give you, uh, an easier way to say no, then? 'Cause you can say the committee said no, rather than, sometimes, I don't wanna do this.
Eiko Fried: I mean, in the end, everybody will know it's my decision, of course. But, uh, but yeah, it can give you some peace, because somebody takes the time to think something through with you, which is really valuable.
I was invited to give a sort of keynote for a conference that isn't really in my [00:19:00] area. And, um, it's also sort of locally circumscribed, the conference, so it's in a pretty small setting. And so I was discussing whether that is valuable for me or, or not with that person, my no committee. And uh, yeah, in the end we also decided I'll do it. But there are these things I go into and I'm not quite sure about, and then I get some feedback.
I recommend it. But leave Provi alone, she's mine.
Benjamin James Kuper-Smith: Okay. Yeah, she's not gonna be everyone's, everyone's no committee. Uh, but so, again, you know, all these questions are fairly selfish. So I've already noticed, like, I'm technically now in my third year of my PhD out of four years of funding.
And I've already, like, started some side projects, and then you realize side projects can take up a lot of time. So, uh, like with your no committee or even without, how do you evaluate whether to do something or not? Like, what are some of the decision criteria?
Eiko Fried: I, I [00:20:00] would ask the other way around, what do you want to achieve with a project?
I wouldn't even think about the decision criteria. My first thing would be: why do you want to do that? Uh, because I want a paper? Then you need to evaluate if you'll get a paper quickly in this project. You know, is the constellation of authors good, are the data ready, and, you know, will there be trouble down the road with pre-registration?
And is it just an invitation to a special issue where the reviewers might not be as harsh? Yeah, go for it. You know, then you get a paper out of it. But is your goal to learn something, or to, I don't know, make people aware of your presence, or get to meet awesome researchers? So evaluate your project based on, on what you want to achieve would be my, my, um, recommendation.
And it's not always straightforward to disentangle why you actually want to do something, right? I, I will never forget, um, Marcos Mocos, a clinical psychologist in Munich whom I [00:21:00] had as a sort of clinical psych teacher in my master's, who always said: we all have multiple competing personality aspects, and in sort of healthier folks these are more integrated than in some folks who have fairly severe mental illness, for example. And, um, the trick is to disentangle, you know, all these different parts. When you can't decide what you want to do, it's not because you don't have an opinion, it's because you have multiple opinions, and you need to figure out, you know, which path you follow, if that makes sense.
Benjamin James Kuper-Smith: Yeah. But it sounds like it's very much a, yeah, like, what do you wanna get out of it? Why would you be doing this? Um, the good old question of why.
Eiko Fried: Yeah. And then you sometimes need to admit to yourself: I wanna do this because I want to be on a paper that is highly cited. How does that feel? But it's good to know why you want to do this, or why you think you want to do this, and then evaluate whether the project, under these constellations, will be contributing to your goals in that regard. My supervisor [00:22:00] in Leuven, Francis Tuerlinckx, always said: if you wanna know how long something takes, you multiply the time by two, and then you go up one order of magnitude. Um, and, and I mean, that's a, you know, big, big number.
But yeah, things take longer than you think. But I also think that advice on this never helps. You just need to do 10 projects and you need to see that they all take longer, and then you can learn from that.
Benjamin James Kuper-Smith: Yeah, I think there's some stuff you have to just learn the hard way. And I mean, maybe there are some smart people who don't have to do it that way, but in, in my case, the funny thing is that, um, I mean, this is a different topic and I've talked about it before, but, like, uh, I started a PhD once and then quit it, and it was mainly for the topic.
Um, but one thing that I also found quite annoying, not annoying, but that didn't go the way I wanted, is that my supervisor was very clear in terms of saying, like, no, no, this is your project, you do this, you're not gonna start with the side projects, right? Because they take up too much time. And at the time I was like, ah, whatever.[00:23:00]
Now I've done that, and I've noticed, okay, he was just completely right on that. So yeah, at least in this case, I'm not able to learn from someone just telling me; I need to suffer the consequences. That's way too
Eiko Fried: traumatic. Yeah. But I, I think you can, uh, as a supervisor, involve your students in the decision making process, right? I have a grad student here now who has lots of offers for side projects, because she's just super statistically skilled, and I have lots of offers for her to do side projects too, because I get asked to do things I don't have the time for, and she would be very suitable for that. But she can't do 30 papers in four years.
And so together with her, we sit down and we think about whether it's valuable, how much it would postpone other projects. And, um, I think you can involve people in that process if they, in the end, believe your judgment of sort of the time allotment this will take. And it takes more time than you think it does.
Um, I ended up committing to, uh, sort of being the statistics guy on, I don't know, two dozen papers maybe, throughout my postdocs. And [00:24:00] it felt amazing that people reached out to me, you know, it felt nice. I had zero citations and like four papers and nobody knew me, and then people wrote me, like, hey, would you help us analyze the data?
And I was like, oh yeah, that sounds awesome. But what happened in the end is that for, I would say, at least half of these papers, I had to, had to considerably rewrite the introduction and the discussion, because people would start drawing causal inferences that didn't follow from what, what I did in the analyses.
And I just couldn't let that go, because I'm on the paper, and I don't wanna be on a paper where people draw inferences that don't follow from the data and so forth. And so it ended up being a lot more work, because, and then people said, yeah, why don't you, you know, help us with putting this the way you want it.
I was like, yeah, I guess you're right, I should do that. But then you end up writing the papers. And, um, I mean, no harm, no foul. I, I don't regret that. And, uh, nobody's drawing sort of inferences that don't follow out of malice; you know, I, I don't fault my co-authors for this. [00:25:00] But it was an experience that taught me: if you want to be the methods person on a paper, know that you're not gonna be only the methods person in most cases.
Benjamin James Kuper-Smith: Yeah, yeah. Yeah. And I feel like, I mean, for me it was, like, two side projects during my PhD, and in a way I feel like I've really learned about different kinds of research. This was maybe coincidence, in what I happened to have as these side projects. But for me, doing them and being frustrated by them really clarified what I don't wanna do. Especially since I, um, you know, get very quickly enthusiastic about something.
So if I have a reason not to do something, that's usually very valuable to me. So, yeah, in a way I think it's good that I kind of did something and, yeah, learned that I don't wanna do it again.
Eiko Fried: Yeah. It's learning. Right. Um, I, I've also, I have a list of collaborators.
I sort of just [00:26:00] really, really enjoyed working with, no matter what paper we worked on. And so I will also prioritize working with them in the future because it gives me joy. And, um, yeah, I wouldn't work on any topic, of course, but they all do work that is related to the work we're doing here.
Yeah, they're just people you get along with really well. And to me that's important for collaborations, because I end up talking to people every day, you know, like late at night or early in the morning or, I don't know. Yeah, there's a lot of people contact in science and in collaborations. And then, yeah, why not work with people you really enjoy working with?
Benjamin James Kuper-Smith: Yep. Okay. Should we talk about the paper?
Eiko Fried: What paper, Ben? What paper?
Benjamin James Kuper-Smith: I don't know. Just any, let's just pick one at random. Yeah. So you had, um, I'm not entirely sure how to call this. Uh, it's a, I mean, the, it's a, a target article, responses and response to responses. I quite like the format. It's not, it doesn't seem to me to be super common.
[00:27:00] Um, but actually, uh, sorry, just how did this come about? Is that the same thing? You send it to the journal and say like, I'd like for people to comment on this or, yeah.
Eiko Fried: Yeah. So there are not many journals in sort of psychology and adjacent areas where you have this, this format where you write a longer thing in some more detail and then the journal invites multiple respondents to this.
Um, and then you get to write a rejoinder. I mean, it happens in some journals sometimes, but I think only BBS and Psychological Inquiry, the second of which is where I published this paper, have this as a standard format. And I really love this format for contentious papers, opinion papers, broader perspective papers. And, um, yeah, instead of getting reviewers, you get comments basically from people, which get published after sort of basic quality checks, of course.
And I, um, yeah, I'm incredibly grateful that this could be published there, because the commentaries obviously made the paper stronger in a sense, and gave the [00:28:00] paper more attention. It's, uh, a very humbling experience to get commentaries by the very people whose work 10 years ago, uh, inspired me to think about all of these things.
Yeah, Paul Meehl, for example, his work has sort of greatly influenced my career trajectory in the last five years. And, um, yeah, it's humbling to get thoughts of those folks on your own paper.
Benjamin James Kuper-Smith: Mm-hmm. So, but so is it then that you write the article, it gets peer reviewed, then it's kind of a finalized version, and then people get to comment on that?
It's, it's still like that, or,
Eiko Fried: yeah. I can only comment on this particular process here. Uh, I don't know what the general procedure is, but yeah, you, you send it in it's quality checked, uh, to some degree, uh, to varying degree, I guess. And then it goes out to commentators and. Then you get the comments back.
There's also some quality control, I'm sure, for the, uh, commentaries, but I wasn't involved in that. I submitted this to PPS [00:29:00] first, actually. It was rejected there with an extremely kind, long, detailed response by the editor saying, we just had a paper that was super similar, and it's really a tough call between those two.
We can't have two because the topic is too similar. And so I went to Psychological Inquiry, and I'm very grateful that the editor took the risk to publish it there. And yeah, then you get to write the rejoinder after you see the commentaries. That was the biggest struggle for this paper. I think I wrote the paper over five or six years.
It's my longest sort of project. It, uh, was five or six papers that all went into one in the end, that all merged together in some way, which was nice. And then I think I had two or three months to write the response, but some of the commenters were quite late, so I found it really hard to respond to, I don't know, 140 manuscript pages in 20 manuscript pages. Um, yeah, it was difficult. How do you do that? Do you do [00:30:00] point by point? Do you pick out larger themes, meaning you have to ignore some things? Of course, yeah. In the end, I chose a few themes and tried to write a paper that can be read standalone.
Uh, my response, the idea is that people can just read the standalone response also and get something out of it, rather than having had to, to read, you know, my long paper and the seven commentaries
Benjamin James Kuper-Smith: rather than having to read those hundred pages first. Yeah. So one, yeah, one thing I also find when reading these papers is that, um, I mean, I had the same thing with Paul Smaldino, who I interviewed on the podcast, and it's the same thing there.
To me, it seems like when reading these articles, it almost feels like I should do five years of research first and like work this stuff, like into my understanding of science and try out different things and then speak to the person rather than like, shortly after reading it. But that's not really feasible, but you know what I mean.
It's this kind of thing where you're like, oh yeah, let's, let's see how this works in practice. [00:31:00] Um, and how, yeah, because I think especially with these kind of more meta research things, it's often fairly not easy, but often some things are, you know, easier said than done. Um, and then,
Eiko Fried: and it's also learning by doing to some degree some of the things you read.
In 2010, and you're like, yeah, I get this. But in 2014 you encounter an obstacle and you're like, oh, I really get it now. I did not get it in 2010; I got the gist of it, but I missed a crucial point there, because I was just not ready. I didn't know enough about adjacent areas. I often have that when reading up on measurement problems.
Um, there was a paper recently on the coordination problem. Um, David Kellen, I think, is the first author. It might have been part of the Perspectives special issue on theory formation. And, um, the coordination problem is basically: if you try to measure something in psychology, a hidden trait, depression, neuroticism, [00:32:00] and you have a rating scale, one to 10, you know, some observed measure.
It doesn't have to be a linear function between your instrument and the true position of the person on their ability or their score. But that's completely ignored in psychology. Like, you get eight points out of 10 on the depression scale, you're eight out of 10 depressed. But that inference doesn't follow.
There's a function between those two, and we'd have to live in a very bizarre world where this function is always linear for all psychological traits.
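Eiko's point about nonlinear links between a latent trait and an observed score can be sketched numerically. This is a toy illustration, not anything from the paper: the link function and all the numbers here are invented for the example.

```python
import numpy as np

# Hypothetical monotone link between a latent trait (scaled 0..1) and an
# observed 0-10 rating scale. Nothing in the observed scores alone tells
# us which link is the right one.
def observed_score(latent, link="linear"):
    if link == "linear":
        return 10 * latent
    # An equally plausible nonlinear link: the scale compresses the upper
    # range (a ceiling effect), modelled here with a square root.
    return 10 * np.sqrt(latent)

latent = 0.64  # the person's "true" standing on the trait

# Under the linear link this person scores 6.4 out of 10; under the
# nonlinear link they score 8 out of 10 -- yet in neither world are they
# "eight out of 10 depressed" in any privileged sense.
print(observed_score(latent, "linear"))
print(observed_score(latent, "nonlinear"))
```

The two links are observationally identical at the level of a single score, which is the point: the "eight out of 10" reading silently assumes the linear one.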
Benjamin James Kuper-Smith: Yeah.
Eiko Fried: Um, but I didn't get that. I read up on this two, five years ago. I didn't really understand that problem and that paper really demonstrated this using, uh, thermometers and temperature and, and so forth.
And I was like, oh, now I get it. And maybe in five years I'll be like, oh, now I get it.
Benjamin James Kuper-Smith: Yeah,
Eiko Fried: exactly.
Benjamin James Kuper-Smith: Yeah.
Eiko Fried: Yeah.
Benjamin James Kuper-Smith: Although I really like that, when you reread something and you realize, oh, there's more depth to it than, [00:33:00] uh, is recognizable at first glance. I think that's when you know you've got a really good paper.
Eiko Fried: Agreed. Yeah.
Benjamin James Kuper-Smith: Yeah, so, so to talk about the paper a bit more specifically, um, or maybe can you just give a brief summary kind of, I don't know, app, like a one minute abstract or something?
Eiko Fried: How much time do I have? Um,
Benjamin James Kuper-Smith: yeah, yeah, I know, I mean, I think the, the general assumption is kinda that people have read it and then got it, like once you've read it, uh, might be completely wrong.
Eiko Fried: Um, I think in the pa, so I was frustrated for a long time with interpretation of, uh, structural equation models or factor models in the published literature, personality, but mostly clinical psychology, uh, not about authors, not about people, but about some of the inferences across the board. And I've worked in sort of network psychometrics applications through [00:34:00] clinical psychology, and I started reviewing a lot of papers 2000 15, 16, 17, and I became equally frustrated by some of the inferences that people withdraw from these papers that didn't follow.
Not in a sort of like pointing fingers kind of way, but in a way like I, I don't know, in this discipline, I, I have a bit more insight than some other folks who, who, who weren't privileged to do a postoc with any bors boom and sort of be part of the early days of this development and sort of being up to date about the news art packages and sort of the, you know, really smart people giving talks in then his lab group about all the problems of this and so forth.
So I, I think I had a bit of better bird's eye view and it sort of frustrating me. And so I wanted to write on this and the problems, I realized that some point after writing multiple drafts for separate papers are kind of the same about disciplines, the factor analytic literature and the network analytic literature.
Because in the end you're just throwing models at data. That's why they're called data models in other areas than psychology. And we don't use the term very often, but [00:35:00] um, yeah, we use data models to learn something about phenomena. And that's sort of a weird thing because data models can tell you something about data.
But phenomena are not data; phenomena are data and noise, or, you know, data and other things that come into play here. And so I became interested in inferences and abductive inference, uh, Brian Haig's work. How do we learn, you know, about theories from data? What are the steps you need to take there?
How can data models or statistical models help us with this? What kind of assumptions do you need to bring into your statistical analysis to then learn something from your data? What do these assumptions look like? And, um, yeah, in the end I wrote a paper on three problems that I see. The first is sort of conflating data models and phenomena, or statistical models and theoretical models, if you want.
The second, maybe to give you an [00:36:00] example in our field: people often fit network models to data, and then you get sort of a nice picture with a lot of nodes and connections between nodes. And then the idea is that if a node has many, many connections, uh, for example in clinical psychology, it might be a really causally relevant node.
It has many connections, so it might influence a lot of other things in the network structure. And if you intervene on this node in a psychotherapeutic process, for example, you could reduce a lot of other things in this, uh, network of that person. But there are, I think, seven or eight crucial causal assumptions you need to put into this to make that work, none of which are usually explicated.
And, uh, yeah, it just doesn't follow because we don't even know if a network process generated the data in the first place. It might have been a completely different causal process where, you know, none of this pans out at all. So that's the first point, uh, sort of conflating these, these two [00:37:00] worlds. And I think psychology is bad at that.
Or vulnerable, because we use the word model in a very unclear way. I think statistical model and theoretical model are different. And nobody ever taught me that in my education, so I had to learn that the hard way. And I certainly didn't use that term very well in my own work until 2018, I want to say.
The second problem is one of what I call latent theories, where people have clear causal views on their data, but they're not explicated, and that leads to problems. Or, related, when people use statistical models that bring in assumptions, but that's not explicated. And I think in psychology we all know Cronbach's alpha.
I think that's a really good example of this problem, where when I submit papers, I'm often asked to report Cronbach's alpha for my instrument or my scale. What is sort of the reliability, or, statistically speaking, how strongly all my items are correlated [00:38:00] with each other. Um, and then sort of a high correlation here means good scale and low correlation means bad scale. And that's sort of generally accepted: editors, reviewers will point out your scale has low alpha and that's bad. But that only follows under one single causal model out of like a million causal models out there. The idea that all the items need to be highly correlated follows from a causal model where we have one common cause and it causes all your items.
And yeah, then an item that sort of has a low correlation with the others means it doesn't indicate your common cause very well. You might want to drop that. But there are many other possibilities for where our items come from in psychology, um, including, like, a very simplistic network model where items cause each other, and then nobody in the world would argue you should drop an item that is not strongly correlated with the others. It doesn't make any sense.
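The alpha point can be illustrated with a small simulation. A hedged sketch, not anything from the paper: the sample size, loadings, and path coefficients are arbitrary choices. The same alpha statistic is computed for data from a common-cause world, where "low alpha = bad scale" actually follows, and from a simple causal-chain world, where it doesn't.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 6

def cronbach_alpha(X):
    # Standard formula: k/(k-1) * (1 - sum of item variances / variance of sum)
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# World 1: one common cause drives all items -- the causal model under
# which "high alpha = good scale" is justified.
f = rng.normal(size=n)
common = 0.8 * f[:, None] + 0.6 * rng.normal(size=(n, k))

# World 2: a simple causal chain; each item is caused only by the
# previous one, and there is no common cause anywhere.
chain = np.empty((n, k))
chain[:, 0] = rng.normal(size=n)
for j in range(1, k):
    chain[:, j] = 0.5 * chain[:, j - 1] + np.sqrt(0.75) * rng.normal(size=n)

# Alpha is high in world 1 and mediocre in world 2 -- but in world 2 the
# "worst" item (the first one, weakly correlated with the last) is the
# causal source of the whole chain, so dropping it would make no sense.
print(cronbach_alpha(common), cronbach_alpha(chain))
```

The statistic is the same either way; only the (usually unstated) causal model decides what a low value means.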
It doesn't make any sense. And the third point is what I call weak theories. And that's, maybe social psychology is a really good [00:39:00] example here where people have made claims, but they're so vague that it's really unclear if the data actually speak to corroborating that claim or rejecting that claim. Uh, Paul Mill has sort of written about this for, for many decades and yeah, influenced my thinking on this a lot.
And so I write about these things and sort of how we can move forward and fix them. That was a very long summary. Sorry, but it's a very long paper.
Benjamin James Kuper-Smith: Yeah. Yeah. I was also surprised how log it took me to read. I thought like, oh yeah, I read this in like an hour or two. It's like, Nope. Took longer than that.
Eiko Fried: Apologies. It took me six years to write, but I have some respect.
Benjamin James Kuper-Smith: It's fine. It's fine. Um, I'm fine with the compression from six years into more than one hour. Yeah. I mean, so I think maybe we can, uh, then later talk a bit more about the individual problems, but just a few more, kind of more broader things.
Maybe the first is that, I mean, in this paper you're just talking about, um, or [00:40:00] you're restricting your examples and discussion to these factor models and the network models. But this is a more general problem, right? You're using those examples to make a broader point about the way theories are done in psychology, right?
Eiko Fried: Yeah. In the end, it was a question of how to translate these ideas into an, to an audience, basically. And I, I figured, uh, the network literature has received lots of attention in the last 5, 6, 7 years, and every psychologist has run a PCA or a, you know, fact, a CFA somewhere done their career. And, um, often without paying much attention to what's happening there and, and what causal assumptions, you know, could be put in or, or required to get any causal inference out of the data.
And so yeah, it was a practical decision to concern it to these two statistical areas that I also happen to know better than other statistical areas, so I have a bit more, um, authority or whatever to speak to these problems.
Benjamin James Kuper-Smith: Yeah, I mean, when you do this in network stuff, people are more likely to listen than if you [00:41:00] do this in something you've never published in.
Right. It's just fairly,
Eiko Fried: I, yeah, I, I guess, I mean, yeah, just to clarify, a lot of people know a lot more about this than I do, you know, as, as, as is clear from our discussion on being a generalist previously. But, um, yeah,
Benjamin James Kuper-Smith: yeah, yeah. But, um,
Eiko Fried: disclaimer,
Benjamin James Kuper-Smith: uh, one thing you, you, you mentioned in your, uh, in what you just said, there was one phrase that I thought about or that I noticed kind of came up, cropped up in like the entire discussion of the paper, which is something like, without paying too much attention to it.
And, um, so this is, this is a really general point, but. So I was reading the, the paper and, you know, through the three problems you, uh, listed and everything. And at some point I thought like, the problem here isn't really a lack of understanding theory, it's just thinking clearly about what you're doing.
Like the, and I was wondering like, then is this really a problem in terms of not knowing any better, which in some cases definitely is the case. Um, so from Paul's moderna's [00:42:00] work, I've definitely learned stuff that I just didn't know before, but a lot of the things you mentioned here to me seems more a problem of taking the time to actually think about what you're doing and understanding the tools you're using.
Eiko Fried: I'd rephrase and saying, having the time rather than taking the time. Um, I, I don't wanna speak to other people, but at our university, at least in certain departments, you need four or five papers to qualify for a PhD. And that's an informal rule maybe and not written in stone, but that's an expectation raised towards graduate students.
And imagine how much stronger papers could be if you write two papers through your PhD and you pre-register them. You collect clean data that you can share immediately with others, uh, to inform other projects. If you get to have the time to think about all the statistics you want to use, all the assumptions you bring in.
If you can bring in collaborators on expert topics from the outside and you sort of really think everything through with them. Yeah, I think it's a time constraint problem in academia that we've created to some degree [00:43:00] ourselves, and it's kind of hard to dig ourselves out of it. Again, there's this whole sort of slow science debate that we're having right now.
Um, you know, there's this thought experiment: if you could only write one paper per year, would it be stronger than your average paper? It would be much stronger, in my case, for sure. So yeah, definitely it's a time problem as well, as much as it is a knowledge problem. But a knowledge problem is also a time problem.
In the end, I think. So, um,
Benjamin James Kuper-Smith: right, right. If you had infinite time, you could learn everything. Yeah, but I mean, yeah, I'm not like, obviously accusing anyone of, uh, not doing their job properly. That's not what I mean. But it, there was a point in reading this paper when I just thought, yeah, the problem really isn't that people don't know better, it's just that they don't have the time.
And then, you know, somehow to me the proposed solution to the problem becomes quite a different one, right? Because if that really is the underlying, fundamental problem, then, um, you know, I contacted you because I found the paper valuable. So I'm not saying this isn't valuable, but if the [00:44:00] time constraint really is the problem, then, you know, there can be so many papers written about how to do better theory.
It's just if people have like two days to write a paper, then you know it's not gonna get any better.
Eiko Fried: Yeah. I'm not entirely sure. I, I don't wanna speak to others experiences here. My own experiences that I learned a lot in the last few years on this stuff that I thought I knew. Um, the use of like Micro and Borg have written really convincing work on sort of assumptions of factor models and how you can draw best inference, what causal things you need to bring to the table to sort of get most out of your data.
And I've worked with these models for a while and I wasn't aware of that. So I think in part it is understanding. I didn't have any formal stats training, I didn't have any training on theory formation there. There's a lot of things I really didn't know. And then there's care or lack of attention to some degree.
Um, yeah, Jessica Flake and I have written about sort of questionable measurement practices [00:45:00] recently, and in that paper we urge distinguishing sort of malice or nefarious practices from oversight or lack of attention. But in the end, the impact on the literature is sort of the same. And I don't want to guess why people engage in sort of practices that look a little dubious from the sideline.
Um, and yeah, in the paper we say if you're transparent about what you're doing, you're helping us already. Because it doesn't matter what your motivation is, you know, if it's malice or if you just don't have the time to do something properly. If you're transparent, you help us, you allow us to evaluate what you did, and that's the first important step.
It doesn't solve all the problems, but it's a necessary first step. Yeah. Anyway.
Benjamin James Kuper-Smith: one thing I like about the paper is that I think it's very clearly written, but the clarity in writing then and in pulling the problem apart makes it almost seem trivial. Then. Like it's, you know, like the clearer you describe a problem, the more it seems like, well, why didn't people fix this?
Right. So in a way, like the way you wrote about it [00:46:00] might have, might almost by writing well about it lead to me as someone who's not in the field going, wow, that's a obvious mistake. Like, why would anyone make this mistake? The one thing I was thinking of here now is the simulation part you have in here when you say, look, this network model and this.
factor model have the same fit, more or less. But I generated it with this process. Whereas it seems like a lot of people would run a factor model and say, oh look, this fits perfectly well, therefore it is a good explanation. At least in some cases this is what's happening. But the way you wrote about that makes it seem very obvious.
Like, no, no, you have to question that assumption. But I guess in practice it's very hard to actually do that.
Eiko Fried: Yeah. I mean, and, and you never know why papers are written up the way they are. Um, you know, what, what are the constraints, which co-author added what sentence on causal inference in some, and these are the things that send out to me.
Of course, I use a few examples in the paper because we're often told that we're arguing against the strawman here. Nobody [00:47:00] really believes this. But if you look at the published literature at large. Um, people say this very explicitly in, we've had this for intelligence for, you know, a hundred years now, where people say, well, sub tests of intelligence are correlated.
Hence, there is a true, you know, common cause that has a name. It's somewhere in the universe and it causes your behavior on these subscores to some degree. Not everybody believes that, but it is a core belief. And, you know, I think it, it's dangerous because it also aligns humans have biases. We make things a little simpler in our head sometimes.
We like to categorize, right? We like to put things in boxes. There's tons of work on essentializing of, of things we like. Five basic emotions, right? The idea that there are like, you know, a lot of emotions and it's really messy. It is difficult, uh, to, to acknowledge. No, I'm not an expert on this, but it's nice if you can
Benjamin James Kuper-Smith: count them on our one hand.
Makes it a
Eiko Fried: lot easier. I had a class on this, and I think I needed to learn 20 basic emotion theories by heart. You know, white guy in 1921 said [00:48:00] four emotions, white guy in 1932 said six emotions, and sort of, having to learn this by heart killed all my interest in any of this research.
And I thought about Richard Dawkins, who, you know, is difficult for all sorts of reasons. But he once said that, you know, everybody who believes in a God doesn't believe in all the other gods; I just go one god further, basically. Yeah, exactly.
That's a cute way of defining atheism. And I had this with these emotion theories. How can you not see that none of these might be true? That it's just a little more complicated than having, like, x basic emotions. Anyway, I'm getting off topic. Sorry. Um,
Benjamin James Kuper-Smith: no, but I think it relates, um, also to your, I think it relates what you just said relates very well to your third problem about weak theories, because if you have, even if it's 10 theories of emotions, right?
If they're good theories should be able to see which one seems to be most accurate. Right. I mean, that, it seems to me almost like [00:49:00] if you can have, I don't know, 20, 30 theories of emotions, then maybe part of the problem is that no one's really clear about what the theory actually is.
Eiko Fried: Yeah. Or maybe it's a hard problem.
Of course. In addition to that, I mean, I don't want to go into a domain like emotion research, where I have no expertise or little expertise, and sort of trash the people working there. I know there's some very serious work, and people have worked whole careers on establishing theories of emotions.
I just mean to say that the stuff I learned in, you know, my bachelor's and master's sounded a little too easy sometimes. And then the question is also what your goal is. I learned about this while writing my rejoinder; I read, I think, six books on theories and models for my rejoinder, because I just didn't have that much expertise on the difference between these two.
And some of the commentators pointed it out. And, uh, yeah, it was a really busy summer. And I now distinguish sort of a theory as a broad, uh, [00:50:00] big, you know, framework, and a model is more like a concrete instantiation of that framework. I think Bailer-Jones called it a bridge between a theory and reality.
And there's no harm in psychology in simplifying things, you know, to say there are five basic emotions. Even if, you know, that isn't true, it facilitates research, it makes things a little easier to think about. That's super helpful. I don't mind that at all. I don't mind having a diagnosis, major depression, as a useful category, though I think it doesn't turn out to be very useful. But, you know, some categories that aren't true can be really useful.
Um, and I think Alia Coney wrote a commentary to that effect in response to my paper. And yeah, I agree with them. We're both, I think, pragmatists in that sense. And so then the question is, what do you want to do? Do you want your theory to be true, or do you want a model that is useful? They're not necessarily the same thing, and they also ought to be evaluated differently, I think.
Benjamin James Kuper-Smith: [00:51:00] Yeah, I mean, that for me was something that I really learned while reading Smaldino's paper "Models are stupid". The idea that your model doesn't have to, number one, reflect everything, and number two, be correct. It has to be useful. And, um,
Eiko Fried: yeah. Paul has done really good work on this.
I, um, I think throughout my rejoinder I use this idea of a map of a city, a geographical map, as a model, where the map needs to be useful for the purpose you want to navigate the map by. Maybe it's topography; maybe you want to climb mountains to meet your climbing requirements on your smartwatch.
I don't know. Or maybe you want to navigate, uh, an easy route because you want to take the elderly for a walk. All of that you can get from a map. Um, you know, the question is what you want to get from that map. Does it have to be real? No.
Benjamin James Kuper-Smith: Yeah. There's a reason that the London, like you can't, using, for example, the London tube map to get around check by foot is not a great way to get there, but it's a fantastic way [00:52:00] to get around by tube.
Eiko Fried: Exactly. Yeah. I should have used that example in the paper. It's better than my, than my example, than I used. So it's geographically inaccurate because it's banned and shaped in weird ways to make it look nice on the map. But it is a very nice way to get around by two much. And sometimes
Benjamin James Kuper-Smith: it's misleading.
Like there are, I can't remember which two, Tottenham Court Road and Covent Garden or something, that are 200 meters away from each other. And if you look on the map, it looks like, you know, a normal distance, of course, but it could be anything. And if you actually live in London, you know that if you go from one to the other, it's faster to walk.
So in some cases it can even be misleading, but as a whole it's super useful to not, uh, have this geographical realism in there.
Eiko Fried: Exactly. Yeah. It should come with a disclaimer. Uh, the map would use, uh, the, you could make these maps of course, with geographical realism, but it would use a lot of its purpose and utility and using it as a metro map.
Benjamin James Kuper-Smith: Yeah. You couldn't like hold it in your palm and see most of the stations, you'd have to [00:53:00] have this really big map here. Yeah. So, okay. So another maybe I, I should have said that at the outset, but even though I'd, you know, heard of factor models and network models, I've never used any of this. I've, you know, it's the kind of thing I occasionally come across a five seconds.
So it's, there's a decent chance that some of my comments here will. Uh, yeah, just be from an uninformed outsider. But one thing that really confused me, or that seemed surprising to me is that it seems to me at least some of the examples you gave, a lot of the problems that seem to me could have, could be resolved or at least be made more obvious by having some sort of model comparison.
Um, it seemed to me sometimes that people will say, okay, we fit this factor model to it and it fits very well, therefore it's the underlying fact. But if you'd created lots of other models, for example, the network model and all these kind of things, it would've been much more obvious, I think, um, that it's just one interpretation of the whole thing and not necessarily the truth.
[00:54:00] Just because it fits well. I mean, is that not done, or is it done in a different way, or,
Eiko Fried: Yeah. So, um, I'm gonna try to answer by giving an example. There's this, um, idea that psychopathology is organized hierarchically to some degree, you know, with a p factor for psychopathology on top of everything.
And then you have these sort of lower things, like externalizing, internalizing, and then you have the disorders under that, and you have the symptoms. And I, like, I omitted a few, uh, levels here. I think that's a good idea in general. The p factor might be something like an underlying tendency, maybe early risk factors that predispose you to a lot of different problems in life.
We know that's true for early adversity. But weirdly enough, in this area factor models have had a monopoly. So if you read the, I don't know, 50 papers or something on this, all of them use a particular type of structural equation [00:55:00] model, uh, reflective latent variable models in three or four different flavors.
And I have a paper with Ashley Green and Nick Eaton on this that we published in a World psychiatry just a couple weeks ago where we say that. It's not a bad idea to fit a factor model to this dataset, to these datasets, but methodological pluralism is important at some point to establish robust phenomena.
If the idea really is that, that we're in the stage of, of research, we, where we're still trying to establish robust phenomena data rather than sort of explaining them. And that's, that's why we fit these models that are largely exploratory right now. Then fitting one particular type of statistical model out of thousands is probably not going to cut it to establish something robustly.
And, um, I don't quite understand why we, we haven't done this in that area. For example. Maybe there's path dependence, maybe readers will tell you Yeah, but it's not comparable if you don't do it the same way. Um, certainly academia is conservative in [00:56:00] some, some regard, right? You try to change something and you get lots of pushback.
Benjamin James Kuper-Smith: Yeah.
Eiko Fried: I've recently tried to delete one item from an established depression scale, because this item correlates negatively with the other items, so it's really not a good idea to include it. But reviewers treated me as some sort of leper: how dare you touch this scale from 1960 that everybody knows, written by this saint of depression scales, Hamilton.
So, yeah, I mean, there are probably good reasons for this, but it's probably not good for science that we have, um, monopolies of methods in certain areas or certain pockets. Also for network models, like, you know, we should work on developing more network models, and we should ask, are there nonlinear interactions going on between nodes, and stuff like that.
And, um, yeah, pluralism.
Benjamin James Kuper-Smith: Yeah, but it's not routinely done? So, for example, I'm looking at your... well, it doesn't matter what particular network you set up, but somehow my assumption was that [00:57:00] in many of these papers you would, you know, set up all kinds of networks that kind of make sense from different perspectives, and then you just see how they compare against each other.
Is that done or, I mean,
Eiko Fried: no, that's not done. Um, for possibly similar reasons. I mean, one is, papers have figure limits, right? I mean, I'm not justifying why my papers don't have more network models than one or two. Uh, I have some work where we try to replicate them across different data sets, but even there I will use the same method four times in four data sets, to avoid methods being responsible for differences across data sets.
Of course you want to use the same method there. Could I have used three more models with the data? Yeah. So, in my early work on this, the methods were not available, but in the last year or two, uh, Sacha Epskamp and others have developed additional types of network models that deal with false positive rates in different ways, or that, you know, do the estimation procedures a little differently.
And yeah, in my... [00:58:00] we have a recent paper on, uh, phobic fears and panics and so forth that uses all sorts of...
Benjamin James Kuper-Smith: robin or, or whatever his name is. Uh,
Eiko Fried: no, that's, um... I'm now talking about a paper in Psychological Medicine on 21 phobic fears. It's a network model, like an empirical network model, uh, of phobic fears, together with Ken Kendler, who's the first author on that paper, and 7,000 people.
And we just had lifetime experience of phobic fears, 21 of them. And in that paper I fit three or four different types of network models to the dataset. They give us the same result every time. And I report one in the main paper, and I report the others in the supplementary.
Benjamin James Kuper-Smith: Mm-hmm. Yeah. That's what I would've assumed.
Something like that.
Eiko Fried: Yeah. So of course for factor models we can do that. The problem here is, to some degree, that they're all statistically equivalent. In the factor modeling world, for example, you can fit a bifactor model or a correlated factors model or a higher-order factor model, um, you know, and they [00:59:00] all give you pretty similar fit, but they all look structurally very different.
Like, the arrows are drawn differently, and you have a little bit of hierarchy, and so forth. And so all of them are represented well by the data. And then the question is what you can learn from that. But it would help in giving people the right sense: if every single factor model paper had a network model fit next to it, it might have helped with not overinterpreting the arrows in these graphs, which are, you know, derived from cross-sectional data,
um, and rather arbitrary. And the same for network models: people wouldn't have overinterpreted central nodes as something we need to intervene on if, next to that, you put the graph of a factor model, uh, sort of a common cause model where you have a latent variable, to remind folks that we do not know what generated the data.
It might well have been a very different model than the one you're trying to draw inferences from.
Benjamin James Kuper-Smith: Again, I'm not in this field and I'm at the very beginning of modeling, but what does it tell you if all the models fit equally well? It almost sounds like, or [01:00:00] just, you know, just the way you said it, it almost sounded to me like, well, it almost doesn't matter what you want.
You fit it to... no, it's always gonna fit. So...
Eiko Fried: what I'm trying to say is that it's, um, like a regression, right? You can regress A on B, or you can regress B on A.
Benjamin James Kuper-Smith: Mm-hmm.
Eiko Fried: That's just the same. And for factor and network models, every factor model has a statistically equivalent network model,
and vice versa. They are often quite bizarre. So, you know, I can fit a network model to data and then I look at the corresponding factor model, and it has 14 factors and it looks really weird, but it exists. And usually it doesn't look too insane. And just one example... yeah, I mean, I have an example in the paper.
If you have a fully connected network, you get a one-factor model. If you have a one-factor model, you get a fully connected network. That's the easiest case. And, um, these statistical equivalencies are problematic, and they're really well known in the literature. [01:01:00] In psychology, people use bifactor models a lot, but bifactor models have a tendency to overfit data, which has been known for a long, long time, and they have many equivalencies; you can fit different models, um, to the same data set with the same fit, or nearly the same fit anyway.
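The one-factor/fully-connected equivalence can be checked numerically. A minimal sketch, with made-up loadings, noise level, and sample size: data generated from a single latent factor yields a partial-correlation network (the kind a Gaussian network model estimates) in which every pair of items stays connected after conditioning on all the others.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a one-factor model: six items, each loading on a
# single latent variable plus independent noise (numbers are made up).
n, p = 50_000, 6
loadings = np.full(p, 0.7)
latent = rng.normal(size=n)
items = latent[:, None] * loadings + 0.5 * rng.normal(size=(n, p))

# A Gaussian network model estimates partial correlations: standardize
# the inverse of the covariance (precision) matrix.
prec = np.linalg.inv(np.cov(items, rowvar=False))
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)
np.fill_diagonal(partial, 0.0)

# Every pair of items stays connected after conditioning on the rest:
# the one-factor model corresponds to a fully connected network.
off_diag = partial[~np.eye(p, dtype=bool)]
print(np.all(off_diag > 0.05))  # prints True
```

Going the other direction, from a sparse network with no latent variable back to a factor model, is what produces the "often quite bizarre" many-factor solutions described here.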
So, but I don't know if it's only a problem in this field. You can look at machine learning, for example, right? Everybody's doing machine learning these days. But people use one machine learning model; they don't use a neural network and a random forest. Like, you know, at some point you make a choice for a method and you report that.
I've yet to see a machine learning paper that uses, like, six different methods and sees whether the predictors converge or something, because that's sort of, you know, five papers, maybe not one. Or maybe it should be one paper. And, um,
Benjamin James Kuper-Smith: yeah, I mean, that's part of something, uh, I'm thinking about right now too, because, like, one of the things I'm doing, I have some theory and a very basic model, and it seems that the [01:02:00] data fits it surprisingly well, also in a way that the data does allow for the model to be falsified.
Um, but then I'm always thinking, like, okay, I could compare it to, like, you know, all these other models, and I could set up, like, you know, linear, non-linear and all this kind of stuff. So this is something I'm really trying to figure out: is it sensible to put everything you can think of into a paper?
I mean, that is obviously an extreme, and the answer's no, don't put everything you can into a paper. I don't know. Like, since I've been thinking about my own project, I'm surprised that there aren't more papers that sort of say, like, here's the model space, like, the potential models we could have, and we're just gonna simulate all of them, because, like, why not?
And, uh, see how they fit. But
Eiko Fried: yeah, and if you could write one paper per year, you might wanna do that, right? Uh,
Benjamin James Kuper-Smith: yeah. I mean that's kind of what I'm going [01:03:00] for, but yeah.
Eiko Fried: Yeah. No, and it's a viable strategy. It would make your results more robust. You would get more insight, you would learn more, you would get more experience fitting models, which can be a fun thing to do, and helpful.
You get more expertise. But it's just the time constraints, I think. And also, we communicate in ways that are incredibly outdated. Like, if I submit a paper to a psychiatry journal, I have three and a half thousand words. Sometimes I'm even limited in my supplementary materials and what format they need to have.
Um, and so that is the way I can communicate at that stage, right? Of course I can put stuff elsewhere, I can put it on my blog or something, but we communicate in sort of chunks of information. There's no reason that a paper is not in HTML or some other format these days, with interactive graphs on the journal website.
It doesn't have to be a PDF, and it doesn't have to be limited to 3,000 words. And, yeah, I don't know. I hope there'll [01:04:00] be some changes to this as well, because then we could make things more robust. There are PDF formats these days, I saw that a year ago, where you can mouse over something and the graphs change.
Right. So you can have a PDF that says, uh, we used no error control, and then we also used two different types of error control, and you hover over these two, and correspondingly the figure in the PDF changes: how do the results look when I do some FDR correction or some, you know, Bonferroni? Yeah. No reason that, you know...
Yeah. Okay. I'm done with the topic. Sorry. It's so frustrating that I need to write this up in this format. I have two figures to communicate my findings. Why not four? Anyway.
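For readers who haven't met the two error-control options mentioned here, a rough sketch of how the same p-values fare under Bonferroni versus Benjamini-Hochberg FDR; the p-values are invented for illustration.

```python
import numpy as np

# Five hypothetical p-values from one analysis, tested at alpha = 0.05.
pvals = np.array([0.001, 0.008, 0.012, 0.035, 0.21])
m, alpha = len(pvals), 0.05

# Bonferroni controls the family-wise error rate: each p-value must
# clear alpha divided by the number of tests.
bonferroni = pvals < alpha / m

# Benjamini-Hochberg controls the false discovery rate: compare sorted
# p-values to (rank / m) * alpha, then reject everything up to the
# largest rank that passes.
order = np.argsort(pvals)
passing = pvals[order] <= (np.arange(1, m + 1) / m) * alpha
k = np.max(np.nonzero(passing)[0]) + 1 if passing.any() else 0
fdr = np.zeros(m, dtype=bool)
fdr[order[:k]] = True

print(bonferroni.sum(), fdr.sum())  # prints 2 4
```

FDR is the more lenient of the two, so a hover-to-switch figure like the one described would typically show more surviving effects under FDR than under Bonferroni.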
Benjamin James Kuper-Smith: Yeah, yeah. I mean, yeah, maybe that is one reason why I'm asking these questions right now: I'm still in the stage where I haven't had my own idea that I really wanted to do my completely own way [01:05:00] and had to fit into a paper format.
I'm not in that stage really right now. Um, so maybe I'll stop asking those questions once I've realized what it's like to actually write a paper. I'm just not entrenched in, like, the traditions, and, I mean, I guess I am by reading papers, right? Like, I've only been able to read what's been published, but I think in my head it's still much freer than if I'd already published quite a lot.
Eiko Fried: Yeah, there are some constraints in place, and you can navigate them. For example, we wrote this meta-analysis and the journal told us, well, we can't publish it as a full paper, but you can submit it as a letter with 800 words. But it was a meta-analysis, how do you... you know? And so in the end we said yes, because it was a fancy journal and we wanted that publication.
And in the [01:06:00] 800 words we simply link to the full paper. We put the full paper online, we wrote it up as sort of an 800-word summary, and there's the hyperlink in there that'll be there forever. If the internet is still there in 50 years, the link will still be there in 50 years.
And, um, yeah, people can just read the whole thing. And so you can do these solutions, it's fine.
Benjamin James Kuper-Smith: So the letter's more like an ad for the actual paper?
Eiko Fried: and a gateway for the paper. I mean, it's a standalone thing, but for a meta-analysis you really have questions, you wanna know, you want at least a flow diagram to see what the inclusion criteria were and stuff.
We didn't have even one figure in the letter. Um, but I mean, in the end it's also our choice, right? There are journals that have no figure limits; you can opt to publish there. Um, it's just that, you know, if you want to get tenure, sometimes there are considerations in place still, unfortunately, that you might, you know, want to have one or two pubs in prestigious journals, for outdated reasons.
Benjamin James Kuper-Smith: Yep, yep. Yeah. That's all the stuff I'm aware of, but that's still on, uh, on the horizon. [01:07:00] I'm not thinking about it too much right now, but,
Eiko Fried: yeah.
Benjamin James Kuper-Smith: Yeah. We'll see. I don't know. My hope is that, like, there are, you know, more and more of those prestigious journals that are adopting at least some of this stuff. To me it seems, like, for the kind of stuff I do, Nature Human Behaviour would always be a very cool journal, and it would fit probably most of the stuff I'm doing.
And they're fairly progressive, so there is at least the hope that if I have something that a prestigious journal would want, I could even get it in in that format.
Eiko Fried: I think they have a 10,000 euro open access fee, but apart from that it's um,
Benjamin James Kuper-Smith: oh,
Eiko Fried: that's
Benjamin James Kuper-Smith: fine. We'll just pay with my salary. Yeah, yeah, yeah.
Okay. Yeah, we'll see where that comes from. Okay. So let's not talk about depressing aspects of institutions, or whatever exactly we're talking
Eiko Fried: about. I mean, to be fair, I know you want to go back to the paper, but just to conclude this: there's been so much progress in the last [01:08:00] decade, you know, more progress in the last two years than ever before.
So, yeah, I'm quite hopeful about this overall. There's some negative stuff still, but so much progress happening. And overall I think it's a positive story, what psychology specifically has achieved the last two years, with registered reports and new journals coming up and so forth. And, um, in the Netherlands, you know, so many open access initiatives here, and so many new things and new ideas.
There's this law now that I can upload all my PDFs on my website after half a year. Um, and, you know, journals can't do anything about that, um, 'cause it's my work and I get to share it with the public, because taxpayers pay my position. And the Netherlands just decided that's a thing now. And...
Benjamin James Kuper-Smith: but how... actually, I think, did you tweet about this a while ago?
I think, yeah. Um, I was really surprised that the Netherlands can decide that for publishers that might reside in other countries. Like, that's not really how law works, right? Or...
Eiko Fried: I'm not a legal expert on this. But so far Elsevier has not [01:09:00] sued the Netherlands. Um, the most important thing is that the Netherlands has said, you know, if you get into trouble, we will be responsible, not you as an individual researcher.
And they made that point very convincingly, including the head of, sort of, the Dutch, I don't know, NIH or something. It's not really medical; there's a Dutch research organization that is very large and funds a lot of research. And the head of that organization said on Twitter: yeah, this is our position, and we are willing to defend you in court over this position.
So that's cool.
Benjamin James Kuper-Smith: Yeah,
Eiko Fried: that, that's helpful. And then if Elsevier or Springer wants to sue the Netherlands, they can go ahead. I don't know more about it than that; I'm no legal expert, but...
Benjamin James Kuper-Smith: Elsevier probably has more money than the Netherlands. But, uh, yeah, it's still a tall order.
Um, um, actually, one thing, when you were just talking about progress and these kinds of things, uh, and open access things: one thing [01:10:00] that I found kind of interesting, and I think I read it somewhere... oh, I can't remember where it is, it's somewhere in your paper. Um, but I think you're citing something by... how do you pronounce it, Borsboom?
Eiko Fried: Yeah. Borsboom.
Benjamin James Kuper-Smith: I think he might've said something like, um, preregistrations are cool, but they're kind of a sign that the field isn't creating strong theories. Because if you had, like, strong theories that made clear predictions, you would... it would be, you know... right now, I think the point was: we don't trust the theorists, therefore we need preregistration.
Eiko Fried: Yeah. Denny has this amazing 2013 piece called Theoretical Amnesia that I really recommend to everybody. He's a brilliant writer; it's just very fun to read. Uh, it's a short blog post, and he says that, yeah, theorists are not really trustworthy to some degree, because they don't fully tell us what they actually think and believe, and they don't fully write down their theories in a formal way.
And that's why we have [01:11:00] preregistration. It's not about malice, but, you know, if you write down your theory in an equation, in some way that is entirely clear, and you make clear what is part of your theory and what is part of your auxiliaries, which, you know, also need to hold to some degree,
you don't need to preregister. We know what's happening. And of course, in psychology we also have the issue of, what are your stimuli, what are, you know, all the nitty gritty, how do you measure this, and so forth. That can be helpful to
Benjamin James Kuper-Smith: exclusion
Eiko Fried: criteria, because, yeah... and, I mean, in my field, depression, companies have been caught switching outcome measures in clinical trials, because they find something on one outcome measure but not on the other, and so forth.
Right. And so there's value in nailing people down on what they will actually do later on. Um, given that there sometimes are financial interests, for example.
Benjamin James Kuper-Smith: That's what I found interesting. So I also thought, like, what other fields use these kinds of clear registrations ahead of time?
And I mean, [01:12:00] I dunno about most sciences, but the only one I could really think of was medicine and pharmacology and drug development. And there I thought, well, it's clear, because these people have a financial interest in having a drug work, and that kind of stuff. But then again, you know, what is the interest of psychologists, then?
Right? Uh, like, why?
Eiko Fried: I mean, staying in academia, I have many, like getting a job
Benjamin James Kuper-Smith: who
Eiko Fried: could not. Yeah, it's not untrue. We all have conflicts of interest to some degree, even people who have a job, because they want more citations and so forth, and, you know, the human desire for recognition and rewards and awards and all of these things.
So...
Benjamin James Kuper-Smith: yeah, I like it when people put that in the acknowledgements. Something like: we declare nothing, other than that publishing this paper is gonna increase our chances of getting tenure, or whatever.
Okay, so
Eiko Fried: sorry, we got sidetracked talking about open science. That happens when you talk to me sometimes, unfortunately or fortunately. I don't know
Benjamin James Kuper-Smith: that, I mean, that [01:13:00] might also have been me. Like, my education during my PhD: it seems to me the first two years I was really reading a lot about open science, preregistering my experiments, learning to upload my code and all this kind of stuff, right?
And it seems like after about a year and a half or something of that, I went, like, yeah, but something's still missing. And then I came across, not your paper, but these kinds of papers, um, saying that the theory part is actually the problem. So in a way, those seem to be the two big methodological themes that I'm working through.
So I'm always happy to talk about it.
Eiko Fried: I mean, that's a fun career: open science and theory formation. That sounds like a fantastic thing. I wanna do both more, and other things less, to be honest. I think those are both super, super exciting areas to work in, sort of the meta-science of open science and meta-psychology, and, uh... that is my doorbell... and, um, [01:14:00] uh, yeah, theory formation.
I'm in.
Benjamin James Kuper-Smith: Yeah. Yeah, I mean, it's the kind of thing that I wouldn't want to do full time. Um, I think I want to have, like, a particular topic and question. I think it's the kind of thing I'm gonna be very interested in, but probably from the outside. Anyway, there's one thing in the paper... maybe I'll read the part that made me think of the question.
"If the p factor is indeed a singular causal mechanism that explains comorbidity, the next logical steps may be to find p, investigate its neurobiological basis, and try to intervene on it. If instead p is a sensible and useful summary of the problems people tend to have, similar to SES, a useful formative construct that predicts mortality and morbidity, aiming to identify biological markers for p may make as much sense as trying to find the biological underpinnings of SES.
This is because formative constructs are pragmatic summaries of data that can be useful but remain convenient fictions." So this is, like, a smaller part of a point you're making. But [01:15:00] my question when I read that sentence, or those two sentences, was: how do I distinguish a convenient fiction from an actual phenomenon?
Eiko Fried: I mean, I guess it starts with the first step of being humble and acknowledging that the things we work with are convenient fictions. They're models, they're not theories. And I guess that would be my take, and I think that would be a step forward for every researcher. So, okay, an example from my area, or an adjacent area: people take diagnostic categories
that in my field are known to be problematic: low inter-rater reliability, you know, low validity, all sorts of issues, measurement error. They take them, and because they're not from my area, they're from genetics, for example, they sort of unquestioningly pretend these are true things in nature. They're like true zeros and true ones.
Everybody with depression is a one, everybody without is a zero. When, in clinical psychology, it's pretty widely acknowledged that it's [01:16:00] dimensional, and a lot of cutoffs are arbitrary, you know, do you include them, do you not include them, it's sort of on the edge, and there's no true depression as a category.
It's much more complicated than that. But if you then look for the genetic signal of depression by mistakenly pretending this is purely categorical, and everybody with a one in your dataset really has true depression, the same way every gold atom has 79 protons and every non-gold atom has a different number of protons,
then, yeah, you end up drawing inferences that just don't follow, because you mistake convenient fictions, sometimes pragmatic, sometimes useful, sometimes not useful, for the truth™, and, um, that leads you to trouble on the inference road in the end. So I think the first step is descriptive: to say that we're working with a lot of convenient fictions, at least in clinical sciences, and I think it's good to be aware of that.
And God knows clinicians are. A lot of the problems [01:17:00] that my work has pointed out in clinical sciences apply to the way we do research on these diagnoses. For example, um, if you talk to clinicians, they will tell you: obviously depression is not a true category in the universe. It's a convenient fiction that I require for healthcare services, and it can be useful sometimes for transferring patients, saying, you know, the person has A and comorbid B, to give them a rough impression.
But yeah, every person is different. And of course, you know, there's more information than the zero or one.
Benjamin James Kuper-Smith: but isn't the hope of psychiatry to eventually find specific neural mechanisms that actually explain why people have, let's just say, certain problems? Um, so isn't that kind of the inherent hope, that there actually are real biological phenomena?
So I guess my point is just, like, there's this weird, I don't know, in-between stage or something, where at some point you have clear fictions, and at some point you go, okay, I'm assuming this is some sort of neurotransmitter or whatever, and there's too much of it in the brain or too little or whatever. [01:18:00]
Uh, or then you have stuff like, the connectivity of, let's say, a brain area is, whatever, not enough like a small-world network or whatever, right? So, yeah, it's just a very philosophical question: when is something real, and when is it just a construct?
Eiko Fried: Yeah. And it's an important question definitely.
And it's hard to navigate. I don't have a good answer. I think that there are a few fields in my area, or adjacent areas, that have dominated the funding infrastructure so heavily, like neuroscience and genetics for mental health, with very little useful insights, or, not useful, I don't wanna say that, with very little actionable insights.
We have not developed any new drugs, or, you know, we can't help people better than 20 years ago, than if, you know, none of that funding had happened. Doesn't mean it won't pay off in the long run; of course it would be silly to claim that. But so far, in actionable insights for the majority of the large mental health categories, you know, mood disorders or [01:19:00] anxiety disorders, it really hasn't contributed much, I think in part because we oversimplify things. And, um...
Yeah, I don't know, it's a big discussion. Obviously, trivially, there will be neural mechanisms related to how people think, healthily or unhealthily; that's how brains work. But the evidence so far shows that if we find aberrant mechanisms, they are often related to multiple mental health problems.
At the same time, they're sort of transdiagnostic in nature, which makes sense, I guess. And they have low specificity and sensitivity. And that means, if I know, you know, that you have this aberrant mechanism, that tells me very imprecisely about your mental health. You know, it maybe helps me above chance level to predict anything, but maybe only 2% above chance level.
And that means it's completely unactionable. It's [01:20:00] like saying: I made an HIV test, and it's not a coin flip anymore, now it's 52 to 48% that you have HIV. On a population level it might not be useless, but on any clinical implementation level, it's absolutely useless.
Nobody would take the test, because you would need to take it 30 times, or maybe 50 times, you know, to get any insights. And even then it would not be useful. So, I'm a little skeptical,
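The HIV example is a point about predictive value that can be made concrete with Bayes' rule. A minimal sketch; the sensitivity and specificity numbers are invented for illustration, not taken from any study discussed here.

```python
# Compare a diagnostic-grade test to a marker barely above chance,
# assuming a 50% prior probability of the condition in both cases.
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

strong = posterior(0.5, 0.99, 0.99)  # a diagnostic-grade test
weak = posterior(0.5, 0.52, 0.52)    # a marker only 2% above chance

print(round(strong, 2), round(weak, 2))  # prints 0.99 0.52
```

The weak marker moves the probability from 50% to 52%, which is the "not a coin flip anymore" situation described: potentially informative at the population level, useless for any individual clinical decision.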
Benjamin James Kuper-Smith: but, yeah, is the criticism that it's less actionable than current constructs? Because in some sense you could also say, right, like, maybe it doesn't relate clearly to clinical, uh, diagnoses because those diagnoses aren't based on biological mechanisms.
Right? Like, it can go either way, almost.
Eiko Fried: Yes. And I think there's value in finding mechanisms that are not necessarily married to the current DSM. For example, there's this RDoC framework that tries to do this, uh, rather successfully at the moment, I think. [01:21:00] And while I'm supportive, I just think that a lot of these initiatives... so RDoC started with a paper in 2010 saying, quote, that all mental disorders are brain disorders.
I don't think that's helpful, because it ignores environmental and psychological aspects that we know are really relevant for mental health problems. Are they going through the brain in some way? Yes. But they don't originate in the brain, in some sense. So I just think we've been oversimplifying things for a little bit.
Also psychologists. I had the huge honor and pleasure to sit in the same office as a sociologist during my PhD. Shout out to Homan. And, yeah, I noticed how ridiculously little we take the environment into account in psychological research. Um, so we're not better, or, you know, I'm not better.
I'm just... I wish we would all work a bit more integratively.
Benjamin James Kuper-Smith: Yeah. So is part of the solution just trying to be more aware of it? I dunno, I guess it's like, if you [01:22:00] study something for years, it becomes a real thing, right? It feels like you forget that it's a construction, and it's just this friend or enemy of yours, depending on how you relate to it.
But...
Eiko Fried: yeah, there's lots of empirical work on the essentialist bias, that, you know, you work with constructs and then we think they're real to some degree. And adults do that, children do that, uh, researchers do that, I know I do that, everybody does that. Um, it's good to be reminded sometimes that we work with convenient fictions.
Yeah. At least at the construct level.
Benjamin James Kuper-Smith: Yeah. The way you said that makes me wonder what you are. You said something like, children do it, adults do it, researchers do it.
Eiko Fried: I do
Benjamin James Kuper-Smith: it. Do it. Exactly. It's
Eiko Fried: like, okay, well what
Benjamin James Kuper-Smith: are you then, if you're not one of those,
Eiko Fried: I just mean I'm not exempt. Uh, I'm not a special category.
It's just, uh... in the paper, in the theory paper, I pointed fingers, but the finger also pointed at me. I call out my own papers, um,
Benjamin James Kuper-Smith: yeah.
Eiko Fried: multiple times throughout this work. So, really, [01:23:00] writing this paper has been a huge learning experience for me. And, um, I'm really not exempt from this criticism.
Benjamin James Kuper-Smith: No, I think that's... I feel like one potential problem with writing these kinds of critique papers of the field is that it can always seem a bit condescending, or "I know better than everyone else". And I think you navigated that very well by, you know, I mean, not, like, self-flagellating the entire time, but occasionally putting in a sentence or something saying, like, in my papers I also did this thing wrong, or this is a mistake that I've made in the past, or something like that.
I think that's a quite neat way of ensuring that, yeah, it doesn't look like you're condescending and telling everyone what to do, but more like this is a problem that everyone's working through.
Eiko Fried: Yeah. So this stuff makes me sleep really badly, so I, I, uh, try to, you need to point fingers to some degree because other people say.
Well, but what evidence do you have? Uh, I hear so [01:24:00] often these are just strong arguments. Nobody really believes A, B, or C. So in the paper I was sort of just required to say, here's a quote for A, here's a quote for B, here's a quote for C. This is not to call of these paper, these authors, as I say in the paper also, it's just to establish a baseline here that these things are being said and, and written and published when they don't quite follow from the data.
You just need to do that. But this stuff gives me headaches, and I reached out to quite a few of these authors and said, hey, just so you know, this is not about you. It's just an empirical baseline. And God knows I've made these mistakes in my own work.
Benjamin James Kuper-Smith: And were people understanding of this, or...?
Eiko Fried: Yeah, I didn't have any trouble so far.
But of course this kind of trouble usually happens, you know, behind your back. So I don't know. Uh,
Benjamin James Kuper-Smith: Yeah, well, I'll ask you again in five years.
Eiko Fried: "I didn't get tenure because my boss got called about my theory paper." No, I hope not.
Benjamin James Kuper-Smith: Yeah, I think the tone makes a huge difference. The tone with which you write a paper, yeah,[01:25:00]
makes a big difference there. I dunno whether this is something you can comment a lot on, but it seems like whenever you read these papers that criticize theory in psychology, you come across... what's his first name? Meehl is his surname.
Eiko Fried: Paul,
Benjamin James Kuper-Smith: Paul Meehl. It seems like he's always cited somewhere in there, and usually with the best quotes. I've read one of his papers, I think.
It seems like he's coming up now more than he used to in discussions about these things. Uh, do you know much about him or what he did or anything, or,
Eiko Fried: Yeah, no, I'm no Paul Meehl expert, but I've certainly read a few of his papers. I would strongly recommend that all psychologists read at least three or four.
He has, I think, two 1978 papers. He has a 1990 paper. All three are highly cited, highly read. He writes extremely well.
Benjamin James Kuper-Smith: And which ones, which of those? I'll just put them in the description then.
Eiko Fried: [01:26:00] 1978: "Theoretical risks and tabular asterisks: the slow progress of soft psychology."
Benjamin James Kuper-Smith: The two you're citing here are "Appraising and amending theories" and "Why summaries of research on psychological theories are often uninterpretable."
Eiko Fried: Yeah. And then the 1978 paper I just read out loud. These are three,
Benjamin James Kuper-Smith: Mm-hmm.
Eiko Fried: quite relevant ones, I think. So I was incorrect in saying two '78s; it was two 1990s and one 1978. But it doesn't matter. Yeah. So I think he was a clinical psychologist by training, actually.
Um, and a psychologist who thought a lot about how to theorize. He was a prominent critic of null hypothesis significance testing, an early prominent critic. And Perspectives on Psychological Science, the APS journal, is publishing a special issue on Paul Meehl, uh, soon. I think most of the papers are online as preprints.
Some of the papers were actually published in the journal, I think, last week or two weeks ago. [01:27:00] And, um, we also wrote a paper on formalizing theories from sort of Paul Meehl's perspective: why the field is not doing it, how Paul Meehl's criticism lends itself to this solution, why he never suggested it himself.
I learned a lot from reading Paul Meehl. He's fun to read because he's very vocal. That's why the big, sort of, you know, headline quotes are from Paul Meehl. He really didn't hold back. And there's, I think it's on YouTube, a collection of his lectures, and I can highly recommend this if you like podcasts.
So if you like, you know, listening to stuff while you cook. There are, um, 10 or 12, I forget, that are online, and it's a fantastic resource to just get you thinking. He writes well. Like Denny Borsboom, some people just write in a way that is entertaining, scathing, um, using examples in a way that helps you understand things that you really didn't quite get without these examples before.[01:28:00]
And, um, yeah, that's why he also features in my paper, uh, quite prominently.
Benjamin James Kuper-Smith: Okay. I'll put, as I said, links to all of that in the description. Okay. One thing, so this is something I, um, talked about also with Paul Smaldino, and I guess I'm gonna ask you a fairly similar question here. And this is also something that you allude to, kind of... oh no, not allude to, you explicitly mention at the end of your response to the responses, and in your blog post about it, which is: are we asking too much of psychologists?
Um, how are we supposed to learn everything? Because I mean, I'm really not someone who is one for, like, social reform, or wants to tell people what to do or whatever. But when I read these things, I keep thinking, why did no one tell me this? Like, shouldn't I have learned this during my bachelor's, with the, uh, caveat that I often didn't attend lectures?
But, um, you know, it feels like there are a lot of blind spots in the current [01:29:00] curriculum and educational system for people who do psychology. So, um, maybe the question is: if we have to include something like how to, maybe not create good theories, but at least distinguish strong from weak theories, or something like that,
or maybe a bit of mathematical stuff, a bit of modeling, what do we have to take out, assuming we stick with a three-year bachelor's?
Eiko Fried: what do we have to take out from the curriculum?
Benjamin James Kuper-Smith: Yeah.
Eiko Fried: Yeah, I really struggled with this as well. In my... basic, basic emotion theories? No, um, I don't know. I honestly... I mean, I'm 38, so my education wasn't literally yesterday, but it was kind of yesterday in the history of the field, I would say. I had to learn a lot of theories by heart, and I mean sort of weak theories, by heart: summaries of things in personality or emotions or even clinical.
I needed to learn, like, the DSM stuff by heart and so forth. Stuff that you [01:30:00] can just read up on. So I think I spent a lot of time... I mean, honestly, I got into grad school because I had really good grades, because I just happened to be somebody who can learn things quickly and then forget about them after the exam.
So it certainly was sort of helpful for my own career, but it didn't make me a better psychologist or researcher, having been born with a fairly decent, like, shortish-term memory for a couple of days. Um, it's difficult. I grappled with this in the paper as well, saying, you know, we want better stats education, we want better theory education, we want more interdisciplinary education.
Um, uh, one solution of course is earlier specialization, but I think that's a very bad idea. You know, there are specific bachelor's already for clinical psychology, but I think it's been really helpful for me to have a sort of general psych bachelor's, to at least hear about cognitive psychology and personality and intelligence and so forth.
Benjamin James Kuper-Smith: Yeah.
Eiko Fried: Um, I certainly couldn't be a generalist today, or sort of a generalist, or aspire to be one, if I [01:31:00] had never learned about these adjacent areas, which often face similar problems, by the way. I don't know. I would probably ask people to learn things less by heart and think more, but that's very general advice.
Benjamin James Kuper-Smith: You'd keep the clinical stuff? For example, one thing that Paul and I wondered about is whether it might be sensible to separate the two. Because I did a straightforward bachelor's in psychology, and yeah, as soon as anyone thought about saying the word math, um, half the room just, you know, fell apart mentally.
Uh, especially in the examples of, like, a simple connectionist model, and people just didn't get it. I dunno what there really is to get, but there were some cases where I felt like, you know, these people are clearly interested in something very different than me. Should we even be in the same degree program here?
Eiko Fried: Yeah, I see that, especially for clinical psychology of course, because [01:32:00] a lot of people have a pretty clear idea that they might want to become a psychotherapist, for example. Maybe that can be a separate track, I don't know. Here we have a research master's in clinical, and we have a master's in clinical, like an applied master's to some degree, if you will.
And the research master's students more likely go into a PhD trajectory, and the more applied clinical master's students more likely get their sort of obligatory psychotherapy education here in the Netherlands. But every university does it differently. I really struggle with this. I think here also we need pluralism, and to just see how things work out best in different places.
And I do think there's a place for specialization. We already have sort of statistical master's programs in psychology. There are theory master's, or sort of theoretically oriented cognitive psychology programs, to quite a degree. For example, I'm not gonna make this mistake again: I said in a talk two and a half years ago that there is no theoretical psychology.
And somebody in the audience said, I literally have a master's in theoretical psychology. Really? That's [01:33:00] unusual; I didn't know that existed either. But yeah, it is unusual, but it happens. And part of my "I feel stupid" comes from working sufficiently across disciplines that I know that stuff exists that I have no idea about.
And that's been pretty humbling for me. I don't do it, I can't spell it, but I know it exists, and it makes me worried about the inferences from my work. Like the coordination problem, for example, I alluded to previously: how does the sort of true thing we try to measure relate to the observed thing that we measured? Is that linear, non-linear, in which way?
Um, and it makes me worried anytime I measure anything now. I haven't read up on this yet, but I know it exists, and it makes me worried and humbled about sort of this issue of trying to, you know, measure anything in psychology. So there's some degree of benefit to teaching people a broader set of skills, or teaching them about different disciplines, or a broader set of problems.
But there's also a [01:34:00] benefit in having specialists who specialize in theory formation and specialize in math and so forth in psychology, and putting people together when we want to collaborate on something complicated. So I see both as valuable goals to some degree, and we need to find a good sweet spot between them, I think.
Benjamin James Kuper-Smith: Yeah. I mean, one extreme example of specialization, and this is not like a serious proposal I have, but something I've thought about occasionally for five seconds: so I was reading a lot of these theory papers, and I thought, did I actually learn anything during my bachelor's in psychology that's really useful for what I'm doing right now?
And some, definitely. Um, especially, like, writing; I had to write pretty much something every week that I had to hand in. So there's definitely some stuff I learned. But one thought I had was, why don't we just have it so that if you want to do research, you don't even have to study?
You just do an apprenticeship in research. So, I dunno, you do your [01:35:00] 16, 17 years of school and then you just go into research, and you are taught. I mean, of course that's much more time-intensive, 'cause you have to be taught by someone how to do it. But I don't know, I wonder what I would've missed had I not even studied, almost, in a way.
But, you know, I think that's maybe the way it often was in the past, sometimes, right? I remember reading, no idea who it was, some chemist or something who started off, like, you know, not exactly sweeping the floor in the lab, but something like that.
Like, in the 19th century. Really just working their way up, and then when they were 30 they could do their own experiments or something. I wonder whether that's worse.
Eiko Fried: Yeah, I haven't heard about that idea before, I think. So when I was in Leuven in Belgium during my first postdoc, in a pretty technical, uh, quantitative psychology group with engineers and physicists, many of the technical people there held the belief that [01:36:00] to do psychology, you should get a math bachelor's and then, like, do a one-year psychology specialization, which shows you how little they knew about psychology.
Like, I really don't want to undervalue the importance of what the field has done, all the clinical wisdom that is there. For example, in my area, um, cognitive behavioral therapy and the theories around it have been hugely relevant and are hugely useful in helping people.
And it has nothing to do with statistics or math. Yeah. So it is hard to know the counterfactual, you know, what I would know if I hadn't studied. And I think you pick up a lot of things on the way that you don't pay attention to. So in 2011, '12, when I was at the University of Michigan, Richard Dawkins gave a talk there about his new book, and afterwards, in the Q&A, somebody asked him about the value of philosophy.
And he said, like, in his condescending tone: I never truly understood what [01:37:00] philosophy is, other than thinking clearly. And I thought, well, that's kind of important, Richard Dawkins. Like, thinking clearly is very, very hard. Humans are not sort of made to think clearly and without bias. And so I think, if anything, I learned to think more clearly during my bachelor's in psychology just by doing it.
And that takes time, I think. And, you know, learning by doing.
Benjamin James Kuper-Smith: Yeah. As I said, this is not something I'm gonna start a petition for anytime soon. But there are all these other things, right? Like, in my case at least, I grew up in a small city, then moved to London.
You live in a completely different place. Part of, you know, being a student is also just all this personal stuff, right? And I dunno to what extent you'd get that if you were in a lab with three people or whatever, rather than in a cohort of 150.
Eiko Fried: So I'm a supervisor now. I supervise my own graduate students, and on a daily basis, I'm not exaggerating,
I [01:38:00] realize how much I picked up from my prior mentors without them explicitly teaching me. I just had so much luck with Denny and Francis and others along the road that I picked up all these implicit skills, and now I practice them explicitly. And I realize I do, and I'm like, oh, he taught me that, sort of, although he never taught me it, in a sense.
And so I think we pick up a lot of things on the way that we might not be aware of, that are really useful down the road. How to have an argument, how to be constructive in feedback, for example. You just learn that, because if you're not, you get, you know, hit in the face, in a metaphorical sense, sometimes.
Benjamin James Kuper-Smith: So I was about to ask whether that's from experience, or...
Eiko Fried: No, no, no, no, no. I might have said some things I regret, but I'm a very agreeable person in my personality structure. I've deleted so many tweets because I'm like, oh no, maybe it's gonna be misunderstood, and then I sleep badly, and I'd rather just delete it.
Benjamin James Kuper-Smith: So no fist fights from your [01:39:00] psychological argument?
Eiko Fried: No, I, I did do TaeKwonDo for quite some time, but, maybe that's why.
Benjamin James Kuper-Smith: Like, before you fight, is that something you say, by the way? "I know TaeKwonDo."
Eiko Fried: Exactly.
Benjamin James Kuper-Smith: Yeah, but I wonder how much of a problem it really is. I mean, it definitely seems to me like a problem that
in psychology you learn lots of content. You know, as you said... I mean, I didn't have to do what you had to do with learning all the stuff by heart, but you learn lots of content. And I actually looked at the curriculum for my bachelor's, and it's roughly 60% content, 40% how to do experiments, more or less.
But, like, how to do experiments includes analysis and that kind of stuff, right? There's, you know, nothing about theory in there at all. And, uh, yeah, I think we both agree that at least something should be in [01:40:00] there. Uh,
Eiko Fried: Yeah, I mean, van der Maas's new paper in the Perspectives special issue, on the theory construction methodology, or theory construction cycle.
I think I featured that quite prominently in my rejoinder in this, um, paper that we talk about today. Just something basic like that is so helpful in structuring thoughts. We talk about theories all day, but how do we build them? How can we inform them? How do they help us learn something about the world?
All that was new to me, and yeah, I sort of grappled with it by writing the paper, and by learning about it, by making mistakes, by being called out in the commentaries for unclear use of nomenclature. So yeah, it's been a really nice learning experience, and I'm really grateful.
I really mean that: I had the privilege to engage with these brilliant thinkers, and to be, um, in the position to talk to Denny about this on a daily basis, for example, you know, when I lived and worked in Amsterdam. Um, I remember [01:41:00] I kept pestering Denny about statistical assumptions.
I said, you know, Denny, what about the multivariate normality of... and we were in the middle of a conversation, and Denny would just, like, leave my office. He's this tall, impressive figure; he would just turn around and go, and I'm like, what's happening here? And I would hear him walk down the hall,
opening his office, you could hear the key. And then he came back, and he dropped a book on my table. And it was Against Method by Paul Feyerabend. Yeah.
Benjamin James Kuper-Smith: I haven't read it, but
Eiko Fried: yeah. And, um,
Benjamin James Kuper-Smith: great name.
Eiko Fried: Well, it, you know, it says, you know, Darwin wasn't really a good methodologist, and, uh, Newton wasn't really the best at math, and, you know... so there are assumptions that you can bring to the table, but sometimes also just doing things might give you some insight.
That's a bad summary of the book; you should read it. But anyway, lots of experiences with great mentors that helped me do things better down the road. And I [01:42:00] didn't in that moment realize what I learned, but I realized it two years later.