Reflections on the Need for Open Science in Forensic Psychology
I lost my faith in science during my years as a research assistant and PhD student in forensic psychology. Sounds dramatic, doesn’t it? It was, and still is. Here, drawing on my personal experience of researching forensic psychology, are some reflections on why I am currently helping to organize the 1st Psychology & Law Open Science Conference.
In 2010, I was a research assistant preparing my application to a PhD program. As was common in the laboratory I belonged to, I had, with only minor supervision, planned and overseen data collection for a quasi-experiment testing the effectiveness of an investigative interviewing technique. Once data collection was finished, we decided how to analyze the data, including which measures to employ and which statistical analyses were appropriate. The process was dynamic and exploratory in nature, particularly during operationalization and coding.
We wondered whether we should analyze each interview in whole or in part: should our analysis be based on, for example, free recall alone, or should it also include interviewees’ responses to specific questions? When we looked at free recall or specific questions separately, the results were not very interesting. But when we looked at the whole interview, free recall together with responses to specific questions, we saw that some interviewees seemed to have successively changed their statements. That seemed interesting, so we decided to code it too, and those were the codings used in the final paper.
At the time, I didn’t question this process. It seemed to me like a reasonable way to do science, just like it seemed to Professor Brian Wansink and his colleagues.
We finished the codings and agreed that it was time for statistical analysis. It was decided that I would run independent ANOVAs with condition as the independent variable and the measures we had developed as dependent variables. Most of our analyses turned out to be statistically non-significant. Too bad, but that’s science, right? You can’t always find support for your hypotheses (even if they were hypothesised after the results were known, the practice known as HARKing).
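The risk in this kind of exploratory testing can be put in numbers. As a hypothetical sketch (the figures below are illustrative, not from the study itself): if each of k independent tests is run at the conventional alpha = .05, the probability that at least one comes out “significant” purely by chance grows quickly with k, which is part of what makes presenting such a hit as an a-priori hypothesis so misleading.

```python
# Probability of at least one false positive among k independent
# tests, each run at the nominal alpha = .05 significance level.
alpha = 0.05
for k in (1, 3, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one false positive) = {p_any:.2f}")
```

With ten such tests the chance of a spurious “finding” is already about 40%; with twenty it is closer to two in three.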
To my surprise, however, these statistically non-significant results presented a slight crisis for us. It was argued that we could lose funding opportunities if we went ahead and published those results. I couldn’t believe what I was hearing. I lost interest in the study and couldn’t bring myself to write the manuscript. This experience made me lose my faith in science (faith I have since regained through the movement for open science). Today, that work is my most cited study, and its results have informed current best interrogation practices as summarized by the High-Value Detainee Interrogation Group (2016). I have since published retroactive disclosure statements for this and other papers, in which I provide methodological information that was not included in the original publications.
A few years later, I was writing a manuscript for a different study, which once again was low-powered, with crucial p-values above .05 and small effect sizes. A senior co-author from a different laboratory advised us not to publish our results as they were. Instead, he suggested that we obtain smaller p-values either by collecting more data or by discarding the codings we had made and conducting new ones with new measures. This time, two of us authoring the study said no, insisting that the study be submitted as it was. One reviewer, however, did suggest that we collect more data. We persisted once again, referring to the metascientific literature showing that this practice, known as “optional stopping”, is highly questionable and unscientific. We won the battle, and the study was published as it was, with statistically non-significant results.
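Why optional stopping is so problematic can be shown with a toy simulation (assumed for illustration, not from the original study): if you test repeatedly as data accrue and stop as soon as p drops below .05, the false-positive rate climbs well above the nominal 5% even when there is no true effect at all.

```python
# Toy simulation: "optional stopping" (peeking at the p-value after each
# batch of data and stopping once p < .05) versus a fixed-n design,
# when the null hypothesis is true (no real group difference).
import math
import random
import statistics

def welch_p(a, b):
    """Two-sided p-value for a two-sample mean difference, using a
    normal approximation (adequate for the sample sizes used here)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def one_study(optional_stopping, n_start=20, n_max=100, step=10):
    """Simulate one study with NO true effect; True means 'significant'."""
    a = [random.gauss(0, 1) for _ in range(n_start)]
    b = [random.gauss(0, 1) for _ in range(n_start)]
    while True:
        if welch_p(a, b) < 0.05:
            return True                      # stop and declare success
        if not optional_stopping or len(a) >= n_max:
            return False                     # report the null result
        a += [random.gauss(0, 1) for _ in range(step)]  # collect more data
        b += [random.gauss(0, 1) for _ in range(step)]  # ...and test again

random.seed(1)
runs = 2000
fixed_n = sum(one_study(False) for _ in range(runs)) / runs
peeking = sum(one_study(True) for _ in range(runs)) / runs
print(f"False-positive rate with fixed n:           {fixed_n:.3f}")
print(f"False-positive rate with optional stopping: {peeking:.3f}")
```

With these (arbitrary) settings, the fixed-n rate stays near the nominal .05 while the optional-stopping rate is several times higher; the more often one peeks, the worse it gets.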
In 2017, I attended the second annual meeting held by the Society for the Improvement of Psychological Science (SIPS). There I met a researcher who had formerly been a doctoral student in one of the world’s most prominent forensic psychology laboratories. He had left academia without finishing his PhD. One of the reasons was that he had failed to convince his supervisor to stop employing questionable research practices such as p-hacking and HARKing.
Recently, in 2018, I met another researcher who had abandoned his career in forensic psychology. He had chosen to leave psychology and pursue a career in law instead. His reason, again, was mainly that he had witnessed too little scientific integrity in the field of forensic psychology.
We are currently a team of three researchers in forensic psychology organizing the 1st Psychology & Law Open Science Conference. The movement for open science in psychology is a merger of two different movements (Willén, 2018). The first is over a century old and reflects statisticians’ long-standing concern over how empirical scientists often misuse statistics. The second is comparatively young and concerns making use of modern technology in the scientific process, such as making research materials and data publicly available in digital repositories and making written reports openly accessible to the public. This sub-movement, commonly (and perhaps a bit sloppily) referred to as “open science”, is calling for a change in how we as psychologists conduct our research. The most notable proposed changes are adopting completely transparent reporting standards, preregistering methods and statistical analyses before collecting data, making data publicly available, radically increasing sample sizes, and conducting high-powered direct replications. (IGDORE provides a brief introduction to these problems and the proposed solutions.)
Implementing these changes would dramatically improve the trustworthiness of forensic psychology as a scientific discipline. It would make it possible for us to contribute to society the way that we as scientists are supposed to and, hopefully, in the way that we aim to. After all, isn’t that why most of us chose to do research in this particular area to begin with: because we wanted to provide scientifically based assistance to people in vulnerable and painful situations, as well as to the professionals working with them? Employing questionable research practices can do more harm than good to the people affected by our research. We must radically update our methodological practices. The longer we wait, the more societal and individual harm we may cause.
References without permanent links
High-Value Detainee Interrogation Group. (2016). Interrogation: A Review of the Science. Retrieved 18 Aug 2018 from https://www.fbi.gov/file-repository/hig-report-interrogation-a-review-of-the-science-september-2016.pdf/view
Willén, R. M. (2018). The Future of Science is Freedom. Manuscript in preparation.