An Introduction to Research Integrity
Wednesday, 7 December 2022, 10:00 – noon GMT
This webinar focused on key topics in research integrity, research ethics and publication in research, and was aimed at those early in their careers, those with responsibilities for leading and training researchers, and those looking for a refresher in these important issues. The videos of the talks and a transcript are below.
SPEAKERS
Introduction to research integrity
Ms Zoë Hammatt, Adjunct Associate Professor at the University of Hawaii; research integrity consultant; and UKRIO Trustee. To watch Zoë’s presentation please click here.
Research integrity, ethics, governance and culture – what’s the difference and does it matter?
Dr Simon Kolstoe, Reader in Bioethics, University of Portsmouth, UK; and UKRIO Trustee. To watch Simon’s presentation please click here.
An introduction to authorship and publication ethics
Matt Hodgkinson, Research Integrity Manager, UKRIO. To watch Matt’s presentation please click here.
SPEAKER BIOGRAPHIES
Ms Zoë Hammatt, Adjunct Associate Professor at the University of Hawaii; research integrity consultant; and UKRIO Trustee
A licensed attorney with a Master’s degree in Law and Ethics, Ms Hammatt served on the faculty at the University of Hawaii medical and law schools and taught courses in biomedical ethics, regulatory compliance, human subjects protection, and responsible conduct of research. She also has served as the Legal and Regulatory Specialist and Chair of the Ethics and Regulatory Subcommittee for the NIH-funded Research Centers in Minority Institutions (RCMI) Translational Research Network (RTRN), a national consortium of 18 institutions dedicated to reducing health disparities through collaborative basic, clinical, and translational research. Among her many accomplishments, she led the harmonization of the IRB process across all 18 RCMI institutions. Ms Hammatt has been a Fellow with the St. Francis International Center for Healthcare Ethics in Honolulu since 1997 and since 1995 has been involved in international projects to empower women in Central and East Europe as they develop initiatives that support their communities.
Ms Hammatt received a B.A. in English Literature at Loyola Marymount University in Los Angeles, a Master’s degree in Law and Ethics in Medicine from the University of Glasgow in Scotland, and a J.D. from the University of Hawaii, where she practiced law with Case Lombardi and Pettit, the oldest law firm in Hawaii.
Dr Simon Kolstoe, Reader in Bioethics, University of Portsmouth, UK; and UKRIO Trustee
Simon Kolstoe is an academic with a focus on the role of ethics committees and governance structures in promoting research integrity. He chairs ethics committees for the Health Research Authority, UK Health Security Agency and UK Ministry of Defence. Prior to moving into Bioethics he was a post-doctoral research fellow at UCL Medical School, and then Senior Lecturer in Biochemistry at Portsmouth University where he received a BBSRC new investigators project grant. Following further degrees in Philosophy and Research Ethics, he moved to a Senior Lectureship in Evidence Based Healthcare in 2017, and then Reader in Bioethics as of 2021.
Matt Hodgkinson, Research Integrity Manager, UKRIO
Matt is a Research Integrity Manager at the UK Research Integrity Office (UKRIO), supporting the work of the charity including the advisory service and the development of policies and publications. He has a BA (Hons) in Biological Science from Oxford and an MSc in Genetics from Cambridge. He worked in open access science journal publishing for 18 years, developing expertise in peer review, critical appraisal, editorial policy, and publication ethics as an editor at BMC and PLOS and then heading the Research Integrity team at Hindawi. He is a council member of the Committee on Publication Ethics (COPE) and Treasurer of the European Association of Science Editors (EASE).
WEBINAR TRANSCRIPT
The transcript below has been edited for clarity and to add links.
James Parry: Today’s topic is “an introduction to research integrity”, an issue that we cover in our free webinars at least once an academic year, because we recognise it’s important to the research community. I’m very pleased to be introducing a crop of expert speakers well-versed in issues of good research practice, who will give their perspectives on these important issues: the fundamentals of research integrity, what it is, why we need to worry about it, why it’s relevant to researchers, and the challenges in achieving standards of good practice in a variety of contexts. Our first speaker, Zoë Hammatt, will give a broad-ranging introduction to research integrity; then Dr Simon Kolstoe will talk about research integrity, ethics, governance and culture; and our final speaker for this morning, Matt Hodgkinson, will talk about research integrity, the concept of authorship in research publications, and broader issues in publication ethics.
Zoë Hammatt
James: Without further ado, I’ll hand over to our first speaker, whom I’m very grateful to for joining us, because she’s currently a long way away in a very different time zone. Zoë Hammatt is an adjunct associate professor at the University of Hawaii, has been a research integrity consultant for many years, and in the summer joined our board of trustees at the UK Research Integrity Office, so we’re very pleased to have her with us. Zoë is staying up very late and is coming to us from sunny Hawaii to talk about an introduction to research integrity. Zoë, it’s great to have you with us this morning. Well, it’s this morning for us. I appreciate it’s late.
Zoë: It’s morning now as well, but at the beginning of your day. Thank you for the introduction, James, and also to Beilul Kahsai, who has done a wonderful job in supporting us in preparing for this webinar, and who is a fantastic new Events Coordinator at UKRIO. I also look forward to hearing from Simon Kolstoe and Matt Hodgkinson; I think our topics are quite related.
I’m delighted to begin by describing what I see as an arc of research integrity. It’s very hard to condense a whole field that has emerged over many years into a 20-minute talk, but I’ll begin by recognising the spectrum of activities, behaviours, and expectations that form part of the culture in which we all conduct our research and serve as actors in this research enterprise, regardless of the discipline. Within this arc, I like to think of accountable administrators, a group that doesn’t only encompass those at institutions such as universities or research institutes, but also those who serve in roles at funding agencies, at oversight agencies within governments and private institutes, and in journal publishing and editing. Each entity involved, including professional societies, plays a role, as do the individuals who form each of those groups; their attentiveness and action towards the environment around them, their own behaviours, and the behaviours of the groups in which they work are fundamental to this arc that forms part of our spectrum of discussion today. In addition, individual researchers being responsible and serving as role models helps us to engage in conversations around serving as mentors, setting examples as superb supervisors – one of the model programmes developed over recent years – and demonstrating respect for each other as we collaborate and communicate.
In addition, there is the contribution and commitment that the Concordat, for example, sets forth: commitments to creating a community that fosters a culture of research integrity. Of course, culture is not something that’s fixed: it continues to change and evolve. One of the exciting things that I am honoured to witness, as one of the people involved in helping with the World Conferences on Research Integrity (WCRI), is that our themes evolve as the field evolves, and as different disciplines become more sophisticated – not only in their ways of organising the data that forms the foundation of their field, but also in their ways of researching and understanding how the activities and behaviours of specific researchers, groups, and entities affect that environment, and how we can behave very specifically in order to produce this positive research culture.
The field of research integrity is quite “young”, but arises from many different disciplines around the world, in a philosophical sense, from the virtue-based ethics, values, and moral codes that have been with us in many different cultures. I can’t think of a single culture that doesn’t have its own codes and myths and ways of organising human behaviour that have to do with honesty, responsibility, and some form of ethics. I particularly like the Japanese concept called magokoro (真心), ‘a true heart’, one of the words used to describe integrity – because the concept of personal integrity, as we think of it in English, doesn’t exist in quite the same way in the Japanese language. This true-heart concept is used as part of self-reflection: being able to look oneself in the mirror and feel confident, comfortable, and happy about the way in which one is behaving, not only toward oneself but also toward others.
In this field, the WCRI have been fundamental to creating a forum for discussing these issues on a global stage. For example, the first occurred in Lisbon in 2007, and the second in Singapore, in 2010, during which the group of people who came from different disciplines all around the world began to discuss what some of the basic principles for research integrity might mean, and what commitments might be necessary from different actors in the research enterprise. The basic themes emerged as honesty, accountability, courtesy, and fairness in working with others, and good stewardship of research on behalf of others. Since the Singapore Statement, there have also been statements on collaborations in Montreal, as well as the Amsterdam Agenda and the Hong Kong Principles, which are wonderful at looking at how to assess researchers in qualitative ways and not just count the number of publications. We are very excited to be announcing soon the Cape Town Statement, which I’ll talk about toward the end, which emerged from the WCRI in Cape Town this past summer [this was published on March 24, 2023].
One of the definitions I like, because every jurisdiction has a different definition, is that of the University of Edinburgh, which describes conducting research in a way that others can have confidence and trust in that research – not just the process itself, but the methods, the findings, and everything that arises from them. That means the research can be validated, reproduced, and provide some kind of benefit to society. Unfortunately, the US emphasis on research misconduct has for many years contributed to a rather narrow perspective on how research integrity is described. Research misconduct in the US definition, which is more than 30 years old now, focuses on fabrication, falsification, and plagiarism across the spectrum of research, and sets a very legal standard: a significant departure from accepted practices, which is very difficult to define regardless of the context. Look at different disciplines and how accepted practice may differ across them; even within a particular institution, across the hall, you could have very different accepted practices. There is this narrow definition, but also very specific procedures for handling allegations of research misconduct, which emerged out of a very politically driven environment, where there were very high-profile cases at major universities in the United States, and thankfully some members of Congress said, “We need to have an oversight agency to handle this.” There was an awful lot of attention in the media to scandals, and therefore a lot of focus on what to do when things go wrong. There is basically one phrase in the US regulation that talks about fostering an environment that promotes the responsible conduct of research – it’s very general – and some other funders, for example the National Science Foundation in the US, have asked for responsible conduct of research training plans, which allow for more detail.
The academic field and academic freedom have largely been left to develop on their own, without great government intervention in the training of researchers and in responsible research practices. This is a delicate balance that people around the world are struggling with, even in their own interpretation of the codes and guidelines that govern their practice in their particular country or university.
To touch on the basics for those who are new to this field: people often use the acronym FFP. Fabrication – from the Latin fabricare – is making up data or results and recording or reporting them. I use as an example the University of Hawaii executive policy, which is a fairly general one, but this language also appears in many of the policies and procedures found around the world because of the very focused emphasis of the original US regulation. Falsification is manipulating research materials, equipment, or processes in such a way that the research record no longer accurately represents the research, and therefore can’t be trusted. Plagiarism is using or appropriating another person’s ideas, processes, results, or words without giving appropriate credit – this gives rise to an awful lot of authorship disputes, which we’ll talk about in a moment.
There are lots of different codes and guidelines; two relate to the UK – unfortunately no longer part of “Europe”, but still in the spirit of collaboration, and still within the zone of Europe. The ALLEA code of conduct for research integrity, which is in the process of being revised, sets forth commitments and principles for various parties, gives some detail about procedures and how to handle allegations, and expands the definition of serious misconduct, described as falsification, fabrication, plagiarism, and other serious problems; many other kinds of misconduct, or breaches of integrity, can also fall within this European code. The UK Concordat, as many of you may know, sets forth five commitments that talk, in my view, in a very positive way about things that institutions and others involved in research can do to foster this environment of research integrity. Particularly with the latest update, there is an opportunity for institutions to provide a statement about what they’re doing to promote that, with some wonderful ideas about how to make people not only happy but also to reduce their stress, the levels of competitiveness, and the pressures – real or perceived – that denigrate and undermine the thriving environment in which we would all like to work. Another national example is the National Policy Statement on Ensuring Research Integrity in Ireland, which similarly follows the European code and then gives some specific guidelines for researchers working in Ireland. It is endorsed by funders, as are the UK Concordat and the ALLEA code.
Beyond the breaches of integrity described in the ALLEA code, there are a vast number of things that may find their way into various policies and be actionable, depending upon the institutional policy – essentially any way of undermining the research record or research practice. Some policies include bullying and harassment, failure to supervise properly, or reckless supervision. Proper supervision and respectful conduct are fundamental elements of responsible research, and failure to adhere to these practices can result in very dire circumstances, so it’s fantastic that they’re being addressed along these lines.
There is, of course, vast diversity in the education and training of responsible research. Liz Wager did a wonderful review a few years ago, recognising that we have very little longitudinal data about the effectiveness of some of our training programmes and whether they are preventing, if that was the goal, research misconduct or breaches of integrity. There isn’t yet much of a causal link. However, at the World Conferences and in many individual jurisdictions, lots of people are conducting very rigorous research on these elements, understanding and unravelling different pieces of the puzzle as we go, to ask what may cause this and how we can address the root cause. The National Institutes of Health (NIH) has just added civility as one of the elements of responsible conduct of research training. It seems absurd to me that we have to train people to be civil, but it does demonstrate that civility is lacking in environments where there is this incredible pressure – real or perceived – to compete with your colleagues and step on people’s heads in order to get ahead.
Some of the grey areas pose the most important problems, and they may be those that don’t necessarily fall within any definition or any policy. If you’re expecting a perfect answer on any of these, I don’t believe there is one. For example, determining the proper authorship standard for a specific discipline. That may be something one could put into a definition – the International Committee of Medical Journal Editors (ICMJE) criteria provide some guidelines for medical and biomedical authorship. However, there are many areas where a specific element may not fit into that; another discipline may have a different standard, and the standard may change again when you transpose it to a different country. Looking at the guidelines of the specific journal is probably the first thing I would do. There are also resources on issues that have been debated by experts in the context of publishing, through the Committee on Publication Ethics (COPE). Another grey area is what constitutes a conflict of interest, and what interests should be disclosed, when, and to whom. These all merit open conversation, not only among those who handle research integrity at the institution, but also among researchers as they collaborate, and specifically between senior researchers and their early-career research students and staff. Defining the generally accepted research practices is fundamental. In the context of a research collaboration, identifying the specific methods to be used for data collection, data analysis, and data sharing is one way to at least try to create a concise framework around what agreement has been made for that specific research project, and what the standard will be for that specific programme of research. As I mentioned, COPE has lots and lots of resources and guidelines; one issue that emerges frequently is authorship disputes, and there is a guide for new researchers.
There are also many resources that come from the journals themselves. They have been a very important player in the conversations around research integrity and how to foster this thriving environment.
Another element that is critical to consider is data management. It has been part of funding requirements in the United States for many years, and is finding its way into other areas. Science Europe has some fantastic guidelines on this, and the European Commission, through Horizon 2020 and other programmes, adopted the FAIR acronym: making data Findable, so that others can go back, retrieve, and understand it in order to replicate and validate it; Accessible, so that it’s well described and easy to use with software; Interoperable; and Reusable. Of course, this will vary vastly across disciplines. These are lovely principles and guiding concepts, but they need to be addressed in the context of a specific project.
Here is an old-fashioned kind of laboratory notebook. I have seen enough of these in unfortunate cases of research misconduct or breaches of integrity, where there were problems in the consistency of data collection, of documenting that collection, and then of analysing and interpreting the data. This kind of record, whether electronic or paper, serves not only as the legacy of that specific research project for however long it needs to be kept, but also as protection from allegations: if something comes up and one is accused of fabricating or falsifying data, selecting outliers, or misrepresenting the data, this is the kind of evidence that can help prove or disprove an allegation of a breach of integrity. It is absolutely fundamental to ingrain this when early-career researchers are learning their trade, and for institutions – as is being done more and more – to have a data steward or data management staff member who can help people understand how to implement standards that are consistent and rigorous.
This is an example from my father’s company; he is an archaeologist here in Hawaii. In any field, this kind of data management is fundamental, and I wanted to touch upon it here: this is an ancient Hawaiian burial site. Collecting the bones and any artefacts relevant to that specific trench absolutely requires care and commitment to the spiritual element of what one is doing, because this is not just a bone – it belonged to a person; that person has a whole story, and there is a descendant process. This is an example of how elements of research integrity have changed over time. When my father first started in Hawaii, there were no rules requiring consultation with community members who might say, “Well, my great, great, great, great grandfather lived on that land. I may be a descendant of the individuals who were buried here.” Those human remains need to be treated in a specific way, but those laws weren’t in place. Someone who has the sensitivity to communicate, get permissions, and have a real conversation with members of the community, to ensure that data collection, sharing, and analysis respect those cultural traditions, is fundamental. And now the law is changing, so that this is required. Regardless of the discipline, this kind of care and commitment really reflects the true heart that is essential to the Japanese definition I described.
I also want to touch upon the Integrity in Practice Toolkit, which was very proudly developed by the Royal Society and UKRIO, with Francis Downey when she was at the Royal Society; she’s now at UKRI. This toolkit has all kinds of wonderful positive examples. It’s available online, and it’s free. It contains all kinds of small steps that institutions and researchers can take to engage everyone involved and make it relevant in the way that James was describing: embedding integrity and ethics into the institutional culture and agreeing shared expectations. For example, there’s the World Economic Forum Young Scientists community, who said, “Well, we don’t like all these other guidelines as much, so why don’t we create our own?”, and they created their own code of ethics that was real and relevant to them. Another example is creating informal channels to openly discuss research integrity. Marcus Munafo, who is a fantastic leader in this field and in his own discipline, and has spoken at UKRIO conferences in the past, had an example where members of his lab found it too difficult to send things by email, so they set up a Snapchat group that they use to share their triumphs and disasters and support each other: “Oh, I lost my microscope. Does anyone have a microscope?”, “Yes, okay, good, I got my paper accepted” – helping each other in ways that are meaningful to them. The University of Glasgow is another example, with Elizabeth Adams, who was instrumental in developing the University of Glasgow’s system around research integrity and identifying research integrity champions, who are not as intimidating as the Vice President or Provost to whom one typically has to go to report a concern around research integrity.
The idea instead is to embed champions across the disciplines, at different levels, where there is more of an opportunity for informal conversation and assistance in understanding an allegation and taking it forward. If there are issues of bullying and harassment, those can be addressed, rather than “okay, now you’re in the process, you have to be a whistleblower” – setting you off on a journey that you may not be prepared to take.
The international perspectives are what excite me the most about this entire field: this group of people who come together from so many different disciplines and are able to disagree with each other, debate these issues, and delve into deep, rigorous research, as well as take the field to new levels while acknowledging that it is actually quite an old field when grounded in ethics and some of the philosophical underpinnings of virtue and responsible behaviour. At our most recent meeting, in May and June of this year, the preprint of the Cape Town Statement was released; the publication is coming out very soon [published March 24, 2023]. It is about integrating fairness, equity, and diversity into conversations around research integrity: that in order to have integrity, we need efforts to reduce power imbalances, clarity in authorship policies so that every partner involved is treated fairly, and a reduction of the dynamic whereby research-intensive countries and institutions set the research agenda, because they have the funds to do so, while those in other countries basically have to follow rather than develop priorities on their own as true partners. It will be quite an exciting paper once it is released.
We were thrilled to announce our next World Conference will be in Athens in two years’ time. We plan to have a bridging event hopefully online in May or June of this coming year, 2023. The theme is being refined as we speak, but I have no doubt it will be absolutely fascinating and perhaps we’ll again come back to some of the deeper ethics-based and virtue-based conversations that may have got lost in the legal paradigm in the last 30 years.
These are some references and resources; there are so many that I could have had 10 slides on this. I wish to thank my absolutely stellar mentors, including Sheila McLean at the University of Glasgow, in bioethics, who had the idea of bringing together law and ethics in medicine to create a bioethics degree when I was much younger. In acknowledging the many different disciplines that contribute to the field of research integrity, I had the opportunity to learn to pull things from different disciplines in order to create this beautiful spectrum of research integrity as part of the arc I described.
James: That was a fascinating introduction to research integrity and a reminder of how fundamentally relevant it is to all the research we do. This is about making sure research is of high quality and high ethical standards and doing what we can individually and collectively to make sure the environments in which we work promote that.
We have a question from an anonymous attendee. They say, “Do you feel like data management and openness should be a criterion to examine PhD dissertations in the future?”
Zoë: Data management is one criterion that is handled fantastically by the University of Hong Kong. I had the chance to go there and say that we thought we had a good system in the US, where funders require a data management plan as part of a grant application or progress report, but in Hong Kong they said that’s not good enough. They required early-career researchers – Master’s and PhD students – to upload their data management plans and make them visible and accessible, so there could be oversight not only from their mentors but also from the broader institution, to help them understand how to do this right and how to think through data management before going through the whole process of doing their Master’s thesis or PhD. As far as data management being fundamental to PhDs, Hong Kong is a great example.
Open science, or open research, has lots of positive elements. This also relates to Plan S and lots of other initiatives that are being encouraged. There are some imbalances, however. I had the honour of being the guest editor for a journal in forensic sciences, where Francis Downey and Rebecca Veitch at UKRI contributed a fantastic piece on research culture. In that context, there was an article processing charge (APC), and some people from some countries were not able to pay it. This is one of the disadvantages in the way the burden has now shifted to many authors having to pay processing fees. In some jurisdictions the institution itself will pay for those, or the grant – whether publicly or privately funded – can cover APCs as part of a budget. To make openness a requirement, a PhD student has to have the funds to be able to make their work open. Data should be as open as possible and as closed as necessary, recognising legal and regulatory requirements for patient confidentiality.
James: Carolyn asked, “What areas of research and best practices do you feel are most commonly overlooked or not considered by UK institutions?”
Zoë: I’m certainly not an expert in UK institutions, other than having studied at the Universities of Edinburgh and Glasgow. One of the things that needs more attention, not just in the UK but around the world, is mentoring and supervision. We’ve got all this focus on: you need to have integrity, you need to do your methodology, and you’re pressured and you’re doing all this. But those who are mentoring you are either not necessarily setting a good example – I’ve been very lucky, I’ve had fantastic mentors – or they were never really trained in how to be a mentor, how to be a good supervisor. Hence the idea of having a career development mentor or a writing mentor, and then a scientific or other disciplinary mentor to address specific aspects, because we must look at it in a holistic way. We can’t simply say, “this is my mentor who’s helping me with my project” and deny that we have a life, and that our life is falling apart because we’re working so hard. I think mentoring and supervision is one area that would merit more attention, training, and support for those who are learning how to mentor, regardless of their age.
James: If someone finds themselves in a mentor position because either they’re a principal investigator or head of a department or a school or they’re a veteran researcher, they wake up one morning and they find, “gosh, people are looking up to me”. What do you think that people in those positions can do to encourage researchers to engage with issues of research integrity and help improve their local environment?
Zoë: Tamarinde Haven in the Netherlands has developed a superb supervision workshop, and I believe she’s made some of it available online, where people can learn how to be a better mentor. Marcus Munafo has an example of asking your research team and your early-career researchers – it seems very obvious – “What do you need in order for me to be a better mentor?” It’s not “I’m the boss, I’m going to tell you what to do and you’re going to follow my orders”. It’s “let’s have a shared agreement and shared expectations, and maybe even a mentoring agreement”, where it’s not just that the mentee must follow the mentor’s rules; the mentor also needs to live up to the mentee’s expectations. And if it’s not working – because certainly we’ve all had bad bosses and situations that haven’t worked – then there is a forum for discussion, resolution, and actually separation if it doesn’t work. Those are two elements I would consider.
Simon Kolstoe
James: I will now hand over to our next speaker, Dr. Simon Kolstoe. Simon has been involved in matters of research integrity, governance, and ethics for many years. He recently joined the board of trustees at UKRIO and so I’m very lucky to have him with us to share his perspective.
Simon: I’m going to talk about some of the complexities we find in this area. It’s really nice to be able to go after Zoë because I can touch on a number of things she raised; our talks are very much complementary. A few of my conflicts of interest, or other interests: I spend most of my time chairing ethics committees for the Health Research Authority, the Ministry of Defence, and the UK Health Security Agency, and I’m involved with ethics at the European Commission.
However, my background wasn’t originally in ethics. In fact, I was a biochemist, and through various accidents of luck, one of the drug compounds we were working on got to the point of going to clinical trials. At that point, I came across ethics committees, governance, and all these processes for the first time. Both I and other members of our lab very much felt like a tiger being forced to jump through burning hoops to comply with all these requirements before we could get going with our research. I got into this area almost through frustration, from a researcher’s perspective, saying, “there are so many processes – governance processes, ethical processes – that you have to go through as a researcher, it really stops research, and in some ways can cause a lot of problems for researchers. Surely there must be better ways of doing this sort of thing”. Over the last 10–15 years or so, my research has moved away from biochemistry and drug development into looking at ethics and governance processes, expanding to consider research culture and integrity.
I’m now a reader in ethics at the University of Portsmouth and recently became a trustee of UKRIO. This is a fascinating area, it’s a very important area within research, but we have to acknowledge that it’s an area that can cause significant frustration for researchers, both in terms of having to deal with governance, but also as we see inequalities and inequities within the research process.
I’m not the only researcher to become frustrated with this. There was a paper published a couple of years ago by some researchers at the University of Cambridge, who sat down and catalogued all the emails and all the people they had to interact with just to get a single research project off the ground. A researcher at my institution has done something similar and came up with a very similar result of hundreds of emails, and any of you who are researchers will know this: you spend far more of your time dealing with administrative processes than actually doing research. If you try to catalogue these things, there are interactions with funders, contractual matters, and you have to consider your research design and methodologies. Depending on your design and your methodologies there are various permissions that you need to seek, and then there’s this horrible area of data protection governance – GDPR, those four initials that everyone, certainly in the UK and Europe, is terrified of. How do you get those sorts of issues right? Then there’s general research conduct, which Zoë was talking about: research integrity.
This whole landscape can sometimes be overwhelming to the researcher, and unfortunately what tends to happen is that people use a shorthand and say, “we have to get through ethics before we can do our research”. A lot of this gets bundled together, and that can be really confusing, because a lot of researchers and a lot of people within the research environment don’t have a clear idea of all the parts of the system and how they interact with each other. Some of the problems that we encounter as researchers come from not having a particularly clear understanding of the landscape we find ourselves in. This is something lots of people appreciate, and one of the actions lots of people take is to go and study it and write reports. If you’re aware of this integrity area, there are all sorts of reports that have been written by the Wellcome Trust, the Nuffield Council, and Vitae, looking at research culture, at integrity, and at the governance processes that sit alongside our actions as researchers, and we’ve already had Zoë refer to the work that the Royal Society has done in the United Kingdom. The Science and Technology Select Committee in the House of Commons has done a number of reports and investigations into this area of research integrity, and then we have some global organisations, like the World Conference on Research Integrity that Zoë was talking about. Retraction Watch, which is one of my favourite websites, goes through all the reasons why papers have been retracted; it’s like the gossip column for researchers. We’ve had reference to Marcus Munafo, the UK Reproducibility Network, and UKRIO, and recently in the UK a new committee, the UK Committee on Research Integrity (UK CORI), has been set up. There’s a wealth of information out there.
There are a lot of organisations and an awful lot of reports, which can sometimes leave researchers feeling a little disempowered, because it’s such a complex environment. How exactly do we make a start in addressing some of these issues in a practical way?
My approach to this: I came to it out of frustration and trying to get my head around the landscape, but one of the things that really struck home for me was a series of papers published by Iain Chalmers and Paul Glasziou, starting from about 2009, looking specifically at the area of research waste. This was the thing that really struck a chord with me in this entire area: they came up with this striking figure that 85% of research is wasted because it asks the wrong questions, it’s badly designed, it’s not published, or it’s poorly reported. I got into research – in my case, medical research – because I thought it was something that could really help and benefit other people, and so it’s absolutely shocking to see a figure like this – 85% of research is wasted – especially if you consider the amount of time we spend as researchers lobbying for funding or saying why our research is fantastic and groundbreaking and is going to make a big difference to the world. And yet you have this horrible figure of 85%, and a lot of people didn’t believe it. Iain and Paul are not people to be trifled with, and over a period of about 15 years they’ve kept publishing papers arguing why the number is what it is. Essentially, their calculation goes along these lines: out of every 100 projects that get funding, there is solid evidence to show that only about half lead to a publication or outcome at the end of the project. If you take those 50 publications, only about half are reported well enough to reproduce the experiment. This is where it touches on reproducibility, and that’s a number I absolutely agree with, having been a post-doc: taking the Methods section of a paper into the lab and trying to replicate an experiment, and realising there are key issues – key elements of the experiment – missing.
I can absolutely believe that number. And of the half that you can reproduce in the lab, or at least attempt to, only about half have no serious or avoidable design flaws, such that you can get the same result as the original researchers, or there are good arguments or good logic in the paper. This is how they arrive at the figure of 85% research waste. The figure is quite robust: whenever anyone has tried to argue against it, there’s been plenty of evidence to support it. Now, they went on and worked with a lot of different people. There was a series of five papers published in The Lancet a few years later, trying to drill into these issues of research waste, which came up with a helpful table identifying five specific areas: how research questions are chosen in the first place; how research methods are chosen to answer those questions; the efficient use of regulation and management, touched on briefly before; and then ensuring that once you’ve done the research the information is fully accessible, unbiased, and published. That’s all very helpful. I went to a talk at the WCRI in Amsterdam in 2017 and heard Iain Chalmers speak about this; he was specifically focusing on ethics and governance processes and how they could be a major source of research waste. I got talking to him after the conference, and we got to considering the understanding of research waste and the idea that, although efficient research regulation and management is held up as potentially a problem in this area, surely, in an ideal world, if you made those processes better they could have positive effects on some of the other aspects of research waste.
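The attrition Simon describes compounds multiplicatively. As a quick sketch of the arithmetic (the halving rates are the approximate figures quoted in the talk, not exact data):

```python
# Sketch of the Chalmers & Glasziou "85% waste" arithmetic described above.
# Each stage roughly halves the usable output (approximate rates from the talk):
funded = 100
published = funded * 0.5            # ~half of funded projects lead to a publication
reproducible = published * 0.5      # ~half of publications report methods well enough to reproduce
well_designed = reproducible * 0.5  # ~half of those have no serious, avoidable design flaws

usable = well_designed               # 12.5 of the original 100 projects
waste_fraction = 1 - usable / funded # 0.875, i.e. roughly the quoted 85%
print(f"Usable projects: {usable} of {funded}; waste: {waste_fraction:.1%}")
```

Three successive halvings leave about an eighth of the original projects usable, which is where the "roughly 85% wasted" headline figure comes from.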
Through various conversations over the last couple of years – and a couple of papers that we’ve published; Zoë was involved in one of the pieces of work we did – we’ve been looking at ethics and governance processes and asking, “can we construct these in such a way that, rather than contributing to research waste and to problems in this area, they have a really positive effect?” I want to run through some of our thinking and some practical ways that, whether you’re a researcher in a university or you’re working in research administration or other aspects of the research process, you can think about this whole area and apply processes and thinking that will help address some of the problems that Zoë was introducing. One of the key elements is trying to get an understanding of this complex landscape, where some people talk about research integrity, others about reproducibility, others about research culture, and others about research waste: are these all the same thing, and how do they relate to each other? Getting good definitions is a good place to start. When I read the paper I referred to about research approvals and all the people you had to interact with to get a research project off the ground, one of the things that struck me is that there was quite a lot of conflation between different processes that were the responsibility of different parts of the system. David Carpenter, whom some of you may have come across through ARMA and UKRIO, and I wrote a response to the paper, saying it’s important to have a clear understanding of who’s responsible for what within the research landscape.
The model that we’ve come up with over the last couple of years is that when considering what good research culture looks like, if we want to address some of the problems and issues that Zoë introduced, it’s helpful to consider three specific perspectives or approaches. I refer to them as governance, integrity, and ethics. When we talk about research integrity, Zoë referred to the virtues and the ways that researchers act, and I very much agree with that. If you look at the virtues of a researcher, the personal characteristics, one of the early studies was by the Nuffield Council. They spoke to lots of researchers and people within research environments and asked: what are the key characteristics of a good researcher? They came up with a list of about ten. I know there have been a number of papers and reports recently that looked at the key virtues of researchers, but take these four: good researchers are rigorous, original, honest, and open. These are character traits that we’d like to build within our researchers and research community. I imagine everyone would agree with this, but you can go further than just saying these are good things by considering the perspective of Aristotelian virtue theory, a philosophical approach to ethics, and trying to understand how people behave. The idea of a virtue is that it sits on the golden mean between two vices: the vice of deficiency, where you don’t have enough of that virtue, and the vice of excess, where there might be too much. By doing this analysis of the virtues, you can start to see the behaviours on both sides of the spectrum that you’d like to avoid within research. For all the virtues, including the others that Zoë mentioned, you can sit down and try to understand what it looks like when people don’t have enough of a virtue and what kinds of behaviours that leads to, and, conversely, where people go too far.
And then you can think about how we can go about promoting these virtues. Something that I’ve learned through leading research integrity courses and training sessions at numerous institutions and organisations is that it’s very difficult for someone outside a research field or community to lecture researchers on these areas. These are issues that very much need to come from the academic communities themselves, and this, I think, is one of the key roles of mentoring. People learn how to behave by copying others. If you’re a parent, your children learn to behave by copying you, and we see the same within research communities: as people work with more senior researchers, within research environments, they adopt the behaviours of the people they are working with. We’re talking about research integrity in this context. Research virtues are something we need to encourage our research communities to be thinking about, and talking about, with their junior researchers and with the people they are mentoring. This places the responsibility for cultivating these behaviours squarely within the academic communities. It is very different to another approach, which we’ve called governance. Alongside people knowing how to behave and behaving in virtuous ways, we work in society, and there are various policies, processes, and laws we have to obey in any activity, but especially in research. If you’re doing medical research, and especially clinical research, there are the clinical trials regulations, the Mental Capacity Act, the Human Tissue Act – laws we have to follow. There are lots of other policies too: your institution will have policies, as will professional bodies such as the British Psychological Society. Indeed, UKRIO produces some very useful checklists of the responsibilities and duties that researchers have.
This is a different approach to the virtue approach within integrity. When it comes to governance, these things can be quite technical – the General Data Protection Regulation, for instance: there are various rules and things you have to do. Because of the technical nature of these governance issues, rather than the academic communities taking the lead on promoting them, as they should with virtues and integrity, this is very much the role of research support. It’s the reason why universities and other organisations have Research and Innovation Services – or whatever your research office is called – the idea being to have experts who can sit alongside researchers and help them navigate the policies and processes that need to be followed when conducting research.
The third part of the circle is what I’m calling ethics. The role of ethics committees is extremely important, and sometimes all of these issues are referred to as ‘ethics’. That’s very much not the case. Governance, as I said, is a very technical area. Virtues and integrity need to come from the research communities. But there is an important role for independent committees looking at specific projects and considering our participants, be they animals or humans: are they protected? Second, are the researchers supported to do the best research in this context? On the many ethics committees I sit on – when you’ve sat on a lot of committees, you get an idea of the different approaches people can take to recruiting participants, gaining consent, and preparing participant information – you can help researchers build up those projects. There are various analyses of exactly how ethics committees should look at projects, and one helpful approach is the idea of principlism, borrowed from biomedical ethics, where an ethics committee looks at autonomy (freedom of choice, freedom to participate in experiments), beneficence (the benefits), non-maleficence (avoiding harm), and broader justice issues. I think there’s very much a role for these independent committees to look at research protocols and provide advice directly. If we want to build good research cultures, having these three different perspectives, and understanding that these communities overlap, helps. I’m an academic who also works in our Research and Innovation Services and chairs ethics committees, so I do a bit of all three: these are three very good and important perspectives. If we want to address problems around questionable research practices – some of the issues Zoë was talking about – these three approaches, these three angles, can be really helpful.
You can list all the questionable research practices, going from the honest errors that people make in observation and analysis, and inadequate record keeping, right the way through to the other end of the spectrum: illegal research, misuse of funds, and so on. If you apply this analysis, you can see that for a lot of the things at the fraud, criminality, and misconduct end, if you have really good policies and, in some cases, laws, you can prevent an awful lot of them happening by taking a governance-heavy approach.
Things in the middle – sloppiness, incompetence: if you have ethics committees that are actually looking at the projects researchers are proposing, they can perhaps highlight HARKing, Hypothesising After the Results are Known, where people go on a fishing expedition, gather as much data as they can, analyse it in as many different ways as they can think of until they find something they think is relevant, and then publish that. That’s a fairly well-known questionable research practice, but it’s something ethics committees can be very sensitive to and can help in spotting – what happens in the middle of the spectrum. Obviously integrity, how researchers behave, is important right across the spectrum. If you’re behaving with honesty, you’re less likely to do the things at the fraud end of the spectrum, but likewise, you’re going to be more honest in realising when you’ve made errors and in your approach to your experiments. It can be helpful to think about questionable research practices from this perspective: you can see how you might address them and how different members of the research community and research environment can contribute to stopping them happening.
With this model, from the perspective of someone who both is a researcher but who also works at an institution trying to promote positive research culture, I find it really helpful to break down my understanding of how we address these issues into: what can be addressed through better governance processes; what can be addressed through ethics committees and their role; and then what are the issues that the academic communities themselves need to tackle to promote a culture of integrity and good practice in our research?
James: Simon, thank you very much. That was fascinating. It’s interesting to hear your perspective on research integrity as a system of component parts – a theoretical but also a very practical way of looking at how the various actors within the system can do their bit to help progress things.
How could study approval be better embedded into the funding review process? Developing a study and securing all approvals is very hard to support prior to submission of a grant application, but arguably much funding is being wasted.
Simon: From a researcher’s perspective, the amount of administration around research is extremely frustrating, but you only have to be involved in one or two misconduct investigations to realise rapidly why it’s important to have these policies and processes in place. This very much falls within governance. There is always going to be bureaucracy, always going to be burning hoops that people have to jump through, but it’s important, when we’re looking at processes, to know and understand very clearly why each part of the process exists. Often where things start to go wrong is that, rather than one person or group of people having good governance oversight of how research is organised, it tends to be handled by different departments that don’t talk to each other. Within a university, you might have a human resources department that deals with contracts, a finance department that deals with funding, and then Research and Innovation Services or a governance service, and they don’t always talk and interact with each other particularly well. One of the nice things about this model is that you can define what the governance elements of the research process are and do a network analysis – a way of understanding how the different parts interact with each other and how you can create efficiencies. It’s also important to distinguish governance from the other players in the field. Ethics is a good example: people think ethics committees are responsible for all sorts of things they’re not, data protection being a good example. Data protection is firmly a governance responsibility because it’s the legal responsibility of the organisation. People think it’s the role of ethics committees, and there can be all sorts of confusion when the governance people say, “that’s the ethics committee’s responsibility”, the ethics committee says, “no, that’s not our responsibility”, and the researcher gets stuck in the middle.
The whole thing gets confusing. Trying to map out these interactions and work out who’s responsible for what is helpful. If anyone’s involved in research administration, it’s important to understand these different perspectives.
James: Many issues regarding research practice seem to be focused on the STEM disciplines in particular; how relevant are these issues to other academic disciplines, such as the arts and humanities, and what can we do to make sure these processes are perceived to support researchers in those disciplines?
Simon: Absolutely – a good example there is qualitative research. I know a lot of qualitative researchers feel hard done by when they go through ethics processes, because they say these are all built around a quantitative model. When you approach something using a grounded theory approach or a phenomenological analysis, it doesn’t fit the system. Over the last 10 years or so, I’ve seen research processes significantly improve when it comes to dealing with qualitative research and research that is not in STEM areas. Even though I come from a STEM area, I’m in the School of Healthcare Professions, and pretty much every research project I’ve run for the last five or ten years has had both a quantitative element and a qualitative element, be it interviews or surveys with people. Increasingly, there is a blurring of the boundaries between traditional quantitative research and qualitative research, and as people work together in teams more frequently, there’s a better understanding of the value that different research perspectives bring. If you look at some of the healthcare research we’ve been involved with, we’ve had people from our Arts and Humanities faculty come in and help us with the way information is displayed. We’ve had people from our Technology faculty contribute to modelling around COVID. There’s an increasing understanding that research is very interdisciplinary, and although a lot of these systems and processes originally came out of STEM subjects, people are doing a much better job of understanding how they apply to other areas. In my university, we use exactly the same ethics review process across all five of our faculties, and we specifically designed it to be as applicable to people doing research in the arts and humanities as it is in more traditional STEM subjects.
There’s still a way to go and one does have to appreciate the history as to where these processes came from, but people are definitely getting more sensitive to these other research areas.
James: Someone says they totally agree with you about the various teams and departments you need to go to for approvals and sometimes the lack of cohesion between them.
Regarding research ethics, how quickly is the ethics of research involving animals evolving as part of current ethical reflections in society about how we treat animals?
Simon: In some ways, the ethics around animal research is much better developed than the ethics around human research. There are Animal Welfare and Ethical Review Bodies, or AWERBs – committees that all universities or other organisations doing animal research need to have, which include independent vets and other experts. In that area, there’s very strong governance and very strong ethics, and most of the researchers I know who are involved in animal research are very sensitive to these areas; they have thought about them, in some ways far more than people involved in human participant research. Those of us who work with humans can learn quite a lot from animal researchers, because if you look at the history around animal research – the problems, the public conversations and dialogues – they’ve got their governance, integrity, and ethics aspects sorted out in quite impressive ways. They’re quite a good model to follow when we’re thinking about these things.
James: How do institutions embed or at least foster a culture of research when there are competing priorities such as teaching and a continued lack of resources?
Simon: That’s the million-dollar question, and not just for academics but for all aspects of society: when there are limited resources, how do you prioritise? One of the important things, though, is not to spread oneself too thinly. Sometimes institutions are guilty of trying to do too much research rather than doing a smaller amount really well and properly, and the areas that often get compromised sit in governance – the resources to have good governance. If you’re going to be doing human participant research, if you’re going to be working with the NHS, there are certain processes that you need to follow, and researchers need governance support to go through those processes. If an institution wants to do research in those areas, it needs to also fund the governance support for people to do research in those technical areas. Likewise, training and mentoring: you need to invest in those areas, and if there are limited resources, it’s important to make sure that you do a very good job of what you are doing, rather than trying to spread resources too thinly.
James: If not 100% of research projects end up producing outputs then this could lead to risk aversion and discourage academics from looking at more speculative or original topics. What are your thoughts on that?
Simon: That ‘85% research waste’ figure does need caveating. An awful lot of research, although it doesn’t lead to a publication, does lead to experience among researchers. Often you have to do ten experiments, nine of which don’t work, before you get the one experiment that does. The point is that if we want research to be useful, it needs to have an output, and often the incentives within the system push us not towards publishing good research results but towards getting the next research grant. It’s as if we publish to get grants, rather than getting grants to publish papers; you can get things the wrong way around. We will never get to 100% of research projects publishing or coming up with something useful. Research is a difficult thing to do right – there are plenty of reasons why you never get to the point of publishing the research – but having said that, I think we have to be self-reflective when we are doing research and ask, “are we just doing this to tick some institutional box or to get the next research grant, or are we actually trying to do something that people will find helpful?”. If we’re involving human participants, it’s a waste of our participants’ time if the research we’re doing with them does not lead to an output. We have even more of an ethical obligation to ensure that, when we are setting up and running research projects, we are focusing on what those outputs are going to be. We may not get to 100%, but I’m sure we can do better than 85% wasted.
James: What do you feel the challenges are for smaller institutions operating in this space, who don’t have the resources of the big ones to address these issues? Particularly managing conflicts of interest might be a challenge for smaller institutions.
Simon: I find sitting on an ethics committee within my university more challenging than being on either the NHS or the Ministry of Defence ethics committees, simply because I know all the researchers who come to my institution’s ethics committee, whereas you have distance when you’re on national committees. It comes down to working out what your strengths are and then really focusing on doing a good job in the areas where you have strength. If an institution is so small that it does not have the resources to put strong governance processes in place, that suggests there are some areas of research the institution just shouldn’t touch: if you can’t provide the technical support to meet the various laws, policies, and processes that are necessary for some areas of research – animal research being a good example – then you shouldn’t be doing research in those areas. Likewise, if you can’t get ethics committees together to review projects without a major bottleneck in the system, you have to look at prioritising which projects are done, because all three of these areas are important, and if you’re going to have a good research culture, you need to address all of them. You can’t neglect one of them and still try to do research, because if you start doing that, that’s where questionable research practice comes in, that’s where you start having breakdowns of broader integrity issues, with people put under pressure, and that’s when you start getting misbehaviour, plagiarism, and fabrication as well. Playing to strengths and making sure that all these areas are addressed appropriately is essential.
James: What do you recommend to deal with unexpected ethical dilemmas that crop up when one’s doing field work, in terms of the need to have speedy decisions made but equally still respect ethics?
Simon: If you’re looking at the design of ethics review processes, it’s true that generally an ethics committee will review your project before it starts, but I think it’s important to allow some more informal interaction between the ethics committee and researchers as the research goes along, so they can get advice. There are amendment processes, and sometimes they can take a very long time, but the European Union offers quite a good model for this: for a lot of projects, they insist that an ethics advisor is appointed to the project – someone whom the researchers can come back to and interact with as things develop – and I’m the ethics advisor on a couple of projects. Having more interaction between ethics committees and researchers while research is going on can be very helpful. It comes back to resources and to time, but a lot of people who sit on ethics committees do so because they’re interested in research and in ethics, and quite often they’d be very happy to provide advice as projects go along if they’re asked. If you’re at an institution and worried about this, have a chat with the ethics committee chair about how they may be able to support you beyond that initial review, because there are plenty of people providing that sort of support.
James: It’s also worth remembering that if as a researcher you encounter an ethical issue or a dilemma, this won’t be the first time this has happened. Your ethics committee, other colleagues, mentors, and managers will have encountered this before. So whatever problem you come across, normally people have a useful perspective they can impart.
Matt Hodgkinson
James: I will now hand over to our final speaker of this morning’s webinar, Matt Hodgkinson. Matt is a research integrity manager at UKRIO. He joined us earlier this year, having had a career in publication ethics, working in the publishing field. Matt will be talking about publication ethics and authorship.
Matt: Zoë and Simon’s talks were so brilliant that now I fear mine will pale by comparison. I’m doing less of a story of principles and virtues and more of a whistlestop tour through some of the aspects of publication ethics.
Publication ethics is part of research integrity – it interfaces with research integrity, but it’s not quite the same thing. I’ve come from a background of working for open access journal publishers, so my view of publication ethics and research integrity has always been aligned, but it’s really about best practice in communicating research – and that’s from the researchers, the reviewers, the editors of the journal, everybody who interacts with an article – making sure that they’re doing the right things and not doing any of the wrong things. This is going to focus on journal articles; there are similar issues that affect conference proceedings and book publishing, but there will be differences, so if you have specific questions, then please raise those. Many of the same principles apply to publication ethics as apply to research integrity. The overarching ones are: honesty, being true – that ‘true heart’ principle from Zoë, I think, was fantastic; transparency, showing as much as you can about what you’re doing and also the process of what you’re doing; and care – care about what you’re doing, making sure you’re doing things well, and care about and respect everybody else involved in the process.
As Zoë mentioned, the Committee on Publication Ethics, COPE, is a major player in this area. I’m also on COPE Council, that’s one of my conflicts of interest in this area. There are tonnes of different areas of publication ethics and I am going to try to go through: authorship, dual submission, COIs, reporting guidelines, the ideas of ethics and consent, peer review, plagiarism, citations, copyright, and post-publication issues. So, let’s go.
We have a section on our UKRIO Code of Practice that describes issues with publication and authorship.
So, authorship: discuss it early and often. This is something that gets really tangled up. Even though it’s not directly associated with the research content itself, it’s one of the most vexed areas of publication ethics and research integrity as a whole. Authorship cases can drag on for years and even involve court cases. It’s the kind of thing that can just break down relationships. What authorship is about is credit. This is why it’s so important: it’s the academic currency, but it’s also responsibility – being accountable for the research. If you’re an author, you have to be able to say, “I can stand by this work”. There are various sets of criteria: there’s the International Committee of Medical Journal Editors (ICMJE), otherwise known as the Vancouver group; there was a group that published criteria in PNAS; there’s the Council of Science Editors (CSE). COPE noted in a discussion document on authorship that the two minimum requirements are: 1) a substantial contribution to the work; and 2) being accountable for it and giving approval for publication. Those are the two overarching things that apply. The ICMJE require you to have taken part in the conduct of the work, the writing and editing, and the approval and accountability, whereas Nutt et al. and the CSE are more relaxed and say you can just make the substantial contribution to the research and/or the writing, as well as the accountability and approval. The criteria will vary, but make sure at the start that your group understands which criteria you will follow in your institution and in the journals you’re going to be targeting, to avoid confusion.
There’s a wider principle than authorship, which is contributorship – quite a helpful idea. It’s a bit like how film credits work: you can see the role of everybody involved in the research and the reporting of the research broken down. Something that is really useful is the CRediT taxonomy, which is now under the umbrella of NISO, the standards organisation in the US. It’s focused on STEM areas but can be adapted more widely, and it breaks down the different areas of contribution – say, data collection. You can assign those roles in a structured way, and this helps you work out what everybody’s contributions were and whether they amount to authorship or not.
As well as authorship, the thing a lot of people forget about is acknowledgements; these are a step down from authorship. Anybody who was involved in the study but who wasn’t an author should be in the acknowledgements, and they can be credited in a structured way using the CRediT taxonomy. This is important: people should agree to be authors, and they should also agree to be acknowledged. A useful tool to help structure authorship is ORCID, the Open Researcher and Contributor ID. This is a way for everybody to have a unique identifier, so that every time they log into a publisher’s submission platform they can say, “yes, this is me, and here are my previous publications” – a structured way to tell the difference between the different John Smiths and the different Jing Wangs. It’s unfortunately not validated by most institutions, but it is still really useful.
There are lots of problems that come with authorship. There’s the idea of gift authors and ghost authors: these are flip sides of each other. Gift authors are people who don’t deserve authorship but are nevertheless given it, and ghost authors are people who do deserve authorship but don’t appear, even in the acknowledgements. The classic case is a medical writer from a pharmaceutical company: they may do lots of the writing, and maybe adjust the statistics, but not get acknowledged. This is the kind of thing the pharmaceutical companies have improved a lot in the past 20 years; there were lots of scandals in the 1990s. Essentially, the principle is that everybody who qualifies for authorship should be on the author list, nobody who doesn’t qualify for authorship should be on the author list, and everybody who contributed should either be an author or be acknowledged, with their permission.
There is also the problem of the sale of authorship. This first came to attention about a decade ago. Sometimes entire articles are sold, but often it’s particular author positions on an article that are being sold, and this will often happen at the point of revision or at the point of acceptance. There are brokers who do this, and there are websites – marketplaces where authorship can be bought and sold. I think it goes without saying that it’s unethical.
There are more nuanced issues with equal contributions and corresponding authors. Authorship is the currency, but the most valuable positions are first author, last author, and corresponding author, so there are struggles over those positions – although author order is also interpreted in different ways. This is where the idea of ‘equal contributions’ comes in: it’s an indicator that people did the same amount of work, but sometimes you get five authors all saying they contributed equally, and this starts to get a bit silly.
Sometimes authors can be uncontactable, either when you’re writing up the work or after you’ve submitted; you can even have cases where authors are deceased. These are complicated and sensitive aspects of authorship.
All of this boils down to authorship disputes. As I said, these can be nasty, drawn out, and involve legal action. Prevention is much better than cure, because it’s very difficult for journals to resolve authorship disputes. Journals can try to facilitate the authors talking to each other, but if the authors won’t agree then journals don’t have many options: they don’t know who really contributed, so they have to pass it to an institutional investigation. This is one of the reasons it can be so drawn out. Often, compromising on acknowledgements can be a way to reach a solution; sometimes, journals have to resort to expressions of concern. An authorship dispute alone shouldn’t be the kind of thing you retract an article for, but there can be other issues that arise alongside it, such as disputes about the content or permission issues.
At UKRIO our guidance is a bit old and needs updating – that’s something I am working on. It was written by Liz Wager, a former chair of COPE, and it is really useful. There’s also a page of resources on our website that I put together recently, which gives a really deep dive into some of the literature, the key issues, and recent discussion, broken down into different topic areas. We also ran a Midsummer Challenge in June; it’s a way you can look through some of the different questions and links and give yourself a refresher on authorship issues.
Dual submission and redundant publication: the principles are really easy, but then there are complications. Don’t submit the same work to more than one journal at the same time. It can seem like that would speed up the process, but it wastes the effort of the editors and the reviewers, it can lead to confusion, and it means you don’t respect the review process: you’re essentially treating it as a game in which you just want to get published as easily as you can, picking the journal that gives you the easy ride, and that’s not an honest approach. You need to respect your peers and take on board what they say, rather than ignoring them and going with the people who didn’t spot the problems the other reviewers raised.
You shouldn’t publish the same work more than once – but there are exceptions. Preprints are okay with most journals: you can post an early version of an article on a preprint server or in an institutional repository; it’s clear that it hasn’t been peer reviewed and isn’t the final version, and you’re still allowed to formally publish it. The same with theses: you can publish your thesis and then publish articles that stem from it without worrying about it being dual publication.
Conference proceedings are trickier: the general rule of thumb is at least a 30% difference, but there are copyright problems wrapped up with this. You need to check with the different publishers to make sure that you’re not breaching copyright, and be transparent about it. That’s one of the keys: if everybody knows what’s going on and everybody’s agreed to it, then you won’t find yourself in problems. There are also community guidelines – for example, prescribing guidelines in medicine or reporting guidelines in different fields – and those can be published in multiple journals with the agreement of those publishers and editors. The ICMJE give great guidance on this.
Conflicts of interest. The rule of thumb on this is quite simple: a conflict of interest is anything that might interfere with the objective design, conduct, analysis, or reporting of research, or with its peer review or editorial handling – or anything that could reasonably be perceived to interfere with it. It’s not always about whether it actually does affect the process; it’s about whether it could, whether you’ve declared it fully, and whether you’ve managed that conflict of interest. Often people focus on the financial side: money, what grants and sponsorship you get, and other payments as well. It doesn’t have to be a direct payment for that piece of work to be a COI: if you’re doing work on a drug that is manufactured and sold by a pharmaceutical company and you’ve received funding for another project from the same company, that is still a COI and still needs to be declared. But there’s also a whole slew of non-financial COIs that people should be declaring: affiliations, such as their employer and boards they sit on, even non-profits; co-authorship and collaboration with other people; family and friends. Often it’s seen that your immediate family’s COIs are essentially your COIs. Personal beliefs can affect research and how it’s reported, and if something aligns very strongly or clashes very strongly with the area of research, that should also be declared. The principle is: if in doubt, declare it. If it’s something that would be embarrassing if it came out afterwards, then make sure you’re upfront about it. Declaring COIs is not a bad thing, it’s a good thing.
Another aspect of reporting that is sometimes overlooked is reporting guidelines. This goes along with what Simon was saying about structured governance: this is structured reporting, to make sure that you’ve included every single item that’s necessary. It’s most advanced in clinical research. The EQUATOR Network is fantastic for this: there’s CONSORT for randomised controlled trials, there’s PRISMA for systematic reviews, and there are many more that have gained acceptance – hundreds of different ones. There’s the idea of trial registration as well. Clinical trials have long been required to be pre-registered: you post on a registry website – the WHO’s International Clinical Trials Registry Platform is the umbrella for all of these different trial registries – what you’re going to do, how many participants, when you’re going to do it, all those different things, before you start a trial, so that you can’t then go back and change everything. Everyone can see what you’re planning to do, and you’ve got to justify any changes. There’s also the PROSPERO database for registering systematic reviews.
Outside clinical research, there’s the FAIRsharing website, which lists a whole bunch of other standards, databases, and reporting guidelines. They are not as well developed as in clinical research, but there are many in biology, especially in genomics. There’s also the idea of Registered Reports. Being pre-registered is very similar to trial registration, but it can work in different ways: you submit your article to a journal that accepts Registered Reports before you’ve done the research, before you’ve collected the data. They look at why you’re doing it and your methods, and then they say, “we think this is good; come back to us when you’ve collected the data and analysed it, and then we will publish it no matter what the outcome of the work is”. This is a great way to guard against the bias against so-called negative results.
Ethics and consent. I won’t go into this too much because Simon covered it, but if you’re reporting on human research and it’s interventional, you are going to have to have considered your ethics very carefully and have formal ethical approval. If it’s observational, it can vary – it’s much more of a grey area with social media, for example, but there are guidelines out there about this. For case reports, you don’t necessarily need ethics approval, or even consent to participate in research, because it’s not really research, but you may need consent to publish: if somebody’s case details or their picture are included – even if you’ve blanked out the eyes, which isn’t a recommended approach anymore – then you should get that person’s consent to publish. So there are those different ideas: ethics approval; written informed consent to participate in the work; and consent to publish anything that may reveal somebody’s personal information. In animal research, obviously consent isn’t an issue, but you still need ethics committee approval in place, depending on the animals that you’re using.
Make sure you’ve got your ethics documentation, because journals may ask for it, and make sure you’ve got good ethics and consent statements. If you’re working on anything related to humans or animals – even environmental research, for example – you should include an ethics statement, even if you didn’t need or obtain ethics approval.
Peer review. I’ve found people will think that certain forms of peer review aren’t ethical. All of these different forms of peer review are ethical; it’s just that some people aren’t as familiar with them, and people will have a strong attachment to the one they’re familiar with from their field. You can do single- or double-anonymised review: single-anonymised is where the reviewers can see who the authors are, but not the other way around; double-anonymised means neither the authors nor the reviewers know who the others are. That’s difficult to achieve – especially in the era of preprinting, double-anonymisation can be breached quite often. There is increasing evidence that it may reduce biases in review processes, although the evidence is mixed.
Open peer review has different aspects: it can mean that the reviewers are named to the authors; it can mean that they’re also named after publication to the public; it can also mean that the reviews are published after publication alongside the article, with or without the reviewer names.
There are all sorts of different ways to run peer review; as long as you’re clearly giving people an idea of the format when they agree to review, it’s still going to be ethical. There are lots of principles beginning with ‘C’. One of the real principles that underlies peer review is confidentiality: you should keep what you’ve done private unless you’ve got approval to share it. There’s also a really important principle, which is competence. If you’re doing peer review, you have to make sure that you know what you’re doing in the area. You don’t have to be expert in all aspects of the article, but you should decline to review if you’re not adequately qualified; otherwise you’re wasting the time of the editors and the authors and possibly giving them false trust in the review process. You should be constructive as well. Nobody likes vicious takedowns – I think they are getting less common, but they still exist. Open peer review is one way to improve the constructiveness of reviews: there have been randomised trials showing that open reviews are more constructive, more polite, and longer. There’s this idea of ‘Reviewer 2’, the one who gives you nasty comments, and Reviewer 2 Must Be Stopped! is a Facebook group, which I’m a member of, which is both irreverent and constructive. And you should give credit in peer review. If somebody in your lab has helped you do a peer review, then tell the editor. If a PI is passing a review on to a postdoc, they should say to the editor, “my postdoc has done this review and they should be the one with their name on it, even in your internal systems, not me, because I wasn’t the one doing it”. Ghost writing is a problem within peer review, as well as in article writing.
Plagiarism. This one’s quite simple to understand as well: it’s taking the ideas, work, or wording of someone else without appropriate attribution – something from somebody else, without telling people that you’ve done it. There’s no such thing as an acceptable amount of plagiarism. People often ask, “how much plagiarism am I allowed in my article?” You just don’t plagiarise: that’s the principle. People get confused because they’re thinking about tools like iThenticate and Turnitin – Turnitin being the academic teaching equivalent of iThenticate, which research journals use – and about percentage similarity thresholds like 10–15%, but text similarity is not the same thing as plagiarism. Plagiarism is taking other people’s ideas, work, or words without appropriate attribution, and that’s something you shouldn’t be doing at all.
You can avoid it by putting any wording you cut and paste from another source into quote marks in your notes, so you don’t accidentally mix it up with your own wording. Always cite and quote when you use someone else’s words, and if you’re using somebody else’s words, use their exact words, not a fudge of what they said. Take their exact words, verbatim, put them in quote marks, and include the citation of where you got them from, and then you’re never going to be plagiarising.
Don’t closely paraphrase: this is where you take somebody else’s words and jumble them around a bit, change the word order, add some synonyms. Never, never use synonym generators – they make a word salad of ‘tortured phrases’. In methods sections, clear attribution is often enough. If you say, “we followed the principles of Smith and Jones”, put the citation, and say, “the wording follows their approach”, that’s often seen as acceptable; publishers are not going to chase you down for reproducing fairly standard phrasing in a methods section.
A closely related idea to plagiarism is text recycling, reusing your own words. It’s not plagiarism, though it’s often called ‘self-plagiarism’, which is a misnomer. It can be fine as long as you’re not reproducing too much, you’ve clearly indicated it, you’ve cited and attributed it, and it’s not the results or the conclusions – because that’s where you run into problems with redundant publication and salami slicing, i.e., slicing the research sausage too finely – and there’s the problem of copyright as well. It might be your own work, but you might have signed over the copyright to a publisher and not have permission to reproduce a lot of the wording. As I said, the methods section is normally fine, but if you’re going to copy the whole introduction from work you don’t hold the copyright for, that’s going to be a problem.
With citation ethics, only cite work you’ve actually read. Make sure you do a thorough literature review, so you’re citing and discussing the recent relevant work. There’s no excuse for not being aware of similar work to yours, especially work that contradicts it: you’ve got to be aware of it, discuss it, and reconcile your work with it. Equally, don’t excessively self-cite: you’re allowed to – and you should – refer to your own previous work, but don’t pile in lots of references. Take this example: “There has been much work conducted in this field [1-50]”. If those are all the author’s own work, then you’re not doing something that’s going to help readers, and it looks like you’re just trying to game your own metrics. Don’t cite simply to curry favour: if you’re submitting to a journal, don’t throw in a whole bunch of throwaway citations to the journal or the editor. You can cite them if they’re relevant, but don’t do it for the sake of it. And don’t let reviewers or editors coerce you into citing their work or the work of the journal – be wary of a reviewer who says, “everything’s fine, but please include this paragraph that happens to cite 15 articles by the same authors”. That’s called coercive citation.
Copyright and licencing are a bit to the side of ethics, but get permission to use text or figures, and think about copyright transfer. Lots of publishers allow you to self-archive your work, which means you can post the final version, before it goes into production, on your own website or perhaps in an institutional repository. The Creative Commons licences offer a common way to do open access. There are issues with the non-commercial licences, which can prevent others using your work for commercial purposes – and that can often include education; in Germany, for example, that’s the interpretation – and the non-derivatives licences can prevent people translating your work. Although some of these licences can seem like a good idea, they can introduce barriers to other people reusing your work. If in doubt, speak to your librarian: they often know loads about this, as well as about open science in general.
Post-publication, letters to the editor and commentaries are common, so make sure that you’re responsive if those are submitted. Now we’re in the world of social media: there’s a site called PubPeer where people can post comments about papers, anonymously or named. It was designed as essentially a journal club, but now it’s very much used by sleuths to point out things like image duplication. There are also blogs. Be responsive to these: don’t just brush a criticism off because it’s not in the peer-reviewed literature; take it seriously. You might have to correct your work, in which case be open about it and go straight to the publisher. Expressions of concern are where a journal posts a note about issues; that might be something interim or it might be the final conclusion. In sad cases, you get retraction. If you find a fatal flaw in your work, then doing the right thing will be rewarded and celebrated – it won’t harm your career. There’s been research done on this: for people who retract for reasons of honest error, the citations for their other work don’t go down [nearly so much] afterwards. Make sure that you do the right thing, and as soon as you spot problems, be really upfront and transparent about it.
There’s more: paper mills; predatory journals; data and materials sharing; image and data manipulation; ethical editing; open access; open science; institutional investigations; editing services. I’m not going to go through all of these, but this gives you an idea that, even though this was such a dash through the issues of publication ethics, there’s so much more out there that I just don’t have time to go into.
James: A very comprehensive and detailed introduction to the fundamentals of publication ethics, authorship and all the issues that come with it and the mayhem that can result when people don’t understand the rules and of course when certain actors try to deliberately flout them.
UKRIO, a while ago, got onto some mailing list. It was ironic that we were getting emails from people trying to recruit us into companies that write fake papers, or asking us to buy fake papers. We put them straight in the spam folder and deleted them, but I always wanted to write back and say, are you sure you’re asking the right people about this?
Text recycling. Does that mean from your own other published work or does it have another meaning?
Matt: Yes, it’s generally going to be from your own published work. It might also be from a thesis, which is often much less problematic because you’ll usually retain the copyright for that. If you retain the copyright to work, then you’re allowed to reuse it; the best practice is to attribute where it came from. Copyright and plagiarism are two things in the same sphere, but they’re not totally aligned. Text recycling is taking what you’ve published before and reusing it.
James: Obviously only cite work that you’ve read, but how detailed does reading need to be? Is it okay to read just the abstract, introduction, and conclusions? Do you need to understand all the details – for example, if it’s peripheral to your own area of research but informs important background, or if you’re researching the social impacts of AI but don’t understand all the maths in an AI research paper?
Matt: Oh sure, there are going to be gradations here. You’re not necessarily going to be able to critically appraise all the work in the area. One way is to write it so as to show that you are resting on what others have done and that you aren’t the expert in it. Don’t write with undue confidence; make it clear that you are taking what others have said and done. Clearly in some areas you are going to be able to do critical appraisal, and if you’re able to, then please go ahead, because as we’ve seen, published research is not all correct. However, I don’t think you’re required to learn about machine learning in order to discuss AI.
James: Could you clarify the use of corresponding author attribution, please? Joint corresponding authorship seems to be suggested between authors as an indication of contribution – i.e., instead of recording co-senior or co-first authors – and there seems to be some confusion about the role of corresponding author attribution.
Matt: The corresponding author is really just the person whom readers should contact if they’ve got questions about an article. It’s seen as this really important role, but it’s almost an admin role. In reality, you’re the poor sap people are going to be writing to – “hey, can I have a copy of your paper?” or “can I get your data set?”, that kind of thing – but it’s seen as marking the most important person, alongside the last author, and it’s really not. You get these misconceptions about what the role is, and then you get three corresponding authors on an article, which is unnecessary.
James: It’s interesting how, in some fields, or in different teams within the same field, corresponding authorship is either seen as “hurray, I’m the person people come to with questions about our research!” or as “oh great, I’m the person people are going to come to with questions about the research” – it really seems to vary.
Matt: As well as the CRediT taxonomy, what you can do is write a narrative author contribution statement that describes what everybody did. Then there won’t be all of this jostling for positions, because anyone who wants to know who did the stats on a paper – because that’s the person they really want to talk to – can just go to them. It’s helpful for journal editors as well: if you want to know who did the statistics and the methods, because that’s the thing you’re looking for, you don’t have to hunt around for who has that expertise – it’s written there in black and white.
James: Also very handy if questions come up about the research later, because you know who did what from the outset.
Can you describe the difference between gold and green open access? Do all journals offer green, as many seem to hide the option?
Matt: These are terms that were coined by Stevan Harnad in the early 2000s. Gold refers to publications that are published under an open licence, like the Creative Commons licences or in the public domain, so you can access that work immediately and without any permission barriers; all you need is access to the internet and you can read it and reuse it for free. That’s what gold means. It sometimes gets confused because the idea of gold also has a connotation of money. People think it refers to article processing charges, but it doesn’t. Gold OA can include articles that have no funding or have consortium funding, all sorts of different funding models. It’s not about that, it’s about the licence that’s used.
Green is a way to get articles read for free when they are published in traditional journals behind a subscription barrier: authors can post the final author version, without the publisher formatting and maybe without the copy editing. Many publishers allow this; authors can post it on their own website, in an institutional repository, or in some cases in a subject repository such as PubMed Central. It won’t have the same open licencing or the same reuse permissions, but it means articles can be read for free – though sometimes there’s a delay, maybe six months, before it can be deposited. The US OSTP has recently said that everything that is federally funded has to be publicly available immediately, by the end of 2025. That’s going to be quite a game changer. Plan S, mainly in Europe, has really shifted things as well.
Do publishers hide this? Some of them do, because obviously if you’re able to read it online for free, then why would you subscribe? But lots of publishers now have transitional agreements and really are moving into the world of open access, so this is less of an issue now. Twenty years ago, publishers were hotly contesting open access, and now it’s something that’s been adopted. There’s a directory of publisher policies on green open access called Sherpa Romeo, based at Nottingham, that’s really useful.