Jana Christopher – Image Integrity Analyst

Expert Interview with Jana Christopher, M.A., Image Integrity Analyst.

Jana is a leading Image Data Integrity Analyst. In 2011, she set up the Image Integrity programme at EMBO Press, before starting her own business, Image-Integrity, in 2015. Jana joined FEBS Press as a permanent member of staff in 2017, serving as Image Data Integrity Analyst for its four journals. She also runs training courses and regularly presents to students and young scientists.

Jana gave a very insightful presentation on Image Integrity in Research Publications at UKRIO’s second webinar on publication ethics in June 2020. Just over a year on, we invited her to answer a series of questions to follow up on some of the issues she discussed in her presentation.


From your work as an image integrity analyst over the past 10 years, how has the field of research image integrity analysis changed?

There are two things worth mentioning –

Firstly, a rise in sophistication and skill in the manipulation of image data. When I first started scrutinising Western blot and micrograph images in 2011, we used to see quite crudely performed manipulations, and really obvious falsifications such as dropped-in elements, cuts and splices, and clumsy side-by-side duplications. With the exception of just a few journals (e.g., JCB and the EMBO publications), hardly anyone was checking for image problems back then, and I presume that nobody who did doctor their images really expected to be caught out.

I see far fewer of these ‘rough’ jobs, and instead more skilful tampering, using more sophisticated tools, and even including manipulation of the raw data underpinning the images shown in the figures.

These days, a growing number of journals perform regular checks, and we have a community of post-publication reviewers and whistle-blowers who will – often anonymously – flag image irregularities on PubPeer and on social media. The focus on images has intensified over the last few years, also among editors and reviewers.

Secondly, and prominently, the emergence of paper mills. Paper mills often use systematically fabricated data – some of it is lazily done and relatively easy to detect, but some of it is extremely difficult to catch. I always ask for the raw image data for manuscripts that raise questions regarding image integrity, or when we suspect a manuscript of falling into the paper mill category. I recently published a Scientists’ Forum article about raw data from paper mills, which is frequently fabricated.

Of course, image problems are not the only way to spot a paper mill manuscript, but a clear focus here has been very effective.

My own increasing experience and improved skills have also made my job more interesting over the years.


In your presentation for UKRIO’s webinar in June 2020 ‘Image Integrity in Research Publications’, you spoke about how questionable figure preparation has the potential to indicate research misconduct. From your experience, how often is that the case, rather than simple errors by the author?

This depends on the definition of ‘simple errors’ – I think it’s important to remember that haphazard figure assembly and errors such as mis-labelling and duplications of images, even if they are unintentional, are an indicator of potential negligence and sloppiness. Unreliable data presentation is not acceptable.

The ratio of unintentional errors versus intentional illegitimate image processing is difficult to know for sure. It varies between journals and depends on the research field. Western blots are easy to manipulate, whereas protein structures, for example, are not. Among the various journals where I have screened accepted manuscripts pre-publication, the percentage of manuscripts flagged up for image-related problems ranges from 20% to 35% (of manuscripts checked at acceptance), and the percentage of manuscripts where acceptance is ultimately rescinded ranges from 1% to an alarming 8%.


As a considerable proportion of submissions and published papers contain duplicated, manipulated, mismatched, or spliced images, in your opinion what would be the most suitable preventative process?

A responsible journal should check manuscript figures pre-publication. But the ultimate responsibility always lies with the authors of course. I think it is important to remember that manuscript figures are not illustrations, but data. So, they show the essence of the study. Wherever possible, authors should work closely with primary data when assembling manuscript figures. It might be just one person from a group of authors who takes care of this, but all authors should pay close attention. An additional pair of eyes critically checking images and figure legends and probing for exactly the things you list above could catch errors before submission.

It appears that some authors use ‘placeholders’ for some of their figure panels during manuscript assembly, and then sometimes forget to swap the placeholder for the correct image before submission. Going by how frequently I hear this explanation, it seems to be an error-prone procedure.

The rules of what is allowed in terms of processing images are of course also crucial.


Should peer reviewers be advised on how to do basic image screening to question image authenticity?

Absolutely, I think this could be very helpful. Peer review training for early career scientists should be more widely available, and basic image screening skills could certainly be part of this. Of course, reviewers cannot be expected to do image analysis, which is technical and beyond the scope of their job, but I think reviewers could pay more attention and spot obvious mistakes. Time and again, I detect obvious duplicates, which will have escaped both editors and reviewers, sometimes over two rounds of revision.

Given the enormous focus on images from post-publication reviewers (see PubPeer), I am somewhat surprised by how little attention is given to manuscript figures during pre-publication peer review.
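
As an illustration of the kind of basic screening described here, the following is a minimal Python sketch of near-duplicate detection using perceptual hashing. It assumes the third-party Pillow and imagehash packages are installed, and the directory and file names are hypothetical; note that a plain pHash comparison will not catch rotated, mirrored, or heavily cropped copies, which still require manual inspection or dedicated forensic tools.

    # Minimal sketch: flag near-duplicate figure panels via perceptual hashing.
    # Assumes the third-party Pillow and imagehash packages are installed
    # (pip install Pillow imagehash); directory and file names are hypothetical.
    from itertools import combinations
    from pathlib import Path

    import imagehash
    from PIL import Image

    def find_near_duplicates(panel_dir, max_distance=5):
        """Return pairs of panels whose 64-bit pHashes differ by few bits."""
        hashes = {p.name: imagehash.phash(Image.open(p))
                  for p in sorted(Path(panel_dir).glob("*.png"))}
        suspects = []
        for (a, ha), (b, hb) in combinations(hashes.items(), 2):
            distance = ha - hb  # Hamming distance between the two hashes
            if distance <= max_distance:
                suspects.append((a, b, distance))
        return suspects

    # Example (hypothetical folder of extracted figure panels):
    # print(find_near_duplicates("figure_panels/"))

Any flagged pair is only a prompt for closer manual review, not proof of duplication.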


Do you feel there is a lack of researcher knowledge or training in how to prepare scientific images for publication? How can a researcher help themselves to ensure they are not unwittingly over-manipulating their images?

A lot of it is common sense, really. Treat your data as data. Do not process your images in ways that obscure or falsify data, do not bias the data to fit a particular hypothesis, do not show results that were not factually observed.

Comprehensive advice on what is allowed in terms of processing images is widely available from various sources, including journal guidelines of course (see also Cromey DW. Avoiding twisted pixels: ethical guidelines for the appropriate use and manipulation of scientific digital images. Sci Eng Ethics. 2010;16(4):639-67. doi:10.1007/s11948-010-9201-y). Imaging and microscopy experts at research institutes are also often able to help with technical questions on image processing.

Image processing software offers powerful and tempting tools to beautify and falsify images, but scientists are advised to stay away from these when preparing manuscript figures. The use of eraser or clone tools, for example, should simply be avoided; it always raises a red flag when detected.
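
As a rough illustration of why cloned regions are detectable, below is a minimal copy-move detection sketch based on exact block matching, assuming only the numpy and Pillow packages; the file name is hypothetical. Real forensic tools use far more robust features (e.g., DCT coefficients or keypoint matching), so this sketch only catches verbatim, unmodified clones.

    # Minimal sketch: detect copy-move (clone-tool) edits by exact block matching.
    # Assumes numpy and Pillow; the file name is hypothetical. Only verbatim
    # clones are caught; real forensic tools use far more robust features.
    import numpy as np
    from PIL import Image

    def find_cloned_blocks(path, block=16, stride=8):
        """Return coordinate pairs of byte-identical, non-overlapping blocks."""
        img = np.asarray(Image.open(path).convert("L"))  # greyscale pixels
        seen, matches = {}, []
        for y in range(0, img.shape[0] - block + 1, stride):
            for x in range(0, img.shape[1] - block + 1, stride):
                patch = img[y:y + block, x:x + block]
                if patch.std() < 2:  # skip flat background to limit false positives
                    continue
                key = patch.tobytes()
                if key in seen:
                    py, px = seen[key]
                    # Require the two blocks not to overlap each other.
                    if abs(py - y) >= block or abs(px - x) >= block:
                        matches.append(((py, px), (y, x)))
                else:
                    seen[key] = (y, x)
        return matches

    # Example (hypothetical image file):
    # print(find_cloned_blocks("western_blot.png"))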


We have seen recent examples of researchers finding parts of their published work popping up in predatory or milled journals/papers. Is there anything that can be done to protect an author’s original images from being used in this way?

Very tricky. I do not see how this kind of data theft can be avoided entirely. Of course, it is always possible to simply screenshot an image and re-use it as something else.

If in any doubt, journals need to ensure the research presented is the authors’ own, for example by requesting the raw data underpinning the manuscript figures.


Public scrutiny of research has significantly increased during the COVID-19 pandemic, and researchers have spoken out against the peer review process, citing examples where the expected processes have not been met. Alongside this, the scientific community has been questioning the role of research culture in the rigour and reliability of research. With these drivers in play, have you noticed a shift in relation to image integrity in the publications that you are examining?

Do I find fewer image-related issues than 5 years ago? The answer is definitely no. Image problems are at least as frequent, and as already mentioned, the manipulations are often more sophisticated. But journals and publishers have adopted a different attitude to image screening and are clearly ramping up their efforts, which is encouraging.

Of course, the public have an interest in research being reliable, especially when it is publicly funded. Safeguarding research integrity is in everyone’s interest and all actors involved, i.e., institutes and research organisations, funders, and the scholarly publishing industry, can help foster best practice and ensure that the science they fund and publish adheres to existing standards and is conducted with integrity.

Attention to images is now more widely regarded to be an important and necessary part of this scrutiny.


We would like to thank Jana Christopher for participating in this fascinating expert interview.

*Questions were written by Dr Josephine Woodhams, Senior Project Officer, UKRIO.

Expert Interview Disclaimer: UKRIO invites guests to answer a series of questions to share their expert opinions with the research community. The views, opinions and positions expressed within these expert interviews are those of the author alone and do not represent those of UKRIO. The accuracy, completeness and validity of any statements made within this article are not guaranteed. We accept no liability for any errors, omissions or representations. The copyright of this content belongs to the author and any liability with regard to infringement of intellectual property rights remains with them.