The Corona-Eye: Exploring how COVID-19 affects deliberative and mediation measures in the REF2021 peer review of Impact

Read our newest preprint on how COVID-19 influences peer review evaluation processes, using REF2021 as an example. Gemma Derrick and Julie Bayley have written the following paper, found here on the Open Science Framework, about how peer review processes will change because of COVID-19.

This project explores how peer review committees might sensitively account for the disruption to ‘normal’ research trajectories due to COVID-19. It uses the REF2021 assessment of impact as an example, but it is applicable to all peer review sites of assessment that will need to mediate the effects of the COVID-19 crisis and navigate new working arrangements as a consequence. Full overview details of the paper can be found here.


The cost of lost academic dreams: A personal learning journey, peer review and research waste

Dr. Gemma Derrick reflects on her own personal experience of grant rejection.

When my last grant application was rejected, I cried. 

Academics miss out on funding more often than they succeed so really, what was the point of crying?  Why did I let it affect me so much? Shouldn’t I be used to this by now?

It wasn’t just that I had missed out on a grant – and that the competitive side of me hates missing out – but that the rejection of this opportunity also meant that the positions of the research fellows that I employed, and valued, were to suffer further precarity.  I felt for the people side of the loss.

I also felt the loss of an academic dream, the loss of the idea: a grant rejection meant that to keep this dream alive the same idea would again need to be repackaged, readjusted and shoe-horned into a future thematic call.

The idea had been excellent; of course I think that, but all external reviewers had also stated as much.  But by the time of the latest rejection, the idea had been regarded as excellent no fewer than 3 times, in 3 separate funding applications, having reached the final stage in each competitive round and each time coming within a hair’s breadth of being successful.  That is 6 distinct reviewers; experts in the field and my peers who all thought the proposal excellent and worthy of funding.  But as each of us knows from personal experience of the peer review process, success in funding is not guaranteed solely by the proposal being labelled ‘excellent’.

I had also, over the previous 5 years, invested a lot of time, energy and intellectual head space in developing this idea.  This involved the initial steps of constructing a new research question and design, framing it correctly theoretically and conceptually, aligning it with developments in the academic literature and societal needs, attracting stakeholder partners, and planning its academic and wider dissemination through a complex dance of academic books, articles, blogs, policy reports and in-person performances.  This stage was essential just to convince the internal institutional powers-that-be to allow me to submit.[1]

After this came the grant-writing narrative supporting the idea: that the project would revolutionise the field, fulfil the objectives of the call, and needed to be funded urgently; the addition of necessarily spicy adjectives and academic buzzwords (decorative); the negotiation of budgets (sometimes with external, non-academic partners); estimates of staff workloads; and then the struggle of writing it in the form, and character limits, required by the prospective funding agency.  This was all even before I pressed submit!

I was, obviously, exhausted by rewriting, rethinking and attempting to convince anonymous reviewers of the idea’s academic and societal merits. In fact, I estimate that during the five years prior to this point I had submitted no fewer than 10 grant applications – with a fraction of those applications being successful.  Think of the amount of work described above times ten, and then balanced with everything else an average academic has to do as part of their job – paid and unpaid, rewarded and unrewarded.

Typically, an idea can only be submitted for consideration by the team to one funding agency at a time.  This means that in addition to any loss associated with the rejection of an excellent research project, comes a further delay of resubmission that can result in excellent ideas taking years to attract funding, potentially losing academic potency, urgency and therefore scientific and societal relevance as time passes.

So, yes, let’s talk about research waste, shall we? To date, my initial idea and research project remains unfunded, sitting ready to be picked up again when the (funding) opportunity arises. 

A lot goes into submitting a grant application.  In fact, one study estimated that the average grant application takes about 171 hours (116 PI hours and 55 CI hours) of work to complete and submit (von Hippel & von Hippel, 2015).  Another estimated that preparing a new proposal took an average of 38 working days, with resubmitted proposals taking 28 days (Herbert et al, 2013).  This is regardless of whether the application is even worthy of funding, or ultimately successful.
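To put these figures together with the ten applications described above, here is a minimal back-of-envelope sketch. The per-application hours come from the studies cited (von Hippel & von Hippel, 2015) and the application count from my own tally; the working-day length is an assumption added for illustration.

```python
# Back-of-envelope estimate of cumulative grant-writing effort.
# Per-application figures are those cited above (von Hippel & von Hippel, 2015);
# the 7.5-hour working day is an assumed conversion, not from the studies.

PI_HOURS = 116                # principal investigator hours per application
CI_HOURS = 55                 # co-investigator hours per application
HOURS_PER_APPLICATION = PI_HOURS + CI_HOURS   # 171 hours in total
APPLICATIONS = 10             # applications submitted over five years
HOURS_PER_WORKING_DAY = 7.5   # assumed length of a working day

total_hours = HOURS_PER_APPLICATION * APPLICATIONS
total_days = total_hours / HOURS_PER_WORKING_DAY

print(f"{total_hours} hours, or roughly {total_days:.0f} working days")
```

Even under these rough assumptions, ten applications amount to well over two hundred working days of effort, most of it spent on proposals that were never funded.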

However, can we estimate the cost of lost or delayed academic dreams?  Thousands of applicants miss out on funding every year; so, what is the cost to the system of these as-yet incomplete research ideas?  What is the personal cost, the cost to knowledge production, the cost of answers to societal problems that could have arisen if only these great ideas had been funded initially? There is no counterfactual to this question of what might have been if all great ideas were funded.  There is a limited pot to go around and an endless number of eager academics with ‘great’ ideas; it is not feasible to fund them all.  But since over a quarter of UK research conducted is unfunded (Edwards, 2020), the question remains: what proportion of research simply does not go ahead because of a rejected grant application?

We may never know.

In spite of this, I don’t believe that a lottery approach to funding allocation is the answer, or that peer review should be replaced.  Although peer review as a model has operated in much the same way since the 1950s and, in my view, needs an overhaul, it still maintains a number of benefits that are not necessarily quantifiable or amenable to a simplified input-output approach.  Instead of remodelling it, we need to think about how we can have peer review work harder for applicants – not against them – so that academic ideas and dreams are not lost completely.  Together with the Wellcome Trust and Proposal Analytics, we are working to find out how we can alter the existing peer review system in a way that retains its organisational and political benefits yet reduces waste by adding value to the outcomes of peer review for the applicants.  Encouraging applicants to rethink, re-strategise and then re-apply would increase the chances of academic ideas being funded, and funded more quickly.

My idea will eventually find a home – I am a research phoenix.  The number of knockbacks I have had in the past has granted me the ability to view this as a matter of faith rather than certainty.  Not all are in the same boat however[2], and it is these experiences from which we can learn the most about how our peer review system[3] serves our needs, and not the other way around.

By the way, I still have an excellent idea and research dream just waiting for funding if anybody is interested? 

Call me.

[1] In the UK many funding agencies require that potential grant applications undergo an internal selection process before being submitted.  This is to reduce waste and the number of applications that undergo the individual and group peer review process.  Funding agencies currently have difficulty attracting sufficient reviewers per application; reviewing, for the most part, remains an unpaid academic ‘duty’.

[2] This is a matter of social justice in research and academic culture, which I have spoken about before. In a way this is one motivation towards creating a system that works for everyone, and not for narrow conceptions of what ‘success’ looks like in research careers.

[3] I say “ours” because we all participate in it in one way or another eventually.


Edwards, R. (2020) Unfunded research: Why academics do it and its unvalued contribution to the impact agenda.  LSE Impact blog. https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/13/unfunded-research-why-academics-do-it-and-its-unvalued-contribution-to-the-impact-agenda/

Herbert DL, Barnett AG, Clarke P, et al (2013) On the time spent preparing grant proposals: an observational study of Australian researchers. BMJ Open: 3:e002800. doi: 10.1136/bmjopen-2013-002800

von Hippel T, von Hippel C (2015) To Apply or Not to Apply: A Survey Analysis of Grant Writing Costs and Benefits. PLoS ONE 10(3): e0118494. https://doi.org/10.1371/journal.pone.0118494


Constructive peer review makes science stronger for everyone

It’s hard to be in academia without hearing about the dreaded Reviewer #3. Despite the thousands of jokes, very little is actually known about the impact of the review process on researchers – and on the research community!

Competitive funding is a core component of maintaining a thriving and innovative research culture – but the focus has long been on the individual instead of the collective. With this individual lens, it is not uncommon for researchers to see funding as a zero-sum game; after all, there is a limited amount of funding available.

Whilst this may be a reasonable response for an individual, it should not be the perspective of funding systems. Any country benefits immensely from an empowered scientific workforce that produces increasingly competitive research proposals – so why is peer review so focused on tearing down instead of lifting up?

At Proposal Analytics, we propose a research review process that generates constructive comments that can help the applicant grow, regardless of the funding decision. Scientific gatekeeping[1], the use of impersonal and biased metrics[2][3], and a needlessly unpleasant Reviewer #3 do not make the scientific enterprise better – instead they only reduce the achievements we can collectively reach. A more collaborative view of reviewing – one that is still highly critical yet constructive, and created with the intention of empowering researchers to improve with each iteration – is one in which we can reach our collective potential.

We are collaborating on this project with the Wellcome Trust and Gemma Derrick at Lancaster University because we are interested in studying how reviewer comments impact the research ecosystem, and specifically how critical comments can be used to improve the collective intelligence of early career researchers.

Proposal Analytics already looks into the grant submission patterns of researchers in the US – where they submit and how that breaks down across racial and gender groups. With this project, we are interested in understanding whether reviewer comments help or hinder researchers’ future grant applications. Do researchers resubmit to the same funding source after improving their application, or are they disheartened by harsh criticism and seek another – perhaps more compassionate – funder? If so, can review comments be constructed in a way that doesn’t dishearten the researcher, but instead motivates their improvement?

Answering these questions will help us evaluate how effective the current funding infrastructure is at cultivating a competitive and innovative research ecosystem that includes and improves its participants rather than alienating them.

[1] https://advances.sciencemag.org/content/6/23/eaaz4868

[2] https://sfdora.org/read/

[3] https://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351

Photo by “My Life Through A Lens” on Unsplash


Why is feedback a fix for a failed peer review system?

Dr. Gemma Derrick reflects on why the Research Phoenix project is both timely and of paramount importance.

Building a kinder research culture has received a lot of attention of late, and there is new hope of changing a system that can be unforgiving, competitive and heartbreaking at the very best of times.  If nothing else, I have previously stated that the COVID-19 pandemic provides a great opportunity to demonstrate kindness, as it was clear in the initial stages of the pandemic that kindness was possible (Derrick, 2020).  For us, normalising failure so it is something that we can (and will) openly acknowledge, and then using it to challenge and change our current peer review system, is part of achieving this kinder, new normal.

Not so long ago, I had the privilege of mentoring a number of colleagues who were all Early Career Researchers and all applying for the same grant.  They had submitted their applications: meticulously prepared, excellent examples of research proposals, worthy of funding.  However, when the results were released, all were left with the same, stock-standard rejection email.

Why?  What was the reason, we asked ourselves? 

Nothing.  Silence.  Just the rejection letter, the dreaded sentence that started with ‘unfortunately’ and ended with a reassurance that ‘it was an extremely competitive call’ and that ‘more applications were rejected than selected for funding’.  They were in good company, it would seem, but were lost as to how to progress with otherwise excellent ideas.

Part of transforming the peer review process requires us (funders, reviewers and researchers alike) to reimagine peer review as something more than a binary decision: rejected or accepted.  Instead, we should aim to transform it into a system that we can use to encourage learning and to help develop stronger research ideas as well as researchers.  The idea of providing feedback in a way that places applicant needs at the centre of the sense-making of decisions is obvious, but it is surprisingly lacking in current peer review processes.  Many funding agencies do offer a form of feedback dialogue, including giving applicants a chance to respond to reviewers’ comments (e.g. UKRI/RCUK).  Others, such as the Wellcome Trust, provide a decision with accompanying rationales, which is a step in the right direction. The provision of this feedback is a vital step towards building a kinder research culture, and a better peer review system.

Feedback is defined as a process where learners make sense of information, and then use it to enhance their work and strategies (Boud & Molloy, 2013; Carless & Boud, 2018)[1].  The provision, or lack thereof, of feedback to enable improvement is one of the most problematic and frustrating aspects of the academic experience.  Beyond the PhD, feedback on submitted work – articles, grant and promotion applications – concentrates on providing judgements and outcomes, not always on providing information that aims to foster the future development of ideas or people.  Under current academic governance structures such as peer review, individuals and their ideas either win or lose; they are funded, or they are not; they are accepted or rejected.  Here, the outcomes are explained through political or organisational rationales for the decisions, rather than providing individuals with actionable points that they can use.  More importantly, this current system only fulfils one aspect of the feedback definition (learners make sense of the information – aka “We are not giving you funding, and this is why”) but fails in the other, arguably more important half, which is that learners must then ‘use’ the information to enhance their work and future strategies.

While one might be forgiven for thinking academics already have sufficient personal experience with feedback (and rejection) to be able to implement it effectively, more often than not the personal reaction we have to rejection in an ever-competitive research world can blur our ability to make sense of it and act accordingly in the future.  Rejection, and receiving feedback associated with that specific “failure”, can still be debilitating, even for seasoned academics.  Our personal experience with rejection and disappointment in the current peer review system does not necessarily result in better science or more resilient academics, nor is it a catalyst for learning and development.  This is what learning, teaching and assessment experts refer to as our ‘feedback literacy’.  Within this, our ability to manage our own equilibrium affects our ability to engage with the feedback as critical commentary (Carless & Boud, 2018), not just as a reason for rejection, and to act on it in future funding attempts and/or iterations of the same idea.  For the research community, this can have devastating effects, including damage to the mental wellbeing of existing academics, a loss of confidence, and the loss of talent if people decide to leave, thinking that rejection is a ‘sign’ that they cannot survive in competitive academia.  However, in many situations scholars, like students, underplay their own agency in actualising their own improvement via the provided feedback (Winstone et al, 2017).  For this reason, the tone in which decisions are communicated, as well as the lens by which decisions are made, is vitally important for building a peer review system that doesn’t fail applicants but is instead kind, providing useful feedback in a tone that empowers individuals and doesn’t cast them aside.

There are still questions surrounding how academics act on feedback, especially when the feedback is delivered alongside a ‘reject’ judgement.  Without action, feedback is just information (Ajjawi & Boud, 2017).  As one prominent author emphasised: feedback must be accessible, supportive and actionable, otherwise it is like great art hung in a dark corner – in need of illumination to be seen, and in need of being used (McArthur & Huxham, 2013).  Indeed, the persistent dissatisfaction with funding decisions made by peer review demands a change in thinking about how we deliver decisions, as well as how we support future iterations and funding attempts.  This project, we hope, will be one important step towards developing a better, supportive and kinder peer review system.

[1] A big shout out to my colleague, Dr Jan McArthur, who has played such a wonderful role in introducing me to the complexities and joys of feedback.


Ajjawi, R., & Boud, D. (2017). Researching feedback dialogue: An interactional analysis approach. Assessment & Evaluation in Higher Education, 42(2), 252-265. 

Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in higher education, 38(6), 698-712. 

Carless, D., & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. 

Derrick, G. (2020). Kindness under coronavirus. Nature, 581(7806), 107-108. 

McArthur, J., & Huxham, M. (2013). Feedback unbound. Reconceptualising feedback in higher education: Developing dialogue with students, 92-102. 

Merry, S., Price, M., Carless, D., & Taras, M. (Eds.). (2013). Reconceptualising feedback in higher education: Developing dialogue with students. Routledge. 

Photo credit: @helloimnik on Unsplash


Understanding and improving written peer review for grant applicants

Written by Jonathan Best of the Wellcome Trust, from the funder’s perspective.

Peer review, the use of experts to assess the merits of research proposals, forms the backbone of decision-making for research funding. The main role of these reviews is to inform funders’ decisions about which projects to fund by giving expert opinion on the strengths and weaknesses of the proposals against the funder’s criteria.

Published studies have shown that, at the very least, reviewers (whether as committees or through written contributions) can collectively agree on ‘strong’ proposals, but the nuances of good versus better are far more difficult to reach a consensus on (Graves 2011). This aspect of decision-making often means that the order in which projects appear just above or below the funding line varies between parallel committees.

Other analyses, comparing publication-related metrics and career or social network indicators of applicants from either side of the funding line, have shown that in some cases those who initially just miss the funding cut can go on to do as well as, if not better than, their counterparts who were successful in gaining the initial funding (Klaus 2019; Wang 2019). This has led to the suggestion that the resilience of the applicant plays a large part in ultimately attaining funding, and that success can be found through persevering with an idea which at first was rejected (the Derrick hypothesis).

This project

There have been numerous reviews and editorials looking at peer review usually with a focus on the problems with the process. Very few studies have looked at the quality and utility of written feedback to applicants.

We have teamed up with Gemma Derrick’s group at Lancaster University to understand if, how and where providing written peer review feedback to applicants sends constructive signals to researchers that, although not funded, their proposal is worth pursuing further, resulting in future success.


Graves N, Barnett A, Clarke P. (2011) Funding grant proposals for scientific research: retrospective analysis of scores by members of grant review panel. BMJ. 343: d4797. Published online 2011 Sep 27. doi: 10.1136/bmj.d4797

Klaus B, del Alamo D. (2019) Talent Identification at the limits of Peer Review: an analysis of the EMBO Postdoctoral Fellowships Selection Process. Pre-print doi: https://doi.org/10.1101/481655

Wang Y, Jones BF, Wang D. (2019) Early-career setback and future career impact. Nat Commun. 10(1):4331. doi: 10.1038/s41467-019-12189-3.