Our approach

We offer an ambitious and robust two-pronged approach to adapting the current peer review process to the future development needs of ECRs. It combines proposal analytics, surveys, computational linguistics and ethnographic observation to improve the efficiency, integrity and benefits of the feedback that ECRs receive. Used together, these methodologies provide a unique, integrated co-methodological approach to the study of peer review, supplementing traditional methods for studying peer review processes (interviews and bibliometrics) with more innovative ones.

Currently, the peer review process underpinning decision-making for the award of these grants is standardised across all funding schemes. This means that the processes governing the assessment of ex-ante academic excellence are the same for ECRs as for large project grants and mid- or late-career fellowships. And yet, panel deliberations about the success of ECR applicants centre on notions of ‘potential’, whereas for more experienced applicants the focus is on their track record and notions of ‘trust’ (McAlpine, Turner et al. 2016; Petersen 2016). Panels therefore weigh how experienced applicants have achieved past excellence against how ECRs have the potential to generate excellent research, meaning that the thresholds for ‘near misses/wins’ (Wang et al. 2019) are different for ECRs than for others. Logically, if the thresholds for excellence differ, then so should the peer review processes.

In addition, whereas for more experienced applicants the risk of failure is cushioned by the security of tenured or ongoing academic positions, for ECRs that risk is higher. A bespoke peer review process that is sensitive to the needs of ECRs should go further than simply providing more funding opportunities: it should also give applicants an opportunity to learn, and to improve their proposals towards future success. This research proposal links strongly to the call’s emphasis on the “integrity” and “benefits” of the research sector and culture, as well as supporting funders in making better decisions, by examining the decision-making processes that dictate success in the short and long term.

After a failed proposal, researchers follow one of three strategies: they withdraw, learn or conform (Boyack, Smith et al. 2018). An efficient feedback mechanism within a formative peer review process is one that provides applicants with constructive information sufficient to make significant changes to the proposal, increasing their chance of success on subsequent submissions. We hypothesise that the most successful applicants are those who learn from failed proposals, implement feedback and achieve funding shortly after first failure.

This research will investigate the strength of the current peer review process and feedback provision underpinning the award of Wellcome Trust funding, with an aim to foster a healthier and more inclusive research culture.

The project will also develop a large-scale, open-access data resource of peer review outcomes, panel reports and feedback for the period 2009-2018. This database will be cross-referenced with details of applicants’ subsequent applications and their outcomes, and will form the major deliverable of this project. It is envisioned that this resource will play a major role in stimulating future research on peer review and proposal analytics, and in strengthening links between the Wellcome Trust, RoRI and a large US research project on funding proposals at the University of Michigan Medical School, currently led by Richard Klavans (RK).
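To make the cross-referencing concrete, one way the records in such a resource might be structured is sketched below. All field names are illustrative assumptions, not the actual schema of the planned deliverable:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PeerReviewRecord:
    """Hypothetical record linking a reviewed proposal to its feedback
    and to the applicant's later applications (field names are invented
    for illustration only)."""
    proposal_id: str
    year: int                     # within the 2009-2018 window
    scheme: str                   # e.g. an ECR fellowship scheme
    outcome: str                  # "funded" or "declined"
    panel_report: str             # anonymised panel feedback text
    resubmission_ids: list = field(default_factory=list)  # later applications by the same applicant
    first_success_year: Optional[int] = None              # supports time-to-success analyses
```

Linking each declined proposal to its resubmissions in this way is what would allow the value-added of feedback and the time to subsequent success to be computed directly from the resource.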

Specifically, this project will:

  • Calculate the productivity cost of successful versus unsuccessful ECR applicants and their teams (consequences of first failure);
  • Calculate the value-added by feedback received from their unsuccessful proposal on subsequent successes and time to success (based partially on the time taken to secure research funding at a later date);
  • Estimate the extent that applicants learn (change) from previously failed proposals, or conform (no change) and how this influences subsequent success or failure; and
  • Examine how current peer review panel deliberations consider notions of excellence for ECRs, and how they structure feedback to successful and unsuccessful candidates.
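The ‘learn’ versus ‘conform’ distinction in the third objective could, for instance, be operationalised with a simple text-similarity measure between a failed proposal and its resubmission. The following is a minimal sketch only: the function name, the threshold value and the classification rule are assumptions for illustration, not part of the proposed methodology.

```python
from difflib import SequenceMatcher

# Hypothetical cutoff: resubmissions at least this similar to the failed
# version are treated as unchanged ("conform"); below it, as substantially
# revised ("learn"). The value 0.8 is an assumption, not an empirical result.
LEARN_THRESHOLD = 0.8

def classify_resubmission(failed_text: str, resubmitted_text: str) -> str:
    """Classify a resubmission as 'learn' or 'conform' based on how much
    the proposal text changed between submissions."""
    similarity = SequenceMatcher(None, failed_text, resubmitted_text).ratio()
    return "conform" if similarity >= LEARN_THRESHOLD else "learn"

# Toy usage with invented proposal snippets:
original = "We will study zebrafish heart regeneration using imaging."
revised = ("We will study cardiac regeneration in zebrafish with "
           "single-cell sequencing and imaging, addressing reviewer concerns.")
print(classify_resubmission(original, revised))
```

In practice, richer computational-linguistics measures than raw character similarity would be needed, but a threshold of this kind is one way to turn textual change into the learn/conform categories the analysis requires.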