EUROPE

How to improve your chances with a Horizon 2020 bid
Given the high failure rate of applications for the European Union’s Horizon 2020 research programme, the Danish Agency for Science and Higher Education, under the Ministry of Higher Education and Science, has made a very intelligent move, in collaboration with the EU Office at the University of Copenhagen and the Research Support Office at Aarhus University, to help scientists navigate the maze they often find themselves in when planning a bid.
Their new publication, Evaluation and Evaluators in Horizon 2020: Report on an analysis among Danish evaluators, released in June, contains many observations on how evaluators treat applications, which scientists across Europe could find useful.
It includes the same information that is given in the work packages for the programmes, but in a less formal way and with more examples of what project applicants do right and what they do wrong.
This is a clever move by the Danes. Evaluators sign a confidentiality agreement with the European Commission when they take on the role, agreeing not to disclose any information about the evaluations they participate in. But this does not prevent them from answering a survey from their own ministry about what they think of the evaluation processes.
The growing number of service providers and consultancies advising scientists on how to prepare applications for Horizon 2020 will now have a competitor that is free of charge, offering empirical evidence on what those who have experienced the evaluation process from the inside consider its major assets and pitfalls. The advice is given ‘straight from the horse’s mouth’, that is, from the evaluators themselves.
No longer ‘business as usual’
The idea, the report says in its introduction, came from reading evaluations early in the Horizon 2020 (H2020) programme, which indicated that things had changed.
“The original idea for this report came from the University of Copenhagen (UCPH). After reading a number of H2020 Evaluation Summary Reports from proposals that UCPH participated in during the 2014-15 H2020 calls, it seemed that something had changed from previous framework programmes, things were not ‘business as usual’.
“Some of the remarks in the Evaluation Summary Reports baffled researchers from UCPH and their consortium partners as well as research support staff,” the report says.
“Discussions with colleagues in Denmark and in other EU member states led to a wish to better understand the role the evaluators have, and to get a better understanding of the strengths and weaknesses of the H2020 evaluation process.”
Via the Cordis participant portal, 215 Danes who participated in H2020 evaluations in 2014 and 2015 were identified, contacted and invited to answer an online survey on their habits when evaluating proposals, the time spent reading a proposal and cross-cutting issues, with open-ended questions offering the option of adding comments. The survey had a response rate of 47% in the first round, and 33 evaluators participated in a second round. The survey was completely anonymous.
Some 56% of the evaluators were from universities, 17% from private companies and 5% each from research institutions and private consultancies, these being the major affiliations. Some 71% were men, 60% were 50 years old or above and 51% had a natural science or engineering and technology background.
Some 72% of the responding evaluators had evaluated 10 or more project applications; 77% had evaluated projects in research and innovation actions, 12% in initial training networks and 12-15% had evaluated European Research Council grant applications.
How do they work? What do they think?
One quarter said they spent more than four hours on one project application, while 6% spent one hour or less and 46% spent between two and four hours evaluating one proposal. Some 46% said they read proposals both on paper and on screen, while 32% read them only on screen.
Asked which factors affected the evaluation and to what extent, 23% said that “spelling mistakes” had “some influence” on their evaluation; 46% said that “verbose and-or hard to understand” text had a “significant or critical influence” on the evaluation; “unexplained abbreviations” were reported by 25% to have a significant or critical influence; and “bad English text” was reported by 71% to have some influence or worse on their evaluation.
Asked in the survey to what extent the geographical spread of the consortium might have influenced the scoring of the proposal, 40% said “it was discussed, but it did not influence the scoring”, 39% said that it was not discussed and 21% said that it was discussed and it did influence the scoring.
Comments from the interviews
Interviews with 27 evaluators were undertaken to provide more in-depth views on why they had signed up as evaluators, whether they found the work worthwhile and how they critically assessed the proposals when they started reading. Which factors made an impact? Was it the abstract, the layout of the text, the graphics and illustrations, the use of Gantt charts, ‘verbosity vs simplicity in the text’, etc? In summary, ‘What do the evaluators like?’
Evaluators’ advice
The key findings offer the equivalent of a list of ‘dos and don’ts’ for maximising a proposal’s chances of being accepted. The advice from the evaluators is:
• The first pages should be exciting. Do not start with ‘Adam and Eve’; pitch your ideas immediately and answer the questions ‘Why is it important?’ and ‘How will your concepts solve the problem?’
• A good abstract pitches the idea, embraces excellence, impact and implementation, and creates curiosity and excitement.
• Keep background descriptions short, convincing the evaluators that you are the right consortium to answer the questions and take the research beyond the state of the art.
• Proposals should be well structured, covering the right areas under the different criteria in the proposal template.
• There should be a strong focus on relevance for the project. Nothing even slightly irrelevant should have a place in the proposal.
• Have clear and convincing objectives.
• Use high-quality graphics that illustrate the concepts in a simple manner.
• The figures and Gantt charts should show that the project has a clear idea about how the individual parts and tasks are interconnected and timed intelligently, so the reader can see a well-thought-out project plan.
• Geography does not matter – unless stated explicitly as a criterion in the call text.
• Design goals for impact that are explicit regarding context and balance realism with ambition.
• When it comes to “measures to maximise impact”, the evaluators want to see a well-developed, realistic plan (as opposed to a long list of unconnected activities).
• The management structure and its description should be tailor-made to fit the project.
• Having worked together on projects before is considered positive as long as it is not ‘old wine in new bottles’.
• A very complex management structure also indicates that the project can be high risk (that is, lack feasibility).
• Do not have too many partners (often 5-9 is enough) in an RIA/IA [Research and Innovation Action/Innovation Action] project and avoid having too many work packages. Each partner should have a clear role.
• Remember to cite yourselves if you are a researcher.
• A Gantt chart should be precise and clear. Evaluators from outside academia, especially, look carefully at Gantt charts. A Gantt chart should show that the project is coherently organised and answer these questions: 1) Is the distribution of tasks and work reasonable and realistic? 2) Are the deliverables distributed properly among the partners?
Dan Andrée, a senior adviser to the Swedish Ministry of Education and Research and the Swedish innovation agency, Vinnova, said the study was very interesting and helpful to those seeking to improve their applications to the EU’s framework programme.
“I made a similar study three years ago focusing on impact,” he told University World News. “Most applications fail on the impact criterion and it is often easy to improve this part of the application.
“My advice to researchers is to start with the impact part of the application. You need to know what the EU would like to achieve in order to specify how your project will contribute to the EU policy goals. A common mistake is that applicants do not include any ‘work packages’ covering the impact part of the proposal and this is a 'recipe' for failure as the score in the impact criterion will be low.”