Transparency and negative results in iGEM
While working on our project we have become aware of issues concerning negative results and research transparency.
When we researched older wikis to find inspiration and information for our own project we were impressed by what some iGEM teams had done and eager to build on their accomplishments. However, we soon started noticing a pattern of mixing ideas with results. This sometimes made it difficult to find results and assess what the team had actually accomplished.
We became curious about how the iGEM community perceives and treats negative results and decided to investigate.
Survey of iGEM teams 2015
For this study, we conducted a survey to find out how the iGEM community perceives the reliability of old wikis, and how teams plan to write their own wikis with regard to transparency and negative results.
Our population for this survey was all 280 teams registered for iGEM in 2015. Since the population is small, the sample must cover a large proportion of it to achieve good statistical precision. Even for the rather weak confidence level of 90% and a margin of error of 10%, the required sample size is at least 55 teams. Our goal was to reach this sample size with a good response rate.
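The minimum sample size quoted above follows from the standard formula for estimating a proportion, with a finite-population correction. A minimal sketch, assuming the worst-case proportion p = 0.5:

```python
import math

def required_sample_size(N, z, margin, p=0.5):
    """Sample size needed to estimate a proportion within the given
    margin of error, corrected for a finite population of size N."""
    n0 = z**2 * p * (1 - p) / margin**2      # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # finite-population correction

# 280 registered teams, 90% confidence (z ~ 1.645), 10% margin of error
n = required_sample_size(280, 1.645, 0.10)   # → 55
```

Without the finite-population correction the requirement would be about 68 teams; because the sample is a sizeable fraction of the 280-team population, 55 responses suffice.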
We sent a link to the survey by email to 70 teams, chosen at random from all participating teams in 2015. We chose this method over distributing the link on social media to improve randomization of the sample group and to track the response rate. Email addresses were collected from the "About Our Lab" questionnaires on the iGEM website. Invitations to take part in the study were sent on July 29th 2015. Reminders were sent on August 6th and again on August 13th. When the survey closed on August 19th, 44 teams had responded – a response rate of 63%. This sample size was smaller than we expected, and our confidence level is thus 85% with a 15% margin of error.
The survey comprised three sections, each with its own set of questions. Sections were answered independently, and respondents could move backwards in the survey, save their answers, and continue at a later date. The three sections were aimed at three areas of inquiry:
- How does the iGEM community use old wikis?
- How does the iGEM community perceive negative results?
- How does the iGEM community treat negative results on their wiki?
The final question in the survey was "Have you discussed with your team before answering the survey?"
How does the iGEM community use old wikis?
BioBricks are like pieces of a machine. To understand how a cogwheel, pulley or lever works in a machine, it is best to look at examples. Likewise, wikis are important for learning how BioBricks can fit together in large systems. When we researched older wikis we were impressed by what some teams had done and eager to build on their accomplishments. Parts and circuits were often described in language implying that they worked as intended. Thus, we wanted to investigate how old wikis are used in the iGEM community. When we started looking for results for parts relevant to our project, they were often hard to find. We noticed a pattern of ideas mixed with results in a way that made it difficult to assess what a team had actually accomplished. To see if this experience was common, we wanted to know whether other iGEM teams found it difficult to distinguish ideas from results on wikis, and whether they found it easy or difficult to find results.
Even when results are present, the supporting data may be lacking. A result may be presented as positive, but, since wikis are not reviewed before publication, it is wise to look at the data before moving forward with a part or circuit. We wanted to know if iGEM teams had been able to find data on the wikis to support the claims made. If a team moves forward with a part from a previous project, they may find out later that the part never worked as intended. This can be due to the difficulty of finding clear results, but it is also possible that some teams leave out negative results and overemphasize positive ones. Thus, we wanted to investigate what experiences iGEM teams have had when using parts or protocols described on old wikis.
How does the iGEM community perceive negative results?
Publication of negative results is a point of discussion within the research community. But in research, ideas are usually not published without results; thus, ideas or innovations that do not work are not published. In the iGEM community the question is even more complicated, since there is no review of wikis before publication. Some ideas are presented without any experimental data to test them. This is not strange given the short timeframe and the sky's-the-limit atmosphere of iGEM, but it makes it difficult to tell working constructs apart from lofty ideas. It is possible that some results are missing because they are negative. This is troublesome, since a compelling idea might then be attempted again and again by teams that do not publish their negative results. In the second part of the survey we wanted to find out how the iGEM community views negative results.
In the survey we defined negative results. This definition was stated at the top of the section's page: "A negative result is conclusive and supports the null-hypothesis. By conclusive we mean that the result is significant, has been reproduced multiple times and answers a posed question. In other words, a negative result is one that indicates that your experiment does not work as intended."
We wanted to know if the team had discussed the concept of negative results and how they valued them in relation to positive results. We also wanted to understand why some teams may not include negative results. To investigate this we asked whether they worried about how judges would perceive negative results, and whether they worried that negative results might make the wiki confusing to read.
How does the iGEM community treat negative results?
Finally, we wanted to know if teams were planning to report all, some or none of their negative results on their wiki. If they planned to omit some negative results, we wanted to know why.
Part I: How does the iGEM community use old wikis?
Figure 1. Answers to survey question 1.
Figure 2. Answers to survey question 2.
Figure 3. Answers to survey question 3.
Figure 4. Answers to survey question 4.
Figure 5. Answers to survey question 5.
The survey shows that most iGEM teams use wikis from previous years for inspiration and to find out if their project ideas have already been explored by other teams. Many teams also use results, protocols and techniques that they find on wikis. Wikis from previous years are clearly an important source of information and inspiration for iGEM teams.
Most teams don’t struggle to find results on wikis, but they do have difficulties finding significant data to support those results. Many teams also think it is difficult to separate ideas from results. As one iGEM team put it: “The teams state what they are doing, but when you try to search for results, they cannot be found.”
About half of the participants had also tried to reproduce claims from other teams. A majority of these teams later found out that the previous team did not achieve what they had claimed. Of the teams surveyed, 33% tried and failed to reproduce other teams’ results, while only 14% succeeded. This suggests that reproducibility of results within iGEM is quite low.
Part II: How does the iGEM community perceive negative results?
Figure 6. Answers to survey question 6.
Figure 7. Answers to survey question 7.
Figure 8. Answers to survey question 8.
Figure 9. Answers to survey question 9.
iGEM teams clearly think that negative results are important, and they have discussed them within their team. However, they also worry about how negative results may be perceived. Many think that their wiki would look confusing if they included negative results. Almost 80% of the teams are also worried about how negative results would look to the iGEM judges. Some teams expressed particular concern about the judging forms: “We are very concerned that judges perceive negative results as failures, even if important and/or meaningful information is gained. This is largely due to the judging forms which require success to achieve medals”.
Part III: How does the iGEM community treat negative results on their wiki?
Figure 10. Answers to survey question 10.
Figure 11. Answers to survey question 11.
Despite being worried about the judging, more than 90% of the survey participants said they will include negative results on their wiki. Nevertheless, many will not include all negative results. The most common reasons to exclude negative results were to avoid making the wiki confusing, or because the results were considered unimportant.
One team said: “If they are significant, we would like to include them. However, there might be results that are not relevant for our project, or we won't be able to find out and explain why they are negative. This will make the wiki incomplete and difficult to understand.”
In the previous chapter, we showed that a vast majority of newly formed iGEM teams use the wikis of former iGEM teams to get inspired or to check whether their idea has been tried before (see Figure 1). However, about two thirds of this year’s iGEM participants felt that it is rather difficult to find clear and conclusive data. In an attempt to make this subjective impression more quantifiable, we evaluated the wikis of 19 out of 34 overgraduate teams from the 2013 and 2014 competitions. We put particular focus on how clearly information, ideas, experiments and results are presented on each of these wikis. To quantify how well data and information are displayed, we developed a “Stockholm iGEM wiki pledge”: a form listing the criteria we consider indispensable for clear data representation and clarity of experimental findings.
Major parts of the “Stockholm iGEM wiki pledge” are:
- A clear border between ideas and results
- All ideas are linked to hypotheses, and every hypothesis is linked to a follow-up
- A hypothesis is stated for each experiment
- Results are reported equally and thoroughly
- Negative results are reported
- Priority is given to conclusive and critical results
- Results are presented thoroughly
- Borrowed and attributed ideas are declared
- Attributions are clearly stated
We want to stress at this point that the “Stockholm iGEM wiki pledge” is our group's suggestion for clear delivery of research data on iGEM wikis. Of course, there are other ways to evaluate wiki pages. Nonetheless, based on the survey presented in the previous chapter, the criteria of the “Stockholm iGEM wiki pledge” seem to be in line with the perception of many other iGEM teams.
Quantifying the quality of iGEM wikis is a very difficult task. We therefore needed a strict evaluation pattern that limits the subjective influence of any single evaluator. Hence, four of our team members each evaluated 5-6 wikis independently of each other, using a pre-set LimeSurvey form following the ”Stockholm iGEM wiki pledge” outlined above.
The teams from 2013 and 2014 were chosen as the evaluation basis, as we think that overgraduate teams in particular should be aware of the requirements that come with data representation. By evaluating the wikis of 19 out of 34 iGEM teams, we reached a confidence level of 90%, allowing us to draw first conclusions from our own wiki evaluation.
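As an illustration of the precision such a sample gives (an assumption on our part; the original analysis may have used a different calculator), the margin of error for 19 wikis out of a population of 34, at 90% confidence and worst-case p = 0.5, can be sketched as:

```python
import math

def margin_of_error(N, n, z, p=0.5):
    """Margin of error for a proportion estimated from a sample of n
    drawn from a finite population of N, with finite-population correction."""
    se = math.sqrt(p * (1 - p) / n)       # standard error, worst case p = 0.5
    fpc = math.sqrt((N - n) / (N - 1))    # finite-population correction
    return z * se * fpc

# 19 wikis evaluated out of 34 overgraduate teams, z ~ 1.645 for 90% confidence
moe = margin_of_error(34, 19, 1.645)      # about 0.13 (roughly 13 percentage points)
```

Because more than half of the small population was sampled, the finite-population correction substantially tightens the estimate compared with an infinite-population assumption.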
We evaluated up to 5 hypotheses per team, each clearly stated on the team's own wiki page. By reading through the project overview and the experimental results, we examined the relevance of the previously stated ideas to the empirical part of the project. If a team stated fewer than 5 claims on their wiki, we assessed the smaller number of hypotheses accordingly; this was the case for 4 of the teams in the sample.
After evaluating the 19 randomly chosen iGEM wikis, we compiled the data and analyzed it with standard statistical tools, such as the Quick Statistics software and graphical representation of the response distributions.
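The core of this analysis is a simple tally of per-hypothesis assessments into percentage distributions. A minimal sketch with a hypothetical response list (our real data came from the LimeSurvey export):

```python
from collections import Counter

# Hypothetical per-hypothesis assessments; the real data came from
# the evaluators' LimeSurvey responses.
responses = ["Yes", "Yes", "No", "Uncertain", "Yes", "No answer", "Yes"]

counts = Counter(responses)
total = len(responses)

# Percentage distribution of answers, as shown in the tables below
distribution = {answer: round(100 * count / total, 1)
                for answer, count in counts.items()}
# e.g. distribution["Yes"] == 57.1
```

The same tally was produced for each statement in the pledge, with "No answer" recorded whenever a wiki lacked a clearly stated hypothesis to assess.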
Using this approach, we aimed to answer the question whether data representation on wikis is a major issue in the iGEM community. We want to improve the quality of future wikis by quantifying the clarity and conclusiveness of previous wikis. Furthermore, we want to identify possible starting points for future iGEM teams, to promote better quality of information representation on wiki pages.
The results obtained from the wiki evaluation let us assess the connection between ideas, hypotheses, and experimental plans.
A majority of the teams from 2013 and 2014 had clearly connected their ideas to the experiments; this accounts for almost 70% of the whole randomized wiki sample:
Table 1. Distribution of responses to the statement ‘The idea is clearly connected to the experiment’. No answer indicates lack of a clearly stated hypothesis to be assessed.
The next question concerned the connection between the purpose of the laboratory experiments and their relevance to the theoretical ideas in the project plan. Again, the assessment reveals that this relation was most often clear.
However, the increased uncertainty in the evaluation points to overly ambiguous descriptions, awkward language, a lack of straightforward explanations, and an insufficiently explained flow from ideas to experiments.
Table 2. Distribution of responses to the statement ‘The purpose of major experiments connected to this idea is clearly stated’. No answer indicates lack of a clearly stated hypothesis to be assessed.
In the next question, we evaluated the follow-up of ideas and their declared outcomes. Here, the proportion of positive answers was slightly smaller than in the previous questions, accounting for 60% of all responses. With a stable proportion of answers indicating no clear follow-up, we observed more uncertainty among the assessors on this question, usually caused by vague descriptions of the connection between the idea and the experimental outcome.
Table 3. Distribution of responses to the statement ‘The idea has a follow-up which declares the outcome’. No answer indicates lack of a clearly stated hypothesis to be assessed.
Subsequently, we wanted to assess the wikis in terms of negative results. Here the picture was starkly different: only one of the four examined teams was able to present their negative results, which accounts for 7% of the whole sample of teams from 2013 and 2014.
Figure 12. Distribution of answers to the statement "Negative results are presented" evaluated on a per-hypothesis basis. No answer indicates lack of a clearly stated hypothesis to be assessed.
Next, we assessed whether the results presented on the wikis were conclusive. The assessors judged most of the presented results to be sufficiently conclusive. Interestingly, a relatively high proportion of the assessments were uncertain, which may indicate poor presentation of results or intentional ambiguity.
Figure 13. Distribution of answers to the statement "Presented results are sufficiently conclusive" evaluated on a per-hypothesis basis. No answer indicates lack of a clearly stated hypothesis to be assessed.
Finally, we wanted to know whether the results were conclusive enough to determine if the project worked as described in the idea part of the wiki. Here the proportions of ‘Yes’ and ‘No’ responses were similar, differing by only 7 percentage points.
Figure 14. Distribution of answers to the statement "Conclusive results are thorough enough that it is possible to determine if it worked as described in the idea part of the wiki’" evaluated on a per-hypothesis basis. No answer indicates lack of a clearly stated hypothesis to be assessed.
Lastly, some general comments from the assessors pointed to poor layout, navigation and intuitiveness of certain wikis. Some figures and formulas also lacked explanation. Nevertheless, the assessors’ main objection was the definitive lack of negative results.
For this study we wanted to investigate how iGEM teams perceive negative results and the reliability of previous wikis. We also wanted to know how they plan to write their own wikis with regard to transparency and negative results. Finally, we tried to evaluate how well data and information are displayed on wikis.
The study shows that the iGEM community thinks negative results and transparency are important. iGEM teams want to present their negative results, yet hardly any old teams have included negative results on their wikis. Several factors may have influenced this decision. Nearly all teams worry about how the iGEM judges will perceive negative results. Many also feel pressured to produce functional BioBricks. Defining if a result is positive, negative or inconclusive can also be difficult, especially as most iGEM teams do not have a lot of research experience.
Additionally, creating a straightforward and user-friendly wiki can be challenging, and teams often have to exclude some data or results. Today there are no explicit requirements or rewards for including negative results on the wikis. As the wiki freeze approaches, teams may omit them for the sake of clarity or to make a good impression on the judges.
The study also indicates issues regarding a lack of significant data and the separation of ideas from results. These issues, together with the omission of negative results, have affected many iGEM teams. A third of all teams have tried to reproduce claims made by other teams, only to find out that the other team never achieved what they had claimed. In a project in which time is so scarce, this is quite a serious problem.
Although this study showed us that the iGEM community struggles with problems regarding transparency and negative results, these problems are not unique to iGEM. Professional researchers in all fields struggle with the same issues. It is encouraging that so many iGEM teams discuss these issues and think they are important. This makes us believe the prospects of improving the situation are good.
Taking all the findings from this study into account, we have developed a set of recommendations for future iGEM teams and for the iGEM Foundation. We hope these recommendations can help improve transparency and the treatment of negative results within iGEM.
Recommendations for Future iGEM Teams
Have a clear border between ideas and results
It is important to differentiate your design ideas from your actual results, for example by displaying them on separate pages of your wiki. For clarity, you should mark the status of each result as positive, negative, inconclusive or unfinished.
Report your results equally and thoroughly
Always show conclusive results, even if they don’t support your hypothesis. Apply statistics to your results whenever possible and include them on your wiki. In addition, it is good to include contact details so that future iGEM teams can reach you with questions or requests for raw data.
Above all, remember the iGEM values: integrity, good sportsmanship, respect and honesty.
Recommendations for the iGEM Foundation
Review the judging criteria
Valuing positive results more than significant results does not create a beneficial research environment in the long run. The iGEM Foundation has a unique opportunity to influence hundreds of future researchers each year. By rewarding teams that present conclusive results, whether positive or not, the Foundation would emphasize good research conduct. For example, showing that your results are conclusive could be a Silver Medal requirement, while producing a new functional BioBrick could be a Gold Medal requirement.
Promote well structured wikis
Besides the judging criteria, wiki requirements and guidelines are a powerful way to influence iGEM teams. Promoting wiki structures that clearly separate ideas from results would benefit future iGEM teams.
Provide information about negative results and transparency
Many iGEM teams have little or no research experience when they join the iGEM competition. The iGEM Foundation could help teams by providing information on how to analyze data and draw conclusions from it.