Transparency and negative results in iGEM
While working on our project we have become aware of issues concerning negative results and research transparency.
When we researched older wikis to find inspiration and information for our own project we were impressed by what some iGEM teams had done and eager to build on their accomplishments. However, we soon started noticing a pattern of mixing ideas with results. This sometimes made it difficult to find results and assess what the team had actually accomplished.
We became curious about how the iGEM community perceives and treats negative results and decided to investigate.
Survey to iGEM teams 2015
For this study, we conducted a survey to find out how the iGEM community perceives the reliability of old wikis. Additionally, we wanted to find out how teams plan to write their own wiki with regard to transparency and negative results.
Our population for this survey was all 280 teams registered for iGEM in 2015. Since the population is small, the sample needs to be proportionally large to reach reasonable statistical confidence. Even for the rather weak confidence level of 90% and a margin of error of 10%, the sample size must be larger than 55 teams. Our goal was to reach this sample size with a good response rate.
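As a rough illustration of where the 55-team threshold comes from, the sketch below applies the standard finite-population sample-size formula. The z-score, the assumed 50% proportion and the formula itself are our assumptions for this example; they are not taken from the original calculation.

```python
# Minimal sketch (our assumption, not the original calculation) of the
# finite-population sample-size estimate behind the ">55 teams" figure.
import math

N = 280    # registered iGEM teams in 2015 (population size)
z = 1.645  # z-score for a 90% confidence level
e = 0.10   # margin of error
p = 0.5    # assumed proportion (most conservative choice)

n0 = (z ** 2) * p * (1 - p) / e ** 2   # sample size for an infinite population
n = n0 / (1 + (n0 - 1) / N)            # finite-population correction

print(f"required sample: {n:.1f} -> at least {math.ceil(n)} teams")  # ~54.6 -> 55
```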
We sent a link to the survey by email to 70 teams, chosen at random from all participating teams in 2015. We chose this method over distributing the link on social media to improve randomization of the sample group and to track the response rate. Email addresses were collected from the "About Our Lab" questionnaires on the iGEM website. Invitations to take part in the study were sent on July 29th 2015. Reminders were sent on August 6th and again on August 13th. When the survey closed on August 19th, 44 teams had responded – a response rate of 63%. This sample size was smaller than we expected, and our confidence level is thus 85% with a 15% margin of error.
The survey consisted of different sections, each with its own set of questions. Sections were answered independently, and it was possible to move backwards in the survey. It was also possible to save the results and continue with the survey at a later date. This survey had three sections, aimed at three areas of inquiry.
- How does the iGEM community use old wikis?
- How does the iGEM community perceive negative results?
- How does the iGEM community treat negative results on their wiki?
The final question in the survey was "Have you discussed with your team before answering the survey?"
How does the iGEM community use old wikis?
BioBricks are like pieces of a machine. To understand how a cogwheel, pulley or lever works in a machine, it is best to look at examples. Wikis are important for learning how BioBricks can fit together in large systems. When we researched older wikis we were impressed by what some teams had done and eager to build on their accomplishments. Parts and circuits were often described in language that implied that they worked as intended. Thus, we wanted to investigate how old wikis are used in the iGEM community. When we started looking for results for parts relevant to our project, they were often hard to find. We noticed a pattern of ideas mixed with results in a way that made it difficult to assess what the team had actually accomplished. To see if this experience was common among other iGEM teams, we wanted to know if they perceived that ideas were difficult to distinguish from results on wikis. Additionally, we wanted to know if they found it easy or difficult to find results.
Even when results are present, the data may be lacking. A result may be presented as positive, but, since wikis are not reviewed before publication, it is good to look at the data before moving forward with a part or circuit. We wanted to know if iGEM teams had been able to find data on the wikis to support the claims. If a team moves forward with a part from a previous project, they may find out later that the part actually never worked as intended. This can be due to the difficulty of finding clear results, but it is also possible that some teams leave out negative results and overemphasize positive results. Thus, we wanted to investigate what experience iGEM teams have had when using parts or protocols described on the old wikis.
How does the iGEM community perceive negative results?
Publication of negative results is a point of discussion within the research community. But in research, ideas are usually not published without results. Thus, ideas or innovations that do not work are not published. In the iGEM community the question is even more complicated since there is no review of wikis before publication. Some ideas are presented without any experimental data to test them. This is not strange given the short timeframe and the sky's-the-limit atmosphere of iGEM, but it makes it difficult to tell apart working constructs from lofty ideas. It is possible that some results are missing because they are negative. This is troublesome since a compelling idea might then be attempted again and again by teams that do not publish their negative results. In the second part of the survey we wanted to find out how the iGEM community views negative results.
In the survey we defined negative results. This definition was stated on the top of the page of the section: "A negative result is conclusive and supports the null-hypothesis. By conclusive we mean that the result is significant, has been reproduced multiple times and answers a posed question. In other words, a negative result is one that indicates that your experiment does not work as intended."
We wanted to know if the team had discussed the concept of negative results and how they valued them in relation to positive results. We also wanted to understand why some teams may not include negative results. To investigate this, we asked if they worry about how judges would perceive negative results. We also asked if they worried that negative results might make the wiki confusing to read.
How does the iGEM community treat negative results?
Finally, we wanted to know if teams were planning to report all, some or none of their negative results on their wiki. If they planned to omit some negative results, we wanted to know why.
Results
Part I: How does the iGEM community use old wikis?
Figure 1. Answers to survey question 1.
Figure 2. Answers to survey question 2.
Figure 3. Answers to survey question 3.
Figure 4. Answers to survey question 4.
Figure 5. Answers to survey question 5.
Summary
The survey shows that most iGEM teams use wikis from previous years for inspiration and to find out if their project ideas have already been explored by other teams. Many teams also use results, protocols and techniques that they find on wikis. Wikis from previous years are clearly an important source of information and inspiration for iGEM teams.
Most teams don't struggle to find results on wikis, but they have difficulty finding significant data to support the results. Many teams also think it is difficult to separate ideas from results. As one iGEM team said: “The teams state what they are doing, but when you try to search for results, they cannot be found.”
About half of the participants had also tried to reproduce claims from other teams. A majority of these teams later found out that the previous team did not achieve what they had claimed. Out of the teams surveyed, 33% tried and failed to reproduce other teams' results, while only 14% succeeded. This suggests that reproducibility of results within iGEM is quite low.
Part II: How does the iGEM community perceive negative results?
Figure 6. Answers to survey question 6.
Figure 7. Answers to survey question 7.
Figure 8. Answers to survey question 8.
Figure 9. Answers to survey question 9.
Summary
iGEM teams clearly think that negative results are important, and they have discussed them within their teams. However, they also worry about how negative results may be perceived. Many think that their wiki would look confusing if they included negative results. Almost 80% of the teams are also worried about how negative results would look to the iGEM judges. Some teams expressed particular concern about the judging forms: “We are very concerned that judges perceive negative results as failures, even if important and/or meaningful information is gained. This is largely due to the judging forms which require success to achieve medals.”
Part III: How does the iGEM community treat negative results on their wiki?
Figure 10. Answers to survey question 10.
Figure 11. Answers to survey question 11.
Summary
Despite being worried about the judging, more than 90% of the survey participants said they will include negative results on their wiki. Nevertheless, many will not include all negative results. The most common reasons to exclude negative results were to avoid making the wiki confusing or because the results were considered unimportant.
On this point, one team said: “If they are significant, we would like to include them. However, there might be results that are not relevant for our project, or we won't be able to find out and explain why they are negative. This will make the wiki incomplete and difficult to understand.”
Wiki Evaluation
In the previous chapter, we showed that a vast majority of newly formed iGEM teams use the wikis of former iGEM teams to get inspired or to check whether their idea has been tried before (see Figure 1). However, about two thirds of this year’s iGEM participants felt that it is rather difficult to find clear and conclusive data. In an attempt to make this subjective impression more quantifiable, we evaluated 19 out of 34 overgraduate teams' wikis from the 2013 and 2014 competitions. We put particular focus on how clearly information, ideas, experiments and results are presented on each of these teams' wikis. In order to quantify how well data and information are displayed on wikis, we developed a “Stockholm iGEM wiki pledge”. In this form, we included the criteria we consider particularly important for data representation and for the clarity of experimental findings.
Major parts of the “Stockholm iGEM wiki pledge” are:
A clear border between ideas and results

All ideas are linked to hypotheses, every hypothesis is linked to a follow-up
- An overarching idea can be visionary, but it should be broken down into the ideas that will actually be tested within the scope of the project.
- Ideas are further defined as testable hypotheses.
- Every hypothesis is connected to one or several practical experiments.
- Every stated hypothesis is clearly linked to a written follow-up.

A hypothesis is stated for each experiment
- All experiments are listed along with their purposes and the hypothesis they are meant to test. Experiments are linked to their results.
- A result can be positive, negative, inconclusive, unfinished or not started. Results that are inconclusive, unfinished or not started include an explanation for why that was the case.
- Every tested hypothesis is clearly linked to a written follow-up.

Results are reported equally and thoroughly

Negative results are reported
- Negative results are defined as conclusive results that do not support the tested hypothesis.
- Results that are not conclusive should be very clearly marked as such. Data from inconclusive results can be omitted from the wiki; observed trends can be reported in the follow-up.

Priority is given to conclusive and critical results
- When choosing which result to give the most space on the wiki, priority should be given to results that are conclusive and critical, even if this completely disproves a hypothesis that the project is based on.

Results are presented thoroughly
- Conclusive results are presented thoroughly enough that it is possible for future teams to determine whether a part, BioBrick or protocol worked as described in the idea part of the wiki.

Borrowed and attributed ideas are declared

Attributions are clearly stated
- Attributions are clearly stated in the text each time a non-original idea or result is introduced.
- A separate page lists detailed attributions and how they were used in the project.
We want to stress at this point that the “Stockholm iGEM wiki pledge” is a suggestion from our group for clear delivery of research data on iGEM wikis. Of course, there are other ways to evaluate wiki pages. Nonetheless, based on the survey presented in the previous chapter, the criteria of the “Stockholm iGEM wiki pledge” seem to be in line with the perception of many other iGEM teams.
Quantifying the quality of iGEM wikis is a difficult task. Therefore, we needed a strict evaluation pattern that limits the subjective influence of any single evaluator. Hence, four of our team members each evaluated 5-6 wikis independently of each other. They performed their evaluation in a pre-set LimeSurvey questionnaire following the “Stockholm iGEM wiki pledge” outlined above.
The overgraduate teams from 2013 and 2014 were chosen as the evaluation basis because we think that overgraduate teams in particular should be aware of the requirements that come with proper data representation. By evaluating 19 out of 34 of these teams' wikis, we reached a confidence level of 90%, allowing us to draw first conclusions from our own wiki evaluation.
We evaluated up to 5 hypotheses per team, each clearly stated on the team's own wiki. By reading through the overview of the project and the experimental results, we examined the relevance of the previously stated ideas to the empirical part of the project. If a team stated fewer than 5 claims on their wiki, we assessed the smaller number of hypotheses accordingly. Fewer than 5 hypotheses were examined for 4 teams out of the whole sample.
After evaluating the 19 randomly chosen iGEM wikis, we compiled the data and analyzed it with standard statistical tools, such as the Quick Statistics software and graphical representations of the response distributions.
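The sketch below illustrates, under our own assumptions about the workflow, how the per-hypothesis assessments can be tallied into the percentage tables that follow. The counts are those reported in Table 1; the code itself is only an illustration, not the tool we actually used.

```python
# Illustrative tally of per-hypothesis assessments (19 wikis x up to 5 hypotheses
# = 95 assessment slots). Counts are the ones reported in Table 1 below; this
# script is a sketch, not the Quick Statistics tool used for the report.
table1_counts = {
    "Yes": 66,
    "No": 8,
    "Uncertain": 9,
    "No answer": 12,  # "No answer" = no clearly stated hypothesis to assess
}

total = sum(table1_counts.values())
for answer, count in table1_counts.items():
    # Percentages agree with Table 1 up to rounding.
    print(f"{answer:<10} {count:>3} {100 * count / total:6.2f} %")
```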
Using this approach, we aimed to answer the question whether data representation on wikis is a major issue in the iGEM community. We want to improve the quality of future wikis by quantifying the clarity and conclusiveness of previous wikis. Furthermore, we want to identify possible starting points for future iGEM teams, to promote better quality of information representation on wiki pages.
Results
The results obtained from the wiki evaluation let us assess the connection between ideas, hypotheses, and experimental plans.
A majority of the teams from 2013 and 2014 had clearly connected their ideas to the experiments; this accounts for almost 70% of the whole randomized wiki population:
Table 1. Distribution of responses to the statement ‘The idea is clearly connected to the experiment’. No answer indicates lack of a clearly stated hypothesis to be assessed.
| Answer | Responses | Percentage [%] |
|---|---|---|
| Yes | 66 | 69.47 |
| No | 8 | 8.42 |
| Uncertain | 9 | 9.47 |
| No answer | 12 | 12.64 |
The next question concerned the connection between the purpose of the laboratory experiments and their relevance to the theoretical ideas in the project plan. Again, the assessments reveal that the relation was most often clear.
However, the increased uncertainty in the evaluations points to overly ambiguous descriptions, awkward language and a lack of straightforward explanations. It also suggests that the flow from ideas to experiments was insufficiently explained.
Table 2. Distribution of responses to the statement ‘The purpose of major experiments connected to this idea is clearly stated’. No answer indicates lack of a clearly stated hypothesis to be assessed.
| Answer | Responses | Percentage [%] |
|---|---|---|
| Yes | 67 | 70.53 |
| No | 5 | 5.26 |
| Uncertain | 12 | 12.63 |
| No answer | 11 | 11.58 |
In the next question, we decided to evaluate the follow-up of ideas and their declared outcome. Here, the proportion of positive answers was slightly smaller than in the previous questions, accounting for 60% of all responses. With a stable proportion of the answers indicating no clear follow-up, we could observe more uncertainty among the assessors in this question. Usually, this was caused by vague descriptions of the connection between the idea and the experimental outcome.
Table 3. Distribution of responses to the statement ‘The idea has a follow-up which declares the outcome’. No answer indicates lack of a clearly stated hypothesis to be assessed.
| Answer | Responses | Percentage [%] |
|---|---|---|
| Yes | 57 | 60.00 |
| No | 9 | 9.57 |
| Uncertain | 19 | 20.00 |
| No answer | 10 | 10.43 |
Subsequently, we wanted to assess the wikis in terms of negative results. Here the picture was strikingly different. Only one of the four examined teams presented their negative results, which accounts for 7% of the whole sample of teams from 2013 and 2014.
Figure 12. Distribution of answers to the statement "Negative results are presented" evaluated on a per-hypothesis basis. No answer indicates lack of a clearly stated hypothesis to be assessed.
Next, we aimed to assess if results presented on the wikis were conclusive. In this question, the assessors decided that most of the results presented in the wikis were sufficiently conclusive. Interestingly, a relatively high proportion of the assessments were uncertain. This may indicate poor presentation of results or intentional ambiguity.
Figure 13. Distribution of answers to the statement "Presented results are sufficiently conclusive" evaluated on a per-hypothesis basis. No answer indicates lack of a clearly stated hypothesis to be assessed.
Finally, our team wanted to know if the results were conclusive enough to determine whether the project worked as described in the idea part of the wiki. Here the distribution of ‘Yes’ and ‘No’ responses was similar, differing by only 7 percentage points.
Figure 14. Distribution of answers to the statement "Conclusive results are thorough enough that it is possible to determine if it worked as described in the idea part of the wiki" evaluated on a per-hypothesis basis. No answer indicates lack of a clearly stated hypothesis to be assessed.
Lastly, some general comments from the assessors pointed to poor layout, navigation and intuitiveness of certain wikis. Some figures and formulas also lacked explanation. Nevertheless, the main objection from the assessors' side was the clear lack of negative results.
Conclusions
For this study we wanted to investigate how iGEM teams perceive negative results and the reliability of previous wikis. We also wanted to know how they plan to write their own wiki with regard to transparency and negative results. Finally, we tried to evaluate how well data and information are displayed on wikis.
The study shows that the iGEM community thinks negative results and transparency are important. iGEM teams want to present their negative results, yet hardly any old teams have included negative results on their wikis. Several factors may have influenced this decision. Nearly all teams worry about how the iGEM judges will perceive negative results. Many also feel pressured to produce functional BioBricks. Defining if a result is positive, negative or inconclusive can also be difficult, especially as most iGEM teams do not have a lot of research experience.
Additionally, creating a straightforward and user-friendly wiki can be challenging. Teams often have to exclude some data or results. Today there are no explicit requirements or rewards for including negative results on the wikis. As the wiki freeze approaches, perhaps teams omit negative results for the sake of clarity or to make a good impression on the judges.
The report also indicates problems with a lack of significant data and with the separation of ideas from results. These issues, together with the omission of negative results, have affected many iGEM teams. A third of all teams have tried to reproduce claims made by other teams, only to find out that the other team never achieved what they had claimed. In a project in which time is so scarce, this is quite a serious problem.
Although this study showed us that the iGEM community struggles with problems regarding transparency and negative results, these problems are not unique to iGEM. Professional researchers in all fields also struggle with these issues. It is encouraging that so many iGEM teams discuss these issues and think they are important. This makes us believe the prospect of improving the situation is good.
Recommendations
Taking all the findings from this study into account, we have developed a set of recommendations for future iGEM teams and for the iGEM Foundation. We hope these recommendations can help improve transparency and the reporting of negative results within iGEM.
Recommendations for Future iGEM Teams
Have a clear border between ideas and results
It is important to differentiate your design ideas from your actual results. This can be done by displaying them on separate pages of your wiki. For clarity, you should mark the status of your results as positive, negative, inconclusive or unfinished.
Report your results equally and thoroughly
Always show conclusive results, even if they don’t prove your hypothesis. Apply statistics to your results whenever possible and include the analysis on your wiki. In addition, it is good to include contact details so that future iGEM teams can contact you with questions or to request raw data.
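As one concrete, entirely hypothetical illustration of what "apply statistics" can look like in practice, the sketch below compares a construct against a control with Welch's t-test. The measurement values, group names and choice of test are our own example, not data or methods from any iGEM project.

```python
# Hypothetical example: all numbers are invented for illustration only.
from statistics import mean, stdev
from scipy import stats

construct = [412.0, 455.3, 398.7, 430.1, 441.9]   # e.g. fluorescence, arbitrary units
control   = [215.4, 198.8, 224.1, 207.6, 211.0]

# Welch's t-test: does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(construct, control, equal_var=False)

print(f"construct: {mean(construct):.1f} +/- {stdev(construct):.1f}")
print(f"control:   {mean(control):.1f} +/- {stdev(control):.1f}")
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

Reporting the test used, the sample size and the resulting p-value alongside the raw data makes it much easier for future teams to judge whether a result is conclusive.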
Above all, remember the iGEM values: integrity, good sportsmanship, respect and honesty.
Recommendations for the iGEM Foundation
Review the judging criteria
Valuing positive results more than significant results does not create a beneficial research environment in the long run. The iGEM Foundation has a unique opportunity to influence hundreds of future researchers each year. By rewarding teams that present conclusive results, whether they are positive or not, the Foundation would emphasize good research conduct. For example, showing that your results are conclusive could be a Silver Medal requirement, while producing a new functional BioBrick could be a Gold Medal requirement.
Promote well-structured wikis
Besides the judging criteria, wiki requirements and guidelines are a powerful way to influence iGEM teams. Promoting wiki structures that clearly separate ideas from results would benefit future iGEM teams.
Provide information about negative results and transparency
Many iGEM teams have little or no research experience when they join the iGEM competition. The iGEM Foundation could help teams by providing information on how to analyze data and draw conclusions from it.