Difference between revisions of "Team:Michigan Software/Description"

{{Michigan_Software}}

<html>

<h2> Project Description </h2>
<h3> The Problem </h3>

At its core, synthetic biology is the practice of genetically engineering novel organisms to perform a particular function. However, <a href="https://peerj.com/articles/148/">recent review studies</a> estimate that only 10-25% of published scientific results are reproducible. A <a href="https://2014.igem.org/Team:Michigan_Software/Project#Description">2014 survey</a> conducted by the University of Michigan Biological Software Team confirmed that the repeatability problem exists in synthetic biology, with every scientist surveyed reporting prior struggles with replicating protocols. The majority of these scientists indicated that unclear language and missing steps are the greatest contributors to the irreproducibility of synthetic biology protocols. ProtoCat is designed to address both of these issues by making it easier for scientists to share troubleshooting techniques and submit edits to existing protocols.
<h3> The Solution </h3>

ProtoCat is a free database of crowd-sourced protocols designed to make existing protocols more repeatable and to enable more accurate computational models of biological systems. We believe this can most efficiently be accomplished through a commitment to open-source protocols and a broader, more active community of digital troubleshooters. ProtoCat works to establish such a community by giving anyone with an internet connection or smartphone access to a repository of synthetic biology protocols collected from all over the world. <b>Additionally, ProtoCat encourages the development of higher-quality, more repeatable protocols by allowing users to document trials, rate, review, and edit existing methods, and easily locate related protocols.</b>
<hr>
<h2> INFORMATION DUMP </h2>

Choosing reliable protocols for new experiments is a problem laboratories routinely face. Experimental practices differ immensely across laboratories, and precise details of these practices may be lost or forgotten as skilled members leave the lab. Such fragmentation in protocol methods and their documentation often hampers scientific progress. Indeed, there are few well-defined protocols that are generally agreed upon by the scientific community, in part due to the lack of a system that measures a protocol's success. In turn, the lack of commonly accepted protocols and inadequate documentation affects experimental reproducibility through method inconsistencies across laboratories. <a href="https://peerj.com/articles/148/">Review studies</a> even estimate that only 10-25% of published scientific results are reproducible, and up to 54% of materials/antibodies/organisms are not identifiable for reproducibility studies. This is an alarming figure and suggests that many of the results we base continued research and development on are inaccurate or cannot be replicated independently.
  
 
<p>
 

Revision as of 00:14, 10 September 2015


Michigan Software 2015


To address these problems, we set out to build a database that integrates a crowd-sourced ratings and comments system to clearly document, rate, elaborate on, review, and organize variants of experimental protocols, increasing the likelihood of reproducible scientific results. This streamlines some of the confusion associated with finding and replicating protocols by creating a database where investigators can upload protocols, comment on them, and use a rating system that allows the best protocols to rise to the top. Before starting, we designed a survey (which you can check out here) to poll a range of scientific researchers on their experiences trying new protocols. We disseminated it across our networks and over social media to judge the interest in and usefulness of our project. For those still interested, the survey will be available here at least through the end of 2014. So far, we've found that among a diverse and experienced set of respondents, every single scientist has struggled with replicating protocols from other experimenters, with >50% of respondents having difficulty more than 25% of the time.

Additionally, our survey identified unclear language and missing steps as the greatest contributors to the irreproducibility of protocols. Furthermore, 100% of respondents indicated they would use a database like this to browse and download protocols, and over 85% indicated they would upload and maintain their own protocols if such a site existed (click here for a complete list of results). With these data and this interest in hand, we set out to build ProtoCat.
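The "best protocols rise to the top" idea can be sketched in a few lines. This is an illustrative example only, not ProtoCat's actual implementation; all names and data are hypothetical:

```python
# Hypothetical sketch of ranking crowd-sourced protocols by user rating.
from statistics import mean

protocols = [
    {"title": "Heat-shock transformation", "ratings": [5, 4, 5, 4]},
    {"title": "Colony PCR", "ratings": [3, 2, 4]},
    {"title": "Miniprep", "ratings": [5, 5, 4, 5, 5]},
]

def average_rating(protocol):
    """Mean of all user ratings; unrated protocols sort last."""
    return mean(protocol["ratings"]) if protocol["ratings"] else 0.0

# Highest-rated protocols appear first in browse/search results.
ranked = sorted(protocols, key=average_rating, reverse=True)
print([p["title"] for p in ranked])
# → ['Miniprep', 'Heat-shock transformation', 'Colony PCR']
```

A production system would likely weight ratings by count or recency so that a single 5-star vote cannot outrank a well-tested protocol, but a plain mean shows the core mechanism.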

What is ProtoCat?

ProtoCat is a free database of crowd-sourced protocols designed to help raise computational predictability in biological science to the level realized in the physical, chemical, and electrical arts. In other words, ProtoCat's mission is to make existing protocols more repeatable and to enable more accurate computational models of biological systems. We believe this can most efficiently be accomplished through a commitment to open-source protocols and a broader, more active community of digital troubleshooters. ProtoCat works to establish such a community by giving anyone with an internet connection or smartphone open access to a repository of synthetic biology protocols collected from all over the world. Additionally, ProtoCat encourages the development of higher-quality, more repeatable protocols by allowing users to document trials, rate, review, edit and reorder individual steps of existing methods, and easily locate related protocols.
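The step-level editing and reordering described above implies a protocol data model in which steps are individually addressable. A minimal sketch, assuming a hypothetical `Protocol` class (this is not ProtoCat's real schema):

```python
# Illustrative data model for a protocol whose individual steps
# can be edited and reordered by users.
from dataclasses import dataclass, field

@dataclass
class Protocol:
    title: str
    steps: list = field(default_factory=list)

    def edit_step(self, index, text):
        """Replace the text of one step without touching the others."""
        self.steps[index] = text

    def reorder(self, new_order):
        """new_order is a permutation of the current step indices."""
        self.steps = [self.steps[i] for i in new_order]

p = Protocol("Ligation", ["Mix reagents", "Incubate 1 h", "Heat-inactivate"])
p.reorder([1, 0, 2])  # a user swaps the first two steps
p.edit_step(2, "Heat-inactivate at 65 C for 10 min")
print(p.steps)
```

Tracking each edit as a new revision of a single step (rather than of the whole document) is what lets reviewers rate and discuss changes at the level where irreproducibility actually arises.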

Why Synthetic Biology needs ProtoCat

At its core, synthetic biology is the practice of genetically engineering novel organisms to perform a particular function: for example, transforming a new gene into a host organism to enable it to break down waste paper. Although the steps necessary to carry out this process (e.g. extraction, restriction digest, translation/cloning/PCR, ligation, and transformation) have been well documented, the sensitive and unpredictable nature of biological organisms makes establishing repeatable methods a difficult task. Recent review studies estimate only 10-25% of published scientific results are reproducible. A 2014 survey conducted by the University of Michigan Biological Software Team confirmed that the repeatability problem exists in synthetic biology, with every single scientist surveyed reporting prior struggles with replicating protocols from other experimenters. The majority of these scientists indicated that unclear language and missing steps are the greatest contributors to the irreproducibility of synthetic biology protocols. ProtoCat is designed to address both of these issues by making it easier for scientists to share troubleshooting techniques and submit edits to existing protocols.

The Future of ProtoCat

Future development of ProtoCat focuses on expanding our library of protocols by mining methods from other digital publishing sources (e.g. protocols.io and OpenWetWare) and aggregating them in one central location. The user interface will also be further developed to automate calculations of protocol parameters (like reagent volumes and reactant proportions). Other improvements to the user experience include direct links to safety and storage information about the materials used in a protocol, as well as access to a platform for purchasing requisite materials and equipment from partnering vendors. Finally, the next generation of ProtoCat incorporates modeling software to enable virtual trials for the purpose of optimizing experimental design. For example, ProtoCat could simulate a PCR of a specified target sequence using a variety of primers to see which primer produces the highest yield, or run virtual ligations of two specified sequences at different proportions to discern the ratio that produces the highest plasmid yield.
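The simplest kind of parameter calculation such a feature could automate is the standard dilution relation C1·V1 = C2·V2. The function below is a hypothetical sketch, not a planned ProtoCat API:

```python
# Minimal example of an automated protocol-parameter calculation:
# how much stock solution to add to reach a target concentration.
def stock_volume_needed(stock_conc, final_conc, final_volume):
    """Volume of stock required to reach final_conc in final_volume.

    Uses C1*V1 = C2*V2; all units must match (e.g. concentrations
    in mM, volumes in uL).
    """
    if final_conc > stock_conc:
        raise ValueError("cannot dilute to a higher concentration")
    return final_conc * final_volume / stock_conc

# e.g. a 10 mM stock diluted to 1 mM in a 50 uL reaction:
print(stock_volume_needed(10.0, 1.0, 50.0))  # → 5.0 uL of stock
```

Generating such numbers from a protocol's declared reagents and scales would remove one common source of transcription error when researchers adapt a protocol to a different reaction volume.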
