
Project





Background

As we know, there are more than 20,000 biobricks in the iGEM official standard database, and the number keeps increasing every year. Just as sequencing technology gave birth to bioinformatics, we assumed that with the explosive growth of biobricks it will become harder and harder for synthetic biologists to manually find biobricks that meet their requirements when they try to create new devices from existing parts. This will inevitably lead to software and databases dedicated to synthetic biology, and these intelligent tools will in turn promote its rapid development. We therefore integrated the biobrick data available before September from the iGEM official standard database and developed a visual online device-designing system for synthetic biology researchers.
Meanwhile, to help researchers find better biobricks, we combined the search function and scoring system of the 2014 SJTU-Software project EasyBBK with our own system, so that users can locate biobricks matching their requirements more quickly.


Design & Algorithm

Introduction
Our software, BASE, has four functions: search, recommendation, evaluation and upload. Via the search function, users can search for parts or devices using IDs or features as keywords. In the recommendation interface, users can draw their devices; when dragging an icon onto the chain they can also give keywords to get a list of parts that best fit the requirement and the other parts. When using evaluation, users first enter a device they have designed, and our software gives advice on each part to improve the device. Finally, users can upload their devices to the iGEM Parts Registry and to BASE's database. For the first three functions, we developed a scoring system to evaluate the effectiveness and ease of use of parts and devices.

Method
We obtained the part data from the iGEM Parts Registry. A total of 14971 bio-bricks are recorded in the database. We divided them into two groups, parts and devices, according to whether a biobrick has subparts. Among them, ??? are parts and ??? are devices. For each bio-brick there are four relevant pages:
http://parts.igem.org/cgi/xml/part.cgi?part=BBa_???
http://parts.igem.org/cgi/partsdb/part_info.cgi?part_name=BBa_???
http://parts.igem.org/partsdb/get_part.cgi?part=BBa_???
http://parts.igem.org/Part:BBa_???:Experience
When collecting data, we simply replace the ??? with the brick's ID and extract information from these pages. The extracted fields are Part_status, Sample_status, Part_results, Uses, DNA_status, Qualitative_experience, Group_favorite, Star_rating, Del, Groups, Number_comments and Ave_rating, and we take most of these factors into account when scoring bio-bricks. To optimize the weights of these factors, we first analyzed the distribution of each factor's values in order to choose the factors that distinguish parts most effectively. We then selected 40 parts and 40 devices as training sets, and finally obtained the weights by combining the results of several methods.
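To illustrate the collection step, here is a minimal Python sketch of how the four page URLs can be built and fetched for one part ID. The function name, the use of the requests library and the example ID are our own assumptions for illustration; the actual crawler and the field-extraction code are not shown.

import requests

# The four Registry pages that exist for each bio-brick; "{pid}" is replaced
# with the brick's ID to form the real URL.
URL_TEMPLATES = [
    "http://parts.igem.org/cgi/xml/part.cgi?part=BBa_{pid}",
    "http://parts.igem.org/cgi/partsdb/part_info.cgi?part_name=BBa_{pid}",
    "http://parts.igem.org/partsdb/get_part.cgi?part=BBa_{pid}",
    "http://parts.igem.org/Part:BBa_{pid}:Experience",
]

def fetch_part_pages(part_id):
    """Download the raw HTML/XML of the four Registry pages for one part ID."""
    pages = {}
    for template in URL_TEMPLATES:
        url = template.format(pid=part_id)
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        pages[url] = response.text
    return pages

# Example (hypothetical ID): fetch the pages, then parse fields such as
# Part_status, Sample_status, Uses and Star_rating from the returned text.
# pages = fetch_part_pages("K123456")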

Results
1. Scores for different values of factors
To build the scoring system, we start by assigning scores to the values of these factors. With the help of wet-lab researchers, we rank the discrete values according to their effect on research, and choose a suitable method to transform continuous values into scores between 0 and 1. For discrete values, the scoring table is given below.

Table 1: Scores for the values of discrete factors

Factor                   Value                 Score
Part Status              Released HQ 2013      1
                         other                 0
Sample Status            In Stock              1
                         It's complicated      0.5
                         For Reference Only    0.25
                         other                 0
DNA Status               Available             1
                         other                 0
Part Results             Works                 1
                         Issues                0.25
                         Fails; None; Null     0
Star Rating              1                     1
                         Null                  0
Qualitative Experience   Works                 1
                         Issues                0.25
                         other                 0
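As an illustration, Table 1 can be encoded directly as lookup tables. The Python sketch below is our own; raw values are treated as the strings extracted from the Registry pages, and anything not listed falls back to the "other" score of 0.

# Score tables for the discrete factors of Table 1.
DISCRETE_SCORES = {
    "Part_status":            {"Released HQ 2013": 1.0},
    "Sample_status":          {"In Stock": 1.0, "It's complicated": 0.5,
                               "For Reference Only": 0.25},
    "DNA_status":             {"Available": 1.0},
    "Part_results":           {"Works": 1.0, "Issues": 0.25},
    "Star_rating":            {"1": 1.0},
    "Qualitative_experience": {"Works": 1.0, "Issues": 0.25},
}

def discrete_score(factor, value):
    """Return the Table 1 score of a discrete factor value (0 for "other")."""
    return DISCRETE_SCORES[factor].get(value, 0.0)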

For continuous values, such as used times, average rating and number of comments, we developed two scoring methods. The "average rating" factor has only 5 possible values, so we simply score it as an arithmetic progression. For the other two factors, the distribution of values is very unbalanced. Since we can already be convinced that a brick is good when it has been used several tens of times with good feedback, there is no need to require a brick to be used a thousand times before it is recommended to other users, even though some parts are indeed used hundreds or even thousands of times. We therefore calculate the score with the expression below:

Score = log(n + 1) / log(n_max + 1)

where n is the value of the factor and n_max its maximum over the database. This expression reduces the effect of extreme values and makes the scores more convincing.
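A minimal Python sketch of this log scaling (the function name is ours):

import math

def log_scaled_score(n, n_max):
    """Map a raw count n into [0, 1] via Score = log(n + 1) / log(n_max + 1)."""
    if n_max <= 0:
        return 0.0
    return math.log(n + 1) / math.log(n_max + 1)

# e.g. a part used 50 times when the most-used part has 2000 uses:
# log_scaled_score(50, 2000) ≈ 0.52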

The optimized weights of the factors are shown in the table below.
Table 2: Optimized factor weights

Factor                   Weight
Part Status              10
Sample Status            10
DNA Status               10
Part Results             15
Star Rating              10
Qualitative Experience   5
Used Times               15
Average Rating           20
Number of Comments       5

The scoring system above is used to evaluate all the bricks in our database, and it is applied in every function except upload. In addition, we developed a second scoring system specifically for devices.
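Putting Table 1, the log scaling and the weights of Table 2 together, the overall part score can be sketched as below. This is an illustration under our own assumptions: it reuses the discrete_score and log_scaled_score helpers sketched above, and the exact treatment of Ave_rating (an arithmetic progression over its five values) is assumed.

# Optimized factor weights from Table 2 (they sum to 100).
WEIGHTS = {
    "Part_status": 10, "Sample_status": 10, "DNA_status": 10,
    "Part_results": 15, "Star_rating": 10, "Qualitative_experience": 5,
    "Uses": 15, "Ave_rating": 20, "Number_comments": 5,
}

def part_score(fields, maxima):
    """Weighted 0-100 score of one part.

    fields -- factor name -> raw value extracted from the Registry pages
    maxima -- factor name -> maximum observed value, used for the log scaling
    """
    total = 0.0
    for factor, weight in WEIGHTS.items():
        value = fields.get(factor)
        if factor in ("Uses", "Number_comments"):
            sub_score = log_scaled_score(value or 0, maxima[factor])
        elif factor == "Ave_rating":
            # assumed arithmetic progression over the five possible ratings
            sub_score = (value or 0) / 5.0
        else:
            sub_score = discrete_score(factor, value)
        total += weight * sub_score
    return total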

2. Device scoring based on relationships between parts
This method is mainly used in the evaluation function. For a device that has just been designed by a user, the score obtained with the first method means nothing, because the Registry has no information about the device yet. We therefore developed a new evaluation system based on the device's composing parts and the relationships between them. When evaluating the relationships between parts, we take several factors into consideration, such as how often the parts are used together and the average score of the devices in which they appear together. First, the weights of the two aspects are fixed; the default ratio is 65% for the parts and 35% for the relationships. Within the parts aspect, the weight of each part type is dynamic: it is influenced by the number and types of the parts present, while still reflecting the different significance of the parts. Within the relationships aspect, all relationships share the same weight. Then the scoring begins. Since wet-lab researchers care most about the outcome of a device, we locate the functional coding parts of the device and optimize them first. After the user locks the functional parts, we optimize the remaining parts in an order decided by their type and location in the device. Because there are two scoring systems for devices, the weights of the second one are adjusted so that the scores produced by the two methods stay close, which makes the scores of new devices comparable with those already in the database.
3. Adding parts one by one
This method is mainly used in the recommendation function. It is similar to the second method, but during recommendation it only considers the newly added part and its relationships. The weights are also adjusted to be consistent with the other two methods.
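Below is a minimal sketch of the 65%/35% combination used in method 2. It is our own illustration under simplifying assumptions: a plain average over part scores and over relationship scores, whereas the actual system uses dynamic, type-dependent weights for the parts.

def device_score(part_scores, relationship_scores,
                 part_ratio=0.65, relation_ratio=0.35):
    """Score a newly designed device from its parts and their relationships.

    part_scores         -- 0-100 scores of the composing parts
    relationship_scores -- 0-100 scores of the part-part relationships
                           (e.g. co-usage frequency and average co-usage score)
    """
    if not part_scores:
        return 0.0
    part_term = sum(part_scores) / len(part_scores)
    relation_term = (sum(relationship_scores) / len(relationship_scores)
                     if relationship_scores else 0.0)
    return part_ratio * part_term + relation_ratio * relation_term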



Validation



Improvement


The School of Life Sciences and Biotechnology of Shanghai Jiao Tong University is one of the best in China, and it has a long tradition of taking part in the iGEM competition. However, until 2014 our school only had a wet-lab team; that year our seniors organized the first SJTU software team. Thanks to the experience shared by last year's software team, we have a much more mature software team this year.
Before deciding on this year's project, we carefully investigated the projects of former software teams. During this process we noticed a common problem: a project is usually only maintained during the year in which its team competes. Several years later we could no longer contact the people who had been responsible for the software, even though some of that software is excellent. To prevent this from happening to our project, we consulted the members of last year's software team after we had settled on our main idea, and consequently absorbed the most important results of EasyBBK into our software. First of all, as an outstanding piece of software, EasyBBK's highlight is its search function, so we added that search function to our push system (which helps users construct and design devices). Moreover, we made EasyBBK's scoring system more flexible by allowing users to adjust the weights of the different scoring factors and to filter by score range.
Meanwhile, we use the usage count of each existing biobrick to capture differences in their performance. After standardizing the format of the biological data, we can apply better and more concrete standards, such as expression level, to evaluate the properties of biobricks. In the end, we hope future SJTU software teams will continue to inherit and develop the work of former teams; this would better meet the requirements of the iGEM competition.


Demo

Search for device

Enter keyword “protein”
Click the ‘Device’ button.

Use the ‘Advanced’ button to change the weights.

Set Uses to 1
Set Part Results to 20
Set Confirmed_times to 1
Set Number_comments to 1

Leave the rest at their defaults and click the ‘Sure’ button.
You will see results like this:

Search for part

Searching for a part is a similar process.
Enter keyword “DNA”.
Click the ‘Part’ button.
Change the weights as in the device-search example above
(set Uses to 1, Part Results to 20, Confirmed_times to 1 and Number_comments to 1).
The result is shown below:

Construct

Let us see a simple example:
Drag the icons and construct a simple device with a regulator, a coding region, a reporter and a terminator.

Click the ‘Advise’ button and select biobricks for your device. You can fill in functions if you want.

History ID will show the IDs of all biobricks in your device.

When you have finished a device, you can click the button in the bottom right corner to evaluate it.


Achievements

1. We are the first to include relationships between parts in the evaluation of devices.
2. Our software supports the whole process of designing a new device: the part-by-part optimization and the visualization of the device.
3. Our software enables users to define personalized weights for part evaluation.
4. Our software helps users upload their parts more easily and can expand its own database.
5. Our software is web-based, which makes it more convenient to use.