Team:SJTU-Software/project
Background
There are already more than 20,000 biobricks in the official iGEM parts registry, and the number keeps increasing every year. With this explosive growth, it becomes ever harder for synthetic biologists to manually find good biobricks that meet their requirements when they build new devices from existing parts. This situation inevitably calls for software and databases dedicated to synthetic biology, and such intelligent tools will in turn accelerate the field's development. We therefore integrated the biobrick data entered before September 3, 2015 from the official registry and built a visual online device-designing system for synthetic biology researchers.
Our system also builds on EasyBBK, the 2014 SJTU-Software project, which can already be used to find good parts but leaves much room for improvement. With these improvements, users can find biobricks that match their requirements much more quickly.
Design
Introduction
Our software, BASE, has four functions: search, recommendation, evaluation and upload. With the search function, users can look up parts or devices using IDs or features as keywords. In the recommendation interface, users can draw their devices; when they drag an icon onto the chain they can also supply keywords, and BASE returns a list of parts that best fit both the requirements and the parts already in the device. With the evaluation function, users first enter a device they have designed, and the software returns a score plus advice for each part so the device can be improved. Finally, users can upload their devices to BASE's database. For the first three functions we developed a scoring system that evaluates the effectiveness and ease of use of parts and devices, and the latter three functions together form a complete support system for device design.
Method
We obtained the brick data from the iGEM parts registry. A total of 28,637 biobricks are recorded in the database. We then divided them into two groups, parts and devices, according to whether a biobrick has subparts: 14,744 are parts and 13,893 are devices. For each biobrick there are four relevant pages:
http://parts.igem.org/cgi/xml/part.cgi?part=BBa_B0034
http://parts.igem.org/cgi/partsdb/part_info.cgi?part_name=BBa_B0034
http://parts.igem.org/partsdb/get_part.cgi?part=BBa_B0034
http://parts.igem.org/Part:BBa_B0034:Experience
When collecting data, we simply replaced the ID "B0034" with the IDs of the other bricks.
We then extracted information from these pages. The fields include Com_id, Author, Enter_time, Ctype, Part_status, Sample_status, Part_results, Star_rating, Uses, DNA_status, Qualitative_experience, Group_favorite, Del, Groups, Confirmed_times, Number_comments, Ave_rating and Des. Twelve of these factors are taken into account when scoring biobricks.
To optimize the weights of these factors, we first analyzed the distribution of each factor's values and selected the factors that distinguish parts most effectively. We then chose 40 parts and 40 devices as training sets, and finally obtained the weights by combining the results of several methods. The optimized weights are set as the default weights.
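A minimal sketch of this collection step is shown below. It assumes the four registry pages are fetched with the Python `requests` library; the function name and URL templates are illustrative rather than our actual crawler.

```python
# Hedged sketch of the data-collection step: the same four registry URLs are
# queried for every brick by swapping in its ID. Names are illustrative.
import requests

URL_TEMPLATES = [
    "http://parts.igem.org/cgi/xml/part.cgi?part=BBa_{pid}",
    "http://parts.igem.org/cgi/partsdb/part_info.cgi?part_name=BBa_{pid}",
    "http://parts.igem.org/partsdb/get_part.cgi?part=BBa_{pid}",
    "http://parts.igem.org/Part:BBa_{pid}:Experience",
]

def fetch_part_pages(part_id: str) -> list[str]:
    """Download the four registry pages for one part ID (e.g. 'B0034')."""
    pages = []
    for template in URL_TEMPLATES:
        resp = requests.get(template.format(pid=part_id), timeout=30)
        resp.raise_for_status()
        pages.append(resp.text)
    return pages

# Example: the same templates are reused for every brick by changing the ID.
pages = fetch_part_pages("B0034")
```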
Results
1. Scores for different values of the factors
To build the scoring system, we started by assigning scores to the values of these factors. With the help of wet-lab researchers, we ranked the values of the discrete factors according to their effect on research, and chose a suitable method to map continuous values onto the interval 0 to 1.
For discrete values, the scoring table is shown below.
Table 1: scores for the factors' values

| Factor | Value | Score |
|---|---|---|
| Part status | Released HQ 2013 | 1 |
| Part status | other | 0 |
| Sample status | In Stock | 1 |
| Sample status | It's complicated | 0.5 |
| Sample status | For Reference Only | 0.25 |
| Sample status | other | 0 |
| DNA Status | Available | 1 |
| DNA Status | other | 0 |
| Part Results | Works | 1 |
| Part Results | Issues | 0.25 |
| Part Results | Fails; None; Null | 0 |
| Star Rating | 1 | 1 |
| Star Rating | Null | 0 |
| Del | No | 1 |
| Del | Yes | 0 |
| Qualitative_experience | Works | 1 |
| Qualitative_experience | Issues | 0.25 |
| Qualitative_experience | other | 0 |
| Used Times | (continuous) | 0-1 |
| Average Rating | (continuous) | 0-1 |
| Confirmed Times | (continuous) | 0-1 |
| Number of comments | (continuous) | 0-1 |
For the continuous values, such as used times, average rating and number of comments, we developed two scoring methods. The "average rating" factor takes only 5 values, so we simply score it as an arithmetic progression. For the other two factors the distribution of values is very unbalanced: a brick can already be considered good once it has been used a few tens of times with good feedback, so there is no need to require a thousand uses before it is recommended, even though some parts are in fact used hundreds or thousands of times. We therefore calculate the score as
Score = log(n + 1) / log(n_max + 1)
where n is the factor's value and n_max is its maximum value over the database. This expression damps the effect of extreme values and makes the scores more convincing.
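The sketch below illustrates both scoring rules under stated assumptions: the discrete lookup follows Table 1 (only a subset of the factors is shown), and the continuous mapping implements the log expression above; all names are illustrative.

```python
# Hedged sketch of per-factor scoring: discrete values via a Table 1 lookup,
# continuous counts via Score = log(n+1)/log(n_max+1).
import math

DISCRETE_SCORES = {
    "Part_status":   {"Released HQ 2013": 1.0},                    # else 0
    "Sample_status": {"In Stock": 1.0, "It's complicated": 0.5,
                      "For Reference Only": 0.25},                 # else 0
    "DNA_status":    {"Available": 1.0},                           # else 0
    "Part_results":  {"Works": 1.0, "Issues": 0.25},               # else 0
}

def score_discrete(factor: str, value: str) -> float:
    """Score a discrete factor value via Table 1; unlisted values score 0."""
    return DISCRETE_SCORES.get(factor, {}).get(value, 0.0)

def score_continuous(n: float, n_max: float) -> float:
    """Map a count onto [0, 1] with Score = log(n + 1) / log(n_max + 1)."""
    return math.log(n + 1) / math.log(n_max + 1)

# A part used 40 times still scores fairly well even if the most-used part
# has 3,000 uses, which is the damping effect described above.
print(score_continuous(40, 3000))   # ~0.46
```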
The optimized weights of the factors are shown in the table below.
Table 2: weight of each factor
| Factor | Weight |
|---|---|
| Part Status | 7.5 |
| Sample Status | 6.8 |
| DNA Status | 6.7 |
| Part Results | 11.9 |
| Star Rating | 7.5 |
| Qualitative_experience | 3.5 |
| Used Times | 13.7 |
| Average Rating | 13.7 |
| Del | 5.5 |
| Group favorite | 2.7 |
| Confirmed Times | 10 |
| Number of Comments | 10.5 |
This scoring system is used to evaluate all the bricks in our database and takes effect in every function except upload. In addition, we maintain a second scoring system specifically for devices.
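As a rough illustration, the default weights can be combined with the per-factor scores as a weighted sum. The sketch below assumes each factor score has already been mapped to [0, 1]; the dictionary keys are illustrative names for the registry fields.

```python
# Hedged sketch of the weighted-sum part score using the Table 2 defaults.

DEFAULT_WEIGHTS = {
    "Part_status": 7.5, "Sample_status": 6.8, "DNA_status": 6.7,
    "Part_results": 11.9, "Star_rating": 7.5, "Qualitative_experience": 3.5,
    "Uses": 13.7, "Ave_rating": 13.7, "Del": 5.5, "Group_favorite": 2.7,
    "Confirmed_times": 10.0, "Number_comments": 10.5,
}

def part_score(factor_scores: dict[str, float],
               weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted sum of per-factor scores; missing factors contribute 0."""
    return sum(w * factor_scores.get(f, 0.0) for f, w in weights.items())

# The default weights sum to exactly 100, so a part with every factor score
# equal to 1 reaches a score of 100, which keeps scores comparable.
```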
2. Device scoring method using relationships between parts
This method is mainly used in the evaluation function. For a device that a user has just designed, the score produced by the first method means little, because the registry contains no information about that device yet. We therefore need a second evaluation system based on the device's composing parts and the relationships between those parts.
When evaluating the relationships between parts, we consider several factors, such as how often two parts are used together and the average score of the devices in which they co-occur.
First, the weights of the two aspects are fixed: the default split is 65% for the parts and 35% for the relationships. Within the parts aspect, since wet-lab researchers care most about the output of a device, coding parts carry 80% of the weight and all other parts 20%. Within the relationships aspect, every relationship between a coding part and another part carries the same weight.
Because there are two scoring systems for devices, the weights of the second one are adjusted so that the scores produced by the two methods stay close, which makes scores for new devices comparable with those of devices already in the database.
3. Adding parts one by one
This method is mainly used in the recommendation function. It is similar to the second one, but when making recommendations it only considers the newly added part and its relationships to the parts already in the device. Its weights are likewise adjusted to stay consistent with the other two methods.
References
Morgan Madec, Yves Gendrault, Christophe Lallement, Jacques Haiech, A game-of-life like simulator for design-oriented modeling of BioBricks in synthetic biology, 34th Annual International Conference of the IEEE EMBS, San Diego, California, USA, 28 August - 1 September 2012
Suvi Santala, Matti Karp, Ville Santala, Monitoring Alkane Degradation by Single BioBrick Integration to an Optimal Cellular Framework, ACS Synth. Biol. 2012, 1, 60-64
Patrick M Boyle, Devin R Burrill, Mara C Inniss, Christina M Agapakis, Aaron Deardon, Jonathan G DeWerd, Michael A Gedeon, Jacqueline Y Quinn, Morgan L Paull, Anugraha M Raman, Mark R Theilmann, Lu Wang, Julia C Winn, Oliver Medvedik, Kurt Schellenberg, Karmella A Haynes, Alain Viel, Tamara J Brenner, George M Church, Jagesh V Shah, Pamela A Silver, A BioBrick compatible strategy for genetic modification of plants, Journal of Biological Engineering 2012, 6:8
Ilya B. Tikh, Mark Held, Claudia Schmidt-Dannert, BioBrick™ compatible vector system for protein expression in Rhodobacter sphaeroides, Appl Microbiol Biotechnol (2014) 98:3111-3119
Jacob E. Vick, Ethan T. Johnson, Swati Choudhary, Sarah E. Bloch, Fernando Lopez-Gallego, Poonam Srivastava, Ilya B. Tikh, Grayson T. Wawrzyn, Claudia Schmidt-Dannert, Optimized compatible set of BioBrick™ vectors for metabolic pathway engineering, Appl Microbiol Biotechnol (2011) 92:1275-1286
John M. Walker, Wilfried Weber, Martin Fussenegger, Methods in Molecular Biology: Synthetic Gene Networks, Humana Press, 2012
Validation
1. Training of the built-in weights for the algorithm that uses only information from the registry pages
For the biobricks (parts and devices) in the database, we assign a score based on their 12 features. We transform each feature's value into a score between 0 and 1 and give each feature a weight. To set an appropriate weight for each feature, we chose 40 parts or devices with high feature values as positive samples and 40 with low feature values as negative samples. We then varied the weight of one feature while fixing the others so as to widen the gap between the average score of the positive samples and that of the negative samples. In this way we adjusted the weight of each feature to better distinguish great biobricks from poor ones.
Using this method, we identified several important features that deserve higher weights than the others. We also took each feature's practical effect on research into account when setting its weight.
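The sketch below illustrates this tuning step under stated assumptions: the weights are renormalized to sum to 100 before scoring (so that raising one weight trades off against the others), and a small grid search is used per feature. The grid and data layout are illustrative, not our exact procedure.

```python
# Hedged sketch of one-feature-at-a-time weight tuning on the training sets.

def normalized(weights: dict[str, float]) -> dict[str, float]:
    """Rescale weights so they sum to 100, matching the Table 2 convention."""
    total = sum(weights.values())
    return {f: 100.0 * w / total for f, w in weights.items()}

def weighted_score(sample: dict[str, float], weights: dict[str, float]) -> float:
    return sum(weights[f] * sample.get(f, 0.0) for f in weights)

def tune_one_weight(feature: str, weights: dict[str, float],
                    positives: list[dict[str, float]],
                    negatives: list[dict[str, float]],
                    grid=(1, 2, 4, 6, 8, 10, 12, 14)) -> float:
    """Pick the weight for `feature` that widens the positive/negative gap most."""
    def gap(candidate: float) -> float:
        w = normalized({**weights, feature: candidate})
        pos = sum(weighted_score(s, w) for s in positives) / len(positives)
        neg = sum(weighted_score(s, w) for s in negatives) / len(negatives)
        return pos - neg
    return max(grid, key=gap)

# Repeating this over the 12 features, with the other weights held fixed each
# time, approximates how the default weights in Table 2 were obtained.
```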
2. Adjusting the device scoring method so that its scores are close to those of the feature-based method
For devices not in our database, we score them based on the scores of their parts and the scores of the connections between those parts. To test the validity of this algorithm, we chose 24 devices with high scores (score > 55) in the database as positive samples and 18 devices with low scores (score < 20) as negative samples, and then scored both groups with the new algorithm. To improve the prediction accuracy and balance the error rates of the two groups, we regard devices scoring above 55 as great and devices scoring below 30 as poor; for devices scoring between 30 and 55, the algorithm cannot say with confidence whether they are great or not. By that standard, the prediction accuracy is 58.3% on the positive samples and 55.6% on the negative samples, and 57.1% over all samples.
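A minimal sketch of this threshold check is shown below, under stated assumptions: scores above 55 are labeled great, below 30 poor, anything in between is left undecided, and an undecided prediction counts as a miss; the team's exact bookkeeping may differ.

```python
# Hedged sketch of the validation classification and accuracy computation.

def classify(score: float) -> str:
    if score > 55:
        return "great"
    if score < 30:
        return "poor"
    return "undecided"

def accuracy(scores: list[float], labels: list[str]) -> float:
    """Fraction of devices whose predicted class matches the known label."""
    hits = sum(1 for s, label in zip(scores, labels) if classify(s) == label)
    return hits / len(scores)

# Example with illustrative numbers (not the real validation data):
print(accuracy([70.0, 40.0, 12.0], ["great", "great", "poor"]))  # ~0.67
```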
3. A practical application of our algorithm
We collaborated with the iGEM team SJTU-BioX-Shanghai. Our goal was to evaluate their biobrick by scoring two parallel experimental designs. We received the new devices and their structures from SJTU-BioX-Shanghai and noticed that the parts had been built by the team only recently; when we downloaded the registry data, they had not yet uploaded their new biobrick, so the parts' IDs from the new devices could not be found in our database.
To evaluate the new parts, we used BLAST to find, in our database, the parts whose sequences are most similar to the new parts, and then used those parts to evaluate the new devices. Since the coding parts of a device matter most, our algorithm gives the coding parts the highest weight.
The outcome of this collaboration was very encouraging. The two devices received two very different scores, 32.98 and 3.542. This difference is also reflected in their experimental results: the device with the higher score was chosen as their final biobrick to control expression of the iron pump.
This collaboration therefore shows that our algorithm can roughly evaluate a new device on the basis of existing parts.
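A minimal sketch of the BLAST lookup described above is given here. It assumes NCBI BLAST+ is installed and the registry sequences have been indexed with `makeblastdb`; the file and database names are illustrative.

```python
# Hedged sketch: map a new, unregistered part to its closest relative in a
# local copy of the registry, then reuse that part's score for evaluation.
import subprocess

def closest_known_part(query_fasta: str, blast_db: str = "parts_db") -> str:
    """Return the ID of the most similar part in the local BLAST database."""
    result = subprocess.run(
        ["blastn", "-query", query_fasta, "-db", blast_db,
         "-outfmt", "6", "-max_target_seqs", "1"],
        capture_output=True, text=True, check=True)
    # Tabular output (outfmt 6): qseqid sseqid pident length ... evalue bitscore
    first_hit = result.stdout.splitlines()[0].split("\t")
    return first_hit[1]          # sseqid: the best-matching registry part

# The returned part ID is then scored with the feature-based system, and its
# score stands in for the new part when the whole device is evaluated.
```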
Achievement
1. We are the first to include relationships between parts in the evaluation of devices.
2. Our software supports the whole process of designing a new device: optimization of a single part or of the whole device, and visualization of the device.
3. Our software lets users define their own weights for part evaluation.
4. The database improves itself by letting users upload their own parts and devices.
5. Our software is web-based, which makes it more convenient to use.