<center><h1 style="line-height:1.295em"> MicroMaps </h1></center>
<hr>
<center><p><i>The future of microscopy is (almost) here! Catch a sneak peek of MicroMaps and play around with our early alpha by getting hold of your own OpenScope.</i></p></center>
<hr>
</div></div></section>
<li><p>See something you like? <b>Capture a raw, unprocessed image</b> for later analysis, or <b>drop a pin</b> so you can return to the spot!</p></li>
<li><p>Need data? Use an <b>extensive automated annotation toolkit</b> to measure and characterise your sample. Looking for a specific phenotype? Want to count your cells? Look no further - all of this, with the comfort of knowing that you can manually intervene if the computer gets it wrong!</p></li>
</ul>
<hr>
<img src="//2015.igem.org/wiki/images/e/e1/CamJIC-Micromaps-Stitch1.png" style="height:250px;margin:10px">
<img src="//2015.igem.org/wiki/images/f/f7/CamJIC-StretchGoals-MarchantiaStitch.png" style="height:250px;margin:10px">
<img src="//2015.igem.org/wiki/images/6/6d/CamJIC-Micromaps-StitchSimon.png" style="height:250px;margin:10px;background-color:#000;">
</center>
<center><p><i><b>Figure 1</b>: First successful stitching of two images (Nigerian liane). <b>Figure 2</b>: Stitching implemented on macroscopic images of Marchantia polymorpha as part of our <a href="//2015.igem.org/Team:Cambridge-JIC/Stretch_Goals" class="blue">Stretch Goals</a>. Note the accuracy of the stitching. <b>Figure 3</b>: Pretend stitching (performed manually), showing how MicroMaps is ultimately intended to work.</i></p></center>
<p><b>How it works:</b> More concretely, MicroMaps keeps a collection of images it has taken, along with the corresponding expected physical coordinates. MicroMaps will request small regions (tiles) of the slide one by one to fill up its field of view. When a tile is requested, the software will look through its collection to see if it has already captured that region, and will join any seams it finds if multiple images match that tile. If no images match the tile, it will take a series of overlapping images between a nearby image (in terms of expected coordinates) and the desired tile. For each image, it will use the stitching algorithm to determine accurate coordinates for the image and compare them to the expected coordinates. This is essential to correct for hardware noise and inaccuracies, and will allow a seamless image to be constructed from these small tiles. The accuracy obtained, combined with calibration data, will then allow precise measurements to be made. The accurate positioning information will also allow pins to be dropped, so interesting features can be returned to later.</p>
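<p>To make this concrete, here is a minimal Python sketch of the tile-lookup flow. All names here are illustrative assumptions rather than the actual MicroMaps source: we assume a <code>capture(xy)</code> routine that drives the stage, and a <code>stitch(matches)</code> routine wrapping a feature-matching stitcher.</p>
<pre>
# Illustrative sketch of the tile-lookup logic described above
# (hypothetical names, not the actual MicroMaps source).
import numpy as np

class TileStore:
    """Captured images keyed by their expected stage coordinates."""

    def __init__(self, tile_size):
        self.tile_size = tile_size   # physical extent of one tile
        self.images = []             # list of (expected_xy, image) pairs

    def add(self, expected_xy, image):
        self.images.append((np.asarray(expected_xy, float), image))

    def covering(self, tile_xy):
        """Stored images whose expected coordinates overlap this tile."""
        tile_xy = np.asarray(tile_xy, float)
        return [(xy, img) for xy, img in self.images
                if np.all(np.abs(xy - tile_xy) < self.tile_size)]

def waypoints(a, b, step):
    """Evenly spaced stage positions from a to b, at most `step` apart."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = max(int(np.ceil(np.linalg.norm(b - a) / step)), 1)
    return [a + (b - a) * i / n for i in range(n + 1)]

def request_tile(store, tile_xy, capture, stitch):
    """Fill one tile of the field of view."""
    matches = store.covering(tile_xy)
    if not matches:
        # Not yet imaged: take overlapping frames from the nearest
        # known position towards the desired tile (50% overlap).
        start = min((xy for xy, _ in store.images), default=tile_xy,
                    key=lambda xy: np.linalg.norm(xy - np.asarray(tile_xy)))
        for xy in waypoints(start, tile_xy, store.tile_size / 2):
            store.add(xy, capture(xy))
        matches = store.covering(tile_xy)
    # Stitching recovers accurate coordinates, correcting stage noise,
    # and returns the seamless composite for the tile.
    return stitch(matches)
</pre>
<p>The real implementation must also handle focus and calibration, but the control flow is the same: check coverage, image the gap, stitch, and correct the coordinates.</p>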
<p><b>Problems:</b> This works well for fixed samples, but what about live ones? Current processing delays make moving samples infeasible, so we are not yet prepared to apply the MicroMaps logic to motile specimens. For now, we recommend using the <a href="//2015.igem.org/Team:Cambridge-JIC/Webshell" class="blue">WebShell</a>. We are still working on this issue: expect the ability to follow moving specimens in WebShell v2, and perhaps in MicroMaps v2 with some speed improvements.</p>
<br>
<hr>
<p><b>The Method:</b> We tested our image processing software on images of <i>Marchantia</i> gemmae on a Petri dish with agar. This was intended as a step towards our <a href="https://2015.igem.org/Team:Cambridge-JIC/Stretch_Goals" class="blue">Stretch Goal</a> - an automated desktop screening system. The software was written using the <a href="http://opencv.org/" class="blue">OpenCV</a> library. Two types of image processing algorithms were implemented:</p>
<ul>
<li><p><b>Standard thresholding</b><br>This converts the image to grey-scale and searches for dark areas. We started off with a basic contrast increase to isolate the darker areas of the image, which we assumed would correspond to samples. We then followed the steps in a paper [2] which was supposed to yield much better isolation of samples that look faint and are hard to distinguish from their background. This ended up detecting dents in the agar gel along with the samples. To resolve this issue we came up with the next idea...</p></li>
<center><img src="https://static.igem.org/mediawiki/2015/c/c1/CamJIC-bdct.png" style="height:250px;margin:10px"></center>
<li><p><b>Colour detection</b><br>An eye dropper was added to select the upper and lower colour darknesses to search for (the user clicks to select these colours). These colours correspond to areas of the sample with better and worse illumination respectively. A slider was also added to change the 'darkness' of the sample colour, which generally varies with room lighting conditions. With this implementation, the program performed much better, detecting the <i>Marchantia</i> gemmae rather than the agar dents (both approaches are sketched in code after this list).</p></li>
</ul>
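<p>For illustration, the two approaches might be sketched with OpenCV roughly as follows. This is not the actual marchantiaIdentification_gui.py script: the threshold value and HSV bounds below are made-up placeholders, standing in for the values the user picks with the eye dropper and slider.</p>
<pre>
# Rough sketch of the two detection approaches (illustrative values,
# not the actual marchantiaIdentification_gui.py script).
import cv2
import numpy as np

img = cv2.imread("petri_dish.jpg")   # any photo of a dish with samples

# 1. Standard thresholding: grey-scale, contrast boost, keep dark areas.
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grey = cv2.equalizeHist(grey)                   # basic contrast increase
_, dark = cv2.threshold(grey, 60, 255, cv2.THRESH_BINARY_INV)

# 2. Colour detection: keep pixels between a lower bound (worse
#    illumination) and an upper bound (better illumination).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([35, 40, 20])                  # hypothetical dark green
upper = np.array([85, 255, 220])                # hypothetical bright green
mask = cv2.inRange(hsv, lower, upper)

# Outline the detected samples in red, as in Figure 4.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
cv2.drawContours(img, contours, -1, (0, 0, 255), 2)
cv2.imwrite("detected.jpg", img)
</pre>
<p>In practice the colour mask also benefits from a morphological opening (cv2.morphologyEx) to discard small specks before the contours are drawn.</p>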
<center><img src="//2015.igem.org/wiki/images/e/ef/CamJIC-Software-ImageRec.jpg" style="width:400px;margin:10px"><p><i><b>Figure 4</b>: Sample recognition working on a Petri dish with Marchantia gemmae. The program highlights the samples it finds in red. Note that the agar dent is not included in the final output. This was achieved using the colour detection algorithm.</i></p></center>
<p><b>Microscopic image processing:</b> The colour detection used above can, in theory, be easily adapted to work with fluorescent samples – this would prove useful for sample counting and for detecting, for example, samples that successfully express a specific fluorescent protein. A similar strategy can be applied to stained samples with interesting coloured features: for example, to recognise stained nuclei (e.g. with toluidine blue) and in this way distinguish eukaryotic cells. A hypothetical sketch of such an adaptation follows.</p>
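<p>As a hypothetical example (not part of the MicroMaps package), counting samples that fluoresce green could look like the sketch below; the HSV bounds are assumptions that would need tuning to the actual filters and illumination.</p>
<pre>
# Hypothetical fluorescent-sample counter (illustrative bounds only).
import cv2
import numpy as np

img = cv2.imread("fluorescence.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Select bright green pixels, e.g. GFP under blue excitation.
mask = cv2.inRange(hsv, np.array([40, 100, 100]), np.array([80, 255, 255]))

# Each connected blob of fluorescence counts as one sample.
n_labels, _ = cv2.connectedComponents(mask)
print("fluorescent samples found:", n_labels - 1)   # label 0 = background
</pre>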
<p>However, we have not implemented sample recognition in MicroMaps Alpha, mostly due to lack of time and difficulties in coping with multicolour images. Still, the script for image recognition is in the <a href="https://github.com/sourtin/igem15-sw/blob/master/img_processing/identificationTesting/marchantiaIdentification_gui.py" class="blue">software package</a> for you to try out (and improve). It is also included in the source code download on the <a href="//2015.igem.org/Team:Cambridge-JIC/Downloads" class="blue">Downloads</a> page, under img_processing/identificationTesting/marchantiaIdentification_gui.py.</p>
<hr>
<center><p><i>Image recognition was developed by Ocean, with useful feedback and advice from the rest of the Software team.</i></p></center>
<hr>
<p style="font-size:80%">References:<br>[2] Chen, L., Chien, C. and Nguyen, X. (2013). An effective image segmentation method for noisy low-contrast unbalanced background in Mura defects using balanced discrete-cosine-transfer (BDCT). <i>Precision Engineering</i>, 37(2), pp.336-344.</p>