Difference between revisions of "Team:Cambridge-JIC/MicroMaps"

 
<dd>free: BF - Brute-Force, unreliable</dd>
 
 
</dl>
 
<p>Our research in the area indicates that the SIFT + FLANN combination is very good. Further understanding of the subject might be gained from Google's <a href="http://www.google.co.uk/maps/about/contribute/photosphere/" class="blue">PhotoSphere</a> project.</p>
  
 
*** show some examples of tiling, and intended micromaps with simon's manual tiling ***
 
 
Examples: we can see that, under the right conditions, multiple images can be stitched accurately, even in 2D (no 2D example yet)
 
  
<p><b>How it works:</b> More concretely, MicroMaps keeps a collection of the images it has taken, along with the corresponding expected physical coordinates. MicroMaps requests small regions of the slide (tiles) one by one to fill up its field of view. When a tile is requested, the software looks through its collection to see if it has already captured that region, and joins any seams it finds if multiple images match the tile. If no images match the tile, it takes a series of overlapping images between a nearby image (in terms of expected coordinates) and the desired tile. For each image, it uses the stitching algorithm to determine accurate coordinates for the image and compares them to the expected coordinates; this is essential to correct for hardware noise and inaccuracies, and allows a seamless image to be constructed from the small tiles. The accuracy obtained, combined with calibration data, then allows precise measurements to be made. The accurate positioning information also allows pins to be dropped, so that interesting features can be returned to later.</p>
  
<p><b>Problems:</b> This works well for fixed samples, but what about live samples? Given the current difficulties, we are not prepared to apply the MicroMaps logic to motile samples: moving samples are infeasible with the current processing delays. For now, we recommend using the <a href="//2015.igem.org/Team:Cambridge-JIC/Webshell" class="blue">WebShell</a>. We are still working on this issue; expect a potential fix in MicroMaps Version 2. We did make some progress with an image-tracking challenge: each member of our software team was challenged to track some moving pixels in an image ***(ants.gif) to find the hidden message, Will's entry shown in bants.gif***. Perhaps in Version 2, interesting live features (e.g. moving cells) will be tracked automatically.</p>
</div></div></section>
  
<section style="background-color:#fff">
    <div class="slide" style="min-height:0px">
 
        <div style="width: 80%; margin: 30px 0px;color:#000">
<h3>Image Processing</h3>
 
  
 
*** ask ocean!!! ***
 
 
*** ask souradip!!! ***
 
 
Blockly visual programming to assemble simple annotators and microscope commands into complex workflows that automate experiments
 
 
   
 
 
 
 
 
 
 
****** ignore below ******
 
**************************
 
 
Welcome to MicroMaps, the new way to interact with microscopes. MicroMaps combines the simplicity of a Google Maps-like interface with the power of a motorised microscope and lets you navigate without worrying about focusing or losing your place.
 
 
 
 
micromaps is a web interface to openscope that allows for a more natural interaction with the hardware
 
google maps-like interface
 
pan, zoom, rotate
 
 
needs stitching
 
- algorithms: proprietary: ....
 
              free: orb, bf ==> example images
 
unfortunately not robust enough to handle arbitrary images as easily as mobile phones do with panoramas
 
future dev: do a translation-only algorithm, assuming no zoom or rotation; naïve algo is O(n^4)
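
To make that cost concrete, here is a minimal sketch of the naïve search (assuming grayscale NumPy arrays; not the MicroMaps code): every candidate (dx, dy) shift is scored over the whole overlap, so O(n^2) shifts times O(n^2) per score gives O(n^4) for n x n images.

<pre>
# Naive translation-only alignment: try every (dx, dy) shift of img_b
# against img_a and keep the one with the lowest mean squared error.
import numpy as np

def best_translation(img_a, img_b, max_shift):
    h, w = img_a.shape
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping windows of img_a and the shifted img_b
            a = img_a[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = img_b[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            if a.size == 0:
                continue
            err = np.mean((a.astype(float) - b.astype(float)) ** 2)
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best
</pre>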
 
 
can stitch multiple images together in 2d, though a bit too picky for real world data unfortunately
 
 
architecture: stitching, alignment => feedback on plastic deformation for reliable position information!
 
attempts to normalise images: hsv, histeq, etc
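
For example, one pass of that kind (a sketch with OpenCV, assuming 8-bit BGR input; the exact pipeline we tried is not reproduced here):

<pre>
# Normalise tile brightness: equalise the V (value) channel in HSV so
# exposure differences between neighbouring tiles are reduced before
# matching; hue and saturation are left untouched.
import cv2

def normalise_tile(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = cv2.equalizeHist(v)
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)
</pre>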
 
 
as a result, micromaps is currently fairly limited and cannot stitch images
 
works well as image viewer :P
 
 
future dev: reliable stitching => alignment, then annotation!
 
 
#annotation -- ocean
 
 
automatic annotation, manual auditing
 
 
examples:...
 
 
if you can't wait, try out our imagej plugin! <link>
 
 
</div></div></section>
 
 
 
 
  
 
</html>
 
 
{{:Team:Cambridge-JIC/Templates/Footer}}
 

Revision as of 09:49, 18 September 2015

MicroMaps


The future of microscopy is (almost) here! Catch a sneak peek of MicroMaps and play around with our early alpha by getting hold of your own OpenScope.


What is MicroMaps?

MicroMaps is the new way to interact with microscopes. MicroMaps combines the simplicity of a Google Maps-like web interface [1] (the mapping interface is built on OpenLayers) with the power of a motorised microscope and automated image annotation. Navigate your slide with ease and never worry about losing focus or your bearings again (check out our Autofocus algorithm)! Say goodbye to tedious cell counting and phenotype searches!

MicroMaps features:

  • Freely pan around: automates image-taking and stitching to provide a seamless map of your slide - pan, zoom and rotate your samples

  • Using calibration data, measure features with ease, regardless of their orientation or the position of your reticle

  • See something you like? Capture a raw, unprocessed image for later, or drop a pin so you can return to the same spot!

  • Need data? Use an extensive automated annotation toolkit to measure and characterise your sample. Looking for a specific phenotype? Want to count your cells? Look no further - all of this with the comfort of knowing that you can manually intervene if the computer gets it wrong!

  • Need a custom annotation? Use our easy Python libraries and examples to write your own and share it with others on our online annotation repository.


Note: Unfortunately, though most of the groundwork for these features is in place, technical difficulties currently impede our ability to bring them all to you. Stay tuned for future updates, or read on to see what has already been done behind the scenes.


Can't wait? Try out our ImageJ plugin.


Image Stitching

This is the technology that makes MicroMaps possible... and the reason it is in early alpha. Though the translation mechanism of our microscope allows panning control as fine as a single micron (see our Tech Specs page), the material used for 3D printing and the quality of the motors impair the accuracy with which translation can be achieved. A subtle point of the flexure mechanism used for stage movement is that shifting in one direction also causes a small-angle twist of the frame of view. This in turn makes it difficult to know which parts of the slide go where in our interface. Luckily, image stitching algorithms exist to find where two or more images match and combine them (as in the panorama feature on modern smartphones). Using these algorithms, we can determine precisely how the microscope imagery should be shown on screen and eliminate the seams between images. We can also use the derived position information to correct for translational inaccuracies, so we know with confidence where you are on your slide and can let you drop pins on features you like. In addition, overlaying the stitched images removes the black artefacts caused by dirt on the CCD or imperfections in the optics.

Many stitching algorithms have been developed, though unfortunately some of the best are proprietary. Our open-source alternative is significantly less robust, and this has been a major roadblock in the development of MicroMaps. Eventually, we were forced to disable the free-panning mechanism in the MicroMaps alpha. It has also meant that some of the other features we had in mind have not been integrated for now.

Algorithms required for MicroMaps:

Feature-detecting algorithms (these find interesting things in images):
  • proprietary: SIFT, SURF, FAST
  • free: ORB - actually, this algorithm is also pretty good, according to its developers [2]

Feature-matching algorithms (these try to match the same interesting things in two images):
  • proprietary: FLANN - fast and reliable
  • free: BF - Brute-Force, unreliable

Our research in the area indicates that the SIFT + FLANN combination is very good. Further understanding of the subject might be gained from Google's PhotoSphere project.
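
For illustration, the free pair above (ORB + BF) can be combined in a few lines of OpenCV. This is a minimal sketch, not the MicroMaps implementation itself; the partial affine model is chosen to match the flexure-stage twist described earlier.

# Free-algorithm stitching step: ORB features + Brute-Force matching,
# then a RANSAC-filtered partial affine estimate (rotation, uniform
# scale and translation) between two overlapping microscope images.
import cv2
import numpy as np

def estimate_overlap_transform(img1, img2):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Hamming distance suits ORB's binary descriptors; crossCheck drops
    # asymmetric matches (this is where BF's unreliability shows up).
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects the bad matches that survive crossCheck
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M  # 2x3 matrix mapping img1 coordinates into img2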

*** show some examples of tiling, and intended micromaps with simon's manual tiling ***

Examples: we can see that, under the right conditions, multiple images can be stitched accurately, even in 2D (no 2D example yet)

How it works: More concretely, MicroMaps keeps a collection of the images it has taken, along with the corresponding expected physical coordinates. MicroMaps requests small regions of the slide (tiles) one by one to fill up its field of view. When a tile is requested, the software looks through its collection to see if it has already captured that region, and joins any seams it finds if multiple images match the tile. If no images match the tile, it takes a series of overlapping images between a nearby image (in terms of expected coordinates) and the desired tile. For each image, it uses the stitching algorithm to determine accurate coordinates for the image and compares them to the expected coordinates; this is essential to correct for hardware noise and inaccuracies, and allows a seamless image to be constructed from the small tiles. The accuracy obtained, combined with calibration data, then allows precise measurements to be made. The accurate positioning information also allows pins to be dropped, so that interesting features can be returned to later.
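
In pseudocode, the tile-request loop might look roughly like the sketch below. Everything here (the cache layout, the simulated stage noise, the helper names) is a hypothetical stand-in, not the real MicroMaps API.

# Toy sketch of the tile-request flow: serve a tile from the capture
# cache if possible; otherwise capture overlapping frames from the
# nearest known position, correcting each expected position with a
# (here simulated) stitching measurement.
import math
import random

cache = {}  # actual (x, y) position -> image (images elided in this sketch)

def stitch_correct(expected):
    # Stand-in for the stitching step: in reality the new frame is
    # matched against its neighbour; here we just simulate stage noise.
    return (expected[0] + random.gauss(0, 0.02),
            expected[1] + random.gauss(0, 0.02))

def get_tile(tile_pos, fov=1.0):
    hits = [p for p in cache if math.dist(p, tile_pos) < fov]
    if hits:
        return hits  # already captured: these images get their seams joined
    # Walk from the nearest captured position to the tile in overlapping
    # steps (half a field of view each), capturing as we go.
    start = min(cache, key=lambda p: math.dist(p, tile_pos), default=(0.0, 0.0))
    steps = max(1, math.ceil(math.dist(start, tile_pos) / (0.5 * fov)))
    for i in range(1, steps + 1):
        t = i / steps
        expected = (start[0] + t * (tile_pos[0] - start[0]),
                    start[1] + t * (tile_pos[1] - start[1]))
        cache[stitch_correct(expected)] = "frame"  # capture happens here
    return [p for p in cache if math.dist(p, tile_pos) < fov]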

Problems: This works well for fixed samples, but what about live samples? Given the current difficulties, we are not prepared to apply the MicroMaps logic to motile samples: moving samples are infeasible with the current processing delays. For now, we recommend using the WebShell. We are still working on this issue; expect a potential fix in MicroMaps Version 2. We did make some progress with an image-tracking challenge: each member of our software team was challenged to track some moving pixels in an image ***(ants.gif) to find the hidden message, Will's entry shown in bants.gif***. Perhaps in Version 2, interesting live features (e.g. moving cells) will be tracked automatically.
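
To give a flavour of the challenge, finding the moving pixels can be done with simple frame differencing. An illustrative sketch only, assuming the GIF's frames are already loaded as grayscale NumPy arrays; this is not anyone's actual entry.

# Frame differencing: pixels that change between consecutive frames are
# the moving ones; the union over all frames highlights the motion
# (which is where a hidden message would show up).
import cv2
import numpy as np

def moving_pixel_mask(frames, thresh=25):
    masks = []
    for prev, cur in zip(frames, frames[1:]):
        diff = cv2.absdiff(cur, prev)  # per-pixel change between frames
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        masks.append(mask)
    return np.bitwise_or.reduce(masks)  # pixels that moved at any point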

Image Processing

*** ask ocean!!! ***

Some things to consider: cell counting; phenotype screening; fluorescence characterisation; relative measurement; identifying samples (e.g. by shape and colour, maybe by more complex features - e.g. for eukaryotes, look for nuclei)???

*** ask souradip!!! ***

Blockly visual programming to assemble simple annotators and microscope commands into complex workflows that automate experiments
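
As a taste of how such an annotator could look, cell counting can be prototyped in a few lines of OpenCV. A rough sketch, assuming a grayscale image with bright cells on a dark background; this is not the toolkit's actual implementation.

# Count cells: Otsu's threshold separates cells from background, then
# connected components above a minimum area are counted (ignoring specks).
import cv2

def count_cells(gray, min_area=30):
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return sum(1 for i in range(1, n)  # label 0 is the background
               if stats[i, cv2.CC_STAT_AREA] >= min_area)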