Difference between revisions of "Team:Cambridge-JIC"
<section style="background-color: #ffa8a3">
<div class="slide" style="background-image:url(//2015.igem.org/wiki/images/b/b5/CamJIC-Panel-3.png)">
<div style="width: 78%; padding-left: 170px; padding-top: 60px; font-size: 20px;" class="padleft">
<p>The mechanics of the microscope will be 3D-printable, and all other parts will be cheap and accessible. The printed parts enable a novel method (developed by Dr Richard Bowman, Cambridge) for <span class="hl_2">precise positioning and control</span> that exploits their flexibility. The microscope will also use the Cambridge-developed Raspberry Pi board and camera module for image capture. Ultimately we are aiming for <span class="hl_2">4-micron resolution</span> in both brightfield and fluorescence modes.</p>
<p>Furthermore, the software that controls commercial microscopes is largely focused on translating the physical experience of using a microscope onto a computer. We aim to leverage the full computational potential of a digital microscope: <span class="hl_2">carefully considering functional UX design</span> to allow control (both locally and over a network) via a Google Maps-like interface, implementing <span class="hl_2">background image processing</span>, <span class="hl_2">annotation</span> and <span class="hl_2">stitching</span>, and allowing <span class="hl_2">fully autonomous operation</span>.</p>
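<p>To illustrate the stitching idea, here is a minimal Python/NumPy sketch of how overlapping tiles captured at known stage offsets could be assembled into one mosaic, averaging pixel values where tiles overlap. The function name <code>stitch_tiles</code> and the assumption that offsets are already known in pixels are illustrative only; they are not the team's actual implementation, which would also need registration to refine the offsets.</p>

```python
import numpy as np

def stitch_tiles(tiles, positions, tile_shape):
    """Place grayscale tiles onto a mosaic at known (row, col) pixel
    offsets, averaging intensities wherever tiles overlap."""
    h, w = tile_shape
    rows = max(r for r, _ in positions) + h
    cols = max(c for _, c in positions) + w
    acc = np.zeros((rows, cols), dtype=float)     # summed intensities
    weight = np.zeros((rows, cols), dtype=float)  # coverage count
    for tile, (r, c) in zip(tiles, positions):
        acc[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1.0
    weight[weight == 0] = 1.0  # avoid dividing by zero in uncovered gaps
    return acc / weight

# Example: two 4x4 tiles overlapping by two columns
t1 = np.full((4, 4), 10.0)
t2 = np.full((4, 4), 20.0)
mosaic = stitch_tiles([t1, t2], [(0, 0), (0, 2)], (4, 4))
```

<p>In the overlap region the mosaic holds the average of the two tiles (15.0 here), which crudely blends exposure differences between neighbouring fields of view.</p>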
Revision as of 22:51, 29 July 2015