Itai Cohen Group - Research
http://cohengroup.ccmr.cornell.edu/website-categories/research
Light Microscopy at Maximal Precision
http://cohengroup.ccmr.cornell.edu/research/projects/light-microscopy-maximal-precision
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells, well beyond the capabilities of the human eye. However, the analysis of these images is frequently little better than automated manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call Parameter Extraction from Reconstructing Images (PERI). Our focus here is on images of colloidal particles taken with a confocal microscope, but the ideas are broadly applicable to any form of microscopy. </p>
<p>Below, you can observe the process by which our generative model creates colloidal data. Because we are primarily focused on tracking spherical particles, our model images begin with a Platonic image of dye distributed around perfect spheres (a, top row). Our model goes on to incorporate pixelation, as well as detailed estimates of the spatially-varying illumination created by the microscope light source (a, middle row) and the <a href="https://en.wikipedia.org/wiki/Point_spread_function">point spread function</a> (a, bottom row). Finally, we add a realistic noise level to complete the picture. Panel b) shows real data in the upper left corner and our generated data (without added noise) in the bottom right, and by eye it is already clear that these images match well. When we add noise to the bottom right image, the two become virtually indistinguishable. </p>
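<p>The generative pipeline described above (Platonic spheres, then illumination, then the point spread function, then noise) can be sketched in a few lines. The toy Python model below is only an illustration of the idea: it uses a 2D image, a simple linear illumination gradient, and a Gaussian stand-in for the point spread function, whereas the real PERI model fits far more detailed, spatially-varying versions of each component.</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def generate_image(shape=(64, 64), centers=((20, 20), (40, 44)), radius=8,
                   psf_sigma=2.0, noise_level=0.05, seed=0):
    """Toy forward model: Platonic spheres -> illumination -> PSF blur -> noise."""
    yy, xx = np.indices(shape)
    # Platonic image: dye fills the space *around* the particles.
    platonic = np.ones(shape)
    for cy, cx in centers:
        platonic[(yy - cy) ** 2 + (xx - cx) ** 2 < radius ** 2] = 0.0
    # Smoothly varying illumination field (here just a linear gradient).
    illumination = 0.7 + 0.6 * xx / shape[1]
    # Stand-in PSF: an isotropic Gaussian blur.
    blurred = gaussian_filter(platonic * illumination, psf_sigma)
    rng = np.random.default_rng(seed)
    return blurred + noise_level * rng.standard_normal(shape)
```

<p>Fitting this forward model to data, rather than just generating images, is what lets PERI extract particle parameters at the information-theoretic limit.</p>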
<p> </p>
<p><img alt="" src="https://cohengroup.lassp.cornell.edu/userfiles/data_generation_0.png" style="float:left; height:443px; width:900px" /></p>
<p> </p>
<p>Using our generative model, we are able to measure the size and position of micron-sized particles with <strong>nanometer precision</strong>. This level of measurement is very near the information-theoretic bound on how accurately these quantities could possibly be measured. Below is a zoomed-in image of a single spherical particle; the inset shows the grey box at the center of the particle, and within that, the estimate of the particle's center from PERI. The uncertainties illustrated here are on the order of a few percent of a pixel, which translates to a few nanometers for these images!</p>
<p> </p>
<p><img alt="" src="https://cohengroup.lassp.cornell.edu/userfiles/peri_precision.png" style="float:left; height:442px; width:441px" /></p>
<p> </p>
<p>With this unprecedented level of measurement precision, we open the door to a whole new suite of analyses for colloidal systems that rely on extreme measurement precision. Moreover, the methods used in PERI are broadly applicable and can be extended to a wide range of systems. Stay tuned for these future applications!</p>
<p><span style="font-size:14px"><strong>Preprint:</strong> <a href="https://arxiv.org/abs/1702.07336">https://arxiv.org/abs/1702.07336</a></span></p>
<p><span style="font-size:14px"><strong>PERI code and tutorial:</strong> <a href="http://www.lassp.cornell.edu/sethna/peri/index.html">http://www.lassp.cornell.edu/sethna/peri/index.html</a></span></p>
<p><span style="font-size:14px"><strong>Github:</strong> <a href="https://peri-source.github.io/peri-docs/">https://peri-source.github.io/peri-docs/</a></span></p>
</div></div></div><div class="field field-name-field-category-s- field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Category(s): </div><div class="field-items"><div class="field-item even"><a href="/website-categories/research" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Research</a></div><div class="field-item odd"><a href="/research-categories/complex-fluids" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Complex Fluids</a></div></div></div>Fri, 07 Apr 2017 13:34:39 +0000 scw97238 at http://cohengroup.ccmr.cornell.edu http://cohengroup.ccmr.cornell.edu/research/projects/light-microscopy-maximal-precision#comments
Rock and Roll: How Fruit Flies Control Their Flight
http://cohengroup.ccmr.cornell.edu/content/rock-and-roll-how-fruit-flies-control-their-flight
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>Tsevi Beatus, John Guckenheimer and Itai Cohen, <br /><em>The Journal of the Royal Society Interface</em> <strong>12</strong>, 20150075 (2015) <a href="https://cohengroup.lassp.cornell.edu/userfiles/pubs/Beatus-RollControl.pdf">PDF</a></p>
<p> </p>
<p>The flight of flapping insects is a complex process that is beautiful to watch. One of the reasons flapping flight is so difficult is that it is inherently unstable: similar to balancing a stick on one's fingertip, flapping flight is subject to rapid instabilities that must be constantly controlled to allow stable flight. For flies, the most unstable motion is rotation about their long body axis, called roll. If a fly did not control its roll angle, it would roll over and crash within just a few wing-beats. Yet flies manage to control this rapid instability and even perform extreme maneuvers, better than any man-made flying device.</p>
<p>We use common fruit flies, like the ones we often find in our kitchen, to study the mechanism insects use to control their unstable roll angle. To study how flies do it, we came up with a way to trip them in mid-air and film how they recover from these stumbles. Specifically, we glue a tiny magnet to the back of each fly and use a magnetic pulse to roll it over in mid-air. We film the fly’s correction maneuvers using three high-speed cameras and measure how the fly is using its wings to recover from the perturbation.</p>
<p> </p>
<div class="media_embed" height="390px" width="650px">
<iframe allowfullscreen="" frameborder="0" height="390px" src="https://www.youtube.com/embed/03OR-fcviGw" width="650px"></iframe></div>
<p> </p>
<p>We found that flies manage to correct for large perturbations that roll them by up to 100 degrees within 30 milliseconds. This means that by the time you blink, a fly could have performed this entire correction maneuver 10 times. The flies start to respond to the perturbation within 5 ms, which puts the roll correction reflex among the fastest in the animal kingdom.</p>
<p>Flies correct for these roll perturbations by flapping with one wing harder than the other for 2-5 wing-beats. The resulting left-right force imbalance leads to corrective torque. We managed to describe the asymmetric wing motion using a controller model that is mathematically similar to controllers in air-conditioning and cruise-control systems. The model is termed “Proportional-Integral (PI) Controller.”</p>
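<p>As a rough illustration of how such a controller behaves (this is not the fitted model from the paper; the gains, the 1 ms time step, and the simplified first-order roll dynamics below are invented for the sketch), here is a Proportional-Integral controller correcting a 100-degree roll perturbation in Python:</p>

```python
def pi_controller(kp, ki, setpoint=0.0, dt=1e-3):
    """PI control law: u(t) = kp * error(t) + ki * integral of error."""
    integral = 0.0

    def step(measured):
        nonlocal integral
        error = setpoint - measured
        integral += error * dt
        return kp * error + ki * integral

    return step

# Toy roll dynamics: assume the corrective torque directly sets the roll rate.
# Gains and dynamics are made up; the paper fits the controller to wing motion.
controller = pi_controller(kp=50.0, ki=10.0)
roll, dt = 100.0, 1e-3      # start 100 degrees off upright
for _ in range(200):        # 200 simulated milliseconds
    roll += controller(roll) * dt
```

<p>The proportional term drives the fast initial correction, while the integral term removes any steady-state offset, just as in cruise-control and air-conditioning systems.</p>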
<p> </p>
<div class="media_embed" height="390px" width="650px">
<iframe allowfullscreen="" frameborder="0" height="390px" src="https://www.youtube.com/embed/nX3egEbX_Po" width="650px"></iframe></div>
<p> </p>
<p>Finally, we tried to challenge the flies with perturbations that they cannot correct for. Rather than exerting a single perturbation pulse that rolls them once, we challenged them with a series of pulses that rolled them over eight full turns. Surprisingly, we found that the fruit flies managed to recover from this extreme perturbation very quickly, within a few wing-beats. We have not yet managed to find a perturbation from which the flies cannot recover. Although these tiny insects are common and often a nuisance, we now have a greater appreciation for what amazing fliers they are.</p>
<p>The work was supported by the Cross Disciplinary Postdoctoral Fellowship of the Human Frontier Science Program, and by the Army Research Office.</p>
</div></div></div><div class="field field-name-field-category-s- field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Category(s): </div><div class="field-items"><div class="field-item even"><a href="/website-categories/research" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Research</a></div><div class="field-item odd"><a href="/research-categories/biolocomotion" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Biolocomotion</a></div></div></div>Fri, 18 Sep 2015 20:23:29 +0000 je27710 at http://cohengroup.ccmr.cornell.edu http://cohengroup.ccmr.cornell.edu/content/rock-and-roll-how-fruit-flies-control-their-flight#comments
Structure-Function Relations in Articular Cartilage's Shear Properties
http://cohengroup.ccmr.cornell.edu/content/structure-function-relations-articular-cartilages-shear-properties
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>Articular cartilage (AC), a biological tissue that protects and lubricates joints, plays a critical role during healthy locomotion. Ongoing work in the Cohen lab has been examining the spatially heterogeneous mechanical properties of this tissue using confocal rheology. This technique allows us to simultaneously deform the tissue with a known stress and measure the local strain field. From this information, we can calculate the local shear properties. Our early work in this field found a highly compliant region localized to the tissue's surface that is between 10 and 100 times more compliant than the bulk, which we believe is vitally important for the tissue's normal functioning. Naturally, this observation leads to the question: What drives AC's depth-dependent shear properties? In this project, we combined mechanical testing with structural quantification to address exactly this question. Our work was published in <em>Biophysical Journal</em> (linked at the end of this summary), and below, we offer a brief overview of the main results. </p>
<p> </p>
<p><img alt="" src="http://cohengroup.lassp.cornell.edu/userfiles/cartilage_percolation/fig1.jpg" style="height:419px; width:633px" /></p>
<p>This schematic diagram of a bovine knee joint indicates our sample harvesting sites. Cylindrical plugs of AC were removed from the joint, halved, and separated out for mechanical testing or biochemical analysis. AC is known to have three zones distinguished by collagen fiber orientation, as indicated. The coordinate system used throughout this summary is also shown, where the depth-wise direction <em>z</em> is 0 at the surface of the tissue and increases toward the bone.</p>
<p> <img alt="" src="http://cohengroup.lassp.cornell.edu/userfiles/cartilage_percolation/fig2.jpg" style="height:415px; width:633px" /></p>
<p>The measured depth-dependent shear modulus for the 8 samples we tested is plotted on a log scale as a function of depth, <em>z</em>. Red curves are from the patellofemoral groove (PFG), while blue curves are from the tibial plateau (TP), as indicated in the figure above. The surface region highlighted in gray is found to be 10 to 100 times more compliant than the tissue at greater depths and, when compared to fiber organization (previous figure and data in paper), corresponds to AC's tangential zone.</p>
<p> </p>
<p> <img alt="" src="http://cohengroup.lassp.cornell.edu/userfiles/cartilage_percolation/fig4.jpg" style="height:976px; width:633px" /></p>
<p>One possibility is that the depth-dependent shear properties arise from the depth-dependent collagen architecture shown in the cartoon sketch above. To test this hypothesis, we used quantitative polarized light microscopy to measure the alignment of collagen fibers in the tissue. The data, however, show no particular correlation between fiber organization and shear properties. An alternative hypothesis is that the collagen matrix density drives the observed depth-dependent behaviors. This figure shows (A) Fourier transform infrared imaging (FTIR-I) data that examine the depth-dependent (B) aggrecan volume fraction, <em>v<sub>a</sub></em>, and collagen volume fraction, <em>v<sub>c</sub></em>. (C,D) When plotted against the shear modulus, we find a remarkable correlation that gives a structure-function relation of the form <em>G</em> ~ (<em>v<sub>c</sub></em> - <em>v<sub>0</sub></em>)<em><sup>p</sup></em>. In this expression, <em>v<sub>0</sub></em> is a constant determined by the fits and the exponent <em>p</em> is greater than 1. This functional form suggests a non-trivial relationship between the microscopic architecture and macroscopic properties that cannot be reduced to simple continuum elasticity.</p>
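<p>A structure-function relation of this power-law form is straightforward to fit to modulus-versus-volume-fraction data. The Python sketch below uses synthetic, purely illustrative numbers (the values of A, v<sub>0</sub>, and p are made up, not the fitted values reported in the paper):</p>

```python
import numpy as np
from scipy.optimize import curve_fit

def modulus(vc, A, v0, p):
    """Structure-function form G = A * (vc - v0)**p from the text."""
    return A * np.clip(vc - v0, 0.0, None) ** p

# Synthetic collagen volume fractions and moduli (illustrative numbers only).
vc = np.linspace(0.15, 0.35, 20)
G = modulus(vc, A=5e3, v0=0.12, p=2.2)

# Recover (A, v0, p) by nonlinear least squares.
popt, _ = curve_fit(modulus, vc, G, p0=[1e3, 0.10, 2.0])
```

<p>The clip keeps the model at zero below the threshold <em>v<sub>0</sub></em>, mirroring the idea that a network below the percolation threshold contributes no shear stiffness of its own.</p>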
<p> </p>
<p><img alt="" src="http://cohengroup.lassp.cornell.edu/userfiles/cartilage_percolation/fig5.jpg" style="height:1059px; width:633px" /></p>
<p>To better understand the experimental data, we model collagen fibers in AC as a 2D network of elastic fibers connected on a disordered kagome lattice and embedded in an elastic medium. (A) The fibers have linearly elastic stretching and bending moduli, as does the embedding medium. In this figure, continuous black lines constitute fibers while light gray lines are missing bonds. In simulations, we vary the fiber volume fraction, <em>v<sub>f</sub></em>, by removing bonds, which ultimately leads to a critical percolation transition when the network no longer contains a spanning connected cluster. Calculations of the network's shear modulus as a function of the elastic moduli are carried out by applying external loading in the simulation, minimizing the system's energy, and examining the overall strain. Zooming in on the sheared lattice, we find affine deformations where light and dark colored bonds overlap, and non-affine deformations where they do not. Schematic diagrams illustrate the various energetic contributions included in the model described by Eq. (1) in the manuscript. (B) In the absence of a reinforcing medium, the four distinct regimes of the model, corresponding to the four points labeled I - IV in panel (C), are schematically illustrated. In the presence of a reinforcing medium, as for points V and VI in (D), the strength of the reinforcing medium modulus is schematically illustrated by shading the background. (C) Exploring the model's dependence on the fiber bending-to-stretching modulus ratio in the absence of a supporting medium reveals a rigidity percolation phase transition as a function of <em>v<sub>f</sub></em>. (D) In the presence of a supporting medium, the phase transition is broadened, and <em>G</em>(<em>v<sub>f</sub></em>) becomes dominated at low <em>v<sub>f</sub></em> by the modulus of the reinforcing medium. All modulus curves are normalized by the value for a fully connected lattice.
Collectively, the model has two parameters constructed from ratios of the three elastic moduli and is capable of producing structure-function correlations similar to those seen in our experimental data. The goal, therefore, is to fit the model to the data and see whether the predicted model parameters are biophysically plausible.</p>
<p> <img alt="" src="http://cohengroup.lassp.cornell.edu/userfiles/cartilage_percolation/fig6.jpg" style="height:667px; width:633px" /></p>
<p>(A) Using experimentally measured wet volume fraction data from sample PFG 1, we generated a kagome lattice that is random and isotropic along the direction of shear, but has a varying density of bonds along the <em>z</em> axis. Zoomed insets show the region near the articular surface is below the percolation threshold, while the region at greater <em>z</em> is better connected. For each inset, the largest percolating cluster is colored red, while the remaining network is colored black. In regions below the percolation threshold, stresses are transmitted by the supporting background medium. (B) For specific model parameters, a comparison between experiment (light gray points) and simulation (dark gray points) shows reasonable agreement. The shear modulus is decomposed into contributions from the reinforced fiber network (dashed blue) and the reinforcing medium (solid blue). For low <em>v</em><sub><em>f</em></sub>, the fiber network does not percolate across the system and the reinforcing medium dominates. At the critical connectivity threshold, the fiber network forms a spanning cluster and stresses can be transmitted across the system. This effect rapidly dominates with increasing <em>v<sub>f</sub></em>, but becomes less sensitive at higher volume fractions. (C) The spatially homogeneous <em>v<sub>f</sub></em> is replaced with the experimentally measured <em>v<sub>c</sub></em>(<em>z</em>) from sample PFG 1 to generate a depth-dependent <em>G</em>(<em>z</em>) profile (red), which, when superimposed on the experimental data (light gray lines), shows qualitatively similar behavior.</p>
<p> </p>
<p>Overall, this project makes exciting headway toward an understanding of AC's shear properties. It goes beyond the well-established poroelastic theory of cartilage by offering a microscopic description of the structure-function relationship, which is typically treated on a purely phenomenological basis with viscoelastic constitutive relations. Indeed, a publication released by our collaborators in the Bonassar lab (Griffin et al., J. Orthop. Res., 2014) provides an orthogonal set of measurements that support the ideas proposed here, and contributes to this new direction in cartilage mechanics.</p>
<p> </p>
<p>This work was <a href="http://www.cell.com/biophysj/abstract/S0006-3495%2814%2900854-6" target="_blank">published</a> in <em>Biophysical Journal</em> <strong>107</strong>(7), 2014 and featured as the cover article as well as in the <a href="http://biophysicalsociety.wordpress.com/2014/10/07/1077-cover-blog-post/" target="_blank">Biophysical Society Blog</a>!</p>
<p><a href="http://www.cell.com/biophysj/issue?pii=S0006-3495%2814%29X0020-2" target="_blank"><img alt="" src="http://cohengroup.lassp.cornell.edu/userfiles/cartilage_percolation/Silverberg-bjcover.jpg" style="height:768px; width:591px" /></a></p>
</div></div></div><div class="field field-name-field-category-s- field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Category(s): </div><div class="field-items"><div class="field-item even"><a href="/website-categories/research" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Research</a></div><div class="field-item odd"><a href="/research-categories/mechanics-biological-tissues" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Mechanics of Biological Tissues</a></div></div></div>Wed, 18 Mar 2015 20:22:34 +0000 je2779 at http://cohengroup.ccmr.cornell.edu http://cohengroup.ccmr.cornell.edu/content/structure-function-relations-articular-cartilages-shear-properties#comments
Enhancing Rotational Diffusion using Shear
http://cohengroup.ccmr.cornell.edu/content/enhancing-rotational-diffusion-using-shear
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>In thermal equilibrium, particles suspended in a fluid randomly move about due to kicks from the fluid molecules, in what is known as Brownian motion or diffusion. Shear a fluid, however, and the particles' diffusion will be greatly enhanced. Why? Diffusion spreads some of the particles to regions of the fluid with different velocities. As the fluid then carries different particles with different speeds, the particles spread out faster, effectively increasing the diffusion. This mechanism, dubbed Taylor dispersion after its discoverer G. I. Taylor, has found a huge suite of applications ranging from understanding drug delivery in the bloodstream to modeling nutrient transfer in soils. Reporting in Physical Review Letters, researchers at Cornell University have shown that rotational diffusion of oblong particles is also enhanced when the suspending fluid is sheared. The enhancement arises due to the coupling of rotational diffusion with the nonuniform rotations of oblong particles in a sheared fluid. Interestingly, the physicists discovered that rotational and translational diffusion are enhanced differently by the applied shear flow. They speculate that this separate tunability of rotational and translational diffusion will find applications in the self-assembly of materials from anisotropic particles. For details, see <a href="http://prl.aps.org/abstract/PRL/v110/i22/e228301">our article </a>in Physical Review Letters.</p>
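<p>The translational version of this mechanism is easy to reproduce numerically: random walkers in a linear shear flow u_x = shear_rate * y spread out along x much faster than Brownian motion alone would allow; for this simple flow the textbook result is &lt;x^2&gt; = 2*D*t + (2/3) * shear_rate^2 * D * t^3. The Python sketch below (illustrative parameters only, not a model of our experiment) demonstrates the enhancement:</p>

```python
import numpy as np

def shear_enhanced_variance(shear_rate, n=5000, steps=500, dt=2e-3, D=1.0, seed=1):
    """Variance of walker x-positions after time t = steps*dt in the flow
    u_x = shear_rate * y.  Theory: <x^2> = 2*D*t + (2/3)*shear_rate**2 * D * t**3,
    so even modest shear strongly enhances the effective diffusion along x."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    y = np.zeros(n)
    kick = np.sqrt(2.0 * D * dt)  # Brownian step size per time step
    for _ in range(steps):
        x += shear_rate * y * dt + kick * rng.standard_normal(n)
        y += kick * rng.standard_normal(n)
    return x.var()
```

<p>Walkers that have diffused to different y are advected at different speeds, which is exactly the coupling that Taylor identified; the rotational analogue couples rotational diffusion to the nonuniform tumbling of oblong particles.</p>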
<p> </p>
<p>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~</p>
<p><u><strong>Finding Particle Orientations and Positions</strong></u></p>
<p>Short version:<br /><em>thresholdedData = <a href="http://cohengroup.ccmr.cornell.edu/userfiles/Leahy_AnisotropicFeaturingCode/particleidentify.pro">particleIdentify</a>( imFilt, thresh )</em><br /><em>particlePositions = <a href="http://cohengroup.ccmr.cornell.edu/userfiles/Leahy_AnisotropicFeaturingCode/particleLocate.pro">particleLocate</a>( imFilt, thresholdedData, ratio = pixelMicronRatio )</em></p>
<p>Long version:</p>
<p>To measure rotational diffusion in a colloidal suspension of ellipsoidal particles, we first needed to extract the particle orientations in addition to the particle positions. Since the frequently-used Crocker-Grier-Weeks algorithm (which you can find <a href="http://physics.nyu.edu/grierlab/software.html">here </a>or <a href="http://www.physics.emory.edu/~weeks/idl/three.html">here</a>) was developed to feature only spherical particles, we needed to develop a featuring algorithm that can both 1) feature anisotropic particles, and 2) extract particle orientations. Moreover, since I was considering featuring dimers and ellipsoids, as well as particles of other shapes, I wanted to develop an algorithm that 3) does not rely on the specific details of a particle's shape.</p>
<p> To develop this algorithm, I built off the ideas used in the Crocker-Grier-Weeks featuring algorithms. If you're not familiar with the basics of how to use these programs, you might want to check out Eric Weeks's excellent tutorials <a href="http://www.physics.emory.edu/~weeks/idl/tracking.html">here </a>and <a href="http://www.physics.emory.edu/~weeks/idl/three.html">here</a>, or Crocker and Grier's classic paper, which you can find <a href="http://crocker.seas.upenn.edu/CrockerGrier1996b.pdf">here</a>.</p>
<p> Incidentally, I have written my featuring code to work for arbitrary dimensional images (since all good physicists should do calculations in d-dimensions). While I don't think there are many 4D images of ellipsoidal particles, these IDL programs should work perfectly fine for featuring both 2D images and 3D confocal image stacks of anisotropic particles.</p>
<p>The featuring process can be conceptually divided into 4 steps: 1) Reading in and pre-processing the image. 2) Identifying particles in the image. 3) Finding particle positions and orientations. 4) Postprocessing and data analysis.</p>
<p><strong><u>Part I: Loading in an image and pre-processing the image</u></strong></p>
<p>Before featuring an image to find the particles, you must load it in, subtract any background illumination, and filter out any noise. This portion of the process is the same as for featuring spherical particles, so I won't describe it here. You can use the code that is available from Eric Weeks or David Grier's websites.</p>
<p><strong><u>Part II: Identifying particles</u></strong></p>
<p>At this point you should have an array of a filtered, background-subtracted image, with the particle bright and the background dark. If you are doing any deconvolution on the original image, you should have already done it by this step.</p>
<p>The featuring algorithm currently works by thresholding the image intensities. All voxels (or pixels) above a threshold intensity are identified as belonging to particle candidates. Groups of adjacent voxels are identified as belonging to the same particle using a connected-component analysis algorithm (label_region in IDL and bwlabel in Matlab); the disconnected clusters are identified as particle candidates. Particle candidates are then separated from featuring artifacts by a series of cutoffs, with cutoffs on cluster size (i.e. the number of voxels in the cluster) and mean brightness (i.e. the brightness averaged over all the voxels in the cluster) applied internally by the program.</p>
<p>To do this using my IDL program, type in:</p>
<p> <strong><em>thresholdedData = particleIdentify( imFilt, thresh )</em></strong></p>
<p>where imFilt is the filtered image and thresh is the (absolute) threshold value used to distinguish particles from background. There are also optional arguments which you can call. Setting <em>clusSize</em> = [600,3000] would restrict any particles identified to have at least 600 voxels and at most 3000 voxels. Setting <em>briteCut</em> = [10,200] would restrict all identified particles to have an average brightness between 10 and 200, in intensity units. Setting the optional input argument <em>/relative </em>(or <em>relative = 1</em> ) will treat the input threshold <em>thresh</em> as relative to the minimal and maximal brightness of the image (set between 0 and 1). Finally, <em>particleIdentify</em> allows an optional output <em>info</em> which contains information about the number of voxels above the threshold in the image, the number of original particle candidates identified, the brightness of each particle, and the number of voxels in each particle.</p>
<p><em>particleIdentify</em> returns an array of the same size and dimensions as the input image, <em>imFilt</em>. The returned array <em>thresholdedData</em> has elements that are everywhere 0, except for the voxels belonging to particles. These voxels have a value corresponding to the particle label. So all the voxels belonging to the first particle have value 1, those belonging to the second particle have value 2, etc. This array is used in the next step of finding the particle positions and orientations.</p>
<p>At this point during the featuring, I usually find it helpful to check to see if I've correctly identified all the particles.</p>
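<p>For readers not working in IDL, the thresholding and connected-component step described above can be sketched in Python with scipy.ndimage.label (the function name, signature, and default cutoffs below are my own illustration; the released IDL routine is linked above):</p>

```python
import numpy as np
from scipy import ndimage

def particle_identify(im_filt, thresh, clus_size=(10, 10000)):
    """Threshold the image and label connected clusters of bright voxels.

    Returns an array the same shape as im_filt: 0 for background, and
    1, 2, ... for voxels belonging to each surviving particle candidate.
    """
    labels, n = ndimage.label(im_filt > thresh)  # connected-component analysis
    out = np.zeros_like(labels)
    next_id = 1
    for i in range(1, n + 1):
        cluster = labels == i
        # Cluster-size cutoff separates real particles from artifacts.
        if clus_size[0] <= cluster.sum() <= clus_size[1]:
            out[cluster] = next_id
            next_id += 1
    return out
```

<p>This works unchanged for 2D images and 3D confocal stacks, since ndimage.label handles arbitrary-dimensional arrays.</p>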
<p><u><strong>Part III: Finding Particle Positions and Orientations</strong></u></p>
<p> At this point you should have two arrays: a noise-filtered image ( <em>imFilt</em> above ) and an array of the voxels identified as particles ( <em>thresholdedData </em>above ). Now we are ready to find the particle positions and orientations.</p>
<p>To find the particle positions, the program takes the brightness-weighted average position of each particle, identified in the previous step. If we view the particle intensities as a distribution, we can think of the position as the first moment of the positions:</p>
<p><em>< x_i > = sum_n (x_i)_n b_n / ( sum_n b_n )</em></p>
<p>where <em><x_i></em> is the i^th component of the particle position, <em>(x_i)_n</em> is the i^th component of the position of the n^th voxel, and <em>b_n</em> is the brightness of the n^th voxel. While <a href="http://www.nature.com/nmeth/journal/v9/n7/full/nmeth.2071.html">people have shown</a> that this is not the most accurate or unbiased method for finding particle positions, I've found experimentally that it gives reasonably good (subpixel) accuracy.</p>
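<p>In Python/NumPy, this brightness-weighted centroid is a short computation (a sketch of the idea, not the released IDL code; it works in any dimension):</p>

```python
import numpy as np

def weighted_centroid(im, mask):
    """Brightness-weighted mean position: <x_i> = sum_n (x_i)_n b_n / sum_n b_n."""
    coords = np.argwhere(mask)   # voxel positions (x_i)_n, one row per voxel
    b = im[mask]                 # brightnesses b_n
    return (coords * b[:, None]).sum(axis=0) / b.sum()
```
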
<p>To find the particle orientations, instead of looking at a first-order moment of the distribution, we look at the second-order moment, or the covariance matrix, defined in component form by</p>
<p><em>C_ij = sum_n [ (x_i)_n - <x_i> ] * [ (x_j)_n - <x_j> ] * b_n / ( sum_n b_n )</em></p>
<p>The covariance matrix is a rank-two symmetric tensor in <em>d</em> dimensions. As such, it will have <em>d</em> eigenvalues and <em>d</em> orthogonal eigenvectors. In principle, the eigenvalues of the covariance matrix should be independent of orientation and the eigenvectors should rotate with the particle, since the covariance matrix is a geometric object. This suggests that we can identify the sorted eigenvectors with the particle's orientation -- for a rodlike particle, the eigenvector with the largest eigenvalue would point along the length of the rod. In practice, however, due to different resolutions along each direction and experimental noise, the eigenvalues fluctuate noticeably with particle orientation. Nevertheless, I have found that the principal eigenvector (orientation) is well-characterized. For dimers (aspect ratio ~2) the experimental uncertainty in the orientation is about 5 degrees, which mostly comes from the finite number of voxels in the dimer. All else equal, the featuring of longer aspect-ratio particles should be more precise.</p>
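<p>A Python/NumPy sketch of this covariance-matrix orientation measurement (again an illustration of the method, not the released IDL code) looks like:</p>

```python
import numpy as np

def orientation_from_covariance(im, mask):
    """Sorted eigenvectors of the brightness-weighted covariance matrix.

    Rows of the returned array are eigenvectors, largest eigenvalue first;
    for a rod-like particle the first row points along the rod.
    """
    coords = np.argwhere(mask).astype(float)
    b = im[mask]
    mean = (coords * b[:, None]).sum(axis=0) / b.sum()          # centroid <x_i>
    d = coords - mean
    # C_ij = sum_n d_i d_j b_n / sum_n b_n  (second moment about the centroid)
    C = (d[:, :, None] * d[:, None, :] * b[:, None, None]).sum(axis=0) / b.sum()
    evals, evecs = np.linalg.eigh(C)                             # ascending order
    order = np.argsort(evals)[::-1]                              # descending
    return evecs[:, order].T
```
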
<p>To find the particle positions and orientations with my IDL code, type:</p>
<p><strong><em>particlePositions = particleLocate( imFilt, thresholdedData, ratio = pixelMicronRatio )</em></strong></p>
<p> Here <em>imFilt </em>is the noise-filtered image (its voxel brightnesses are used to weight the averages above), <em>thresholdedData</em> is the array returned by <em>particleIdentify</em>, and <em>pixelMicronRatio</em> is a d-element array of the pixel-to-micron ratio. The program returns the [d*(d+1)+2, N] array <em>particlePositions</em>. Each row (first element) corresponds to the information for each of the N particles found in the d-dimensional image -- i.e. <em>particlePositions[*,0] </em>contains information about particle 0. The first d elements contain the mean position of the particle. The next d^2 elements give information about the particle orientation: within this block, elements d*n to d*(n+1) - 1 are the components of the n^th eigenvector, weighted by the square root of the n^th eigenvalue. The eigenvectors are returned sorted, so that the one with the largest eigenvalue appears first. The final 2 elements are the total brightness of the particle (sum_n b_n) and the number of voxels in the particle.</p>
<p>In English:<br />
Suppose we have a dimer located at <em>(x,y,z) = </em>(10,20,30), with orientation <em>n = (</em> 1/9, 4/9, 8/9 ), and nothing else in the entire image. Then <em>particlePositions</em> is a [3*(3+1) + 2, 1] = [14,1] element array. <em>particlePositions[ [0,1,2] ] </em>will be [10, 20, 30], the (x,y,z) position of the particle. <em>particlePositions[ [3,4,5] ] </em>will be [ <em>a</em>/9, 4*<em>a</em>/9, 8*<em>a</em>/9 ] -- i.e. the particle orientation weighted by <em>a</em>, the square root of the largest eigenvalue. If all you're interested in is the dimer's orientation and position, this is all the information you need. ( The rest of the information I keep for distinguishing real particles from artifacts. If your particle was not a dimer, some of the other eigenvectors may be of use. For instance, the orientation of a disk would be contained in the <em>last</em> eigenvector. A completely anisotropic 3D particle, e.g. an ellipsoid with three unequal axes, would have its orientation contained in the first two eigenvectors. )</p>
<p>You can see a video of what one of my featured dimers looks like below:</p>
<iframe frameborder="0" height="480" src="http://www.youtube.com/embed/Q5o0qTqYc2A" width="640"></iframe><p>
</p>
<p>A couple of quick comments on this method of finding the particle orientation:<br />
1. Finding the orientation via the covariance method will not work for particles of high symmetry. Let's consider a cubic particle with its faces perpendicular to the x, y, z axes. By symmetry, the components along the x-direction will be the same as those along the y- and z-directions. The covariance matrix will then be diagonal in the (x,y,z) basis. Since the covariance matrix is a rank-two tensor, it will then just be a multiple of the identity matrix. In other words, all orientations of a perfect cube will give the same covariance matrix, and you won't be able to find the orientation from the covariance matrix. This will apply to any particle with multiple axes of symmetry -- for instance, you will not be able to find the orientation of a regular tetrahedron or any other regular polyhedron this way.<br />
2. On a similar note, the featuring will not distinguish between the orientations <em>n</em> and <em>-n</em>. If a particle's orientation is switched from <em>n</em> to <em>-n</em>, the returned orientation will not necessarily change sign, since the covariance matrix does not change under this transformation.<br />
3. Significant distortion along one direction will affect your featured orientations. If you have a confocal with significant distortion along the optical axis, then your particles' orientations may be biased along the optical axis, unless you take care to avoid this.</p>
<p><u><strong>Part IV: Postprocessing and data analysis</strong></u></p>
<p>At this stage you should have an array of the particle positions and orientations. I have deliberately left the array in a similar format to that returned by Crocker and Grier's feature.pro, so any particle tracking can be done with their routines the same way you would track spherical particles.</p>
<p>If you're interested in the particle's absolute orientation (as opposed to identifying <em>n</em> with <em>-n</em>), for instance to track rotational diffusion, you can pick the correct sign of the particle orientation based on the particle's orientation at the previous time.</p>
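<p>A minimal Python sketch of that sign-fixing step (assuming the orientations are stored as one unit vector per frame; the function name is my own):</p>

```python
import numpy as np

def fix_orientation_signs(orientations):
    """Resolve the n vs -n ambiguity by matching each frame to the previous one.

    If a returned orientation points opposite to the previous frame's, flip it,
    so the trajectory is continuous and suitable for rotational-diffusion analysis.
    """
    fixed = np.array(orientations, dtype=float)
    for t in range(1, len(fixed)):
        if np.dot(fixed[t], fixed[t - 1]) < 0:
            fixed[t] = -fixed[t]
    return fixed
```

<p>This assumes the particle rotates by less than 90 degrees between frames, which sets an upper bound on the usable frame interval.</p>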
</div></div></div><div class="field field-name-field-category-s- field-type-taxonomy-term-reference field-label-inline clearfix"><div class="field-label">Category(s): </div><div class="field-items"><div class="field-item even"><a href="/website-categories/research" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Research</a></div><div class="field-item odd"><a href="/research-categories/complex-fluids" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Complex Fluids</a></div></div></div>Wed, 18 Mar 2015 20:21:38 +0000 je2778 at http://cohengroup.ccmr.cornell.edu http://cohengroup.ccmr.cornell.edu/content/enhancing-rotational-diffusion-using-shear#comments