In thermal equilibrium, particles suspended in a fluid randomly move about due to kicks from the fluid molecules, in what is known as Brownian motion or diffusion. Shear a fluid, however, and the particles' diffusion will be greatly enhanced. Why? Diffusion spreads some of the particles to regions of the fluid with different velocities. As the fluid then carries different particles with different speeds, the particles spread out faster, effectively increasing the diffusion. This mechanism, dubbed Taylor dispersion after its discoverer G. I. Taylor, has found a huge suite of applications ranging from understanding drug delivery in the bloodstream to modeling nutrient transfer in soils. Reporting in Physical Review Letters, researchers at Cornell University have shown that rotational diffusion of oblong particles is also enhanced when the suspending fluid is sheared. The enhancement arises due to the coupling of rotational diffusion with the nonuniform rotations of oblong particles in a sheared fluid. Interestingly, the physicists discovered that rotational and translational diffusion are enhanced differently by the applied shear flow. They speculate that this separate tunability of rotational and translational diffusion will find applications in the self-assembly of materials from anisotropic particles. For details, see our article in Physical Review Letters.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Finding Particle Orientations and Positions**

Short version:

*thresholdedData = particleIdentify( imFilt, thresh )*

*particlePositions = particleLocate( imFilt, thresholdedData, ratio = pixelMicronRatio )*

Long version:

To measure rotational diffusion in a colloidal suspension of ellipsoidal particles, we first needed to extract the particle orientations in addition to the particle positions. Since the frequently used Crocker-Grier-Weeks algorithm (which you can find here or here) was developed to feature only spherical particles, we needed to develop a featuring algorithm which can both 1) feature anisotropic particles, and 2) extract particle orientations. Moreover, since I was considering featuring dimers, ellipsoids, and particles of other shapes, I wanted to develop an algorithm which 3) does not rely on the specific details of a particle's shape.

To develop this algorithm, I built off the ideas used in the Crocker-Grier-Weeks featuring algorithms. If you're not familiar with the basics of how to use these programs, you might want to check out Eric Weeks's excellent tutorials here and here, or Crocker and Grier's classic paper, which you can find here.

Incidentally, I have written my featuring code to work for arbitrary dimensional images (since all good physicists should do calculations in d-dimensions). While I don't think there are many 4D images of ellipsoidal particles, these IDL programs should work perfectly fine for featuring both 2D images and 3D confocal image stacks of anisotropic particles.

The featuring process can be conceptually divided into 4 steps: 1) Reading in and pre-processing the image. 2) Identifying particles in the image. 3) Finding particle positions and orientations. 4) Postprocessing and data analysis.

__Part I: Loading in an image and pre-processing the image__

Before featuring an image to find the particles, you must load it in, subtract any background illumination, and filter out any noise. This portion of the process is the same as for featuring spherical particles, so I won't describe it here. You can use the code that is available from Eric Weeks or David Grier's websites.

__Part II: Identifying particles__

At this point you should have an array of a filtered, background-subtracted image, with the particles bright and the background dark. If you are doing any deconvolution of the original image, you should have already done it by this step.

The featuring algorithm currently works by thresholding the image intensities. All voxels (or pixels) above a threshold intensity are flagged as potentially belonging to a particle. Groups of adjacent voxels are identified as belonging to the same particle using a connected-component analysis algorithm (label_region in IDL, bwlabel in Matlab); the disconnected clusters are the particle candidates. Particle candidates are then separated from featuring artifacts by a series of cutoffs; cutoffs on cluster size (i.e. the number of voxels in the cluster) and mean brightness (i.e. the brightness averaged over all voxels in the cluster) are built into the program.
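For readers without IDL, here is a rough Python sketch of the same thresholding-and-labeling logic, using scipy.ndimage for the connected-component analysis. This is illustrative only, not a line-by-line port of my IDL code; the function name and keywords below are hypothetical stand-ins.

```python
import numpy as np
from scipy import ndimage

def identify_particles(im_filt, thresh, clus_size=None, brite_cut=None):
    """Label connected clusters of above-threshold voxels as particle candidates.

    im_filt   : filtered, background-subtracted image (any dimension)
    thresh    : absolute intensity threshold
    clus_size : optional (min, max) number of voxels per particle
    brite_cut : optional (min, max) mean brightness per particle
    """
    mask = im_filt > thresh
    # Connected-component analysis (analogous to label_region / bwlabel)
    labels, n_candidates = ndimage.label(mask)
    for lab in range(1, n_candidates + 1):
        cluster = labels == lab
        n_vox = cluster.sum()
        mean_brightness = im_filt[cluster].mean()
        # Discard candidates failing the size or brightness cutoffs
        if clus_size is not None and not (clus_size[0] <= n_vox <= clus_size[1]):
            labels[cluster] = 0
        elif brite_cut is not None and not (brite_cut[0] <= mean_brightness <= brite_cut[1]):
            labels[cluster] = 0
    # Renumber the surviving clusters consecutively: 1, 2, ..., N
    labels, _ = ndimage.label(labels > 0)
    return labels
```

Like the IDL routine, this returns an array the same shape as the image, zero everywhere except on particle voxels, which carry their particle's label.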

To do this using my IDL program, type in:

*thresholdedData = particleIdentify( imFilt, thresh )*

where *imFilt* is the filtered image and *thresh* is the (absolute) threshold value to distinguish particles from background. There are also optional arguments which you can call. Setting *clusSize = [600,3000]* would restrict any identified particles to have at least 600 voxels and at most 3000 voxels. Setting *briteCut = [10,200]* would restrict all identified particles to have an average brightness between 10 and 200, in intensity units. Setting the optional input argument */relative* (or *relative = 1*) treats the input threshold *thresh* as relative to the minimal and maximal brightness of the image (set it between 0 and 1). Finally, *particleIdentify* allows an optional output *info* which contains the number of voxels above threshold in the image, the number of original particle candidates identified, the brightness of each particle, and the number of voxels in each particle.

*particleIdentify* returns an array of the same size and dimensions as the input image, *imFilt*. The returned array *thresholdedData* has elements that are everywhere 0, except for the voxels belonging to particles. These voxels have a value corresponding to the particle label. So all the voxels belonging to the first particle have value 1, those belonging to the second particle have value 2, etc. This array is used in the next step of finding the particle positions and orientations.

At this point during the featuring, I usually find it helpful to check to see if I've correctly identified all the particles.

**Part III: Finding Particle Positions and Orientations**

At this point you should have two arrays: a noise-filtered image ( *imFilt* above ) and an array of the voxels identified as particles ( *thresholdedData* above ). Now we are ready to find the particle positions and orientations.

To find the particle positions, the program takes the brightness-weighted average position of each particle identified in the previous step. If we view the particle intensities as a distribution, the position is the first moment of that distribution:

*< x_i > = sum_n (x_i)_n b_n / ( sum_n b_n )*

where *<x_i>* is the i^th component of the particle position, *(x_i)_n* is the i^th component of the position of the n^th voxel, and *b_n* is the brightness of the n^th voxel. While people have shown that this is not the most accurate or unbiased method for finding particle positions, I've found experimentally that it gives reasonably good (subpixel) accuracy.
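In Python, the brightness-weighted first moment for one labeled particle can be sketched as follows (numpy assumed; the function name is an illustrative stand-in, not my IDL code):

```python
import numpy as np

def particle_centroid(im_filt, labels, particle_id):
    """Brightness-weighted mean position: <x_i> = sum_n (x_i)_n b_n / sum_n b_n."""
    mask = labels == particle_id
    coords = np.argwhere(mask).astype(float)   # (n_voxels, d) voxel positions
    b = im_filt[mask]                          # voxel brightnesses b_n
    return (coords * b[:, None]).sum(axis=0) / b.sum()
```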

To find the particle orientations, instead of looking at a first-order moment of the distribution, we look at the second-order moment, or the covariance matrix, defined in component form by

*C_ij = sum_n [ (x_i)_n - <x_i> ] * [ (x_j)_n - <x_j> ] * b_n / ( sum_n b_n )*

The covariance matrix is a rank-two symmetric tensor in *d* dimensions. As such, it has *d* eigenvalues and *d* orthogonal eigenvectors. In principle, since the covariance matrix is a geometric object, its eigenvalues should be independent of orientation and its eigenvectors should rotate with the particle. This suggests that we can identify the sorted eigenvectors with the particle's orientation -- for a rodlike particle, the eigenvector with the largest eigenvalue points along the length of the rod. In practice, however, due to different resolution along each direction and experimental noise, the eigenvalues fluctuate noticeably with particle orientation. Nevertheless, I have found that the principal eigenvector (the orientation) is well characterized. For dimers (aspect ratio ~2) the experimental uncertainty in the orientation is about 5 degrees, which arises mostly from the finite number of voxels in the dimer. All else being equal, the featuring of longer-aspect-ratio particles should be more precise.
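The covariance-and-eigenvector step can likewise be sketched in Python (numpy assumed; again an illustrative stand-in for the IDL routine, not a port of it):

```python
import numpy as np

def particle_orientation(im_filt, labels, particle_id):
    """Eigenvalues and eigenvectors of the brightness-weighted covariance matrix.

    Returns eigenvalues sorted in descending order; the first column of the
    returned eigenvector matrix is the principal (orientation) eigenvector.
    """
    mask = labels == particle_id
    coords = np.argwhere(mask).astype(float)   # (n_voxels, d)
    b = im_filt[mask]
    mean = (coords * b[:, None]).sum(axis=0) / b.sum()
    dx = coords - mean
    # C_ij = sum_n [ (x_i)_n - <x_i> ][ (x_j)_n - <x_j> ] b_n / sum_n b_n
    C = (dx[:, :, None] * dx[:, None, :] * b[:, None, None]).sum(axis=0) / b.sum()
    evals, evecs = np.linalg.eigh(C)           # symmetric matrix: eigh is appropriate
    order = np.argsort(evals)[::-1]            # largest eigenvalue first
    return evals[order], evecs[:, order]
```

For a rodlike blob, the first returned eigenvector points along the rod (up to a sign, as discussed below).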

To find the particle positions and orientations with my IDL code, type:

*particlePositions = particleLocate( imFilt, thresholdedData, ratio = pixelMicronRatio )*

Here *imFilt* is the noise-filtered image (its voxel brightnesses are used to weight the averages above), *thresholdedData* is the array returned by *particleIdentify*, and *pixelMicronRatio* is a d-element array of the pixel-to-micron ratio. The program returns the [d*(d+1)+2, N] array *particlePositions*. Each row (first element) corresponds to the information for one of the N particles found in the d-dimensional image -- i.e. *particlePositions[*,0]* contains the information about particle 0. The first d elements contain the mean position of the particle. The next d^2 elements give information about the particle orientation: elements d*(n+1) through d*(n+2) - 1 are the components of the n^th eigenvector, weighted by the square root of the n^th eigenvalue. The eigenvectors are returned sorted, so that the one with the largest eigenvalue appears first. The final 2 elements are the total brightness of the particle (sum_n b_n) and the number of voxels in the particle.

In English:

Suppose we have a dimer located at *(x,y,z) =* (10,20,30), with orientation *n = (* 1/9, 4/9, 8/9 ), and nothing else in the entire image. Then *particlePositions* is a [3*(3+1) + 2, 1] = [14,1] element array. *particlePositions[ [0,1,2] ]* will be [10, 20, 30], the (x,y,z) position of the particle. *particlePositions[ [3,4,5] ]* will be [ *a*/9, 4*a*/9, 8*a*/9 ] -- i.e. the particle orientation weighted by *a*, the square root of the largest eigenvalue. If all you're interested in is the dimer's orientation and position, this is all the information you need. ( The rest of the information I keep for distinguishing real particles from artifacts. If your particle is not a dimer, some of the other eigenvectors may be of use. For instance, the orientation of a disk would be contained in the *last* eigenvector. A completely anisotropic 3D particle, e.g. an ellipsoid with three unequal axes, would have its orientation contained in the first two eigenvectors. )
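As a sanity check on the index bookkeeping, here is a hedged Python sketch of unpacking one column of a *particlePositions*-style array for d = 3. The layout (position, then the sqrt-eigenvalue-scaled eigenvectors, then total brightness and voxel count) mirrors the description above; the function itself is hypothetical, not part of the IDL package.

```python
import numpy as np

def unpack_particle(col, d=3):
    """Split one [d*(d+1)+2]-element column into named fields (illustrative layout)."""
    pos = col[:d]                          # mean position (first d elements)
    evecs = col[d:d + d * d].reshape(d, d) # row n = n-th eigenvector * sqrt(eigenvalue)
    total_brightness = col[d + d * d]      # sum_n b_n
    n_voxels = col[d + d * d + 1]          # voxel count
    return pos, evecs, total_brightness, n_voxels
```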

You can see a video of what one of my featured dimers looks like below:

A couple of quick comments on this method of finding the particle orientation:

1. Finding the orientation via the covariance method will not work for particles of high symmetry. Consider a cubic particle with its faces perpendicular to the x, y, and z axes. By symmetry, the components along the x-direction will be the same as those along the y- and z-directions, and the covariance matrix will be diagonal in the (x,y,z) basis. Since the covariance matrix is a rank-two tensor, it will then just be a multiple of the identity matrix. In other words, all orientations of a perfect cube give the same covariance matrix, and you won't be able to find the orientation from the covariance matrix. This applies to any particle with multiple axes of symmetry -- for instance, you will not be able to find the orientation of a regular tetrahedron or any other regular polyhedron this way.

2. On a similar note, the featuring will not distinguish between the orientations *n* and *-n*. If a particle's orientation is switched from *n* to *-n*, the returned orientation will not necessarily change sign, since the covariance matrix is unchanged by this transformation.

3. Significant distortion along one direction will affect your featured orientations. If your confocal has significant distortion along the optical axis, then your particles' orientations may be biased along the optical axis, unless you take care to avoid this.
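Point 1 can be checked numerically: the covariance of an idealized solid cube of voxels is the same multiple of the identity in any orientation, so no orientation information survives. A minimal Python sketch (unweighted brightness for simplicity, and treating the cube as a continuous point set rather than resampled voxels):

```python
import numpy as np

# Voxel positions of a solid cube centered at the origin
ax = np.arange(-3, 4, dtype=float)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

def covariance(points):
    dx = points - points.mean(axis=0)
    return dx.T @ dx / len(points)

C0 = covariance(pts)
# Rotate the cube by 30 degrees about z and recompute
t = np.pi / 6
R = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0.,         0.,        1.]])
C1 = covariance(pts @ R.T)
# Both are (variance) * identity: the rotation leaves the covariance unchanged
assert np.allclose(C0, np.var(ax) * np.eye(3))
assert np.allclose(C0, C1)
```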

**Part IV: Postprocessing and data analysis**

At this stage you should have an array of the particle positions and orientations. I have deliberately left the array in a similar format to that returned by Crocker and Grier's feature.pro, so any particle tracking can be done with their routines the same way you would track spherical particles.

If you're interested in the particle's absolute orientation (as opposed to identifying *n* with *-n*) -- for instance, to track rotational diffusion -- you can pick the correct sign of the particle orientation based on the particle's orientation at the previous time step.
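This sign-fixing step can be sketched in Python (numpy assumed; the function name is a hypothetical illustration of the bookkeeping, not part of the IDL package):

```python
import numpy as np

def fix_orientation_sign(orientations):
    """Resolve the n vs. -n ambiguity along a particle trajectory.

    orientations : (n_frames, d) array of unit orientation vectors whose signs
    are arbitrary frame to frame (the covariance featuring cannot tell n from -n).
    Each frame is flipped, if needed, to align with the previous frame.
    """
    fixed = np.array(orientations, dtype=float)
    for t in range(1, len(fixed)):
        # A negative dot product means the featured vector points 'backwards'
        # relative to the last frame, so flip its sign
        if np.dot(fixed[t], fixed[t - 1]) < 0:
            fixed[t] = -fixed[t]
    return fixed
```

This assumes the particle rotates by less than 90 degrees between frames, which sets a requirement on your frame rate relative to the rotational dynamics.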