A Harvard-PSC collaboration is capturing and processing massive amounts of high-resolution image data from which to trace a wiring diagram of the visual cortex

PHOTO: Harvard Group

Clay Reid (left), Davi Bock and Wei-Chung Lee in their electron microscopy lab at Harvard University.


PHOTO: Harvard Group

Art Wetzel and Greg Hood, National Resource for Biomedical Supercomputing, Pittsburgh Supercomputing Center

A convoluted mass of tissue comprising roughly as many nerve cells, called neurons, as there are stars in the Milky Way, about 100 billion, the human brain may be the most complex structure in the universe. How much do we know about this intricately interconnected biochemical-electrical processing network that makes cognition possible — everything we call thought, creativity, emotion, memory, vision and more?

The answer, despite amazing advances in brain science over the past half century: not much, really. Probably the most studied brain activity is vision, with the most studied region being the primary visual cortex — an area in the back of the brain that processes visual stimuli. Brilliant experiments over the past 50 years have shown that neurons are organized in the visual cortex according to function — the ability to recognize particular kinds of visual information. Nevertheless, we have almost no understanding of how the neurons interconnect.

“What determines what a cortex does?” asks Clay Reid, professor of neurobiology at the Harvard Medical School and Center for Brain Science. “What makes us human? I think it’s almost unarguable that what determines the differences — either between different cortices that do different things in the human brain, or the difference, for instance, between the human brain and the mouse brain — is the connections.”

For Reid, the next big step is to identify the brain's wiring diagram at the level of the individual neuron connections. During the past year, Reid, Ph.D. student Davi Bock, and postdoctoral fellow Wei-Chung Lee have used innovative, high-throughput transmission electron microscopy (TEM) to capture high-resolution images of sections from a mouse visual cortex. These sections — from a volume of brain containing 100 identified neurons and portions of many more — correspond to live experiments showing which neurons perform particular visual operations.

Collaborating closely with Reid’s group, scientists Art Wetzel and Greg Hood from PSC's National Resource for Biomedical Supercomputing (NRBSC, see p. 14) received and processed about a terabyte of TEM data per day between April and September of this year. The data comprise thousands of high-resolution sections, each about 40 nanometers thick (a few hundred atoms). With funding from the NRBSC and Harvard, they managed the prodigious data transfer and developed software to process and assemble the images for viewing and analysis.

Having assembled an unprecedented dataset, this Harvard-NRBSC-PSC collaboration is now positioned to begin reverse engineering the mouse visual cortex — to get the neuron-to-neuron wiring diagram. “We're at a very exciting point,” says Reid. “We’ve demonstrated the ability to generate a new class of data, and we’re beginning to do some science. We're looking toward synaptic-level anatomical reconstruction of cortical circuits of known function — the wiring diagram.”

Selectivity of Neural Function

Reid’s work on the visual cortex follows from renowned work at Harvard by David Hubel and Torsten Wiesel in the late 1950s and early 60s — for which they won the 1981 Nobel Prize in Physiology or Medicine. By inserting electrodes into the brain of an anesthetized cat, Hubel and Wiesel did breakthrough experiments showing that the primary visual cortex is organized in relation to features seen in the visual field of each eye.

Neurons that respond to vertical features — trees or telephone poles, for instance — are grouped together in patches called orientation columns. Other orientation columns respond to horizontal lines or to oblique, forty-five-degree lines. Hubel and Wiesel showed that, from such simple pieces of the overall visual stimulus, the cortical system builds up the complex image we see in our mind's eye.

Neuron Function
From two-photon calcium imaging of a mouse visual cortex, this image shows individual neurons color-coded according to orientation of visual stimuli to which they respond: vertical (green), horizontal (red), left oblique (yellow) or right oblique (blue).

This work showed, says Reid, that something in the cortical network creates this ability of neurons to function selectively, and to understand how this happens will require knowing in detail how the neurons connect. “We know the wiring diagram in very broad strokes,” he continues, “but not at the level of single neurons, and that's what we're interested in.”

About five years ago, Reid began a series of experiments with a technique called two-photon calcium imaging — a powerful method that makes it possible to see the activity of distinct neurons in a living brain. Rather than inserting an electrode, as Hubel and Wiesel did, Reid uses fluorescent dye that’s sensitive to calcium levels, an indicator of a firing neuron. As an anesthetized small mammal sees the vertical, horizontal and oblique bars used by Hubel and Wiesel, the researchers focus the microscope (through a small hole in the skull) on the visual cortex, at single-cell resolution to a depth of 400 micrometers (400 millionths of a meter). Via the fluorescent indicators, they watch as individual neurons fire in response to the visual stimuli.
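
In outline, the analysis behind such functional maps is simple. The Python sketch below uses made-up fluorescence traces and a made-up stimulus schedule rather than any of Reid's actual data or code; it shows one plausible way a neuron's preferred orientation could be read out from calcium signals: compute the relative change in fluorescence (delta F over F), average it over the frames when each orientation was on screen, and take the orientation with the largest mean response.

    import numpy as np

    # Hypothetical fluorescence traces: one row per neuron, one column per
    # imaging frame. In a real experiment these would come from the two-photon
    # movie after cell segmentation; here they are random placeholders.
    rng = np.random.default_rng(0)
    n_neurons, n_frames = 5, 800
    fluorescence = rng.normal(loc=100.0, scale=5.0, size=(n_neurons, n_frames))

    # Hypothetical stimulus schedule: which orientation (degrees) was shown
    # during each frame.
    orientations = np.array([0, 45, 90, 135])
    stim_per_frame = np.repeat(np.tile(orientations, 10), 20)[:n_frames]

    # Delta F / F: change in fluorescence relative to a baseline estimate,
    # the usual proxy for calcium (and hence spiking) activity.
    baseline = np.percentile(fluorescence, 20, axis=1, keepdims=True)
    dff = (fluorescence - baseline) / baseline

    # Mean response of each neuron to each orientation, then the preferred one.
    mean_resp = np.array(
        [dff[:, stim_per_frame == o].mean(axis=1) for o in orientations]).T
    preferred = orientations[np.argmax(mean_resp, axis=1)]

    for i, pref in enumerate(preferred):
        print(f"neuron {i}: preferred orientation ~ {pref} degrees")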

Reid’s findings from two-photon imaging — reported in Nature (2005) — show distinct differences between rats and cats in how neurons are functionally grouped. For Reid, this work also brought into focus the prospect of wiring diagrams. The high resolution possible with the two-photon microscope showed, he says, that “you can actually build a 3D model of where neurons are in the living brain.”

“These results,” he wrote in Nature, “indicate that cortical maps can be built with single-cell precision.”

A Terabyte a Day

If each section were a thin slice of cheese, it would be bigger than an NBA basketball court.

To take the next step, Reid, Bock, Lee and their collaborators at Harvard built a customized TEM with a four-camera array that can image cortical circuits over an area encompassing hundreds of micrometers. “We need very large images,” says Reid, “and many of them.” The result — a system that can capture a terabyte per day of image data from serial sections of the visual cortex.

Over the six-month run of data transmission, Wetzel and Hood, with assistance from PSC systems and networking staff, transferred TEM datasets daily from a small Linux computer at Harvard to PSC. PSC network engineers worked with staff at Harvard to tune Internet protocols and used PSC's HPN-SSH patch to improve performance in the OpenSSH program used for secure data transfer. In total the PSC-NRBSC team gathered more than 110 terabytes of raw and semi-processed TEM images. “This is near the limits,” says Wetzel, “of what can be sent using commodity best effort network service — susceptible to variations in bandwidth due to other traffic.”
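
As a rough illustration of what a scripted nightly transfer of this kind can look like, the Python sketch below shells out to rsync over ssh. The host names and paths are hypothetical, and the actual Harvard-to-PSC pipeline relied on tuned TCP settings and OpenSSH with the HPN-SSH patch rather than on this particular tool.

    import subprocess
    from datetime import date

    # Hypothetical endpoints; the real transfers ran from a Linux machine at
    # Harvard to PSC over tuned TCP and OpenSSH with the HPN-SSH patch, not
    # necessarily with rsync.
    source_dir = f"/data/tem/{date.today():%Y%m%d}/"   # today's serial-section frames
    dest = "archive.example.edu:/archive/tem/incoming/"

    # rsync over ssh is restartable and only re-sends files that are missing
    # or changed, which matters when the daily volume approaches a terabyte.
    cmd = [
        "rsync", "-a", "--partial",
        "-e", "ssh",        # transport; a tuned, high-performance ssh goes here
        source_dir, dest,
    ]
    subprocess.run(cmd, check=True)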

At PSC this data goes to an archiver, while portions used for the next step — image alignment — get copied to an NRBSC cluster and specialized deskside machines. Because the four-camera parallel TEM imaging field captures thousands of images (up to nearly 14,000 frames) for each serial section, the frames must be stitched into a single-section mosaic before they can be stacked into a reconstructed 3D cortical volume.

Stitching the Mosaic
This reconstructed section (119,600 x 88,400 pixels) includes a full-resolution inset showing alignment at a frame boundary. The contrast between frames is artificial, in order to make the boundary visible.

Wetzel has focused on the stitching. Its first step is to detect and correct for variations caused by motion as a section repositions in the camera field, as well as for deformations that can occur while a section sits in vacuum under the high-energy electron beam. “The most pervasive distortion that occurs,” says Wetzel, “is warping — like localized stretchings of a rubber sheet. These can’t be completely identified in individual planes but require comparison with nearby planes to partly separate changes due to distortion and those due to actual features of the tissue.”

To get an idea of the amount of cortical information captured in each section, Reid makes an analogy to slicing a wedge of cheese. The cortical tissue is sectioned with an “ultramicrotome” — a high precision diamond knife. If each slice were a millimeter thick like a thin slice of cheese (instead of 40 nanometers), and the lateral dimensions increased by the same proportion, each slice would cover an area bigger than an NBA basketball court.
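
The analogy boils down to a single scale factor: 1 millimeter divided by 40 nanometers, or 25,000. The short Python calculation below works backward from the dimensions of an NBA court (about 28.65 by 15.24 meters) to show what lateral extent of tissue maps onto a court at that magnification: roughly a millimeter across.

    # The cheese analogy comes down to one scale factor: a 40 nm section
    # blown up to a 1 mm slice grows by the same factor in every direction.
    scale = 1e-3 / 40e-9                      # 1 mm / 40 nm = 25,000x

    # An NBA court is 94 ft x 50 ft, about 28.65 m x 15.24 m.
    court_m = (28.65, 15.24)

    # Shrink the court back down by the same factor to see what lateral
    # extent of tissue the analogy implies: roughly a millimeter across.
    section_mm = tuple(d / scale * 1e3 for d in court_m)
    print(f"scale factor: {scale:,.0f}x")
    print(f"tissue extent matching a court: "
          f"{section_mm[0]:.2f} mm x {section_mm[1]:.2f} mm")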

Zooming In on the Visual Cortex
The vertical extent of a section (bottom) spans the cortical depth, from the pia at the brain’s surface down through the grey matter to the white matter. At low resolution, you see only the blood vessels (lighter features). Zooming in (top right) by a factor of roughly five shows the blood vessels more clearly and begins to reveal neuron nuclei (dark spots). Zooming in by another factor of 10, you begin to see “what we care about,” says Reid, “the wires — axons and dendrites, color coded yellow and green respectively,” with magenta representing the neuron cell body.

Wetzel uses various software methods — fast Fourier transform correlations among other search techniques — to find matching features in overlapping tiles and align them into a single mosaic. His approach first produces low-resolution estimates — reduced by as much as 80X — so that tile matching can bridge regions that appear “large” and mostly featureless at full resolution (capillaries, for example); the matching then proceeds hierarchically until it encompasses the full dataset.
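
To illustrate the core idea, the Python sketch below implements a bare-bones phase correlation: an FFT-based correlation that recovers the translational offset between two overlapping tiles. It is a minimal stand-in, not Wetzel's code; the real pipeline also has to contend with warping, featureless regions and the hierarchical multi-resolution strategy described above.

    import numpy as np

    def translation_offset(tile_a, tile_b):
        """Estimate the integer (row, col) shift of tile_a relative to tile_b
        using phase correlation (an FFT-based normalized cross-correlation)."""
        fa = np.fft.fft2(tile_a)
        fb = np.fft.fft2(tile_b)
        cross_power = fa * np.conj(fb)
        cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the halfway point correspond to negative shifts.
        return tuple(int(p if p <= s // 2 else p - s)
                     for p, s in zip(peak, corr.shape))

    # Tiny self-test with a synthetic tile shifted by a known amount.
    rng = np.random.default_rng(1)
    base = rng.random((256, 256))
    shifted = np.roll(base, shift=(7, -12), axis=(0, 1))
    print(translation_offset(shifted, base))   # expect (7, -12)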

Hood has focused on finding the best ways to stack the thin section images into 3D volumes for viewing and analysis — the proof of the pudding for wiring diagrams. He applies a pairwise non-linear method to register each reconstructed section with its neighboring section. He then uses a multi-resolution algorithm to “relax” the dewarping across the pairwise registrations, producing a global alignment for the entire stack. A test alignment of five 2D sections (143 gigabytes of raw data) has shown the viability of this approach. All the alignment processing is parallelized and scales well to large stacks of large-area sections.
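
A drastically simplified version of that relaxation step can be written in a few lines. The Python sketch below is a hypothetical stand-in, not Hood's algorithm: it treats each pairwise registration as a single translation instead of a non-linear warp, and relaxes a small, slightly inconsistent set of pairwise offsets into one consistent set of section positions by least squares.

    import numpy as np

    # Hypothetical pairwise registrations: each entry says "section j sits at
    # roughly this (row, col) offset from section i". In the real pipeline
    # each registration is a non-linear warp field, not a single translation.
    pairwise = {
        (0, 1): ( 3.0, -1.0),
        (1, 2): (-2.0,  4.0),
        (2, 3): ( 1.0,  1.0),
        (0, 2): ( 1.5,  3.5),   # redundant skip-one pair, slightly inconsistent
        (1, 3): (-0.5,  4.5),
    }
    n_sections = 4

    # Build a least-squares system: for each measured pair (i, j) with offset d,
    # we want position[j] - position[i] ~ d. Section 0 is pinned at the origin.
    rows, rhs = [], []
    for (i, j), d in pairwise.items():
        row = np.zeros(n_sections)
        row[i], row[j] = -1.0, 1.0
        rows.append(row)
        rhs.append(d)
    rows.append(np.eye(n_sections)[0])   # anchor: position[0] = (0, 0)
    rhs.append((0.0, 0.0))

    A = np.array(rows)
    b = np.array(rhs)
    positions, *_ = np.linalg.lstsq(A, b, rcond=None)   # one (row, col) per section
    for k, p in enumerate(positions):
        print(f"section {k}: offset ({p[0]:+.2f}, {p[1]:+.2f})")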

“We have this huge, unprecedented dataset with the resolution of electron microscopy,” says Reid, “that encompasses 12 out of 100 cells for which we know the function. We’re beginning to make 3D models of this data, to analyze in detail, and to start drawing the wiring diagram.”
