In an age of "big data," a single computer often cannot handle the computations a researcher needs. Tasks must instead be distributed across a cluster of computers that analyse a massive data set together, researchers said.
New technologies for monitoring brain activity are generating unprecedented quantities of information. That data may hold new insights into how the brain works, but only if researchers can interpret it.
To help make sense of the data, neuroscientists can now harness the power of distributed computing with Thunder, a library of tools developed at the Howard Hughes Medical Institute's Janelia Research Campus.
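Thunder is built on the Apache Spark cluster-computing framework. As a rough illustration of the general pattern rather than Thunder's own interface (the file path, data layout, and variable names below are assumptions for the sketch), imaging frames stored on shared storage can be spread across a cluster and a per-pixel statistic computed in parallel:

    # Illustrative PySpark sketch, not Thunder's API: distribute imaging frames
    # across a cluster and compute the mean activity of every pixel in parallel.
    # Assumes each .npy file on shared storage holds one time point as an array.
    import glob
    import numpy as np
    from pyspark import SparkContext

    sc = SparkContext(appName="distributed-imaging-sketch")

    # List the per-timepoint files on the driver (hypothetical path).
    paths = sorted(glob.glob("/data/session1/volume_*.npy"))

    # Each worker loads its own subset of files.
    frames = sc.parallelize(paths).map(np.load)

    # Reduce element-wise across time points to get the per-pixel mean image.
    n = frames.count()
    mean_image = frames.reduce(lambda a, b: a + b) / n

Because the frames never have to pass through a single machine's memory, the same pattern scales from gigabytes to terabytes simply by adding nodes to the cluster.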
Group leaders Jeremy Freeman, Misha Ahrens, and colleagues at Janelia and the University of California, Berkeley, used Thunder to quickly find patterns in high-resolution images collected with multiple imaging techniques from the brains of active zebrafish and mice.
They have used Thunder to analyse imaging data from a new microscope that Ahrens and colleagues developed to monitor the activity of nearly every individual cell in the brain of a zebrafish as it behaves in response to visual stimuli.
New microscopes are capturing images of the brain faster, with better spatial resolution, and across wider regions of the brain than ever before.
Yet all that detail comes encoded in gigabytes or even terabytes of data. On a single workstation, even simple calculations can take hours.
"For a lot of these data sets, a single machine is just not going to cut it," Freeman said.
It's not just the sheer volume of data that exceeds the limits of a single computer, Freeman and Ahrens say, but also its complexity.
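That complexity comes from the analyses themselves, which typically relate every pixel's activity trace to a stimulus or behavioural variable. A minimal sketch of one such computation, again in plain PySpark with assumed variable names rather than Thunder's interface, correlates each pixel's trace with a stimulus time course in parallel:

    # Illustrative sketch, not Thunder's API: correlate each pixel's activity
    # trace with a stimulus time course, with one correlation per worker task.
    import numpy as np
    from pyspark import SparkContext

    sc = SparkContext(appName="pixelwise-correlation-sketch")

    # Stand-in data: (pixel_id, activity trace) pairs and a stimulus trace.
    rng = np.random.default_rng(0)
    traces = [(i, rng.standard_normal(500)) for i in range(10000)]
    stimulus = rng.standard_normal(500)

    # Broadcast the stimulus so every worker gets one read-only copy.
    stim = sc.broadcast(stimulus)

    def correlate(pair):
        pixel_id, trace = pair
        return pixel_id, np.corrcoef(trace, stim.value)[0, 1]

    # Compute one correlation per pixel across the cluster.
    correlations = sc.parallelize(traces).map(correlate).collectAsMap()

At whole-brain resolution the same computation involves hundreds of thousands to millions of traces, which is where a cluster pays off.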
The research was published in the journal Nature Methods.