How one hospital is using big data to save lives
June 24, 2016

Life sciences research is trying to tackle some of the biggest issues facing mankind: the race to cure cancer, saving lives with personalised medicine, supporting sustainable food production, and illuminating elemental biological systems. Researchers across the globe are working to fundamentally change the quality of people’s lives.

Unsurprisingly, this research generates massive amounts of data – today’s sequencers and other high-throughput instruments can produce more than two terabytes of data per day. Next-generation instruments will produce almost a petabyte of raw data every day.

The Center for Pediatric Genomic Medicine at Children’s Mercy Kansas City sees the challenges of that data growth every day.

It was founded with the aim of diagnosing rare diseases. One of the first cases involved two young sisters, both suffering from a degenerative illness. The elder sister was in a wheelchair, and the younger was beginning to show signs of the same deterioration.

Before their genomes were sequenced, the two sisters had made 32 separate visits to specialists over the course of five years.

After sequencing both sisters’ genomes, the Center found the cause: a defective gene carried by their parents, which both daughters had inherited.

With a diagnosis, the Center was able to put treatments in place and improve the sisters’ outcomes dramatically – something made possible by the ability to rapidly sequence genomes at Children’s Mercy.

The Center embraces advanced technologies to accelerate genome sequencing. By applying advanced compute, storage and accelerated sequencing technologies, it has broken the 48-hour window required for its STAT-Seq test – the fastest whole genome analysis in the world.

It started with rare-disease diagnosis as its founding pillar, and has since branched out into pharmacogenomics and cancer genomics programs.

Data tsunami

Genome sequencing is compute- and data-intensive, which puts pressure on the Center’s IT team to deliver ample processing power and data storage to support both whole genome and exome sequencing.

With the additional sequencing from the new programs, the Center’s data volumes are growing rapidly. With a modest seven sequencers, the Center is capable of sequencing 64 whole genomes every six days, each genome representing about 170 gigabytes of data.

That’s roughly 11 terabytes of raw data every six days – nearly 13 terabytes a week. As the Center considers routine sequencing of people within the cancer and pharmacology clinics, it has become even more conscious of the requirements its storage infrastructure must meet.
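
As a quick back-of-envelope check, here is a minimal sketch in Python using the figures quoted above (the six-day cycle, 64 genomes per cycle and roughly 170 gigabytes per genome come from the article; the weekly figure is a straightforward extrapolation):

```python
# Back-of-envelope storage estimate from the figures quoted in the article:
# 64 whole genomes per six-day cycle, ~170 GB of raw data per genome.

GENOMES_PER_CYCLE = 64
CYCLE_DAYS = 6
GB_PER_GENOME = 170

raw_gb_per_cycle = GENOMES_PER_CYCLE * GB_PER_GENOME   # 10,880 GB
raw_tb_per_cycle = raw_gb_per_cycle / 1000             # ~10.9 TB per six days
raw_tb_per_week = raw_tb_per_cycle / CYCLE_DAYS * 7    # ~12.7 TB per week

print(f"Raw data per {CYCLE_DAYS}-day cycle: {raw_tb_per_cycle:.1f} TB")
print(f"Raw data per week: {raw_tb_per_week:.1f} TB")
```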

Children’s Mercy has invested substantially in a high-performance computing (HPC) cluster environment with over 40 Linux nodes totalling around 1,300 cores.

A suite of in-house and externally developed software tools is utilised to meet a variety of sequencing and compute demands.

The traditional scale-out NAS storage the Center had in place, however, lacked the scalability in performance and capacity to address demanding data creation and access needs. It needed a more flexible, powerful approach than scale-out NAS could deliver.

For example, its planned deployment of the Edico DRAGEN Bio-IT processor, an FPGA-based genomic analysis acceleration technology, required the transfer of data to local SSDs.

With the new 1.2 petabytes of storage from DataDirect Networks (DDN), there’s now sufficient performance to run the Edico processor directly from shared storage, removing the complexity and time involved in the data transfer.

The Center has ultimately eliminated the need for expensive local SSDs and moved closer to its goal of completing the whole genome sequencing process in 26 hours – a lofty target, but one that’s now achievable.

Taming the data deluge

Time is, quite obviously, of the essence when decoding genomes of seriously ill newborns to find the genetic causes of their illness and initiate viable treatment options.

Equally important is bringing closure to the long and arduous diagnostic odyssey faced by children with hard-to-diagnose illnesses, such as the two sisters.

With more than 6.4 billion bases in a person’s DNA, encompassing 22,000 genes that code for nearly 100,000 proteins, it’s easy to understand the escalating demand for HPC and high performance storage.
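
To get a feel for where the roughly 170 gigabytes per genome comes from, here is a rough sketch; the ~40x coverage depth and ~2 bytes per sequenced base are illustrative assumptions, not figures from the article:

```python
# Rough sketch of the raw data produced by one whole genome.
# Illustrative assumptions (not from the article): ~40x average coverage
# of a ~3.2 billion-base haploid reference, and ~2 bytes per sequenced
# base in uncompressed FASTQ (one base call plus one quality score).

REFERENCE_BASES = 3.2e9   # haploid human reference genome
COVERAGE = 40             # assumed average sequencing depth
BYTES_PER_BASE = 2        # base call + quality score, ignoring headers

sequenced_bases = REFERENCE_BASES * COVERAGE        # ~1.3e11 bases
raw_bytes = sequenced_bases * BYTES_PER_BASE        # ~2.6e11 bytes
print(f"Uncompressed FASTQ: ~{raw_bytes / 1e9:.0f} GB per genome")
# Roughly 256 GB uncompressed; compression and downstream file formats
# bring the stored footprint into the ballpark of the ~170 GB above.
```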

The Center’s goal is to keep pace with the data deluge in both its clinical and research environments so it can quickly analyse data to produce meaningful insights.

To do this, its storage must remove informatics bottlenecks and deliver strong performance; most importantly, it must be scalable.

DDN storage, along with the IBM Spectrum Scale-based parallel file system, has accomplished these objectives.

While the challenge at Children’s Mercy is specific to genomics, the data deluge is affecting the entire scientific community.

By leveraging the very best technologies, it’s possible not only to manage the deluge but also to develop and improve existing solutions.

This article was originally published on www.information-age.com.
