One important role of genomics in disease research is predicting how a patient will respond to a particular drug, a practice known as pharmacogenomics or, more broadly, personalized medicine. Big data capabilities and advances in computational techniques have been driving a revolution in genomics, helping to reduce the cost of sequencing a genome from millions of dollars to thousands over the past two decades. In this post, we will discuss how big data is helping genomics transition from a purely research-oriented field into day-to-day clinical practice, and explore ways to increase the adoption of personalized medicine in a clinical setting.
The Promise of Big Data
Advances in genome data analysis have led to major developments in personalized medicine by enabling treatments tailored to the molecular profile of a patient's tumor. For example, the National Cancer Institute (NCI) recently announced a large personalized-medicine clinical trial at the American Society of Clinical Oncology (ASCO) conference in Chicago. The trial, named MATCH (Molecular Analysis for Therapy Choice), aims to determine the effectiveness of treating cancers according to their molecular profiles. Tumor samples are collected from cancer patients and sequenced to identify relevant genetic abnormalities; patients with such an abnormality are then eligible to join the treatment portion of the trial and receive a drug that targets it. Genome sequencing projects like this typically generate terabytes of data, and extensive computational processing is needed to derive insights that support better decisions on patient diagnosis and treatment.
Overcoming Challenges
Though numerous tools are available for analyzing sequenced genomes, the ability to use that information in a real-world setting is limited. Major challenges remain in the current genome visualization process that could hold back personalized medicine and raise questions about the validity of computational analyses that associate genomic variants with clinical features. Algorithmic improvements are needed to enhance analysis and visualization by making better use of currently available annotations, turning raw genome sequence data into clinically meaningful insights.
There is no better time than now for researchers and clinicians involved in genome analysis projects to get help on these issues from companies specializing in big data analytics. At Saama, we specialize in big data and analytics solutions and services, and have built and delivered impactful business solutions to some of the most complex big data problems in analysis and visualization. Our pre-built algorithms can be customized to specific requirements, and we have experience developing big data solutions in weeks rather than months. Saama's proprietary Fluid Analytics Engine™ (FAE) is an advanced analytics platform that uses machine learning and data mining techniques to dig deeper into genome datasets and associate genetic characteristics with clinical features. FAE's out-of-the-box infrastructure can also extract, analyze, and share huge volumes of data across numerous applications.
One important application of big data analytics is sharing genomic data on a common platform where the genomic profiles of patients can be compared for a variety of research and clinical purposes, enabling better diagnosis and treatment. Although sharing individual patients' genome data is limited by privacy concerns, some of these limitations can be overcome by restricting data access to physicians and researchers and by removing personally identifiable patient information. For example, a physician could filter for the set of patients who share a common set of mutations on a particular gene, and then see which treatment worked in that group. It is time to prioritize big data genomic projects in clinical medicine and prepare to meet the demands of analyzing millions of human genome sequences in the near future.
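To make the physician's query concrete, here is a minimal sketch of that mutation-based filtering in Python. The patient records, variant names, treatments, and outcomes below are entirely illustrative assumptions, not data from any real cohort or platform:

```python
# Hypothetical de-identified patient records: tumor mutations plus treatment outcome.
patients = [
    {"id": "P1", "mutations": {"EGFR": {"L858R"}}, "treatment": "erlotinib", "responded": True},
    {"id": "P2", "mutations": {"EGFR": {"L858R"}, "TP53": {"R175H"}}, "treatment": "erlotinib", "responded": True},
    {"id": "P3", "mutations": {"KRAS": {"G12D"}}, "treatment": "erlotinib", "responded": False},
    {"id": "P4", "mutations": {"EGFR": {"T790M"}}, "treatment": "erlotinib", "responded": False},
]

def patients_with_mutation(cohort, gene, variant):
    """Return the patients whose tumor carries the given variant on the given gene."""
    return [p for p in cohort if variant in p["mutations"].get(gene, set())]

def response_rate(cohort, treatment):
    """Fraction of patients on a treatment who responded; None if no one received it."""
    treated = [p for p in cohort if p["treatment"] == treatment]
    if not treated:
        return None
    return sum(p["responded"] for p in treated) / len(treated)

# Filter the cohort down to EGFR L858R carriers, then see how a treatment fared.
subset = patients_with_mutation(patients, "EGFR", "L858R")
print([p["id"] for p in subset])               # ['P1', 'P2']
print(response_rate(subset, "erlotinib"))      # 1.0
```

A production platform would run such queries against a distributed store of variant calls rather than an in-memory list, but the shape of the question — select by shared mutation, then aggregate outcomes — is the same.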
So What’s Next?
Once we overcome the technical challenges in big data processing, we should focus on accelerating the transition of research-oriented genomics into day-to-day clinical practice. Currently, genomic testing is conducted mainly at large research institutions and hospitals that have the necessary resources and budget, which limits the test population. To reach the maximum patient pool and improve the adoption of genomic technology in clinical settings, investment and testing costs need to come down. To reduce the complexity of adoption, we have to create a simplified workflow that runs from collecting patient samples through reporting results to physicians, while ensuring regulatory compliance. It would also help to offer incentives that encourage physicians to order genomic testing and base their treatment decisions on the results.
It’s time to let the world of personalized medicine change our lives!
Sources:
- NCI Match Trial Information: http://www.cancer.gov/about-cancer/treatment/clinical-trials/nci-supported/nci-match
- Visualizing Genomes: Techniques and Challenges (2010) by Cydney B. Nielsen et al., http://www.nature.com/nmeth/journal/v7/n3s/full/nmeth.1422.html
- Blog Article: http://www.rdmag.com/articles/2014/02/why-big-data-isnt-big-problem-genomic-medicine