


Name: Sidra Ahmed

Instructor: Dr Chi Zhang

Course: IT6523

Big Data Revolution In Healthcare

Introduction

We are currently in the era of big data, in which big data technology is being rapidly applied to biomedical and healthcare fields. Large amounts of clinical data are generated at unprecedented speed every second. Before discussing big data practice in the healthcare environment, it is important to look at what "big data" is and what its important attributes are.

Big data is a term for data sets that are so large or complex that traditional data processing applications are inadequate to deal with them. It consists of large bodies of unstructured or raw data which cannot be processed using conventional, largely relational data processing techniques. IBM characterizes big data as having four dimensions: volume, velocity, variety, and veracity, which all correlate to form a collection of data elements whose size, speed, type, and/or complexity require one to seek, adopt, and invent new hardware and software mechanisms in order to successfully store, analyze, and visualize the data.

Regarding these various perspectives, the main purpose of this study is to review the application of big data in the healthcare industry, examine its success factors and challenges, and consider how it can help improve the overall quality of healthcare.

Big Data In Healthcare

"By definition, big data in healthcare refers to electronic health data sets so large and complex that they are difficult (or impossible) to manage with traditional software and/or hardware; nor can they be easily managed with traditional or common data management tools and methods." Big data in healthcare is overwhelming not only because of its volume but also because of the diversity of data types and the speed at which it must be managed. The totality of data related to patient healthcare and well-being makes up "big data" in the healthcare industry. It includes clinical data from computerized physician order entry (CPOE) systems and physicians' written notes and prescriptions; medical imaging, laboratory, pharmacy, insurance, and other administrative data; patient data in electronic patient records (EPRs); machine-generated/sensor data, such as from monitoring vital signs; social media posts, including Twitter feeds, Facebook status updates, and web pages; and less patient-specific information, including emergency care data, news feeds, and articles in medical journals. This data is spread among multiple healthcare systems, health insurers, researchers, government entities, and so forth. Healthcare is a prime example of how the three Vs of data, velocity (speed of generation of data), variety, and volume, are an innate aspect of the data it produces.
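To make the "variety" dimension concrete, the sketch below shows how a single patient's record might combine structured EPR fields, free-text clinical notes, and machine-generated vital-sign readings in one object. The field names and values are hypothetical and are not drawn from any specific healthcare system; this is an illustration of mixed data types, not a proposed schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical, simplified representation of the mixed data types described
# above: structured EPR fields, unstructured notes, and sensor readings.
@dataclass
class PatientRecord:
    patient_id: str
    # Structured data: coded diagnoses and prescriptions (e.g., entered via CPOE)
    diagnoses: List[str] = field(default_factory=list)
    prescriptions: List[str] = field(default_factory=list)
    # Unstructured data: free-text physician notes
    notes: List[str] = field(default_factory=list)
    # Machine-generated data: (timestamp in minutes, heart rate) sensor samples
    vital_signs: List[Tuple[float, int]] = field(default_factory=list)

record = PatientRecord(
    patient_id="P-0001",
    diagnoses=["I10"],                       # ICD-10 code for essential hypertension
    prescriptions=["lisinopril 10 mg"],
    notes=["Patient reports occasional dizziness in the morning."],
    vital_signs=[(0.0, 72), (1.0, 75), (2.0, 74)],
)

print(record.patient_id, len(record.vital_signs), "vital-sign samples")
```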

Benefits Of Big Data In Healthcare

Healthcare is one of the top social and economic issues in many countries. In the United States, although healthcare expenditures are the highest of any developed country, at 15.3% of GDP, the system is struggling to make the best use of resources in order to reduce medical costs and improve the quality of healthcare. Big data in healthcare can be used to improve the effectiveness and efficiency of prediction and can open a new paradigm of research in this field. Effectively processed data can improve outcomes for individual patients through personalization of predictions, earlier diagnosis, better treatments, and improved decision support for clinicians. These improvements can eventually lead to lower costs for the healthcare system. Earlier detection of adverse drug reactions can also be improved, which will in turn allow personalized medicine analysis. This should lead to improved treatment responses for biologically or clinically defined patient subgroups and avoid unnecessary rejection of potent drugs and devices. As a result, patient communities will benefit, and the unsustainable trend of escalating costs in hospital and community care management, as well as diagnostic and drug development costs, will slow down.
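As a minimal sketch of the kind of prediction and decision support mentioned above, the example below fits a logistic regression model that estimates a patient's risk of an adverse outcome from a few clinical features. The features, data, and decision threshold are hypothetical and chosen only to show the workflow; a real clinical model would require far more data, validation, and regulatory scrutiny.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is [age, systolic BP, blood glucose],
# and y marks whether an adverse event occurred (1) or not (0).
X = np.array([
    [45, 120,  90],
    [62, 145, 130],
    [38, 118,  85],
    [71, 160, 150],
    [55, 135, 110],
    [29, 110,  80],
])
y = np.array([0, 1, 0, 1, 1, 0])

# Fit a simple risk model on the toy data.
model = LogisticRegression().fit(X, y)

# Estimate risk for a new (hypothetical) patient and flag them if the
# predicted probability exceeds an illustrative decision threshold.
new_patient = np.array([[60, 150, 140]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted risk: {risk:.2f}", "-> flag for review" if risk > 0.5 else "")
```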

Patient-level medical records provide information that makes it possible to understand both the occurrence and sequencing of patient events, clinical diagnoses, and prescribed medications. Through big data analytical solutions, these massive and disparate datasets can be analyzed at the macro level to gain insights into patterns, trends, correlations, and clusters of medical and demographic information. With the increasing quantity of genomic information, big data will be needed to process and derive meaning from this large, complex volume of information. With information that is captured in real-world clinical settings and genomic sequences, researchers will be able to analyze new dimensions of evidence and outcomes.
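The clustering mentioned above can be sketched with a standard algorithm such as k-means; the example below groups patients by a few demographic and utilization features. The feature set, data, and number of clusters are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical patient-level features: [age, number of diagnoses, annual visits]
patients = np.array([
    [34, 1,  2],
    [36, 2,  3],
    [67, 6, 12],
    [71, 7, 15],
    [52, 3,  5],
    [49, 4,  6],
])

# Standardize features so that age does not dominate the distance metric.
scaled = StandardScaler().fit_transform(patients)

# Group patients into three clusters (an assumed number chosen for illustration).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
print("Cluster assignments:", kmeans.labels_)
```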

Successful Cases Of Big Data Application in Healthcare

  • Saving Premature Babies Using Big Data

         Researchers at Toronto’s Hospital for Sick Children applied big data to save the lives of premature babies.  By converting a set of already-collected vital signs into an information flow of more than 1,000 data points per second, the researchers created an algorithm to predict which children are most likely to develop a life-threatening infection. Now, doctors can act earlier and better treat these patients.
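A heavily simplified sketch of this idea appears below: it scans a stream of heart-rate readings with a rolling window and flags readings that deviate sharply from the recent baseline. The window size, threshold, and simulated data are hypothetical; the actual system developed in Toronto is far more sophisticated than this illustration.

```python
from collections import deque

def flag_anomalies(readings, window=30, threshold=3.0):
    """Flag vital-sign readings that deviate strongly from the recent baseline.

    A rolling window of recent readings provides a mean and standard deviation;
    any new reading more than `threshold` standard deviations away is flagged.
    Parameters are illustrative, not clinically derived.
    """
    recent = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((x - mean) ** 2 for x in recent) / window
            std = var ** 0.5
            if std > 0 and abs(value - mean) > threshold * std:
                flagged.append((i, value))
        recent.append(value)
    return flagged

# Simulated heart-rate stream (beats per minute) with one abrupt spike.
stream = [140 + (i % 3) for i in range(100)] + [185] + [141, 142, 140]
print(flag_anomalies(stream))
```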

  • Project Data Sphere

Drugs for cancer treatment have seen tremendous change in recent years.  One sign of success is that oncology drug approvals are becoming more and more common.  Through Project Data Sphere, companies researching cancer drugs have started sharing patient-level clinical trial data.  Aggregating this data, despite barriers including intellectual property rights, privacy, cost, and other challenges, has created new insights for cancer researchers.

  • Genome-Wide Association Studies (GWAS)

One application of mining large datasets that has been particularly productive in the research community is the search for genome-wide associations. If a given drug is "40% effective in treating cancer," another interpretation could be that the drug is 100% effective for patients with a certain genetic profile. However, genomic data is big data. A single human genome includes approximately 20,000 genes; stored in traditional data platforms, this is the equivalent of several hundred gigabytes. Combining each genome with one million variable DNA locations produces the equivalent of about 20 billion rows of data per person. HIV researchers in the European Union worked with IBM, applying big data tooling to perform clinical genomic analysis. Through the EuResist project, IBM's big data tooling played a key role in helping researchers understand clinical data from numerous countries, optimize therapies for patients, and discover treatments based on accumulated empirical data.
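At its core, a genome-wide association study runs a simple statistical test at each variable DNA location and keeps the variants whose association with the trait survives correction for the very large number of tests; repeating this across roughly a million locations per person is where the scale described above comes from. The sketch below runs a chi-square test of genotype versus case/control status over a small simulated cohort; the cohort size, variant count, and injected association are assumptions made only to demonstrate the procedure.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

n_people, n_variants = 200, 50          # tiny simulated cohort (real GWAS: ~1e6 variants)
genotypes = rng.integers(0, 3, size=(n_people, n_variants))  # 0/1/2 copies of minor allele
phenotype = rng.integers(0, 2, size=n_people)                # 1 = case, 0 = control

# Make variant 7 artificially associated with the phenotype for demonstration.
genotypes[phenotype == 1, 7] = rng.integers(1, 3, size=(phenotype == 1).sum())

# Test each variant for association between genotype counts and case/control status.
alpha = 0.05 / n_variants                # Bonferroni correction for multiple testing
for v in range(n_variants):
    table = np.zeros((3, 2))
    for g in range(3):
        for p in range(2):
            table[g, p] = np.sum((genotypes[:, v] == g) & (phenotype == p))
    # Drop empty genotype rows so the chi-square test remains valid.
    table = table[table.sum(axis=1) > 0]
    _, pval, _, _ = chi2_contingency(table)
    if pval < alpha:
        print(f"Variant {v}: p = {pval:.2e} (significant after correction)")
```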

...

...
