Why Life Sciences R&D Teams Like Data Virtualization

Accelerating data integration key to business success

In the life sciences industry, the latest blockbuster drug or device can mean billions in revenue.

But developing these new offerings often takes ten or more years.

Moving the schedule ahead by even a year or two can be worth hundreds of millions.

The question is how.

Data, Data, Data
Data is the lifeblood of new product development. Research, clinical trials, manufacturing process validation, and more are information-intensive activities where each new data point can result in a major shift in plans and timing.

To accelerate time to market in this data-rich environment, life sciences R&D teams look to IT.

Unfortunately, IT is not always well equipped to meet this challenge. The problem is not a lack of data. The problem is a lack of data integration agility.

Most life sciences companies have made significant investments in their data. These investments, however, have resulted in data silos and complexity that slow their ability to respond to new information requests. To overcome these silos, life sciences companies are seeking new ways to integrate their new product development data.

How Data Virtualization Helps
Data virtualization has been adopted by the vast majority of pharmaceutical companies and has recently seen increasing acceptance among medical device makers. Using data virtualization to integrate product development data has been the primary use case. The reasons are several:

  • Gaining Timely Insight – Up-to-the-minute data is critical at every stage of the new product cycle. Data virtualization provides query optimization algorithms and techniques that deliver timely information whenever needed.
  • Seeing the Complete Picture – Multiple types of data from multiple sources must be combined to provide researchers, analysts, and managers with the full picture that effective decision making requires. Data virtualization provides data federation that virtually integrates data from multiple sources in memory, without the cost and overhead of physical consolidation in a data warehouse (see the sketch after this list).
  • Controlling Data Proliferation – Identifying and understanding data assets distributed across a range of R&D repositories and locations requires significant manual effort. Data virtualization provides data discovery that saves time by automating entity and relationship identification and accelerating data modeling.
  • Addressing Data Complexity – Incredible complexity challenges IT's ability to leverage existing R&D data for new R&D questions. Data virtualization provides powerful data abstraction tools that simplify complex data, transforming it from native structures and syntax into easy-to-understand, reusable views and data services with common, business-friendly semantics.
  • Improving Data Availability – With so many technologies, formats, and standards, successfully surfacing R&D life cycle data consumes significant IT resources. Data virtualization supports numerous standards-based data access, caching, and delivery options that allow IT to flexibly publish all the information that R&D users require.
  • Providing Proper Data Controls – Data is a critical asset that must be governed, especially in life sciences R&D with its myriad compliance requirements. Data virtualization provides data governance that centralizes metadata management, ensures data security, and improves data quality to meet these stringent control requirements.
  • Keeping Pace with Non-Stop Change – Ever-changing research results, clinical trial findings, and compliance requirements make frequent change inevitable. Data virtualization provides a loosely coupled virtualization layer, rapid development tools, automated impact analysis, and an extensible architecture that together deliver the information agility required to keep pace.
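
To make the federation point concrete, here is a minimal Python sketch of the pattern described above: two independent sources, a relational database and a flat file, are queried on demand and joined in memory into a single reusable "virtual view." All names here (trials.db, assay_results.csv, the table and column names) are hypothetical illustrations; a commercial data virtualization platform does this declaratively, with far more sophisticated query optimization and caching.

    # A minimal sketch of data federation, the core idea behind data
    # virtualization: query two independent sources and join the results
    # in memory, without copying anything into a physical warehouse.
    # All source names and columns are hypothetical examples.
    import csv
    import sqlite3


    def fetch_trial_phases(db_path: str) -> dict[str, str]:
        """Pull clinical trial phases from a relational source."""
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute("SELECT compound_id, phase FROM trials")
            return {compound_id: phase for compound_id, phase in rows}


    def fetch_assay_results(csv_path: str) -> list[dict]:
        """Pull lab assay results from a flat-file source."""
        with open(csv_path, newline="") as f:
            return list(csv.DictReader(f))


    def compound_progress_view(db_path: str, csv_path: str) -> list[dict]:
        """A reusable 'virtual view' federating both sources on compound_id.

        Consumers query this view as if it were one table; they never
        need to know the data lives in two systems with two formats.
        """
        phases = fetch_trial_phases(db_path)
        return [
            {**assay, "phase": phases.get(assay["compound_id"], "preclinical")}
            for assay in fetch_assay_results(csv_path)
        ]

The payoff is in the last function: analysts get one business-friendly view, while IT keeps a single place to apply the security, quality, and impact-analysis controls described above as the underlying sources change.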

Pfizer Finds a Successful Formula
The R&D team at Pfizer was an early adopter of data virtualization, realizing a number of positive business benefits.

Their successes have been recognized in a number of recent articles.

Getting Started
Fortunately, the path to successful data virtualization adoption is far shorter than the new drug or device development path. But as with R&D, integrating the available data is the key.

Here are some great resources to help you get started.

More Stories By Robert Eve

Robert Eve is the EVP of Marketing at Composite Software, the data virtualization gold standard, and co-author of Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility. Bob's experience includes executive-level roles at leading enterprise software companies such as Mercury Interactive, PeopleSoft, and Oracle. Bob holds a Master of Science from the Massachusetts Institute of Technology and a Bachelor of Science from the University of California at Berkeley.
