The Golden Age of Data Mobility?

Those masters were written on every platform imaginable – from Novell NetWare to Windows to Linux to Solaris

I was working for a mid-sized enterprise as an IT manager, on a project that was on the cutting edge of technology at the time, and because it was on the cutting edge, we were using a whole slew of different embedded applications and their masters to collect data. Those masters were written on every platform imaginable – from Novell NetWare to Windows to Linux to Solaris – and in every language that was common on each of those platforms. Our job was to make sense of it all. The information these systems collected was billing data; they all collected similar datasets, but each in a different manner, and each stored the data in its database differently. And they all used different RDBMSs. We had Oracle, MS SQL Server, Sybase, MySQL, IBM UDB, and a few you’ve likely never heard of. We had our own datacenter, and it was a non-stop flurry of activity just trying to consolidate the data, get it into a consistent format, and centralize it for billing on a single DBMS. We had custom code, Extract, Transform, and Load (ETL) systems, and extraction systems whose output we then loaded into our central database, all just to get the data in one place.

That’s the worst case I’ve ever been involved in, but seriously, every place I’ve worked has had multiple database vendors, because we live in the age of purchased applications. Even when a vendor says “oh yeah, we support X, Y, and Z”, smart IT folks immediately ask which one they develop for primarily, because that’s the one that will get first attention when updates occur, and it is the one most likely to be stable. So while you could theoretically standardize on a single database – and every enterprise I’ve ever worked at has either wanted to or claimed to – purchased applications make it highly unlikely that you ever will.

Image Courtesy of www.servermachine.net

Still, you need a way to communicate that data back and forth, and when the enterprise shifted to “buy before build”, that’s where the programmers went – to integration duties, trying to straighten out communications. Your purchased (or hosted) shipping system needs to update inventory, which is a different system on a different database, and so on. We’ve had about a decade of this, and most IT shops have a relatively stable environment that transfers data back and forth as needed, but it is high-maintenance, since every release that changes tables or columns triggers a new round of integration work. And unless you’re terribly lucky, no two purchased packages are on the same update cycle.

It is not my habit to plug specific products in this blog, even F5 products. I like to keep it useful to you and figure that if you find it useful, F5 indirectly gets the name recognition. F5 has thus far allowed me the freedom to do just that, and this blog is not a sign of some major shift. While I am going to plug a specific product, it is not an F5 product. I’m going to tell you how all of the pain caused by the above issues can be alleviated, using Oracle GoldenGate. Oracle is a partner of F5, and our uber-smart Business Development and Product Management Engineering teams have been working with Oracle on the GoldenGate product and how it fits into our partnership. I was brought in to produce some collateral, and after reading up on GoldenGate, I fell in love.

It is not often that I, after more than a decade working in IT and several years as a Technology Editor, get excited about a product, but GoldenGate fits the bill. It solves a problem that other solutions (like ETL engines) could be hacked to solve, but it does so directly and simply.

Oracle acquired GoldenGate in mid-2009, and because it is not my job to pay attention to this stuff, the importance of the announcement flew under my radar. That being the case, I figure it might well have flown under your radar also. The architecture of GoldenGate is, like most technology, simple to understand at the 50,000-foot level, and I’ll direct you to Oracle’s GoldenGate website if you need more info. You purchase two copies of GoldenGate, one to be the source and one to be the destination. On the source, a capture process (called Extract) reads the database’s transaction logs and generates a binary representation of the changes called a trail file. Another process on the source, called the data pump, then sends this data out across the network to the destination. A piece of software called the Collector picks up the incoming stream and writes it out to a new trail file, and a final process called Replicat reads this binary trail file and builds transactions from it to submit to the target database.
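
To make that pipeline concrete, here is a minimal sketch of the two source-side parameter files – the Extract group that writes the local trail, and the data pump that ships it to the target. Everything here (group names, the “billing” schema, host, port, and credentials) is a hypothetical placeholder, not anything GoldenGate ships with, and a real deployment needs tuning well beyond this:

    -- dirprm/exta.prm: capture changes from the transaction log into a local trail
    -- (group name "exta" and schema "billing" are hypothetical placeholders)
    EXTRACT exta
    USERID ggadmin, PASSWORD ggpw
    EXTTRAIL ./dirdat/aa
    TABLE billing.*;

    -- dirprm/pmpa.prm: read the local trail and send it to the Collector on the target
    EXTRACT pmpa
    USERID ggadmin, PASSWORD ggpw
    RMTHOST target.example.com, MGRPORT 7809
    RMTTRAIL ./dirdat/bb
    TABLE billing.*;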

This sounds like an optimized database replication tool, which in itself would be kind of cool but not really earth-shattering. The reason this tool caught my attention (and garnered enough excitement to warrant a blog) is that the source RDBMS and the target RDBMS do not have to be from the same vendor. Yes indeed, you read that right. Think of it as heterogeneous near-real-time replication. Have a purchased application that runs on SQL Server while your core datacenter RDBMS is UDB? No problem: purchase the SQL Server version for the source and the UDB version for the target, configure and tune, and then tell the DBAs where to find the replica of the data. Create a separate tablespace on the target and just replicate into it. If nothing else, you then only have to back up the big master database.
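
On the target side, a Replicat parameter file maps the incoming source tables onto target tables – this is where the heterogeneous part lives. Here is a sketch using the same hypothetical names as above; note that for cross-vendor replication you would typically also generate a source-definitions file with GoldenGate’s DEFGEN utility so Replicat knows the source table layouts:

    -- dirprm/repa.prm: apply trail data as transactions on the UDB target
    -- ("centraldsn", the schema names, and the paths are hypothetical placeholders)
    REPLICAT repa
    TARGETDB centraldsn, USERID ggadmin, PASSWORD ggpw
    SOURCEDEFS ./dirdef/billing.def
    DISCARDFILE ./dirrpt/repa.dsc, PURGE
    MAP billing.*, TARGET central.*;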

In the case of serious integration issues, with many systems on many RDBMSs needing to talk, this is a lot cleaner than what most of us are doing, and a lot faster to adapt to changing table/column configurations. If this had been available on that first project I reference above, perhaps my team wouldn’t have grown so quickly from tiny to huge. We’d still have needed DBAs and Systems Admins and Engineers, but the developer count might have been smaller, since almost all of our developer hours were database integration time. We only developed a few applications; our policy was definitely “purchase if possible”. I know that in the mergers and acquisitions space this tool would also be a huge boon. “We need to move data from our new subsidiary into our systems” is perhaps the most dreaded M&A phrase an IT person can hear – or the second most, if “and you’re in charge of the integration; be done by Monday” is the first.

I haven’t used GoldenGate, and I know there are a host of ETL solutions that could be hacked to perform this job, but Oracle lists all of the major database vendors on the supported RDBMS list, and Oracle is pretty good about providing solid support before issuing such a statement. And the relative simplicity is striking. Sure, it will take installation on two (or more) systems, and configuration of both the networking component and the trail file component – it has to know what data you want replicated, and where to send that data – but that’s much less work than writing or hacking tools to do the same job.
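
For a sense of just how little configuration there is, here is a sketch of wiring those pieces together in GGSCI, GoldenGate’s command shell, reusing the hypothetical group names from the parameter files above (exact options vary by source database and version):

    -- on the source: register the capture and pump processes, then start them
    ADD EXTRACT exta, TRANLOG, BEGIN NOW
    ADD EXTTRAIL ./dirdat/aa, EXTRACT exta
    ADD EXTRACT pmpa, EXTTRAILSOURCE ./dirdat/aa
    ADD RMTTRAIL ./dirdat/bb, EXTRACT pmpa
    START EXTRACT exta
    START EXTRACT pmpa

    -- on the target: register the apply process and start it
    ADD REPLICAT repa, EXTTRAIL ./dirdat/bb
    START REPLICAT repa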

So it is worth checking out. I know I would if I were still in IT management. Life is complex enough; let me move all of my data to one DBMS and do all of my calculations, reporting, tabulation, etc. there. And since it is essentially a replication tool, I’d also replicate the data off again so that things like reporting weren’t bogging down the primary database.

And yeah, we have tools to make it even better. If you’re thinking of running GoldenGate over the WAN, watch for updates from our BIG-IP WOM team, but I’m sticking with my general rule not to plug products.

It certainly does appear that GoldenGate is going to usher in the golden age of data mobility, which would be a good thing: data integration is one of the sticking points in highly adaptable IT.



More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
