
Creating Harmony When Cloud and On-Premise Worlds Collide

Integrating data across diverse SaaS applications with existing on-premise solutions has proved exceptionally challenging

In recent years, IT departments have been confronted with the convergence of several highly disruptive trends that have fundamentally altered the enterprise IT landscape, particularly when it comes to how data and applications are managed. Mobility and the rise of BYOD (bring your own device), as well as the growth of social media and the electronic information it generates, have each proved transformative. But perhaps no shift has been more seismic than the adoption of cloud and SaaS-based applications led by CIOs who see the value proposition associated with outsourcing many complex IT operations.

However, integrating data across diverse SaaS applications with existing on-premise solutions has proven exceptionally challenging. To streamline this integration without slowing adoption, IT stakeholders are turning to cloud-based integration solutions that can curtail complexity and IT oversight while enabling organizations to better leverage their information capital to drive business objectives. Indeed, according to a recent report by analyst firm MarketsandMarkets, the global cloud services brokerage (CSB) market is on track to grow from $1.57 billion in 2013 to $10.5 billion by 2018, a compound annual growth rate of more than 45% over the five-year period.

In this article, we will provide advice to IT leaders for creating sustainable environments using hybrid integration between SaaS technologies and existing on-premise applications. We will also explore the top considerations for building out a successful cloud integration strategy that offers the scalability and flexibility to withstand fluctuations in enterprise data management needs.

Start by Asking the Right Questions
Over the past few years, "Cloud" has transformed from the buzzword of the moment - all the rage but lacking concrete definition - to an efficient, widely recognized enabler of scalable IT operations. Despite the increasing ubiquity and viability of the cloud delivery model, it's important to remember that cloud is not "IT in a box." No one cloud service provider can meet all the complex IT needs of a single organization. By and large, enterprises evaluate and onboard an array of purpose-built solutions from diverse cloud providers. As a result, the need to successfully integrate them not only with each other, but also with traditional on-premise application-to-application (A2A) and business-to-business (B2B) systems is critical. The multitude of complex integrations - A2A, B2B, on-premise-to-SaaS/cloud, and cloud-to-cloud (C2C) - requires a clear-cut integration strategy.

A critical first step in developing an integration strategy is to ask and answer a few key questions, the first of which is "what problem is the integration solving?" While achieving streamlined integration between cloud-based systems like Magento, NetSuite, SAP, Ariba, and salesforce.com is one aspect of a full-fledged strategy, it's important to remember the challenge extends beyond cloud-to-cloud integration. In reality, what many people today refer to as "cloud integration" is actually hybrid integration - integration not only between cloud systems, but between cloud and on-premise applications. Determining the specific integration goal - whether it is strictly cloud-to-cloud, or a larger hybrid model - ensures the strategy scales to both immediate and long-term integration needs.

Once you consider what problem the integration will solve, it's important to consider how integration will solve the problem. As the number of systems to be integrated grows, the number of potential interface points grows quadratically - with n systems, a pure point-to-point model implies up to n(n-1)/2 distinct connections - and traditional, manually driven point-to-point integration can quickly become overwhelming (the sketch below illustrates the arithmetic). Each time an individual application is altered, or a trading partner changes its specification interface, IT must review all external connections for potential impact. An upgrade cycle for a large ERP system may spawn dozens, hundreds, or even thousands of integration projects across several departments and external trading partners.
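
To make that arithmetic concrete, here is a minimal, illustrative sketch (Python, not from the original article) comparing the interfaces implied by a pure point-to-point model with a hub-based model such as an iPaaS or integration broker; the system counts are arbitrary examples.

    # Illustrative only: compare the number of integration interfaces implied by
    # a point-to-point model with a hub (broker/iPaaS) model.
    def point_to_point_interfaces(n_systems: int) -> int:
        """Every pair of systems needs its own connection: n * (n - 1) / 2."""
        return n_systems * (n_systems - 1) // 2

    def hub_interfaces(n_systems: int) -> int:
        """Each system connects once to a central integration hub."""
        return n_systems

    for n in (5, 10, 25, 50):
        print(f"{n:>3} systems: point-to-point = {point_to_point_interfaces(n):>5}, hub = {hub_interfaces(n):>3}")

    # 50 systems already implies 1,225 point-to-point connections, each of which
    # must be reviewed whenever one application or partner specification changes.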

Continuing to rely on this point-to-point integration model will become untenable as cloud adds another layer of complexity to the integration landscape. In order to avert chaos, enterprises are actively leveraging integration to create an interconnected web that holistically addresses data management and integration challenges across all of these disparate systems and applications. If an integration strategy is designed with a broader goal in mind, it is much more likely that the same strategy can be leveraged not only to solve immediate integration challenges, but future demands as well.

Identifying where integration is needed and how it can benefit an organization is an important first step. But once the decision has been made to move forward, there are a few key considerations that CIOs must take into account to successfully build out a strategy with staying power.

Reading the Signs: Spotting and Addressing Complexity
Anticipating the areas in which integration complexity is most likely to arise is crucial to the development of a flexible, cost-effective integration strategy. The following are two of the usual suspects of which CIOs should be aware:

  1. SaaS APIs: Many cloud providers promise to deliver a simple-to-use web API, but this is rarely the reality. Specifications for many SaaS APIs run dozens, if not hundreds, of pages, and can be a major headache for internal teams unfamiliar with the nuances of integration. Moreover, APIs often change as the SaaS applications behind them evolve, creating a source of ongoing complexity.
  2. Data Translation: The potential for complexity, however, does not end once the APIs are successfully integrated. Translating data between different SaaS applications, as well as between SaaS and on-premise systems, can be challenging, and this translation should be factored into the complexity calculus (a minimal mapping sketch follows this list). Data that is not properly translated will be rendered useless, and backtracking to fix the glitch can add time and expense to business-critical projects. As a general rule, a bug that costs one dollar to fix during development will cost 10 dollars to fix during quality assurance, and 100 dollars to fix once in production. This backtracking approach can prove particularly brittle when new systems are added to the ecosystem.
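
As a rough illustration of the translation step described in item 2, the following sketch maps a record from a hypothetical cloud CRM's API format into the shape a hypothetical on-premise ERP import might expect; all field names, formats, and system names here are assumptions for illustration, not taken from any particular product.

    # Hypothetical field-mapping sketch; the record shapes and field names are invented.
    from datetime import datetime

    # An order as it might arrive from a cloud CRM's API (assumed format)
    crm_order = {
        "OrderId": "SO-1042",
        "OrderDate": "2014-06-30T14:05:00Z",
        "TotalAmount": "1499.00",
        "CurrencyCode": "USD",
    }

    def translate_crm_to_erp(order: dict) -> dict:
        """Map and normalize CRM fields into the layout an ERP import expects."""
        return {
            "document_number": order["OrderId"],
            # Normalize the ISO-8601 timestamp to the compact date the ERP expects
            "posting_date": datetime.strptime(
                order["OrderDate"], "%Y-%m-%dT%H:%M:%SZ").strftime("%Y%m%d"),
            # Amounts arrive as strings; convert explicitly rather than at load time
            "amount": float(order["TotalAmount"]),
            "currency": order["CurrencyCode"],
        }

    print(translate_crm_to_erp(crm_order))
    # {'document_number': 'SO-1042', 'posting_date': '20140630', 'amount': 1499.0, 'currency': 'USD'}

The point is not this particular mapping, but that every such rule has to be defined, tested, and revisited whenever either side of the integration changes.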

A Long-Term Vision: Thinking Beyond the First Integration Project
Integration with cloud is often a daunting prospect, particularly for businesses just beginning to onboard cloud applications as part of their IT strategy. The immensity of a single cloud integration can produce tunnel vision for IT teams, who get so bogged down in an initial project that they fail to consider the long-term implications of the integration and how it will ultimately fit into the overarching IT architecture - a problem already amply demonstrated with the pitfalls of the point-to-point approach. However, the inevitable complexity of integrating multiple applications over time should be sufficient incentive to give any CIO pause before creating a strategy tailor-made for a single integration project.

Even though it will likely require greater upfront investment and effort, organizations must settle on a cohesive sourcing strategy for integration that meets their individual needs. There are three fundamental options for this strategy: a do-it-yourself (DIY) approach based solely on existing knowledge of on-premise software; a DIY approach using a customer-driven integration Platform-as-a-Service (iPaaS); or outsourcing integration entirely to a third-party integration brokerage provider. When determining which of these strategies to adopt, it is important to consider the following:

  1. First, consider the deployment timeline. As departments across the enterprise demand rapid access to new and greater functionality offered by diversifying SaaS applications, IT departments are under mounting pressure to test, procure and deploy these solutions. This is where a CSB can help speed things up based on its experience working with various customers, implementation scenarios and technologies. Even as deployment windows tighten, however, many businesses are only just beginning to build out core competency around integration. For those with the strictest timelines, the option to build out an internal integration function may have already passed, and it may become necessary to bring in a third-party integration provider. While some may initially view these external integration providers as a Band-Aid solution, working with a specialized integration broker can often be the best long-term solution, especially when it comes to cloud integration, where existing IT teams may have less familiarity.
  2. Second, consider the cost of integration over the long term. As the complexity of cloud integration projects continues to increase, building out an internal team will require a capital investment in expert personnel and software. Although it requires greater initial investment, this relatively fixed capital expenditure may be a better use of resources for some organizations. For others, such a large capital expenditure may not be feasible or efficient. Outsourcing projects to an integration broker shifts the cost of integration to an operating expense, reducing or eliminating the up-front cost and providing a more scalable, recurring cost structure.
  3. Once these factors have been weighed, the next decision is: in-house or external? Although SaaS applications for both back-office systems and B2B processes can offer tremendous efficiencies, the coordination and integration required on the back end is no simple matter. While building out in-house integration capabilities is important for some organizations due to commercial or other business considerations, companies that choose this route must recognize it early and take a proactive approach to cultivating the expert staff and resources that will be required to effectively manage and complete integration projects. For those businesses that don't have compelling reasons to keep the integration function in-house, outsourcing may prove more efficient. Cloud Services Brokers (CSBs) have existing integration infrastructure that can be leveraged for rapid deployment, and can increase capacity on demand, offering scalability when and where it's needed most. CSBs also deliver experience and collective intelligence around integration that can offer efficiencies beyond what can be accomplished with internal resources alone.

The key criteria and requirements around data management continue to expand, and cloud integration is at the nexus of this expansion. By planning and executing a comprehensive integration strategy that can efficiently and consistently scale to the evolving integration requirements of the business - including traditional on-premise, back-office systems and cloud-based applications - IT can help ensure long-term scalability and business success. Whether the decision is to bring integration capabilities in-house, outsource integration needs, or use some combination of both, the time to start developing a plan is now.

More Stories By Rob Fox

Rob Fox is Vice President of Application Development for Liaison Technologies, and the architect for several of Liaison’s data integration solutions. Liaison Technologies is a global provider of cloud-based integration and data management services and solutions. He was an original contributor to the ebXML 1.0 specification, is the former chair of marketing and business development for ASC ANSI X12, and a co-founder and co-chair of the Connectivity Caucus. Connect with Rob on Twitter: @robert_fox1
