DIY Enterprise DevOps | @DevOpsSummit @Datical #DevOps #Microservices

Insights into the DIY DevOps Dilemma

In Enterprise DevOps, It’s Not Always Better to Roll Your Own

I read an insightful article this morning from Bernard Golden on DZone discussing the DevOps conundrum facing many enterprises today – is it better to build your own DevOps tools or go commercial?  For Golden, the question arose from his observations at a number of DevOps Days events he has attended, where typically the audience is composed of startup professionals:

I have to say, though, that a typical feature of most presentations is a recitation of the various open source products and components and how they integrated them to implement their solution. In a word, how they created their home-grown solution. Given that many of these speakers hail from startups with small teams and a focus on conserving cash, this approach makes sense. Moreover, given that these are typically small teams working at companies following the Lean Startup approach, using open source that allows rapid change as circumstances dictate makes sense as well. And, in any case, startups need to solve problems today because who knows what the future will bring?

That last part is what sparks the question – what does the future hold?  For that startup that begins to scale and grow, what are the future implications of building and, more importantly, trying to maintain a homegrown solution as more teams, products, and use cases proliferate?  “And for enterprises, which must plan for the future,” Golden writes, “an approach that doesn’t have a long-term time horizon is problematic, to say the least.”

The first issue Golden sees in a DIY DevOps approach is the unspoken presumption that the same intensity of interaction and collaboration experienced at a startup can scale to a larger organization, or is even achievable within a large enterprise.  Golden writes, “in an enterprise, the kind of ‘he sits two seats away from me, so I can just turn to him and ask a question’ is unachievable,” arguing that “solutions based on proximity and immediate response to problems is not scalable.”  Large IT organizations need a solution that scales across the many different applications they develop and support, and in Golden’s opinion “Homegrown solutions invariably are written for a limited use case that reflects the situation at the moment and are difficult to modify when new requirements appear associated with a new use case.”

This perspective is interesting to me for the simple fact that I’ve read a great deal about how a number of large enterprises like Macy’s, Nationwide and Highmark, heck, even IBM, are in various stages of tackling this issue right now, and are reporting a great deal of success in their efforts.  The DevOps leaders in these organizations have embraced the idea of a DevOps culture where development and operations collaborate closely together and are working hard to systematize those interactions.  On the flip side, though, these organizations are, to Golden’s point, leveraging commercial DevOps solutions pretty heavily in order to achieve their goals for technical processes like Continuous Delivery.

Another issue Golden sees in the DIY DevOps approach is the potential for promoting the unique snowflake problem to a system-level issue rather than just a one-off application issue.  “It’s fantastic that the application resources themselves are standardized [in DevOps], but a bespoke system invariably falls further and further behind commercial systems, particularly those that take responsibility for selecting, integrating, and supporting one or more open source components,” Golden argues.  In this scenario, the vendor supported open source solution benefits from the wide community of developers working to make it better, increasing the rate of innovation over a homegrown solution.  Additionally, the vendor becomes responsible “to make sure all the components are properly integrated” to the benefit of all customers, particularly those in large organizations.

We’ve seen this scenario play out many times with our customers.  Datical is built on Liquibase, the leading open source solution for versioning and migrating the database, and our task is to ensure the solution is viable for large enterprises in terms of supporting their myriad use cases as well as their requirements for scalability and reliability.  We are often approached by a team that has invested years in supporting Liquibase within their organization, but has reached a point where either new requirements dictate reallocating resources to more strategic initiatives, or they simply want to get out from under the overhead of maintaining their homegrown Liquibase implementation.  Perhaps even more often, a large team investigating Liquibase as a possible solution contacts us because they have realized the kind of investment they would have to make, in time and money, to customize Liquibase to their use cases and environments.
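To make the Liquibase discussion above more concrete: the unit of work teams end up maintaining is the changelog.  As a rough illustration (the table and author names here are hypothetical, not from any customer), a minimal Liquibase XML changelog that versions a single schema change might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

  <!-- Each changeSet is applied exactly once per database; Liquibase
       tracks what has run in its DATABASECHANGELOG table, keyed by
       the id/author/filename tuple. -->
  <changeSet id="create-customer-table" author="example.dev">
    <createTable tableName="customer">
      <column name="id" type="bigint" autoIncrement="true">
        <constraints primaryKey="true" nullable="false"/>
      </column>
      <column name="name" type="varchar(255)"/>
    </createTable>
    <!-- An explicit rollback makes the change reversible on demand. -->
    <rollback>
      <dropTable tableName="customer"/>
    </rollback>
  </changeSet>
</databaseChangeLog>
```

Applying it is a single CLI call along the lines of `liquibase --changeLogFile=changelog.xml update`.  The homegrown overhead described above typically comes not from changelogs like this one, but from the custom orchestration, environment handling, and approval workflows that teams build up around such calls.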

The final issue Golden raises in the DIY DevOps dilemma is that of continuity.  “It’s fantastic that you have a member of your staff who is talented and creative and puts together your DevOps system,” writes Golden, “However, someday he or she will be gone, and someone else will have to maintain the system.”  Going back to Golden’s argument that the enterprise has to plan for long-term time horizons, this is an important point to consider.  IT often complains of the cost of supporting and maintaining legacy systems, and in some cases it’s possible that a DIY DevOps solution will end up being one of those legacy systems.  You could certainly argue that an internal DevOps system, because of its high visibility, will have staff members clamoring to work on it after the original maintainer departs, but it’s still an issue that should be carefully examined before committing to a course of action.

All of these issues lead to Golden’s closing argument, which is salient.  When considering a DIY DevOps approach, what you’re really thinking about is how you’re going to allocate your finite resources towards achieving your goals.  If resources are committed to developing and maintaining a DevOps system or suite of tools, then those resources can’t be used elsewhere.  In companies that were born in the cloud and whose business models rest upon their ability to devise new and innovative technologies, rolling their own DevOps probably makes sense.  For a large commercial bank, however, with core competencies in things like finance and investment, it is probably the better course of action to purchase a commercial DevOps solution instead, freeing up precious resources to focus on serving their customers through innovative financial products and services.

More Stories By Rex Morrow

Rex is the Marketing Director at Datical, a venture-backed software company whose solution, Datical DB, manages and simplifies database schema change management in support of high velocity application releases. Prior to Datical, Rex co-founded Texas Venture Labs, a startup accelerator at the University of Texas, and received his MBA from the McCombs School of Business. Before graduate school, Rex served as a Captain in the U.S. Army, and was awarded two bronze stars during combat deployments in Iraq.
