You Need #DevOps | @DevOpsSummit @DMacVittie #CD #APM #Monitoring

For those unfamiliar with the convention: as a developer working in marketing for an infrastructure automation company, I try to distinguish the different flavors of DevOps by capitalizing the part that benefits most in a given scenario. In this case we’re talking about operations improvements – DevOPS. While devs – particularly those involved in automation or DevOps – will find it interesting, this piece really speaks to growing issues Operations teams are facing.

The problem is right in front of us, we’re confronting it every day, and yet a ton of us aren’t fixing it for our organizations; we’re merely kicking the can down the road.

The problem? Complexity. Let’s face it, the IT world is growing more complex by the week. Sure, SaaS simplified a lot of complex apps that either weren’t central to the business we’re in or were vastly similar for the entire market, but once you get past those easy pickings, everything is getting more complex.

As I’ve mentioned in the past, we now have OpenStack on OpenStack (OoO). Yes, that is indeed a thing – its stated purpose is to use nested complexity to solve complexity problems. But set that aside: rolling out an enterprise NoSQL database, or worse a Big Data installation, means standing up a complex collection of systems, some of which might be hosted in VMs or the cloud, adding yet another layer of configuration complexity. The same is true of nearly every “new” development going on. Want SDN? Be prepared to install a swath of supporting systems. The list goes on and on.

In fact, what started this train of thought for me was digging into Kubernetes. Like most geeks, I started with the getting-started app – we have devolved to “try first, read later” in our industry, for good or bad. The Kubernetes Getting Started Guide is a good example of how bad our complexity has gotten. To make use of the guide you need Docker, GKE, and GCR; then you need bash, Node, and a command line with an array of parameters that, because you’re just getting started, you have no idea what they do.
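To make that point concrete, here is a hedged sketch of the kind of command sequence a newcomer faces just to reach “hello world” on GKE. The project name, cluster name, and image tag are hypothetical placeholders, and the exact flags vary by guide version:

```shell
# Hypothetical walk-through of a getting-started flow; my-project and
# hello-cluster are placeholder names, not taken from the guide itself.
gcloud config set project my-project
gcloud container clusters create hello-cluster --num-nodes=3 --zone=us-central1-b

# Build the sample Node app into a Docker image and push it to GCR.
docker build -t gcr.io/my-project/hello-node:v1 .
docker push gcr.io/my-project/hello-node:v1

# Deploy and expose it -- each of these flags is opaque to a first-timer.
kubectl create deployment hello-node --image=gcr.io/my-project/hello-node:v1
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
kubectl get service hello-node   # then wait for an external IP
```

That is three separate toolchains (gcloud, docker, kubectl) before a single line of your own code runs – which is exactly the complexity problem.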

We need time to get this stuff going, and time is something we have had less and less of over the last decade (at least). The amount and complexity of the gear Operations oversees has been increasing, and the number of instances – virtual or cloud – has too, both at a faster rate than staffing at most organizations. That’s a growing problem in its own right.

One does not simply “deploy Kubernetes,” it appears. One has to work at it, the way one has to struggle with Big Data installs or UCE configuration – or even, in some orgs, Linux installations (which are still handled individually and by hand in more places than makes sense to me; then again, I work for a company that sponsors a Linux install automation open source project, so perhaps my view is jaded by that experience).

To find the time to figure out and implement toolsets like Kubernetes and OoO – whose stated goal is to make your life easier in the long run – we need to remove the overhead of day-to-day operations. That’s where DevOPS comes in. If the man-hours to deploy a server or an app can be reduced to zero or near zero through automation tools and a strong DevOps focus, then that recovered time can be reinvested in new tools that further improve operations. Yes, it’s a circular problem – you need time to get time – but simple, easy-to-master tools can free time to tackle the more complex ones. Something like my employer’s Stacki project is a simple “drop in the ISO, answer questions about the network, install, then learn a simple command line.” There are a lot of sophisticated tools out there that follow this type of install pattern and free up an impressive amount of time.

Most application provisioning tools are relatively painless to set up these days (though that wasn’t always true) and can reap benefits quickly. My first run with Ansible, by way of example, had me deploying apps in a couple of hours. While it would take longer to configure it to deploy complex datacenter apps, most of us can find a few hours over the course of a couple of weeks, particularly if we convince management of the potential benefits beforehand. As an added benefit, application provisioning tools are increasingly including network provisioning for most vendors, further reducing time spent on manual tasks (once again, after you figure it out).
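As an illustration of how small that first Ansible investment can be, here is a hedged sketch of an ad-hoc run. The inventory file, host names, group name, and package are hypothetical stand-ins, not from any real environment:

```shell
# Hypothetical inventory: two hosts grouped as "web".
cat > hosts.ini <<'EOF'
[web]
web01.example.com
web02.example.com
EOF

# One ad-hoc command pushes a package to every host in the group --
# no per-server SSH sessions, no hand-typed apt commands on each box.
ansible web -i hosts.ini -m ansible.builtin.apt \
  -a "name=nginx state=present" --become
```

A full playbook comes later; the point is that the first useful result arrives in hours, not weeks.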

And that’s the real reason we need DevOPS. People talk about repeatability, predictability, reduced human error… all true, but each comes with its own trade-offs. The real reason is to free up time so we can focus on getting the more complex systems rolled out and settled, without interrupting our day for the standard maintenance work that consumes an inordinate amount of time.

In the end, isn’t that what we would all love to have – the repeated steps largely automated, so we can look into new tools that improve operations or help drive the organization forward? Take some time and invest in cleaning up ops, so that you can free time to move things forward. It’s worth the investment. In the case of servers, the effort to go from nothing to hundreds of machines can drop from (hundreds of machines × man-hours per machine) to “tell it about the IPs and boot the machines to be configured.” That’s huge. Even if you sit and watch the installs to catch any problems, the faster server provisioning toolsets will be done with those hundreds of machines in an hour or two. Which means that even after troubleshooting, you’re likely to be off doing something else the next day. Not a bad ROI for the little bit of time it takes to get started. Reinvest some of that savings in the next automation tool and compound the return… Soon you’re in nirvana: researching and implementing, while installs, reinstalls, and fixes to broken apps are handled by reviewing a report and telling the system in question (app or server provisioning) to fix it or install it.
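The back-of-the-envelope math above can be sketched as a quick shell calculation. The 200-machine count and 2 man-hours per machine are assumed illustrative numbers, not measurements:

```shell
# Illustrative numbers only: 200 servers, ~2 man-hours each by hand.
machines=200
hours_per_machine=2
manual_hours=$((machines * hours_per_machine))

echo "Manual provisioning: ${manual_hours} man-hours"   # 400 man-hours
echo "Automated provisioning: ~2 hours wall-clock, mostly spent watching"
```

Even if the real numbers at your shop are half or double these, the gap between hundreds of man-hours and a couple of hours of watching installs is the whole argument.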

It’s pretty clear that complexity will continue to increase, and tools to simplify it will continue to come along. It is definitely worthwhile to invest a little time in those tools so you can invest more in the new systems they enable.

But that’s me – I’m a fan of exploring the possible, not doing the same stuff over and over. I assume most of IT feels the same, if only they had the time. And we can have the time, so let’s do it.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
