By JP Morgenthal
January 14, 2014 04:15 PM EST
To lower IT operational costs, become more agile, or both, the business must simplify the processes used to deliver and manage infrastructure and the applications running on it. Focusing on one without the other simply applies yet another band-aid to an already hampered environment. Delivering IT as a service requires transformative effort across all of IT and a re-evaluation of the metrics currently used to judge success. Achieving these goals demands a new platform and a new approach to delivering data and applications to users.
I was recently reviewing a reference architecture for Infrastructure-as-a-Service (IaaS) and was confounded by the sheer complexity still required to deliver what amounts to a starting point for the higher-level task of deploying software. Perhaps I set the bar too high, but if I were a CIO, any new infrastructure investment I made today would need to be part of a self-aware, automatically elastic resource pool. That is, when I plug in new hardware (e.g., server, storage, network), I'm asked a couple of basic questions about allocation and, voilà, the hardware is automatically incorporated into one or more resource pools. Moreover, software sitting on top of that pool allocates it out to users on a metered basis. Any further time spent on operational configuration, engineering and deployment is simply wasted effort.
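The resource-pool behavior described above can be sketched in a few lines. This is a toy model, not any vendor's API: `Node`, `ResourcePool`, `plug_in` and `allocate` are invented names, and the "metering" is just a GB-hour counter.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """A piece of newly plugged-in hardware (server, storage, network)."""
    name: str
    capacity_gb: int


@dataclass
class ResourcePool:
    """A self-aware pool: hardware folds in, usage is metered per user."""
    name: str
    nodes: list = field(default_factory=list)
    usage: dict = field(default_factory=dict)  # user -> metered GB-hours

    def plug_in(self, node: Node) -> None:
        # The "couple of basic questions about allocation" step:
        # the hardware is simply incorporated into the pool.
        self.nodes.append(node)

    @property
    def capacity_gb(self) -> int:
        return sum(n.capacity_gb for n in self.nodes)

    def allocate(self, user: str, gb: int, hours: int) -> None:
        # Meter consumption per user rather than dedicating hardware.
        self.usage[user] = self.usage.get(user, 0) + gb * hours


pool = ResourcePool("general-compute")
pool.plug_in(Node("rack1-server1", capacity_gb=512))
pool.plug_in(Node("rack1-server2", capacity_gb=512))
pool.allocate("finance-app", gb=64, hours=10)
print(pool.capacity_gb)            # 1024
print(pool.usage["finance-app"])   # 640 GB-hours
```

The point of the sketch is that nothing beyond `plug_in` and `allocate` involves a human; everything else would be wasted effort under the vision described here.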
This vision of elastic and automated configuration is not a pipe dream; otherwise, it would be too costly for a business like Amazon to deliver its service at the price point that it does. If Amazon required the amount of human involvement in managing infrastructure that most IT organizations currently require, the cost of delivering its EC2 and S3 services would not be sustainable, let alone continue to shrink.
As CIO of Company A, it then occurs to me that I have two choices: a) change the way I operate to look more like Amazon, or b) let Amazon do what it does best and subscribe to its services. The former is a daunting task for sure. Transforming IT to deliver as a service reaches deep into the DNA of the business itself. There's a whole lot of risk and no guarantee of success. Moreover, the key metric to demonstrate success is elusive at best, as it needs to be a combination of productivity, cost and agility weighed against a number that is very difficult to compute from the current architecture. Hence, how can I demonstrate to executive management and the Board that their investment in transformation was money well spent? The latter is interesting, but I need to overcome internal perceptions about lack of security and the issues related to CapEx versus OpEx spending. Either way, or perhaps with a combination of both in a hybrid architecture, all I have to show at the end of the day is a better way to allocate infrastructure for deploying and managing applications, at which point we encounter a whole new set of complexities.
One of the biggest hurdles in application management is the diversity and lack of integration of application stacks. Consider a basic financial application workload used companywide. There's a database, which must handle hundreds of transactions a minute. There's the application itself, comprising some core executable modules and a bunch of business rules defined within the application. The core executables probably run in some form of middleware that enables the application to scale, a separate layer that must be individually deployed and managed. Of course, this is the bare minimum to make the application work. There also need to be security layers made up of anywhere from five to ten applications, plus tools for managing disaster recovery, high availability and extracting data for external reporting. All these layers must be integrated to work seamlessly. Moreover, this is just one of many applications an enterprise operates.
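Tallying the layers just described makes the integration burden concrete. The component names below are illustrative placeholders, not a prescribed stack; the point is how many separately deployed and managed pieces one workload drags along.

```python
# Hypothetical inventory of the "basic financial application" workload:
# each entry is a separately deployed, managed and integrated component.
financial_workload = {
    "database": ["transactional RDBMS"],
    "application": ["core executables", "business rules"],
    "middleware": ["scaling/app-server layer"],
    "security": ["firewall", "identity/access mgmt", "audit logging",
                 "secrets store", "intrusion detection"],
    "operations": ["disaster recovery", "high availability",
                   "reporting data extraction"],
}

moving_parts = sum(len(parts) for parts in financial_workload.values())
print(moving_parts)  # 12 components to integrate, for a single application
```

Multiply that dozen by the number of applications an enterprise operates and the scale of the integration problem becomes clear.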
Now, imagine that the entire integrated application stack just described is a complete package (workload) that must be made to work with a given hardware infrastructure, usually chosen by a separate team with little to no experience with the application architecture, or worse, selected by procurement off an approved vendor list. Moreover, it takes a combination of skills from both the application and operations teams to figure out how best to scale the application as demand rises. As George Takei, of Star Trek fame, likes to say, "oh myyy!" In short, there are just too many moving parts requiring too much human intervention and too much engineering and configuration relative to the value received. It's no wonder the business is seriously considering alternative means of acquiring IT services, such as Software-as-a-Service (SaaS). For this reason, over the next few years IT must and will start to seriously consider Platform-as-a-Service (PaaS).
Most times I recommend evolution over revolution, as it is usually more palatable to the business. However, continuing to deliver IT in this manner is unsustainable and is not delivering the agility and speed required by the business. PaaS changes the way we think about building applications, and leveraging PaaS capabilities will require applications to be rewritten or migrated. These applications will not directly implement a unique instance of application infrastructure; instead, they will leverage services designed to meet stated service levels. In turn, this simplifies the deployment and operation of the application and opens the architecture up to services with better economics over time. It removes the requirement for an infrastructure and operations team to figure out how to optimize a selected set of hardware to meet the goals of a single application, letting them focus instead on how to deliver services.
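A minimal sketch of the shift this paragraph describes: the rewritten application does not stand up its own database instance, it binds to a platform-provided service whose location arrives from the environment. `DATABASE_URL` is a common PaaS convention (Heroku popularized it) and is used here as an assumed example, not a requirement of any particular platform.

```python
import os


def get_database_url(default: str = "postgres://localhost:5432/dev") -> str:
    """Bind to whatever database service the platform provisions.

    The platform, not the application team, is responsible for meeting
    the stated service levels (HA, backups, scaling) behind this URL.
    """
    return os.environ.get("DATABASE_URL", default)


# The platform injects the binding at deploy time (simulated here):
os.environ["DATABASE_URL"] = "postgres://paas-db.internal:5432/finance"
print(get_database_url())  # the platform-managed endpoint, not a local install
```

Swapping the backing service for one with better economics then becomes a change to the binding, not a change to the application.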
Moreover, PaaS delivers a shared common platform, which is why PaaS is such a critical component in tying together ITaaS, IaaS and DevOps into a singular lifecycle of design, build, test, run and terminate. IaaS is merely a commodity layer that offers high-value compute services and can support the selected PaaS. DevOps provides the framework and structure for configuring the application support services, such as networking, security, authorization, backup, etc.
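One way to read "DevOps provides the framework and structure for configuring the application support services" is that those concerns are declared alongside the application and validated and applied by the platform, rather than hand-engineered per deployment. The schema below is invented purely for illustration.

```python
# Hypothetical declarative description of support services for one workload;
# the keys mirror the concerns named in the text (networking, security, backup).
support_services = {
    "networking": {"ingress": "https", "internal_only": False},
    "security": {"tls": True, "authorization": "role-based"},
    "backup": {"schedule": "daily", "retention_days": 30},
}


def validate(config: dict) -> bool:
    """Minimal sanity check a platform might run before deployment."""
    required = {"networking", "security", "backup"}
    return required.issubset(config)


print(validate(support_services))  # True: every required concern is declared
```

Because the declaration travels with the workload, the same configuration applies identically through design, build, test, run and terminate.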
In the end, instead of multiple disjointed teams each focused on their own specializations, there will be a single team focused on the goal of delivering IT services to the business. This is all driven by the focus and perspective fostered through a move toward PaaS.