
Without a Strong PaaS, ITaaS, DevOps & IaaS Fall Short

Delivering IT as a service requires transformative efforts across all of IT

To lower IT operational costs and/or to become more agile, the business must simplify the processes to deliver and manage infrastructure and the applications running on that infrastructure. Focusing on one without the other is simply applying yet another band-aid to an already hampered environment. Delivering IT as a service requires transformative efforts across all of IT and a re-evaluation of the metrics currently used to judge success. Achieving these goals demands a new platform and approach to delivering data and applications to users.

I was recently reviewing a reference architecture for Infrastructure-as-a-Service (IaaS) and was confounded by the sheer complexity still required to deliver what amounts to a starting point for the higher-level task of deploying software. Perhaps I set the bar too high, but if I were a CIO, any new infrastructure investment I made today would need to be part of a self-aware, automatically elastic resource pool. That is, when I plug in new hardware (e.g., server, storage, network), I'm asked a couple of basic questions about allocation and, voilà, the hardware is automatically incorporated into one or more resource pools. Moreover, there's software that sits on top of that pool and allocates it out to users on a metered basis. Any further time spent on operational configuration, engineering and deployment is simply wasted effort.
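To make that picture concrete, here is a minimal sketch, in Python, of what such a self-aware pool might look like: registering a node answers the one allocation question, and a metering layer hands capacity out to consumers. Every class and function name here is hypothetical; it illustrates the idea, not any vendor's API.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """A piece of hardware (server, storage, network) plugged into the environment."""
        name: str
        cpus: int
        memory_gb: int

    @dataclass
    class ResourcePool:
        """Aggregates capacity and meters what each consumer has drawn from it."""
        nodes: list = field(default_factory=list)
        allocations: dict = field(default_factory=dict)  # consumer name -> CPUs allocated

        def register(self, node: Node) -> None:
            # The only "configuration" is the basic allocation question:
            # which pool does this node join? Everything else is automatic.
            self.nodes.append(node)

        def total_cpus(self) -> int:
            return sum(n.cpus for n in self.nodes)

        def allocate(self, consumer: str, cpus: int) -> bool:
            used = sum(self.allocations.values())
            if used + cpus > self.total_cpus():
                return False  # pool exhausted; a real system would trigger elastic growth
            self.allocations[consumer] = self.allocations.get(consumer, 0) + cpus
            return True

    pool = ResourcePool()
    pool.register(Node("rack1-blade7", cpus=32, memory_gb=256))  # "plug in" new hardware
    pool.allocate("finance-app", cpus=8)                         # metered, self-service allocation

Usage amounts to two lines: register the hardware, then draw metered capacity from the pool; everything in between is exactly the engineering effort the vision says should disappear.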

This vision of elastic and automated configuration is not a pipe dream; otherwise, it would be too costly for a business like Amazon to deliver its services at the price point that it does. If Amazon required the amount of human involvement in managing infrastructure that most IT organizations currently require, the cost of delivering its EC2 and S3 services would not be sustainable, let alone continue to shrink.

As CIO of Company A, it then occurs to me that I have two choices: a) change the way I operate to look more like Amazon, or b) let Amazon do what it does best and subscribe to its services. The former is a daunting task for sure. Transforming IT to deliver as a service has an impact deep in the DNA of the business itself. There's a whole lot of risk and no guarantee of success. Moreover, the key metric to demonstrate success is elusive at best, as it needs to be a combination of productivity, cost and agility weighted against a number that is very difficult to compute based on the current architecture. Hence, how can I demonstrate to executive management and the Board that their investment in transformation was money well spent? The latter is interesting, but I need to overcome internal perceptions surrounding lack of security and the issues related to CapEx versus OpEx spending. Either way, or maybe if I choose a combination of both in a hybrid architecture, all I have to show at the end of the day is a better way to allocate infrastructure for the purposes of deploying and managing applications, at which point we encounter a whole new set of complexities.

One of the biggest hurdles surrounding application management is the diversity and lack of integration of the application stacks. Consider a basic financial application workload that is used companywide. There's a database, which must be able to handle hundreds of transactions a minute. There's the application itself, which is composed of some core executable modules and a bunch of business rules defined within the application. The core executables probably run in some form of middleware that enables them to scale, which is a separate layer that must be individually deployed and managed. Of course, this is the bare minimum to make the application work. There also need to be security layers made up of anywhere from five to ten applications, plus tools for managing disaster recovery, high availability and extraction of data for external reporting. All these layers must be integrated to work seamlessly. Moreover, this is just one of many applications an enterprise operates.
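As a rough sketch of how many separately managed pieces that single workload implies, the layers could be enumerated as data and walked through one deployment at a time. The layer names and roles below are illustrative only, not drawn from any particular product.

    # Each entry is a separately deployed, separately managed moving part that still
    # has to be integrated with the others before the workload is usable.
    FINANCE_APP_STACK = [
        {"layer": "database",    "role": "hundreds of transactions a minute"},
        {"layer": "middleware",  "role": "scales the core executables"},
        {"layer": "application", "role": "core executables plus business rules"},
        {"layer": "security",    "role": "five to ten separate applications and tools"},
        {"layer": "dr-ha",       "role": "disaster recovery and high availability"},
        {"layer": "reporting",   "role": "extracts data for external reporting"},
    ]

    def deploy(stack):
        # In today's model every step below is a human-driven engineering task;
        # the argument here is that a platform should absorb this work.
        for piece in stack:
            print(f"deploying {piece['layer']}: {piece['role']}")

    deploy(FINANCE_APP_STACK)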

Now, imagine that the entire integrated application stack just described is a complete package (workload) that must now be made to work with a given hardware infrastructure, usually chosen by a separate team that has little to no experience with the application architecture or, worse, selected by procurement off an approved vendor list. Moreover, it requires a combination of skills from both the application and operations teams to figure out how best to scale the application over time as demand rises. As George Takei, of Star Trek fame, likes to say, "oh myyy!" In short, there are just too many moving parts requiring too much human intervention and too much engineering and configuration relative to the value received. It's no wonder the business is seriously considering alternative means of acquiring IT services, such as Software-as-a-Service (SaaS). For this reason, over the next few years IT must, and will, start to seriously consider Platform-as-a-Service (PaaS).

Most times I recommend evolution over revolution, as it is usually more palatable to the business. However, continuing to deliver IT in this manner is unsustainable and is not delivering the agility and speed required by the business. PaaS changes the way we think about building applications, and leveraging PaaS capabilities will require applications to be rewritten or migrated. These applications will not directly implement a unique instance of application infrastructure, but will leverage services that are designed to meet stated service levels. In turn, this simplifies the deployment and operation of the application as well as opens the architecture up to leverage services with better economics over time. It will remove the requirement for an infrastructure and operations team to figure out how to optimize a selected set of hardware to meet the goals of a single application and let them focus instead on how to deliver services.
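Here is a minimal sketch of what "leveraging services" looks like from the application's side, assuming a twelve-factor style PaaS that injects a bound database endpoint through the environment. The DATABASE_URL variable, the sqlite stand-in and the orders table are all hypothetical; a real platform has its own binding conventions.

    import os
    import sqlite3  # stand-in for whatever database service the platform actually binds

    # Before PaaS: the application team selects, installs and tunes its own database.
    # With PaaS: the platform provisions a database service at a stated service level
    # and simply tells the application where to find it.
    DATABASE_URL = os.environ.get("DATABASE_URL", ":memory:")

    def record_order(order_id: str, amount: float) -> None:
        conn = sqlite3.connect(DATABASE_URL)
        conn.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT, amount REAL)")
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        conn.commit()
        conn.close()

    record_order("INV-1001", 249.99)

The point is not the database call itself but that nothing in the application dictates, or even knows, how the underlying service is engineered, scaled or operated.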

Moreover, PaaS delivers a shared common platform, which is why PaaS becomes such a critical component in tying ITaaS, IaaS and DevOps together into the singular goal of design, build, test, run and terminate. IaaS is merely a commodity layer that offers high-value compute services and can support the selected PaaS. DevOps provides the framework and structure for configuring the application support services, such as networking, security, authorization, backup, etc.

In the end, instead of multiple disjointed teams each focused on their own specializations, there will be a single team focused on the goal of delivering IT services to the business. This is all driven by the focus and perspective fostered through a move toward PaaS.


More Stories By JP Morgenthal

JP Morgenthal is an internationally renowned thought leader in the areas of IT transformation, modernization, and cloud computing. JP has served in executive roles within major software companies and technology startups. His areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. He routinely advises C-level executives on the best ways to use technology to derive business value. JP is a published author of four trade publications, the most recent being "Cloud Computing: Assessing the Risks". He holds both a Master's and a Bachelor of Science in Computer Science from Hofstra University.
