
DevOps: Bringing 'Life' to Application Lifecycle Management

The DevOps methodology is a straightforward and obvious initiative to cater for the changing face of application development

For most organizations, application releases are tense, pressurized affairs in which risk mitigation and tight deadlines are paramount. The situation is made worse by internal silos and the consequent lack of cohesion, not just within the microcosm of IT infrastructure teams but also across the broader departments of development, QA and operations. With application and business unit stakeholders increasingly demanding that new releases be deployed quickly and successfully, the interdependence of software development and IT operations is being recognized as integral to the successful delivery of IT services. Businesses are also recognizing that this can't be achieved unless the traditional methodologies and silos are readdressed or changed. Cue the emergence of a methodology simply called DevOps.

The advancement and agility of web and mobile applications have been key factors in leading many to question the validity, or even the practicality, of the traditional waterfall methodology of software development. Waterfall's rigorous sequence of conception, initiation, analysis, design, construction, testing, production/implementation and maintenance can seem almost archaic in an age when the industry demands "agility". No one disputes the waterfall methodology's relevance, and certainly not companies such as Sony, which suffered the embarrassment of the rootkit bug. But with web and mobile app releases needing to be deployed rapidly and regularly, can companies really continue down such a long and drawn-out integration process?

Much of the problem stems from legacy IT culture rather than the methodology itself: each individual is responsible solely for their own role, within their specific field, within their particular department. Consequently, within the same company the development team is often seen as the antithesis of operations, driven by constant change in order to meet user demand for frequent delivery of new features. Operations, in stark contrast, is focused on predictability, availability and stability, qualities that are nearly always put at risk whenever development requests that a "change" be introduced.

This disengagement is further exacerbated by development teams delivering code with little or no involvement from their operations counterparts. Additionally, to support rapid deployment, development teams use tools that emphasize flexibility and consequently bear little or no resemblance to the rigid, performance- and availability-oriented toolsets of operations. In fact it is rare to find either team even aware of the other's toolset, let alone taking an interest in sharing or integrating the two.

On the other side, the operations team will do everything it can to stall changes and new features proposed for the production environment in an attempt to mitigate unwanted risk. When a software release is eventually picked up by operations, it is usually only after a laborious process of script creation and config file editing to accommodate deployment on a production runtime environment that differs significantly from the one used by development.

Indeed, it's commonplace to see inconsistencies between the runtime environment development teams run their code on (typically low-resource desktops) and the high-resource, server-OS-based environments operations uses. When development has tested and successfully run everything on a Windows 7 desktop, it's no surprise that failure and chaos ensue at go-live once operations deploys it on a Unix-based server with different Java versions, software load balancers and completely different properties files. What follows is the internal blame game: operations points to an application that isn't secure, needs restarting and isn't easy to deploy, while development claims it worked perfectly well on their workstations and that operations should therefore be capable of seamlessly scaling it and making it work on production server systems.
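
The sketch below is a minimal, hypothetical illustration of that kind of environment drift: it compares a development environment manifest against a production one and reports every setting that differs. The manifests, keys and values are assumptions invented for illustration, not anything prescribed by the article or by a particular tool.

```python
# Hypothetical sketch: surfacing drift between the environment a developer
# tested on and the production runtime operations will deploy to.
# All keys and values below are illustrative assumptions.

dev_env = {
    "os": "Windows 7",
    "java_version": "1.7.0_45",
    "load_balancer": None,
    "properties_file": "app-dev.properties",
}

prod_env = {
    "os": "Solaris 11",
    "java_version": "1.6.0_38",
    "load_balancer": "software-lb",
    "properties_file": "app-prod.properties",
}

def environment_drift(dev: dict, prod: dict) -> dict:
    """Return every setting that differs between two environment manifests."""
    keys = set(dev) | set(prod)
    return {k: (dev.get(k), prod.get(k)) for k in keys if dev.get(k) != prod.get(k)}

if __name__ == "__main__":
    # Report each discrepancy so it can be resolved before go-live,
    # rather than discovered during it.
    for setting, (dev_value, prod_value) in sorted(environment_drift(dev_env, prod_env).items()):
        print(f"{setting}: dev={dev_value!r} prod={prod_value!r}")
```

Even a simple check like this, run by both teams against a shared manifest, turns "it worked on my machine" into a concrete, traceable list of differences.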

This is the disconnect that the methodology termed DevOps was established to address. From the outset, DevOps pushes for collaboration and communication between the development, operations and quality assurance teams. Based on the core concept of unifying processes into a comprehensive "development to operations" lifecycle, the aim is to instil an end-to-end sense of ownership and responsibility across all departments. While QA, development and operations each have their own methods and aims within the process, they all serve a single goal and overarching methodology. This entails giving the development team more control over its environments while ensuring operations gains a better understanding of the application and its infrastructure requirements. Operations may even take part in (and consequently co-own) the development of applications, which it can then monitor throughout the development-to-deployment lifecycle.

The result is the elimination of the blame culture, especially when application issues arise, since software development and operational maintenance are co-owned processes. Instead of operations blaming development for flaky code and development blaming operations for unstable infrastructure, trivial and time-consuming finger-pointing is replaced with traceable root cause analysis carried out by all departments as a single team. Consequently, application deployment becomes more reliable, predictable and scalable to the business's demands.

Additionally, DevOps calls for unified and automated tooling. The evolution of web applications and Big Data means infrastructure now needs to scale and grow considerably faster, so the traditional model of firefighting, reactive patching and ad hoc scripting is no longer viable. Automation and unified tools, whether for deployment, workflows, monitoring or configuration, are a must, not just to meet time constraints but also to safeguard against configuration discrepancies and errors. Growing awareness of DevOps has helped drive the emergence of open source software that addresses this very challenge, including provisioning, configuration management and automation tools such as Rundeck, Vagrant, Puppet and Chef. While these tools are familiar to development teams, the aim is to make them the concern and interest of operations as well.
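
As a rough illustration of the "declare desired state, converge only when needed" model that configuration management tools such as Puppet and Chef formalise, the sketch below rewrites a configuration file only when its contents differ from the declared state. The file name, settings and helper function are hypothetical assumptions for illustration; real tools express this declaratively in their own DSLs rather than in Python.

```python
# Hypothetical sketch of an idempotent configuration step: apply the desired
# state only if the current state differs, so repeated runs are safe.
# The file path and settings are illustrative assumptions.
from pathlib import Path

DESIRED_CONFIG = "max_connections=200\nlog_level=INFO\n"

def ensure_config(path: Path, desired: str) -> bool:
    """Write the desired configuration only if the file is missing or differs.

    Returns True if a change was made, False if the file was already compliant.
    """
    if path.exists() and path.read_text() == desired:
        return False  # already in the desired state; nothing to do
    path.write_text(desired)
    return True

if __name__ == "__main__":
    changed = ensure_config(Path("app.properties"), DESIRED_CONFIG)
    print("converged" if changed else "already compliant")
```

Because the same declared state can be applied to a developer's desktop and to a production server, both teams work from one definition of the environment instead of two diverging ones.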

The DevOps methodology is a straightforward and obvious initiative to cater for the changing face of application development and deployment. Despite this, its greatest challenge lies with people and their willingness to change. Both development and operations teams need to look beyond their short-term, silo-focused objectives to the broader long-term goals of the business, which means a concerted and unified effort from both teams to get applications deployed in minimum time with minimum risk. I've often worked with operations staff who have little or no idea of how the applications they're supporting relate to the products and services their companies deliver, how they generate revenue or how they provide value to the end user. I've also worked with outsourced development teams in another country where communication was non-existent, and not only because of the language barrier. As the business's demands on IT rapidly increase and change, so too must the silo mindset. DevOps aims to initiate an inevitable change; those who resist may find that they themselves get changed. Those who embrace it may just find application releases a lot less painful.

More Stories By Archie Hendryx

SAN, NAS, Backup/Recovery & Virtualisation Specialist.
