Better Services Mean Better Performance

Insurance leader AIG drives business transformation and IT service performance through center of excellence model

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how global insurance leader American International Group (AIG) has leveraged a performance center of excellence (COE) to help drive business transformation.

We learn in our discussion how AIG's Global Performance Architecture Group improved performance of their services to deliver better experiences and payoffs for businesses and end-users alike.

Here to explore these and other enterprise IT issues, we're joined by our co-host for this sponsored podcast, Chief Software Evangelist at HP, Paul Muller.

And we also welcome our special guest, Abe Naguib, Senior Director of AIG’s Global Performance Architecture Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Many organizations are now focusing more on the user experience and the business benefits and less on pure technology -- and for many, it's a challenge. From a very high level, how do you perceive the best way to go about a cultural shift, or an organizational shift, from a technology focus more toward this end-user experience focus?

Naguib: There are several paradigms involved, from the COO's and CFO's push on innovation and efficiency. A lot of the tooling and products we use help us diversify and resolve some of the challenges we have. That's what keeps change moving.

The CIO has to keep his eye forward to periodically change tracks, ensuring that the customers are getting the best value for their money. That's a tall order, and he has to predict benefit, gauge value, maintain integrity, socialize, and evolve the strategy of how technology should serve business ideas.

We have to manage quite a few challenges from the demands of operating a global franchise. Our COE looks at various levels of optimization, and one key target is customer service and the factors that drive the value chain.

That means aligning DevOps to the business, reducing data-center sprawl, validating and making sense of vendors, products, and services, improving the return on investment (ROI) and total cost of ownership (TCO) of emerging technologies, increasing economy of scale, and improving services and hybrid cloud systems as we isolate and identify the cascading impacts on systems. These efforts help to derive value across the chain and eventually help improve customer value.

Gardner: Paul Muller, does this jibe with what you're seeing in the field? Do you see an emphasis that's more on this sort of process level when it comes to IT, with, of course, more input from folks like the COO and the chief financial officer?

Level of initiatives

Muller: As I was listening to Abe's description, I was thinking that you really can tell the culture of an organization by the level of initiative and thinking that it has. In fact, you can't change one without changing the other. What Abe just described is a very high level of cultural maturity.

We do see it, but we see it in maybe 10 to 15 percent of organizations that have gone through the early stages of understanding the performance and quality of applications and optimizing them for cost and performance, and then moved on to the next stage, reevaluating the entire chain and taking a broader perspective centered on user experience. So it's not unique, but it's certainly most common among the more mature in terms of organizational thinking.

Gardner: Tell us about AIG, its breadth, and particularly the business requirements that your Global Performance Architecture Group is tasked with meeting.

Naguib: AIG is a leading international insurance organization operating across 130 countries. AIG companies serve commercial, institutional, and individual customers through one of the world's most extensive property/casualty networks, and are leading providers of life insurance and retirement services in the US.

Among the brand pillars that we focused on are integrity, innovation, and market agility across the variety of products that we offer, as well as customer service.

With AIG's mantra of "better, faster, cheaper," my organization's people, strategy, and comprehensive tools help us to bridge the gaps that a global firm faces today. There are many technology objectives across different organizations that we align, and we utilize various HP solutions to drive our overall objective, which is getting the various IT delivery pistons firing in the same direction and at the right time.

These include performance, application lifecycle management (ALM), and business service management (BSM), as well as project and portfolio management (PPM). Over time, our Global Performance organization has evolved, and our senior management realized our strategic benefit and our capability to reduce cost and risk and to mitigate production issues.

Our role eventually moved out of quality assurance's (QA's) functional testing area to focus on application performance, architecture design patterns, emerging technologies, infrastructure and consolidation strategies, and risk mitigation, as well as increasing ROI and economy of scale. With the right people, process, and tools, our organization enabled IT transparency and application tuning, reduced infrastructure consumption, and accelerated resolution of system performance issues in dev and production.

The key is that bringing together our business-critical and strategic drivers across IT's various segments fosters alignment, agility, and eventually unity. Now, our leaders seek our guidance to help tune IT for financial performance and unlock optimal business value.

Culture of IT

Gardner: Is that a pattern you're seeing, that the people in QA are, in a sense, breaking out of just an application-performance level and moving more into what we could call an IT-performance level?

Naguib: In the last six or seven years, there's been less focus on just basic performance optimization. The focus is now on the impact of business strategy on infrastructure CAPEX and OPEX. Correlating business use cases to their impact on infrastructure is the holy grail.

Once you start communicating to CIOs the impact of a system and the cost of hosting, licensing, headcount, service sprawl, branding, and the services that depend on each other, you start aligning DevOps much more closely with the business.

Muller: I just had a conversation not three weeks ago with a financial institution in another part of the world. I asked who is responsible for your end-to-end business process -- in this case I think it was mortgage origination -- and the entire room looked at each other, laughed, and said "We don't know."

So you've really got this massive gap in terms of not just IT process maturity, but you also have business-process maturity, and it's very challenging, in my experience, to have one without having the other.

Gardner: I think we have to recognize, too, that most businesses now realize that software is an integral part of their business success. Being adept at software, whether it's writing it, customizing it, implementing and integrating it, or managing its overall lifecycle, has become the lifeblood of business, not just an element of IT. Do you sense, Abe, that software is given more clout in your organization?

Naguib: Absolutely, Dana. I truly believe that. I've been kind of an internal evangelist on this, but I always say that software drives the hardware. Whether I'm communicating with the enterprise architects, the dev teams, or the infrastructure teams, software frankly does drive the hardware.

That's really the key point here. If you start managing your root cost and performance from a software perspective and then work your way out, you’ve got the key to unlocking everything from efficiencies to optimizing your ROI and to addressing TCO over time. It's all business driven. Know your use cases. Know how it impacts your software, which impacts your infrastructure.

Converged infrastructure

Gardner: Just being productive for its own sake isn’t good enough in this economy. We have to show real benefits, and you have to measure those benefits. Maybe you have some way to translate how this actually does benefit your customers. Any metrics of success you can share with us, Abe?

Naguib: Yes. During our initial requirements-gathering phase with our business leaders, we start defining an appropriate test-modeling strategy, including volumetrics, and managing and understanding the deployment pattern with subscriber demographics and user roles. We start aligning DevOps organizations with business targets, which improves delivery expectations, ROI, TCO, and capacity models.

Then, before production, our Application Performance Engineering (APE) team identifies weak spots and provides the production team with a reusable script that sets thresholds on the exact hotspots in a system, so that eventually, in production, they can take appropriate proactive measures. Now, this is value add.
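
As a rough illustration of the reusable threshold artifact Naguib describes, here is a minimal Python sketch. It is a hypothetical example, not AIG's or HP's actual tooling: the transaction names, the 95th-percentile target, and the safety margin are assumptions. The pattern is to derive per-transaction alert thresholds from load-test results and reuse them as a production health check.

```python
# Hypothetical sketch: derive per-transaction thresholds from load-test
# results, then reuse them as a simple production health check.
# Transaction names, percentile, and margin are illustrative assumptions.

from statistics import quantiles

# Response times (seconds) captured per transaction during a load test.
load_test_results = {
    "policy_quote": [0.8, 0.9, 1.1, 1.2, 1.4, 1.6, 1.9, 2.3],
    "claim_lookup": [0.3, 0.4, 0.4, 0.5, 0.6, 0.7, 0.9, 1.1],
}

def derive_thresholds(results, margin=1.2):
    """Set each transaction's alert threshold at its 95th-percentile
    load-test response time, padded by a safety margin."""
    thresholds = {}
    for name, samples in results.items():
        p95 = quantiles(samples, n=20)[18]  # 19 cut points; index 18 is P95
        thresholds[name] = round(p95 * margin, 2)
    return thresholds

def check_production(thresholds, live_timings):
    """Return the transactions whose observed response time exceeds the
    threshold derived from pre-production testing."""
    return [name for name, seconds in live_timings.items()
            if seconds > thresholds.get(name, float("inf"))]

if __name__ == "__main__":
    thresholds = derive_thresholds(load_test_results)
    breaches = check_production(thresholds, {"policy_quote": 3.1,
                                             "claim_lookup": 0.5})
    print("thresholds:", thresholds)
    print("transactions breaching threshold:", breaches)
```

The payoff of this pattern is that the thresholds travel with the application from testing into production instead of being guessed at after users start complaining.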

Muller: As we're seeing across the planet at the moment, there's a recognition that delivering great software and information is really a function of getting Layers 1 through 7 of the technology stack working, but it's also about getting Layer 8 working. Layer 8, in this case, is the people. Unfortunately, being technologists, we often forget about the people in this process.

What Abe just described is a great representation of the importance of getting not just one functional part of IT, in this case quality and performance, working well, but of recognizing that the software will one day be handed to operational staff to monitor and manage in a production setting.

The big transformation taking place right now is that our organization is connecting different silos of IT delivery, in particular development, quality, and operations, to help them accelerate the release of quality applications, automate things like threshold setting, and optimize monitoring metrics ahead of time. Rather than discovering that an application might fail to perform in a production setting, where you've got users screaming at you, you get all of that work done ahead of time.

Sharing and trust

You create a culture of sharing and trust between development, quality, and operations that frankly doesn't exist in a lot of organizations, where the relationship between development and operations is pretty strained.

Gardner: Abe, how do you measure this? We recognize the importance of metrics, but is there a new coin of the realm in terms of measurement? How do you put this into a standardized format that you're going to take to your CFO and your COO and say, here's what's really happening?

Naguib: That's a good question. Tying into what Paul was saying, nobody cared about whether we improved performance by three seconds or two seconds. You care at the front end, when you hear users grumbling. The bottom line is how the application behaves, translating that into business impact as well as IT impact.

Business impact is the dollar value of key use cases and transactions that don't scale. Again, software drives the hardware. If an application consumes more hardware, the hardware is cheap nowadays, but licenses aren't. You have database and middleware products running in that environment, whether it's on-premise or in the cloud.

The point is that impact should be measured, and that's how we started communicating results through our organization. That's when we started seeing C-level officers tuning in and realizing the impact of performance of both to the bottom line, even to the top line.
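
To make the "software drives the hardware" translation concrete, here is a hedged back-of-the-envelope sketch in Python. Every figure in it -- transaction volume, CPU cost per transaction, per-core license and hosting prices -- is a hypothetical assumption, not an AIG number; what matters is the shape of the calculation that turns resource consumption into dollars.

```python
# Hypothetical back-of-the-envelope: translate per-transaction CPU cost
# into annual license and hosting dollars. All numbers are illustrative.

def annual_cost_of_cpu(cpu_seconds_per_txn, txns_per_day,
                       core_utilization=0.6,
                       license_cost_per_core=15000.0,
                       hosting_cost_per_core=2000.0):
    """Estimate how many cores a workload needs and what they cost per year."""
    cpu_seconds_per_day = cpu_seconds_per_txn * txns_per_day
    # A core run at the target utilization supplies this many CPU-seconds/day.
    usable_seconds_per_core = 24 * 3600 * core_utilization
    cores_needed = cpu_seconds_per_day / usable_seconds_per_core
    return cores_needed * (license_cost_per_core + hosting_cost_per_core)

# Before tuning: 2.0 CPU-seconds per transaction; after tuning: 0.8.
before = annual_cost_of_cpu(2.0, txns_per_day=500_000)
after = annual_cost_of_cpu(0.8, txns_per_day=500_000)
print(f"annual cost before tuning: ${before:,.0f}")
print(f"annual cost after tuning:  ${after:,.0f}")
print(f"annual savings from tuning: ${before - after:,.0f}")
```

The before-and-after comparison is the piece that resonates with the C-level audience Naguib mentions: a tuning change expressed as license and hosting dollars rather than response-time milliseconds.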

Our role is to provide more insight earlier and quicker to the right people at the right time.

Leveraging HP’s partnership and solutions helped us to address technologies, whether Web 2.0, client-server, legacy systems, Web, cloud-based, or hybrid models. We were able to leverage consistent dashboards across different IT solutions internally, then target weak spots and help drive optimization, whether on premise or cloud.

Muller: In the enterprise today, it's all about getting your ideas out of your head and making them a reality. As Abe just described, most of the best ideas today that are on their way into business processes you can ultimately turn into software. So success is really all about having the best applications and information possible.

Understand maturity

The challenge is understanding how the technology, the business process, and the benefits come together, and then orchestrating the delivery of that benefit to your organization. It's not something that can be done without a deliberate focus on process. Again, the challenge is always understanding your organization's maturity, not just from an IT standpoint, but importantly from a broader standpoint.

Naguib: What's the common driver for all? Money talks. Translating things into a dollar value started to bring groups together to understand what we can do better to improve our process.

What we're seeing more is that it's not just internal dev and ops that we're aligning with, or even our business service-level expectations. It's also partnerships with key vendors that have opened up their roadmaps to align our technologies, requirements, and challenges with their solutions.

The gains we make are simple. They can be boiled down into three key benefits: savings, performance, and business agility. Leveraging HP's ALM solutions helps us drive IT and business transformation and unlock resources and efficiencies. That helps streamline delivery and increase the reliability of our mission-critical systems.

My favorite has always been HP's LoadRunner Performance Center. It's basically our Swiss Army knife to support diverse platform technologies and, via HP SiteScope, align business use cases to their impact on IT and infrastructure.

We're able to deep dive into the diagnostics, if needed. And the best part is, after we've dealt with tuning, we can help activate post-production monitoring using the same script, understanding where the weak spots are.

So the tools are there. The best part is that they're integrated and actually work together very well.

Gardner: It really sounds like you've grabbed onto this system-of-record concept for IT, almost enterprise resource planning (ERP) for IT. Is that fair?

Naguib: That's a good way to put it.

Muller: One of the questions I get a lot from organizations is how we measure and reflect the benefit. What hard data have you managed to get?

Three-month study

Naguib: IDC came in and did an extensive three-month study, and what they found was interesting. We've realized savings of more than $11 million annually for the past five years by increasing our economy of scale. Scaling a system allows more applications on the same host.

It's an efficiency from both hardware and software. They also found that using solutions from HP increased staff productivity by over $300,000 a year. Instead of fighting fires, we're actually now focusing on innovation, and we improved business reliability by over $600,000 a year.

So all of that together shows a five-year ROI of about 577 percent. I was very excited about that study. They also showed that we reduced mean time to resolution by over 70 percent through production debugging, root-cause analysis, and resolution efforts.
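
For readers who want to sanity-check the 577 percent figure, here is a hedged reconstruction in Python using the benefits quoted above and the standard ROI formula (benefit minus investment, divided by investment). The interview does not state the underlying investment, so it is back-solved here purely as an illustration; IDC's actual methodology (deployment costs, discounting, benefit timing) may well differ.

```python
# Hedged reconstruction of the quoted five-year ROI. The annual benefits
# come from the interview; the investment is back-solved, not reported.

annual_benefits = 11_000_000 + 300_000 + 600_000  # savings + productivity + reliability
years = 5
cumulative_benefit = annual_benefits * years       # roughly $59.5M

quoted_roi = 5.77                                  # 577 percent, as quoted
# Standard ROI: (benefit - investment) / investment
implied_investment = cumulative_benefit / (1 + quoted_roi)

print(f"cumulative five-year benefit: ${cumulative_benefit:,.0f}")
print(f"implied five-year investment: ${implied_investment:,.0f}")
```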

So what we found, and technologists would agree with me, is that today, with hardware being cheaper than software, there is a hidden cost associated with hosting an application. The bottom line is that if we don't test and tune our applications holistically, across the architecture, code, infrastructure, and shared services, performance issues can quickly degrade quality of service, uptime, and eventually IT value.

I have a saying, which is that quality costs money but bad quality costs more.

Gardner: Abe, what recommendations might you have for other organizations that are thinking of moving in this direction and want to get more mature, as Paul would say? What are some good things to keep in mind as you start down this path?

Naguib: Besides remembering that software drives the hardware -- and I can't stress that enough -- it's about understanding business impact and translating whatever you're testing into the business model.

What happens in scenarios such as outages? What happens when things are delayed? What is the impact on business operability, productivity, liability, and customer branding? There are so many details that stem from performance. We used to be dealing with the "Google factor" of two-second response times, but now we're getting more like millisecond responses, because there are so many interdependencies between our systems and services.

Another fact is that a lot of products come through our doors on a daily basis. Modern technologies come in with a lot of promises and a lot of commitments.

Identify what works

So it's being able to weed out the chaff, identify what works and how the interdependencies work, and then partner with the vendors of those solutions and services. Having tools that add transparency into their products and align with our environment helps bring things together. Treating IT like a business by translating the impact into a dollar value helps everyone get aligned and responsive.

Muller: It might be a little controversial here, but the first step for progress on all of this is to look in the mirror and understand your organization and its level of maturity. You really need to assess that very self-critically before you start. Otherwise, you're going to burn a lot of capital, a lot of time, and a lot of credibility trying to move an organization from state A to state B without understanding the maturity of your present state.

The second step is to make sure that, before you even begin that process, you create that alignment and that desired state in the context of the business. Make sure that your maturity aligns to the business's maturity and its goals. Abe just described the ability to measure the business impact of IT services in terms of revenue. Many companies can't even do something as fundamental as that. It can be really hard to drive alignment unless you've got business-IT alignment ahead of time.

I have said this so many times: the technology is a manageable problem. Layers 1 through 7, including management software to a certain degree, are a solved problem most of the time. Solving the problem of Layer 8 is tough. You can reboot the server, but you can't reboot a person.

I always recommend bringing along some sort of organizational change management function. In our case, we actually have a number of trained organizational psychologists working for us who understand what it takes to get several hundred, sometimes several thousand, people to change the way they behave, and that's really important. You've got to bring the people along with it.

Gardner: I'd like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue with Paul Muller through the Discover Performance Group on LinkedIn, and also to follow Raf on his popular blog, Following the White Rabbit.

You can also gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance.

And you can always access this and other episodes in our HP Discover Performance Podcast Series at hp.com and on iTunes under BriefingsDirect.

More Stories By Dana Gardner

At Interarbor Solutions, we create the analysis and in-depth podcasts on enterprise software and cloud trends that help fuel the social media revolution. As a veteran IT analyst, Dana Gardner moderates discussions and interviews that get to the meat of the hottest technology topics. We define and forecast the business productivity effects of enterprise infrastructure, SOA and cloud advances. Our social media vehicles become conversational platforms, powerfully distributed via the BriefingsDirect Network of online media partners like ZDNet and IT-Director.com. As founder and principal analyst at Interarbor Solutions, Dana Gardner created BriefingsDirect to give online readers and listeners in-depth and direct access to the brightest thought leaders on IT. Our twice-monthly BriefingsDirect Analyst Insights Edition podcasts examine the latest IT news with a panel of analysts and guests. Our sponsored discussions provide a unique, deep-dive focus on specific industry problems and the latest solutions. This podcast equivalent of an analyst briefing session -- made available as a podcast/transcript/blog to any interested viewer and search engine seeker -- breaks the mold on closed knowledge. These informational podcasts jump-start conversational evangelism, drive traffic to lead generation campaigns, and produce strong SEO returns. Interarbor Solutions provides fresh and creative thinking on IT, SOA, cloud and social media strategies based on the power of thoughtful content, made freely and easily available to proactive seekers of insights and information. As a result, marketers and branding professionals can communicate inexpensively with self-qualifying readers/listeners in discrete market segments. BriefingsDirect podcasts hosted by Dana Gardner: full turnkey planning, moderating, producing, hosting, and distribution via blogs and IT media partners of essential IT knowledge and understanding.
