AppDynamics Pumps up the Jam in San Francisco

It’s been a week since we hosted AppJam Americas, our first North American user conference, in San Francisco. With me as master of ceremonies, and a minor wardrobe malfunction at the start (see video at the end of this post), the entire day was a huge success for us and our customers. One thing that stuck in my mind was that applications today have become far more complex to manage, and strategic monitoring has become key to mastering that complexity. Simply put, SOA+Virtualization+Big Data+Cloud+Agile != Easy.

The day started with Jyoti Bansal, our CEO and Founder, outlining his vision for AppDynamics to be the world’s #1 solution for managing modern web applications. The simple facts are that applications have become more dynamic, distributed, and virtual. All of these factors have increased their operational complexity, and log files and legacy monitoring solutions are ill-suited to the task.

Jyoti then outlined our core design principles around Business Transaction monitoring, self-learning intelligence, and the need to keep application management simple. He then suggested what the audience could expect from AppJam: “AppJam is about sharing knowledge, learning best practices, guiding our direction and Jamming.” (We’re pretty sure by “jamming” he meant “partying.”)

With the intro from Jyoti done, it was time for me to take my nosedive on stage and introduce our first customer speaker – Ariel Tsetlin from Netflix.

How Netflix Operates & Monitors in the Cloud

With 27 million customers around the world, Netflix’s growth over the past three years has been meteoric. In fact, they found that they couldn’t build data centers fast enough. Hence, they moved to the public cloud on AWS for better agility.

In his session, Ariel talked about Netflix’s architecture in the cloud and how they built their own PaaS, in terms of apps and clusters, on top of Amazon’s IaaS. One unique thing Netflix does is bake the OS, middleware, application, and monitoring agents into a single machine image, rather than using a tool like Chef or Puppet to manage application configuration and deployment separately from the underlying OS, middleware, and tools. Everything is automated and managed at the instance level, with developers given the freedom and responsibility to deploy whenever they want to. That’s pretty cool stuff when you consider that developers now manage their own capacity and auto-scaling within the cloud.
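
For readers who haven’t seen the baked-image pattern before, here’s a rough sketch of what deploying a pre-baked image with instance-level auto-scaling could look like using boto3. The AMI ID, service names, and sizes are hypothetical placeholders, not Netflix’s actual tooling.

```python
# Sketch: deploying a service from a fully "baked" machine image with boto3.
# The AMI ID, names, and sizes below are hypothetical placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The image already contains the OS, middleware, the application build,
# and the monitoring agent, so no configuration management runs at boot.
BAKED_AMI_ID = "ami-0123456789abcdef0"  # produced by an earlier bake step

autoscaling.create_launch_configuration(
    LaunchConfigurationName="checkout-service-v42",
    ImageId=BAKED_AMI_ID,
    InstanceType="m3.large",
)

# Developers own capacity: they pick the scaling bounds for their own service.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="checkout-service",
    LaunchConfigurationName="checkout-service-v42",
    MinSize=3,
    MaxSize=30,
    DesiredCapacity=3,
    AvailabilityZones=["us-east-1a", "us-east-1b", "us-east-1c"],
)
```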

Ariel then talked about the assumption that failure is inevitable in the Cloud, with the need to plan and design around the fact that every part of the application can and will fail at some point. Testing for failure through “monkey theory” and Netflix’s “Simian Army” allows them to simulate failure at every level of the application, from randomly killing instances to taking out entire availability zones in AWS.
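
Netflix’s Simian Army is far more sophisticated than this, but the core “monkey” idea can be sketched in a few lines: randomly terminate an instance in a group and verify that users never notice. The group name below is a hypothetical placeholder.

```python
# Sketch of the "monkey" idea: randomly terminate one instance in a group
# to verify the service survives. Not Netflix's Simian Army; names are made up.
import random
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

def kill_random_instance(group_name: str) -> str:
    """Pick a random instance in the auto scaling group and terminate it."""
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name]
    )["AutoScalingGroups"]
    victim = random.choice(groups[0]["Instances"])["InstanceId"]
    # The auto scaling group should replace the instance automatically;
    # the real test is whether end users ever notice.
    ec2.terminate_instances(InstanceIds=[victim])
    return victim

if __name__ == "__main__":
    print("terminated:", kill_random_instance("checkout-service"))
```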

From a monitoring perspective, Netflix uses internally developed tools and AppDynamics, which are also baked into their AWS images. Doing so allows developers to live and die by monitoring in production through automated alerts and problem discovery. What’s perhaps different is that Netflix focuses their monitoring at the service level (e.g. app cluster), rather than at the infrastructure level, so they’re really not interested in CPU or memory unless it’s impacting their end users or business transactions.

Finally, Ariel spoke about AppDynamics at Netflix, touching on the fact they monitor over 1 million metrics per minute across 400+ business transactions and 300+ application services, giving them proactive alerts with URL drill-down into business transaction latency and errors from self-learned baselines. Overall, it was a great session for those looking to migrate and operate their application in the Cloud.
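
AppDynamics learns these baselines automatically; purely to make the idea concrete, here’s a simplified sketch of baseline-deviation alerting on per-minute latency samples. The threshold and sample data are illustrative, not the product’s actual algorithm.

```python
# Simplified illustration of baseline-based alerting: flag a business
# transaction whose latest latency deviates sharply from its own history.
# This is not AppDynamics' algorithm; thresholds and data are made up.
from statistics import mean, stdev

def is_anomalous(history_ms, latest_ms, sigmas=3.0):
    """Return True if latest_ms is more than `sigmas` standard deviations
    above the self-learned baseline (the mean of history_ms)."""
    if len(history_ms) < 10:
        return False  # not enough history to learn a baseline yet
    baseline = mean(history_ms)
    spread = stdev(history_ms) or 1.0
    return latest_ms > baseline + sigmas * spread

# Example: a "checkout" business transaction that normally takes ~120 ms.
history = [118, 122, 125, 119, 121, 117, 124, 120, 123, 119, 121, 122]
print(is_anomalous(history, 125))  # False: within normal variation
print(is_anomalous(history, 450))  # True: alert and drill down into the slow URL
```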

When Big Data Meets SOA

Next up was Bob Hartley, development manager at FamilySearch, who gave an excellent talk about managing SOA and Big Data behind the world’s largest genealogy architecture. With almost 3 billion names indexed and 550+ million high-resolution digital images, FamilySearch has over 20 petabytes of data, which needs to be managed by their Java and Node.js distributed architecture spanning 5,000 servers. What’s scary is that this architecture and data are growing at a rapid pace, meaning application performance and scalability are fundamental to the success of FamilySearch.

After a brief intro, Bob talked about their Big Data architecture in terms of the technologies used to manage search queries, images, and people records: clusters of Apache Lucene, Solr, and custom MapReduce jobs, combined with traditional relational databases such as Oracle, MySQL, and Postgres.
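
For those unfamiliar with the search side of that stack, the Solr tier is queried over plain HTTP. Here’s a minimal sketch; the host, core name, and field names are hypothetical, not FamilySearch’s actual schema.

```python
# Minimal sketch of querying a Solr search cluster over HTTP.
# The host, core name, and field names are hypothetical placeholders.
import requests

SOLR_URL = "http://solr.example.internal:8983/solr/person-records/select"

def search_people(surname: str, rows: int = 10):
    """Run a simple field query against the Solr select handler."""
    params = {
        "q": f'surname:"{surname}"',
        "rows": rows,
        "wt": "json",  # ask Solr for a JSON response
    }
    response = requests.get(SOLR_URL, params=params, timeout=5)
    response.raise_for_status()
    return response.json()["response"]["docs"]

for doc in search_people("Hartley"):
    print(doc.get("id"), doc.get("full_name"))
```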

Bob then talked about his team’s mission – to enable business agility through visibility, responsiveness, standardization, and vendor independence. At the top of that list was providing joy for customers and stakeholders by delivering the features that matter, faster.

Bob also emphasized the need for repeatable, reliable, and automated processes, as well as the need to monitor everything so his team could manage the performance of their SOA and Big Data application through continuous agile release cycles. FamilySearch has gone from a 3-month release cycle to a continuous delivery model in which changes can be deployed in just 40 minutes. That’s pretty mind-blowing stuff when you consider the size and complexity of their environment!

What’s interesting is that Release != Deploy at FamilySearch; they incrementally roll out new features to different sets of users using feature flags, allowing them to test and tease features before making them available to everyone. Monitoring is at the heart of their continuous release cycle, with Dev and Ops using baselines and trending to determine the impact of new features on application performance and scalability.
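
The flag-based incremental rollout Bob described can be approximated with a deterministic hash of the user ID, so each user gets a stable experience while monitoring compares cohorts against baselines. A hedged sketch, with made-up flag names and percentages:

```python
# Sketch of flag-driven incremental rollout: a feature is "released" in code
# but only "deployed" to a percentage of users. Flag names and percentages
# are illustrative, not FamilySearch's actual configuration.
import hashlib

ROLLOUT = {
    "new-pedigree-view": 5,    # visible to 5% of users
    "fast-image-viewer": 50,   # visible to 50% of users
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into [0, 100) and compare to the rollout %."""
    percent = ROLLOUT.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    # Same user + same flag always lands in the same bucket, so the
    # experience is stable across requests while the rollout widens.
    return bucket < percent

print(is_enabled("new-pedigree-view", "user-12345"))
```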

In terms of the evaluation process, the company looked at 20 different APM vendors over a 6-month period before finally settling on AppDynamics, thanks to our dynamic discovery, baselining, trending, and alerting of business transactions. As Bob said, “AppDynamics gave us valuable performance data in less than one day. The closest competitors took over 2 weeks just to install their tools.”

Today, a single AppDynamics management server is used in production to monitor over 5,000 servers, 40+ application services, and 10 million business transactions a day. Since deployment, FamilySearch has found dozens of problems that had lingered for years, and has scaled their application by 10x without increasing server resources. They’ve also seen MTTD (mean time to detect) drop from days to minutes and MTTR (mean time to resolve) drop from months to hours and minutes.

Bob finished his talk with his lessons learned for managing SOA, Big Data and Agile applications: “Keep Architecture Simple,” “Speed of delivery is essential,” “Systems will eventually fail,” and “Working with SOA, Big Data and Agile is hard.”

How AppDynamics Is Accelerating DevOps Culture at Edmunds.com

After lunch, John Martin, Senior Director of Production Engineering, spoke about DevOps culture at Edmunds.com and how AppDynamics has become central to driving team collaboration. After a brief architecture overview outlining his SOA environment of 30 application services, John outlined what DevOps meant to him and his team – “DevOps is really about Collaboration – the most challenging issues we faced were communication.” Openly honest and deeply passionate throughout his session, John talked about three key challenges his team faced over the years that were responsible for the move to DevOps:

1. Infrastructure Growth

2. Communication Failure

3. Go Faster & Be Efficient

In 2005, Edmunds.com had just 30 servers; by the end of this year that figure will have risen to 2,500. Through release automation using tools such as BladeLogic and Chef, John and his team are now able to perform a release in minutes, versus the 8 hours it took back in 2005.

John gave an example of communication failure in which development was preparing for a major release at Edmunds.com on a new CMS platform. This release was performance-tested just two weeks prior to go-live. Unfortunately, the new platform showed massive scalability limitations, causing Ops to work around the clock to over-provision resources as a tactical fix. Fortunately, the release was delivered on time and the business was happy. However, the team suffered as a technology organization because it found architectural flaws so late in the game – “We needed a clear picture of what went wrong and how we were going to prevent such breakdown in future.”

Another mistake, with a release in 2010, forced a major re-think between development and operations. It was this occurrence that caused Edmunds.com to get really serious about DevOps. In fact, the technical leads got together and reorganized specialized teams within Dev and Ops to resolve deployment issues and shed preconceptions about who should do what. The result was improved relationships, better tooling, and a clearer perspective on how future projects could work.

John then touched on the tools that were accelerating DevOps culture, specifically Splunk for log files and AppDynamics for application monitoring. “AppDynamics provides a way for Dev and Ops to speak the same language. We’ve saved hundreds of hours in pre-release tests and discovered many new hotspots like the performance of our inventory business transaction which increased by 111%.” In fact, within the first year, AppDynamics generated an ROI of $795,166, with year-2 savings estimated at a further $420k. John laughed, “As you can see, AppDynamics wasn’t a bad investment.”

John ended his session with 5 tips for ensuring that DevOps succeeds in an organization: Be honest, communicate early and often, educate, criticize constructively, and create champions. Overall, a great session on why DevOps is needed in today’s IT teams.

Zero to Production APM in 30 Days (While Sending Half a Billion Messages per Day)

The final customer session of the day came from Kevin Siminski, Director of Infrastructure Operations at ExactTarget, and it was definitely worth waiting for. Kevin kicked off his talk by describing a weekly product tech sync meeting he had with his COO. The meeting was full of different stakeholders from development and operations who were discussing a problem they were currently experiencing in production.

“I literally got my laptop out, brought up the AppDynamics UI and in one minute we’d found the root cause of the problem,” Kevin said. Not a bad way to get across why Application Performance Management (APM) matters so much in 2012.

Kevin then gave a brief intro to ExactTarget and the challenges of powering some of the world’s top brands, like Nike, Best Buy, and Priceline.com. ExactTarget’s .NET messaging environment is highly virtualized, with over 5,000 machines that generate north of 500 million messages per day across multiple terabytes of databases.

Kevin then touched on the role of his global operations team and how his team’s responsibility had shifted over the last four years. “My team went from just triaging system alerts to taking a more proactive approach on how we managed emails and our business. Today my team actively collaborates with development, infrastructure and support teams.” All these teams are now focused and aligned on innovation, stability, performance and high availability.

Kevin then outlined his 30-day implementation plan for deploying AppDynamics across his entire environment using a single dedicated systems engineer and an AppDynamics SaaS management server for production. Week 1 was spent onboarding the IT security team, reviewing configuration management, and testing agent deployment to validate network and security paths. Week 2 involved deploying agents to a few of the production IIS pools and validating data collection on the AppDynamics management server. Week 3 saw all agents pushed to every IIS pool, with the collection mechanism set to disabled; the configuration management team then took over and “owned” the deployment process for go-live. Week 4 saw all services and AppDynamics agents enabled during a production change window, with all metrics closely monitored throughout the week to ensure no impact or unacceptable overhead.
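
The Week 1 step of validating network and security paths essentially means proving that each application server can reach the SaaS controller before any agent reports data. Here’s a simple sketch of that check; the controller hostname and port are hypothetical placeholders, not ExactTarget’s actual endpoint.

```python
# Sketch of a pre-rollout network-path check: can this host open a connection
# to the SaaS controller the agents will report to? Hostname and port are
# hypothetical placeholders.
import socket
import sys

CONTROLLER_HOST = "example.saas.appdynamics.com"  # placeholder controller
CONTROLLER_PORT = 443  # SaaS agents typically report over HTTPS

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connection to prove the network/security path is open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # In a real rollout this script would run on every IIS host before the
    # agents are deployed, with results fed back to the security team.
    ok = can_reach(CONTROLLER_HOST, CONTROLLER_PORT)
    print(f"{CONTROLLER_HOST}:{CONTROLLER_PORT} reachable: {ok}")
    sys.exit(0 if ok else 1)
```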

AppDynamics’ first mission was to monitor the ExactTarget application as its mission-critical database was upgraded from SQL Server 2003 to 2008. It was a high-risk migration, as Kevin’s team was unable to assess the full risk due to legacy application components, so with all hands on deck they watched AppDynamics as the migration happened in real time. As the switch was made, application calls per minute and response time remained constant, but application errors began to spike. By drilling down on these errors in AppDynamics, the dev team was quickly able to locate where they were coming from and resolve the application exceptions.

Today, AppDynamics is used for DevOps collaboration and feedback loops so engineers get to see the true impact of their releases in a production environment, a process that was requested by a product VP outside of Kevin’s global operations team. Overall, Kevin relayed an incredible story of how APM can be deployed rapidly across the enterprise to achieve tangible results in just 30 days.

A nice statistic I realized later that evening was that the total number of servers being monitored by AppDynamics across our four customer speakers was well over 20,000 nodes. Having been in the APM market for almost 10 years, I’m struggling to think of another vendor with such successful large-scale production deployments.

Here’s a link to the photo gallery of AppJam 2012 Americas. A big thank you to our customers for attending and we’ll see you all next year!

For those keen to see my stage nosedive, here you go:

Appman.


