A Developer’s Perspective | @DevOpsSummit #DevOps #APM #Monitoring

Since moving to a model where developers own their services, there’s a lot more developer independence

By Eric Sigler

"Walking over to the Ops room - I don't feel like I ever need to do that anymore."

In the run-up to our latest release of capabilities for developers, I sat down with David Yang, a senior engineer here at PagerDuty who has seen our internal architecture evolve from a single monolithic codebase to dozens of microservices. He's the technical lead for our Incident Management - People team, which owns the services that deliver alert notifications to all 8,000+ PagerDuty customers. We talked about life after switching to teams owning the operations of their services. Here are some observations about the benefits and drawbacks we've seen:

On life now that teams own their services:
Since moving to a model where developers own their services, there's a lot more developer independence. A side effect is that we've minimized the difficulties in provisioning and managing infrastructure. Now, each team wants to optimize for the fewest obstacles and roadblocks, and supporting infrastructure teams are geared toward providing better self-service tools that minimize the need for human intervention.

The shift to having developers own their code reduces cycle time from when someone says, "this is a problem," to when they can actually fix the problem, which has been invaluable.

On cultural change:
By having people own more of the code, and take more responsibility in general for the systems they operate, you essentially push for a culture that's driven toward getting roadblocks out of the way - each team optimizes for "how can I make sure I'm never blocked again" or "not blocked in the future." It's a lot more apparent when we are blocked. Before, I had to ask ops every time we wanted to provision hosts, and I just accepted it. Now my team can see its roadblocks better because they aren't hidden by other teams' roadblocks.

We have teams that are focused a lot more on owning the whole process of delivering customer value from end to end, which is invaluable.

On how this can help with the incident response process:
There are clearer boundaries of service ownership. It's easier to figure out which specific teams are impacted when there's an operability issue. And the fact that I know the exact procedure to follow - and it's more of an objective procedure of, "this is the checklist" - that is great. It enables me to focus 100% on solving the problem and not on the communication around the incident.

On what didn't work so well:
That's not to say that owning a service doesn't come with its own set of problems. It requires dedicated time to tend to the operational maintenance of our services. This ultimately takes up more of the team's time, which is especially an issue with legacy services where there may be knowledge gaps. In the beginning, we didn't put strong enough guardrails in place to protect operability work in our sprints. That's being improved by leveraging KPIs [such as specific scaling goals and operational load levels] to enable us to make objective decisions.
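To make that idea of metric-driven decisions concrete, here is a purely illustrative sketch - not PagerDuty's actual tooling, and all field names and the 25% threshold are hypothetical - of how a team might compute a simple operational-load KPI from sprint data:

    # Hypothetical sketch: compute an "operational load" KPI for one sprint.
    # Field names and the 25% threshold are illustrative assumptions, not
    # PagerDuty's actual metrics or tooling.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SprintWorkItem:
        title: str
        hours: float
        is_operational: bool  # True for maintenance, on-call follow-up, toil

    def operational_load(items: List[SprintWorkItem]) -> float:
        """Fraction of sprint effort spent on operational work (0.0 to 1.0)."""
        total = sum(item.hours for item in items)
        if total == 0:
            return 0.0
        ops = sum(item.hours for item in items if item.is_operational)
        return ops / total

    if __name__ == "__main__":
        sprint = [
            SprintWorkItem("New notification channel", 40, False),
            SprintWorkItem("Upgrade queue workers", 12, True),
            SprintWorkItem("On-call follow-ups", 8, True),
        ]
        load = operational_load(sprint)
        print(f"Operational load: {load:.0%}")
        # A team might agree to protect dedicated operability time once the
        # load crosses an agreed threshold.
        if load > 0.25:
            print("Consider reserving operability capacity next sprint.")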

On the future:
[On balancing operations-related work vs. feature development work] Teams are asking, "How do I leverage all of this stuff day-to-day? How do I make even more objective decisions?" - and then driving to those objective decisions with metrics.

Everything in our product development is defined in terms of "what is the customer value," "what are the success criteria," and so on. I think conveying the operational work in the same terms makes it easier to prioritize effectively. We're all on the same team, aligned to the same goal of delivering value to our customers, and you have to resolve the competing priorities at some point.

Trying to enact change within an organization around operations requires a lot of collaboration. It also takes figuring out what the right metrics are and having a discussion about those metrics.


Image: "Magnifying glass" is copyright (c) 2013 Todd Chandler

