Struggling to Scale Agile? | @DevOpsSummit #DevOps #Agile #DigitalTransformation

Ensuring better adoption and sustenance

From slow and rigid to fast and flexible. That was the promise of Agile to IT. What it meant was that ideas coming from the business would be converted into working software rapidly. It meant that IT would be more responsive to what the business wants and needs and would not spend an eternity analysing the requirements, writing detailed specifications and then sending them back to the business to review. Some of that has happened. Product backlogs are more informal and easier to maintain and update than formal specifications. IT pushes the business to prioritize and tries to break the requirements down so that they are both manageable and valuable and can be delivered in short sprints of two to three weeks. So far so good. However, what happens after that is a bit complicated.

In the recent past, as I have observed agile projects in organization after organization, sometimes as a consultant, sometimes as a Scrum Master and sometimes just as an invisible fly on the wall, I have noticed organizations struggling to scale. There is a high degree of awareness of, and obsession with, the mechanics of agile, but without an equivalent grounding in the fundamentals. Yes, there are SAFe, LeSS and the like, but frameworks are only as good as your ability to apply them in your context. Otherwise, you drown under new rules and terminology. Here is a summary of my recommendations for organizations trying to scale Agile:

1. Architecture
Agile discourages grand upfront design. The recommendation is to produce a design that is sufficient for the prioritized user stories and then improve it iteratively so that it can accommodate newer stories. The overall architecture thus evolves and emerges. This approach is fine when you have a single scrum team working on a set of stories. The moment more scrum teams come into the picture, the concept of emergent design becomes a bit fuzzy. Two or more scrum teams working on a set of user stories without a bare minimum architectural skeleton as a reference will face huge integration and collaboration challenges. In these scenarios, it is wise to invest some upfront effort in creating an architectural framework that all the teams agree to and will collectively evolve as the software grows. I see this explicit guideline / practice missing in many instances.
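To make the idea concrete, a bare minimum skeleton can be as small as a handful of agreed interfaces that fix the team boundaries while leaving the internals free to evolve. The sketch below is only an illustration; the domain, class names and methods are hypothetical and would obviously differ in your product:

```python
# A minimal sketch of a shared architectural skeleton: interfaces that all
# scrum teams agree on before parallel development starts. The domain
# (orders, payments) and the method names are hypothetical examples.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class Order:
    order_id: str
    amount: float


class OrderRepository(ABC):
    """Boundary owned by one team; other teams code only against this contract."""

    @abstractmethod
    def save(self, order: Order) -> None: ...

    @abstractmethod
    def find(self, order_id: str) -> Optional[Order]: ...


class PaymentGateway(ABC):
    """Boundary owned by another team; the skeleton fixes the seam, not the internals."""

    @abstractmethod
    def charge(self, order: Order) -> bool: ...
```

The value is not in these particular classes but in making the seams explicit, so each team can refactor freely behind them while integration stays predictable.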

2. Work Allocation
One of the basic premises of agile was to break functional silos and align people to what the customer wants. By advocating cross-functional scrum teams, agile has solved this problem. But in many organizations I've seen functional silos give way to scope silos - individual scrum teams focused only on their features, themes or scope of work at the cost of the overall system. Scrum masters need to be wary of this. It goes back to how work is allocated to, or pulled by, individual scrum teams. If scrum teams pull items from the product backlog strictly by the features assigned to them, the risk of scope silos is higher. There are no easy answers. Much depends on the overall architecture of the product (if there is one - and in a multi scrum team scenario there should be at least a skeletal version) and the extent of its dependencies and modularity. In my view, to the extent possible, teams should have the freedom to pull items from the entire product backlog and not be restricted to selected features, themes or epics.

3. Ownership
The Product Owner owns the system and is responsible for ensuring that the right product is built. In my view, he/she owns the behaviour of the system and controls its destiny. But who owns the architecture, design and other internals of the system? You can alter the internals and still get the same behaviour - so who controls what can be changed and what cannot? In a multi scrum team structure it is very important to fix that ownership clearly. Related to the issue of work allocation explained above, there has to be a team that is responsible for the overall system internals and not just a set of themes or features. Projects in real life have a finite end date, and products have a future state after which only incremental changes are made. This means development effort over the life of the product will peak and then plateau. Extra capacity has to be released for other productive work, and hence fixing system ownership early on is very important.

4. Communication and Noise
Agile gives importance to individuals and the interactions between them. Popular literature recommends an optimal team size of around seven, and so does Scrum. At the same time, multiple scrum teams are a reality because they can provide higher throughput. To be effective, a team should be shielded from external interference and allowed to focus on the job at hand. I've seen instances where two scrum teams have so many dependencies between them that they are constantly talking to each other; in effect, they are not separate teams at all. The benefit of higher throughput should be weighed against the cost of interference and communication overhead between teams: with n teams that all talk to each other, the number of communication channels grows as n(n-1)/2, so the overhead grows much faster than the headcount. In my view, while interactions within a team should be encouraged, those between teams should be controlled lest they lead to chaos.

5. Nurturing Excellence
Today, the word 'waterfall' has almost taken on evil connotations. We're so paranoid about the term 'functional silos' that we assume any 'functional' team is necessarily a 'silo'. That is far from true. Technical excellence is necessary to live up to the promise of Agile. How do you nurture it? Smart developers learn from smart developers and challenge each other to reach higher levels of expertise. The same is true for architects, testers and other members of a product team. Organizations should provide forums or structures where professionals practising the same craft can learn from and interact with each other. It is perfectly normal for a developer to have strong affiliations towards his / her scrum team as well as towards the developer community inside or outside the organization. Organizations should provide the opportunity to maintain and nourish these multiple identities. They strengthen our collaborative capabilities in a cross functional forum, not diminish them.

6. Integration
This is one of the basic best practices that is not adhered to very consistently. In a multi scrum team scenario, continuous (preferably automated and daily) integration needs to happen not just within but between scrum teams. Many organizations have the concept of an integration sprint at the end of every two to three sprints, which is basically a recipe for disaster and very waterfall-ish in approach, as it delays the discovery of the inevitable bugs and thereby raises the cost of resolving them. It is better that everybody stops and resolves an integration problem when it happens rather than keeping it in a backlog and allowing technical debt to accumulate. I agree that in some contexts there will be technical limitations to integration, but let those not be by design. Multiple teams feeding from a single gold version and collectively merging their versions might seem messy but, if done regularly and with discipline, it makes integration a non-event.
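To illustrate how low the bar can be, here is a rough sketch of a scheduled daily integration job that merges every team's branch into a shared integration branch and stops the line the moment a merge or a test fails. The branch names and the test command are assumptions for illustration, not a prescription:

```python
# A minimal sketch of daily cross-team integration, assuming each scrum team
# works on its own long-lived branch and a shared "integration" branch acts
# as the gold version. Branch names and the test command are hypothetical.
import subprocess
import sys

TEAM_BRANCHES = ["team-orders", "team-payments", "team-reporting"]


def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure


def main() -> None:
    run("git", "checkout", "integration")
    run("git", "pull", "origin", "integration")
    for branch in TEAM_BRANCHES:
        try:
            run("git", "merge", "--no-ff", f"origin/{branch}")
        except subprocess.CalledProcessError:
            print(f"Merge failed for {branch}: stop and resolve it now, "
                  "don't park it in the backlog.")
            sys.exit(1)
    # Run the full test suite against the merged state every day.
    run("python", "-m", "pytest")
    run("git", "push", "origin", "integration")


if __name__ == "__main__":
    main()
```

Whether something like this runs as a cron job or inside a CI server matters far less than the discipline of doing it every day and treating any failure as everyone's immediate problem.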

More Stories By Sujoy Sen

Sujoy is a TOGAF Certified Enterprise Architect, a Certified Six Sigma Black Belt and Manager of Organizational Excellence from the American Society for Quality, a PMP, a CISA, an Agile Coach, a DevOps Evangelist and, lately, a digital enthusiast. With over 20 years of professional experience, he has led multiple consulting engagements with Fortune 500 customers across the globe. He has a Master's degree in Quality Management and a Bachelor's in Electrical Engineering. He is based out of New Jersey.
