Microservices Expo: Article

Agile - Waterfall: Global and Local Optimizations

Is P = NP, and why should we really care?

Did you notice that when something happens to you, it seems to happen to other people as well? For example, when you have toddlers, suddenly you see toddlers everywhere - and naturally they are all misbehaving compared to yours - or when you're planning your wedding, everyone around you is planning theirs, and white dresses become scarce. Lately I had this feeling myself: I was reading a David Baldacci thriller when, halfway through, he introduced NP problems and the consequences of finding that P = NP. In addition, I have lately been reading many articles about critical chain project management. I feel compelled to contribute my take and insights.

There are two kinds of problems: the easy ones, which we solve at school, and the hard ones. We might remember how to solve a quadratic equation using the formula x = (-b ± √(b² - 4ac)) / 2a. We are less sure when advising a salesperson how to plan a trip to 20 cities that visits every city exactly once on a limited budget (the travelling salesperson problem). When it comes to packing for the holidays we are truly baffled: how are we supposed to fit all the stuff into one checked bag? After continuous deliberation, we land in the exotic resort only to find our bathing suit missing...
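To get a feel for why the salesperson's trip is so hard, consider the brute-force approach: check every possible route. A rough Python sketch (the city coordinates here are invented purely for illustration):

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical city coordinates - any small set works for illustration.
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def tour_length(order):
    """Total length of a round trip visiting the cities in the given order."""
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

def brute_force_tsp(n):
    """Try every tour starting from city 0 and return the shortest length."""
    return min(tour_length((0,) + p) for p in permutations(range(1, n)))

best = brute_force_tsp(len(cities))   # fine for 5 cities: only 4! = 24 tours

# For the salesperson's 20 cities there are 19! distinct tours from a
# fixed start - roughly 1.2 * 10^17 routes to check:
tours_for_20 = factorial(19)
```

Five cities mean 24 tours and an instant answer; 20 cities mean more than a hundred quadrillion tours, which is why brute force stops being an option almost immediately.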

Unfortunately - and also luckily (more on the luck aspect later) - most of the problems around us are, for practical purposes, unsolvable; or should I say, not computable in an optimal manner in a reasonable amount of time.

How is this related to project management?

The difficulties we experience managing projects successfully in a resource-constrained environment stem from the fact that the resource-constrained multi-project scheduling problem is very difficult to solve optimally; in practice, at realistic sizes, it can't be solved optimally at all. It is known to be an NP-hard problem (or NP-complete in its decision form - to this day I am not sure I fully grasp the difference). In a nutshell, these are problems for which no known algorithm guarantees an optimal solution in a reasonable amount of time. Actually, as mentioned above, many of the problems we face are of this complexity. For example, those of you who like to travel to Alaska and need to choose equipment from a long list of possibilities, under a capacity constraint, are facing exactly such a problem (the knapsack problem).
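The knapsack problem is a nice case in point: a textbook dynamic-programming routine solves small instances exactly, but its running time grows with the weight capacity, not just the number of items, so it offers no general escape from NP-hardness. A minimal sketch (the Alaska packing list and its "usefulness" scores are made up):

```python
def knapsack(items, capacity):
    """0/1 knapsack via dynamic programming: best total value within the
    weight limit. items is a list of (weight, value) pairs; capacity is an
    integer weight limit."""
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Walk capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Hypothetical packing list: (weight in kg, usefulness score).
gear = [(3, 10), (4, 40), (5, 30), (6, 50)]
print(knapsack(gear, 10))  # -> 90: take the 4 kg and 6 kg items
```

This runs in time proportional to (number of items) × (capacity), which feels fast - until the capacity is a large number, at which point the table itself becomes astronomically big.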

The theory claims that if we find an efficient solution to one NP-complete problem, we can efficiently solve all of them. As a byproduct of the solution we would receive a million-dollar Clay Millennium Prize, render the entire world of encryption useless, and probably get kidnapped by the NSA, never to be heard from again. So far this hasn't happened, and NP-complete problems remain not computable in an optimal manner in a reasonable amount of time. Hence - luckily for us - long encryption keys are at present undecipherable in reasonable time, which enables secure global communication networks. Conspiracy theorists maintain that governments have already developed computational abilities that can solve NP problems, rendering encryption as we know it useless. Dan Brown in his bestseller Digital Fortress and David Baldacci in Simple Genius venture down this path.

Back to project management. Since scheduling is an NP-hard problem, we can't optimally solve a multi-project resource-constrained system of any realistic size.

Wait just a minute - am I saying that there aren't computerized tools to optimally schedule a bunch of related projects that share resources? Truth be told, there are tools offering nearly optimal solutions: while they do not provide an optimal solution, they calculate, using complex algorithms, a good-enough one. The time required can be very long, depending on the complexity of the problem. Some environments do benefit from these sophisticated optimization tools. Consider, for example, a medium-size factory with 100 machines and several thousand components. Running the scheduling software for the defined production floor is lengthy and can take up to three days. The output is a production plan for that explicit setting. The drawback is that once the plan is implemented, any unforeseen change in production requires either rerunning the optimization software, which doesn't make sense duration-wise, or applying quick fixes that render the nearly optimal production plan obsolete. If you have ever watched an operating factory, you have seen the chaotic nature and constant changes with which it is plagued.

There are industries in which nearly optimal scheduling is vital, and as the environment is complex, they must invest in a sophisticated software optimization tool (do remember that the solution is still only nearly optimal). One example is the airline industry: no one can manually allocate hundreds of aircraft and their particular crews across several hundred destinations worldwide. When Lufthansa allocates specific aircraft, with the added constraint that each aircraft is operated by a predefined crew, it makes sense to purchase an advanced and pricey optimization tool. Indeed, once a month the scheduling team runs the optimization application, which serves as the basis for aircraft and crew allocation, ticket pricing, and routing worldwide. However, if one of the pilots has the shrimp in coconut sauce for dinner and suffers food poisoning the next day, entire segments of the optimized schedule collapse and emergency escalation protocols are initiated. The central scheduling team in Frankfurt must find a quick fix to the shrimp ordeal.

OK - for most of our projects it doesn't make sense to invest so much in building the infrastructure to solve the multi-project resource-constrained problem in a nearly optimal manner, especially as the plan is useless once changes occur. All it takes is for a critical resource to be sick one day for the entire plan to go bust. What, then, is the alternative? Heuristics - or as we better know them, rules of thumb. These enable local and semi-global optimizations. The two famous heuristic approaches to resource-constrained project scheduling are Agile and critical chain: one performs local optimization, the other semi-global optimization. While Goldratt, who developed critical chain project management, knew about heuristics and global optimization, Agile proponents don't really think of their method as a local optimization mechanism - but we will leave that for future articles.
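As a small taste of what a scheduling heuristic looks like, here is longest-processing-time list scheduling, a classic rule of thumb for spreading independent tasks across shared resources. The task durations are invented, and real project scheduling adds dependencies and many more constraints - this is only a sketch of the idea:

```python
import heapq

def greedy_schedule(tasks, workers):
    """Priority-rule heuristic: assign the longest remaining task to
    whichever worker frees up first (longest-processing-time list
    scheduling). Returns the makespan - good, but not guaranteed optimal."""
    free_at = [0] * workers            # time each worker becomes available
    heapq.heapify(free_at)
    for duration in sorted(tasks, reverse=True):
        earliest = heapq.heappop(free_at)
        heapq.heappush(free_at, earliest + duration)
    return max(free_at)

# Hypothetical task durations (days) shared across 2 critical resources.
print(greedy_schedule([7, 5, 4, 4, 3, 3], 2))  # -> 14
```

On this instance the rule of thumb answers 14 days in a blink, while the true optimum is 13 ({7, 3, 3} on one resource, {5, 4, 4} on the other). Good enough, instantly recomputable when reality changes - which is exactly the trade heuristics make.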

More on critical chain, and Goldratt's approach to solving the complexity of resource constrained scheduling, some other time.

A FREE voice over recording of this article is found here.

You can check out my book, The Agile PMO: Leading the Effective, Value-Driven Project Management Office, where I further discuss these concepts.

More Stories By Michael Nir

Michael Nir - President of Sapir Consulting - (M.Sc. Engineering) has been providing operational, organizational and management consulting and training for over 15 years. He is passionate about Gestalt theory and practice, which complements his engineering background and contributes to his understanding of individual and team dynamics in business. Michael has authored eight bestsellers in the fields of influencing, Agile, teams, leadership and others. Michael's experience includes significant expertise in the telecom, hi-tech, software development and R&D environments and the petrochemical and infrastructure industries. He develops creative and innovative solutions in project and product management, process improvement, leadership, and team-building programs. Michael's professional background is analytical and technical; however, he has a keen interest in human interactions and behaviors. He holds two engineering degrees from the prestigious Technion Institute of Technology: a Bachelor's in Civil Engineering and a Master's in Industrial Engineering. He has balanced his technical side with the extensive study and practice of Gestalt Therapy and "Instrumental Enrichment," a philosophy of mediated learning. In his consulting and training engagements, Michael combines the analytical and technical world with his focus on people, delivering unique and meaningful solutions and addressing whole systems.
