
Agile - Waterfall: Global and Local Optimizations

Is P = NP, and why should we really care?

Did you ever notice that when something happens to you, it seems to be happening to everyone else as well? When you have toddlers, you suddenly see toddlers everywhere - and naturally they are all misbehaving compared to yours - or when you're planning your wedding, everyone around you is planning theirs, and white dresses become scarce. Lately I had this feeling myself: I was reading a David Baldacci thriller when, halfway through, he introduced NP problems and the consequences of finding that P = NP. In addition, I have lately been reading many articles about Critical Chain project management. I feel compelled to contribute my own take and insights.

There are two kinds of problems: the easy ones, which we solve at school, and the hard ones. We may remember how to solve a quadratic equation using the formula x = (-b ± √(b² - 4ac)) / 2a. We are far less sure how to advise a salesperson planning a trip to 20 cities who must visit every city exactly once on a limited budget (the travelling salesperson problem). When it comes to packing for the holidays we are truly baffled: how are we supposed to fit all that stuff into one checked-in suitcase? After endless deliberation, we land at the exotic resort only to find our bathing suit missing...
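The quadratic equation really is the easy kind: a direct formula, a handful of arithmetic steps, and you are done. A minimal sketch of that in Python (the helper name is mine, not anything from a library):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0 using the standard formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 2 and 3.
print(solve_quadratic(1, -5, 6))  # [2.0, 3.0]
```

The point is the contrast: this runs in constant time no matter the inputs, while the travelling salesperson and packing problems below have no known shortcut of this kind.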

Unfortunately - and also luckily (more on the luck aspect later) - most of the problems around us are, practically speaking, unsolvable: they cannot be computed in an optimal manner in a reasonable amount of time.

How is this related to project management?

The difficulties we experience managing projects successfully in a resource-constrained environment stem from the fact that the resource-constrained multi-project scheduling problem is very difficult to solve optimally - in fact, no known algorithm can optimize it in a practical amount of time. Multi-project resource-constrained scheduling is an NP-hard problem (a close cousin of NP-complete: an NP-complete problem is both in NP and at least as hard as everything in NP, while an NP-hard problem is at least as hard but need not be in NP itself). In a nutshell, NP problems are very complex mathematical problems, and as mentioned above, most of the problems we face are of this kind. For example, those of you who like to travel to Alaska and need to choose equipment from a long list of possibilities, under capacity constraints, are facing such a problem (the knapsack problem).
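To make the knapsack example concrete, here is a deliberately naive brute-force sketch in Python (the gear list and its weights and values are invented for illustration). It tries every subset of items, so its running time doubles with each item added - exactly the blowup that makes these problems intractable at realistic sizes:

```python
from itertools import combinations

def best_packing(items, capacity):
    """Brute-force 0/1 knapsack: items is a list of (name, weight, value) tuples.
    Tries all 2^n subsets, so it is only feasible for small n."""
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w, _ in subset)
            value = sum(v for _, _, v in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, [name for name, _, _ in best_subset]

gear = [("tent", 5, 9), ("stove", 2, 5), ("parka", 3, 6), ("camera", 1, 3)]
print(best_packing(gear, 8))  # (17, ['tent', 'stove', 'camera'])
```

Four items means 16 subsets; forty items means over a trillion. That exponential curve, not any flaw in the code, is why the Alaska packing list defeats us.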

The theory claims that if we could solve one NP-complete problem efficiently, we could solve all of them. As a byproduct of the solution we would collect the Clay Institute's million-dollar Millennium Prize, render the entire world of encryption useless, and probably get kidnapped by the NSA, never to be heard from again. So far this hasn't happened, and NP-complete problems remain not computable in an optimal manner in a reasonable amount of time. Hence - luckily for us - long encryption keys are at present undecipherable in a reasonable time, which enables secure global communication networks. Conspiracy theorists maintain that governments have already developed computational abilities that can solve NP problems, rendering encryption as we know it useless. Dan Brown in his bestseller Digital Fortress and David Baldacci in Simple Genius both venture down this path.

Back to project management. Since scheduling is an NP-hard problem, we can't optimally solve a multi-project resource-constrained system in any reasonable time.

Wait just a minute - am I saying that there are no computerized tools to optimally schedule a bunch of related projects that share resources? Truth be told, there are tools offering nearly optimal solutions: while they do not provide an optimal solution, they calculate, using complex algorithms, a good-enough one. The time required to produce this solution can be very long, depending on the complexity of the problem. Some environments do benefit from these sophisticated optimization tools. Consider, for example, a medium-size factory with 100 machines and several thousand components. Running the scheduling software for the defined production floor is lengthy and can take up to three days, and its output is a production plan for that specific setting. The drawback is that once the plan is implemented, any unforeseen change in production requires either rerunning the optimization software, which makes no sense duration-wise, or applying quick fixes that render the nearly optimal production plan obsolete. If you have ever watched an operating factory, you have seen the chaotic nature and constant change with which it is plagued.

There are industries in which nearly optimal scheduling is vital, and because the environment is complex, they must invest in sophisticated optimization software (remembering that the solution is still only nearly optimal). One example is the airline industry: no one can manually allocate hundreds of aircraft and their particular crews across several hundred destinations worldwide. When Lufthansa assigns specific aircraft, with the added constraint that each aircraft is operated by a predefined crew, it makes sense to purchase an advanced and pricey optimization tool. Indeed, once a month the scheduling team runs the optimization application, which serves as the basis for aircraft and crew allocation, ticket pricing, and routing worldwide. However, if one of the pilots has the shrimp in coconut sauce for dinner and suffers food poisoning the next day, entire segments of the optimized schedule collapse and emergency escalation protocols kick in. The central scheduling team in Frankfurt must find a quick fix to the shrimp ordeal.

OK - for most of our projects it doesn't make sense to invest so much in building the infrastructure to solve the multi-project resource-constrained problem in a nearly optimal manner, especially since the plan becomes useless once changes occur. All it takes is for a critical resource to be out sick one day for the entire plan to go bust. What, then, is the alternative? Heuristics - or, as we know them better, rules of thumb - which enable local and semi-global optimizations. The two famous heuristic approaches to resource-constrained project scheduling are Agile and Critical Chain: one handles local optimization, the other semi-global optimization. While Goldratt, who developed Critical Chain project management, knew about heuristics and global optimization, Agile proponents don't really think of their method as a local optimization mechanism - but we will leave that for future articles.
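As an illustration of the rule-of-thumb approach, the sketch below (task names and durations invented for this example) applies one classic priority rule - among the tasks whose prerequisites are done, schedule the shortest first - on a single shared resource. It runs in a blink and usually produces a decent plan, but it makes no claim of optimality:

```python
def greedy_schedule(tasks):
    """Schedule tasks on one shared resource using a 'shortest task first'
    priority rule. tasks: dict name -> (duration, set of prerequisite names).
    Returns a list of (name, start, finish). Greedy heuristic, not optimal."""
    done, schedule, clock = set(), [], 0
    while len(done) < len(tasks):
        ready = [n for n, (_, prereqs) in tasks.items()
                 if n not in done and prereqs <= done]
        # Rule of thumb: among the ready tasks, pick the shortest one.
        name = min(ready, key=lambda n: tasks[n][0])
        duration = tasks[name][0]
        schedule.append((name, clock, clock + duration))
        clock += duration
        done.add(name)
    return schedule

tasks = {"design": (3, set()), "build": (5, {"design"}),
         "docs": (2, set()), "test": (1, {"build"})}
print(greedy_schedule(tasks))
```

Crucially, when a change hits - a task slips, a resource falls ill - you simply rerun the rule from the current state in milliseconds, instead of waiting days for an optimizer. That resilience, not solution quality, is the heuristic's selling point.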

More on critical chain, and Goldratt's approach to solving the complexity of resource constrained scheduling, some other time.


You can check out my book, The Agile PMO: Leading the Effective, Value-Driven Project Management Office, where I further discuss these concepts.

More Stories By Michael Nir

Michael Nir - President of Sapir Consulting - (M.Sc. Engineering) has been providing operational, organizational and management consulting and training for over 15 years. He is passionate about Gestalt theory and practice, which complements his engineering background and contributes to his understanding of individual and team dynamics in business. Michael has authored 8 bestsellers in the fields of influencing, Agile, teams, leadership and others. His experience includes significant expertise in the telecom, hi-tech, software development, R&D and petrochemical & infrastructure industries. He develops creative and innovative solutions in project and product management, process improvement, leadership, and team-building programs. Michael's professional background is analytical and technical; however, he has a keen interest in human interactions and behaviors. He holds two engineering degrees from the prestigious Technion Institute of Technology: a Bachelor's in Civil Engineering and a Master's in Industrial Engineering. He has balanced his technical side with the extensive study and practice of Gestalt Therapy and "Instrumental Enrichment," a philosophy of mediated learning. In his consulting and training engagements, Michael combines the analytical and technical world with his focus on people, delivering unique and meaningful solutions that address whole systems.
