Agile - Waterfall: Global and Local Optimizations

Is P = NP, and why should we really care?

Did you notice that when something happens to you, it seems to be happening to other people as well? For example, when you have toddlers, suddenly you see toddlers everywhere - and naturally they are all misbehaving compared to yours - or when you're planning your wedding, everyone around you is planning theirs, and white dresses become scarce. Lately I had this feeling myself: I was halfway through a David Baldacci thriller when he introduced NP problems and the consequences of finding that P = NP. At the same time, I have been reading many articles about critical chain project management. I feel compelled to contribute my take and insights.

There are two kinds of problems: the easy ones, which we solve at school, and the hard ones. We might remember how to solve a quadratic equation ax^2 + bx + c = 0 using the formula x = (-b ± √(b^2 - 4ac)) / 2a. We are far less sure how to advise a salesperson planning a trip to 20 cities on a limited budget, visiting each city exactly once (the travelling salesperson problem). When it comes to packing for the holidays we are truly baffled: how are we supposed to fit all the stuff into one piece of checked luggage? After endless deliberation, we land in the exotic resort only to find our bathing suit missing...
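
To make the easy/hard contrast concrete, here is a toy sketch (my own illustration, not from the original article): the quadratic formula runs in constant time, while the brute-force travelling salesperson search below must try every ordering of the cities, so its running time grows factorially with the number of cities.

```python
# Illustrative sketch: the "easy" quadratic equation is solved in constant
# time, while brute-force TSP must try every ordering of the cities.
import itertools
import math

def solve_quadratic(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 -- constant time, the 'easy' kind."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)  # assumes real roots for simplicity
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

def shortest_tour(distances):
    """Brute-force TSP -- the 'hard' kind: tries all (n-1)! tours."""
    cities = range(1, len(distances))  # fix city 0 as the starting point
    best_length, best_tour = float("inf"), None
    for perm in itertools.permutations(cities):
        tour = (0,) + perm + (0,)
        length = sum(distances[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
        if length < best_length:
            best_length, best_tour = length, tour
    return best_length, best_tour

print(solve_quadratic(1, -3, 2))          # (2.0, 1.0)
# A 4-city toy example; at 20 cities the loop would run 19! ~ 1.2e17 times.
d = [[0, 10, 15, 20], [10, 0, 35, 25], [15, 35, 0, 30], [20, 25, 30, 0]]
print(shortest_tour(d))                   # (80, (0, 1, 3, 2, 0))
```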

Unfortunately, and also luckily (more on the luck aspect later), most of the problems around us are not really solvable - or should I say, not computable optimally in a reasonable amount of time.

How is this related to project management?

The difficulties we experience in managing projects successfully in a resource-constrained environment stem from the fact that the resource-constrained multi-project scheduling problem is extremely difficult to solve optimally; in practice it cannot be optimized at all. Multi-Project Resource-Constrained Scheduling (MPRCS) is known to be an NP-complete (or NP-hard; roughly, NP-complete problems are the hardest problems within NP, while NP-hard problems are at least that hard but need not belong to NP themselves) complexity-class problem. In a nutshell, these are problems for which no known algorithm finds an optimal solution in a reasonable amount of time. Actually, as mentioned above, most of the problems we face are of this kind. For example, those of you who like to travel to Alaska and need to choose equipment from a long list of possibilities, under capacity constraints, are facing an NP-hard problem (the knapsack problem).
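
For the Alaska packing example, a minimal brute-force sketch (the items, weights and values below are invented for illustration) shows why the knapsack problem blows up: every additional item doubles the number of subsets to check.

```python
# Toy knapsack sketch: brute force checks every subset of items, so the
# work doubles with each item added -- 2^n subsets for n candidate items.
from itertools import combinations

items = {            # name: (weight_kg, value)
    "tent": (4, 9), "stove": (2, 6), "parka": (3, 8),
    "camera": (1, 5), "bear_spray": (1, 7), "books": (3, 2),
}
capacity = 8  # say, the airline weight limit

best_value, best_pack = 0, ()
names = list(items)
for r in range(len(names) + 1):
    for subset in combinations(names, r):          # all 2^6 = 64 subsets
        weight = sum(items[i][0] for i in subset)
        value = sum(items[i][1] for i in subset)
        if weight <= capacity and value > best_value:
            best_value, best_pack = value, subset

print(best_value, best_pack)
# With 60 candidate items instead of 6 there are 2^60 ~ 1e18 subsets --
# which is why "good enough" heuristics, not exhaustive search, rule in practice.
```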

The theory says that if we find an efficient solution to one NP-complete problem, we can efficiently solve all of them. As a byproduct of the solution we would collect a million-dollar Millennium Prize from the Clay Mathematics Institute, render the entire world of encryption useless, and probably get kidnapped by the NSA, never to be heard from again. So far this hasn't happened, and NP-complete problems remain not computable optimally in a reasonable amount of time. Hence - luckily for us - long encryption keys are at present undecipherable in a reasonable time, which enables secure global communication networks. Conspiracy theorists maintain that governments have already developed computational abilities that can solve NP problems, rendering encryption as we know it useless. Dan Brown in his bestseller Digital Fortress and David Baldacci in Simple Genius venture down this path.

Back to project management. Since scheduling is an NP-hard problem, we cannot optimally solve a multi-project, resource-constrained system.

Wait just a minute - am I saying there are no computerized tools that optimally schedule a bunch of related projects that share resources? Truth be told, there are tools offering nearly optimal solutions: granted, they do not provide an optimal solution, but they use complex algorithms to calculate a good-enough one. The time required to produce this solution can be very long, depending on the complexity of the problem. There are environments that benefit from these sophisticated optimization tools. Consider, for example, a medium-size factory with 100 machines and several thousand components. Running the scheduling software for the defined production floor is lengthy and can take up to three days, and its output is a production plan for that explicit setting only. The drawback is that once the plan is implemented, any unforeseen change in production requires either rerunning the optimization software, which makes no sense duration-wise, or applying quick fixes that render the nearly optimal production plan obsolete. If you have ever watched an operating factory, you have seen the chaotic nature and constant changes with which it is plagued.

There are industries in which nearly optimal scheduling is vital, and because the environment is complex, they must invest in a sophisticated software optimization tool (do remember that the solution is only nearly optimal). One example is the airline industry. Airlines can't manually allocate numerous aircraft and particular crews across several hundred destinations worldwide. When Lufthansa allocates specific aircraft, with the added constraint that each aircraft is operated by a predefined crew, it makes sense to purchase an advanced and pricey optimization tool. Indeed, once a month the scheduling team runs the optimization application, which serves as the basis for aircraft and crew allocation, ticket pricing, and routing worldwide. However, if one of the pilots has the shrimp in coconut sauce for dinner and suffers from food poisoning the next day, entire segments of the optimized schedule collapse and emergency escalation protocols are initiated. The central scheduling team in Frankfurt must find a quick fix to the shrimp ordeal.

OK - for most of our projects it doesn't make sense to invest so much in building the infrastructure to solve the multi-project resource-constrained problem in a near-optimal manner, especially as the plan is useless once changes occur. All it takes is for a critical resource to be sick one day for the entire plan to go bust. What, then, is the alternative? Heuristics - or, as we know them better, rules of thumb. These enable local and semi-global optimizations. The two famous heuristic approaches to resource-constrained project scheduling are Agile and critical chain: one handles local optimization, the other semi-global optimization. While Goldratt, who wrote about critical chain project management, was well aware of heuristics and global optimization, Agile proponents don't really think of their approach as a local optimization mechanism - but we will leave that for future articles.
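
As an illustration of what a scheduling rule of thumb looks like, here is a minimal sketch of a greedy "longest task first" heuristic for two interchangeable resources. The tasks and durations are invented, and this is not the critical chain algorithm or any specific Agile practice - just an example of a heuristic that is fast and usually good enough, with no guarantee of optimality.

```python
# Rule-of-thumb scheduler sketch: assign the longest remaining task to
# whichever resource frees up first. Fast and usually good, but -- unlike an
# exact solver -- it carries no optimality guarantee, which is all the
# NP-hardness discussed above allows us in general.
import heapq

tasks = {"design": 3, "build_api": 5, "build_ui": 4, "test": 2, "docs": 1}

def longest_first_schedule(tasks, resources=2):
    """Greedy 'longest task first' dispatching on interchangeable resources."""
    free_at = [(0, r) for r in range(resources)]   # (time resource is free, id)
    heapq.heapify(free_at)
    plan = []
    for name, duration in sorted(tasks.items(), key=lambda t: -t[1]):
        start, resource = heapq.heappop(free_at)
        plan.append((name, resource, start, start + duration))
        heapq.heappush(free_at, (start + duration, resource))
    return plan

for name, resource, start, end in longest_first_schedule(tasks):
    print(f"resource {resource}: {name:10s} day {start} -> day {end}")
```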

More on critical chain, and Goldratt's approach to tackling the complexity of resource-constrained scheduling, some other time.

A FREE voice-over recording of this article is found here.

You can check out my book, The Agile PMO: Leading the Effective, Value-Driven Project Management Office, where I further discuss these concepts.

More Stories By Michael Nir

Michael Nir, President of Sapir Consulting (M.Sc. Engineering), has been providing operational, organizational and management consulting and training for over 15 years. He is passionate about Gestalt theory and practice, which complements his engineering background and contributes to his understanding of individual and team dynamics in business. Michael has authored eight bestsellers in the fields of influencing, Agile, teams, leadership and others. His experience includes significant expertise in telecom, hi-tech, software development and R&D environments, as well as the petrochemical and infrastructure industries. He develops creative and innovative solutions in project and product management, process improvement, leadership, and team-building programs. Michael's professional background is analytical and technical; however, he has a keen interest in human interactions and behaviors. He holds two engineering degrees from the prestigious Technion - Israel Institute of Technology: a Bachelor's in Civil Engineering and a Master's in Industrial Engineering. He has balanced his technical side with the extensive study and practice of Gestalt therapy and "Instrumental Enrichment," a philosophy of mediated learning. In his consulting and training engagements, Michael combines the analytical and technical world with his focus on people, delivering unique and meaningful solutions and addressing whole systems.
