Importance of ‘Proof-of-Concept’ in Right-Sizing the Infrastructure

The importance of undertaking a proof of concept (PoC) to examine the viability of an approach

It is now common practice for companies to invest a great deal of time engaging consultants and designers, and to spend considerable sums on capacity planning, in order to size infrastructure for their specific needs. There is no denying that skilled people and capacity planning tools help identify the resources required to size infrastructure correctly. It is nevertheless necessary to run a proof of concept, especially when making critical decisions.

Concerns are always raised about achieving satisfactory performance. Moreover, mergers and acquisitions bring their own share of complexity to existing environments, resulting in technology-versus-application compatibility challenges. A PoC is just as applicable when building new infrastructure for a business-critical application from scratch as it is for specific IT initiatives such as data center consolidation, virtualizing a system, or moving to cloud-based solutions. A PoC helps companies define acceptance criteria and right-size the infrastructure for their specific needs. It supports business objectives by keeping budget overruns under control, and it helps IT management plan costs and procure resources so that the project completes successfully. Because the design phase is responsible for many critical decisions, many causes of cost overrun trace back to it; the most significant design-phase causes are blindly following theoretical evidence or relying entirely on metrics obtained from unreliable capacity planning tools.

The purpose of a PoC is to showcase the benefits using real-world end-user scenarios and to calculate the TCO for each case. Against the key system performance base metrics of processor, memory, disk, and network, workloads are usually classified into three types: (1) typical user, (2) power user, and (3) advanced power user. It is good practice to calculate load and system usage based on the power user profile; if funds permit, it is even better to use the upper bound by taking advanced power user usage into account.
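
As a rough illustration of the power-user rule of thumb, the short sketch below aggregates demand across the four base metrics from hypothetical per-user figures. The profile numbers, user counts, and the size_for helper are assumptions added for illustration, not measurements from the article.

```python
# Minimal sketch: aggregate resource demand from hypothetical per-user profiles.
# The per-user figures below are illustrative placeholders, not benchmarks.

PROFILES = {
    "typical":        {"cpu_ghz": 0.2, "ram_gb": 0.5, "disk_iops": 10, "net_mbps": 1},
    "power":          {"cpu_ghz": 0.5, "ram_gb": 1.0, "disk_iops": 30, "net_mbps": 3},
    "advanced_power": {"cpu_ghz": 1.0, "ram_gb": 2.0, "disk_iops": 60, "net_mbps": 6},
}

def size_for(user_counts: dict, profile: str = "power") -> dict:
    """Size every user at the chosen profile (the 'power user' rule of thumb)."""
    total_users = sum(user_counts.values())
    per_user = PROFILES[profile]
    return {metric: round(value * total_users, 1) for metric, value in per_user.items()}

if __name__ == "__main__":
    users = {"typical": 400, "power": 80, "advanced_power": 20}
    print(size_for(users, "power"))            # conservative baseline
    print(size_for(users, "advanced_power"))   # upper bound, if budget permits
```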

A PoC helps determine the average and peak loads and size the environment accordingly. It also enables consultants to account for anticipated future growth and to leave sufficient headroom across all of the key system performance metrics discussed above.
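
To make the peak-and-growth headroom calculation concrete, here is a minimal sketch assuming an illustrative peak-to-average ratio and annual growth rate; both figures, and the example numbers, are placeholders to be replaced by values measured during the PoC.

```python
# Minimal sketch: scale a measured PoC baseline to peak load and future growth.
# peak_ratio and annual_growth are assumptions, not figures from the article.

def capacity_with_headroom(avg_demand: float,
                           peak_ratio: float = 1.8,
                           annual_growth: float = 0.20,
                           years: int = 3) -> float:
    """Scale an average observed in the PoC to peak, then compound expected growth."""
    peak = avg_demand * peak_ratio
    return peak * (1 + annual_growth) ** years

# Example: 200 GB of RAM at average load, sized for peak plus three years of growth.
print(round(capacity_with_headroom(200), 1))   # -> 622.1
```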

Gartner predicts that the proportion of organizations using cloud services will reach 80% by the end of 2015. With cloud disaster recovery services becoming popular, companies want rapid recovery of vital applications in case of failure by taking advantage of cloud-based DR solutions. Hence, it is becoming imperative for organizations to set their own PoC strategy, choose their own PoC clouds, navigate technical hurdles and compatibility challenges, and measure success.

In conclusion, to execute a project successfully, an organization must give maximum importance to the proof of concept, which defines the project's success criteria. A proof-of-concept template can be applied across projects to help businesses bridge the gap between the visionary and delivery stages of production efforts.

Figure: Resources equal money.

More Stories By Sathyanarayanan Muthukrishnan

Sathyanarayanan Muthukrishnan has worked on and managed a variety of IT projects globally (Canada, Denmark, United Kingdom, India) and works with business leaders on the implementation of systems and enhancements.

  • IT operations management
  • Strategic IT road map planning and execution
  • Data center management
  • Architecture, analysis and planning
  • Budgeting and product comparisons: cost-benefit analysis (hardware, software and applications)
  • Disaster recovery planning and testing
  • Microsoft Windows and Unix server farm management
  • Databases (SQL, Oracle)
  • SAN/NAS storage management and capacity planning
  • Virtualization and cloud computing (certified: Citrix, VMware, Hyper-V)
  • Networking and IT security
  • Process refinement, issue trend analysis and solutions, ITIL (change and problem management)
  • Best practices implementation and stabilization initiatives
