Notes from the Field: Inside a Real World Large-Scale Cloud Deployment

I thought I’d share some important generalities about this type of effort

I’ve been granted an incredible opportunity. Over the past three and a half months I have led a real-world, large-scale delivery of a cloud solution. The final solution will be delivered as Software-as-a-Service (SaaS) to the customer via an on-premise managed service. While I have developed SaaS and PaaS (Platform-as-a-Service) solutions in the past, I was fortunate enough to build those on public cloud infrastructure. This has been a rare glimpse into the “making of the sausage”: orchestrating everything from delivery of the hardware into data centers in four countries to testing and integration with the customer’s environment.

All I can say about this opportunity is that the phrase “it takes a village” applies well. I thought I’d share some important generalities about this type of effort. It’s important to note that the customer is a Global 100 company with data centers around the globe. Regardless of what the public cloud providers are telling the world, this application is not appropriate for public cloud deployment due to the volume of data traversing the network, the amount and types of storage required (e.g., Write-Once-Read-Many), the level of integration with internal environments, and the requirements for failover.

The following are some observations about deploying cloud solutions at this scale:

  • Data Centers. As part of IT-as-a-Service (ITaaS) we talk a lot about convergence, software-defined data centers and general consolidation. All of this has major implications for simplifying management and lowering the total cost of ownership and operation of the data centers. However, we should not forget that it still takes a considerable amount of planning and effort to bring new infrastructure into an existing data center. Most critically, the data center is a living entity that doesn’t stop because work is going on, which means much of this effort occurs after hours and in maintenance windows. This particular data center freezes all changes from mid-December to mid-January to ensure that its customers’ service is not interrupted during a peak period that includes major holidays and end-of-year reporting, which had a significant impact on attempting to meet certain end-of-year deliverables (a simple freeze-window check is sketched after this list). On-site surveys were critical to planning the layout of the equipment (four racks in total) on the floor, both to minimize cabling effort and to ensure our equipment faced the right direction for hot/cold aisles. Additionally, realize that in this type of business, every country may have different rules for accessing the facility, operating in it, and racking your equipment.
  • Infrastructure. At the end of the day, we can do more with the hardware infrastructure architectures now available. While we leverage virtualization to take advantage of the greater compute power, that does not alleviate the requirements around planning a large-scale virtual environment that must span countries. Sometimes it’s the smallest details that are the most difficult to work out, for example, how to manage an on-premise environment such as this one as a service. The difficulty here is that the network, power, cooling, etc. are provided by the customer, which requires considerable effort to negotiate shared operating procedures while still attempting to commit to specific service levels (the downtime arithmetic behind those commitments is sketched after this list). Many of today’s largest businesses do not hold their internal IT organizations to the same penalties for failing to meet a service level agreement (SLA) as they would apply to an external service provider. Hence, service providers that must rely on this foundation face many hurdles in ensuring their own service levels.
  • Security. Your solution may be reviewed by the internal security team to ensure it complies with current security procedures and policies. Since this is most often not the team that procured or built the solution, you should not expect that they will be able to warn you about all the intricacies of deploying a solution for the business. The best advice here is to engage the security team early and often, starting as soon as you have completed your design. In US Federal IT, deployment usually requires that those implementing the system obtain an Authority to Operate (ATO). Quite often, medium- and large-sized businesses have a similar procedure; it’s just not spelled out as explicitly. Hence, these audits and tests can introduce unexpected delays and expenses due to the need to modify the solution.
  • Software. Any piece of software can be tested and operated under a modest set of assumptions. When that software must be deployed as part of a service that has to meet certain performance metrics, as well as certain recovery metrics in the case of an outage, that same software can fall flat on its face. Hence, the long pole in the tent for building out a cloud solution at this scale is testing for disaster recovery and scalability. In addition to requiring time to complete, it often requires a complementary environment for disaster recovery and failover testing, which can add significant cost to the project. I will also note that in a complex environment, software license management can become very cumbersome. I recommend starting the license catalog early and ensuring that it is maintained throughout the project (a minimal catalog sketch follows this list).
  • Data Flow. A complex cloud-based solution that integrates with existing internal systems operating on different networks across multiple countries will have to cross multiple firewalls and routers and run along paths with varying bandwidth carrying varying levels of traffic. Hence, production operation and remote management can be affected by multiple factors, both during planning and during operation. No matter how much testing is done in a lab, the answer seemingly comes down to, “we’ll just have to see how it performs in production” (a basic connectivity smoke test for these paths is sketched after this list). So, perhaps, a better title for this bullet might be “Stuff You’re Going To Learn Only After You Start The Engine.” Your team will most likely have a mix of personalities. Some will be okay with this, having learned from similar projects in their past; others will not be able to get past this point and will continually raise objections. Shoot the naysayer! Okay, not really, but seriously, adopt this mandate and make sure everyone on the team understands it.
  • Documentation. I cannot say enough about documenting early and often. Once the train has started, it’s infinitely more difficult to catch up. Start with good, highly reviewed requirements. Review them with the customer. Convene the Architecture Review Board (ARB, discussed below) and have it review and sign off. This is a complex environment with a lot of interdependencies; it’s not going to be simple to change one link without affecting many others. The more changes you can avoid, the more smoothly the process of getting the system into production will go.
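
To make the change-freeze point from the Data Centers item concrete, here is a minimal sketch of a freeze-window check a deployment team might run before scheduling work. The window boundaries and dates are illustrative assumptions, not the customer's actual policy.

```python
# Hedged sketch: check whether a proposed work date falls inside an
# annual change-freeze window (assumed here to run Dec 15 - Jan 15).
from datetime import date

FREEZE_START = (12, 15)  # (month, day) -- assumed start of the freeze
FREEZE_END = (1, 15)     # (month, day) -- assumed end of the freeze

def in_change_freeze(day: date) -> bool:
    """Return True if the date falls inside the freeze window.

    The window wraps around the new year, so the test is "on/after the
    start" OR "on/before the end" rather than a simple range check.
    """
    md = (day.month, day.day)
    return md >= FREEZE_START or md <= FREEZE_END

if __name__ == "__main__":
    for candidate in (date(2013, 12, 20), date(2014, 1, 10), date(2014, 2, 3)):
        print(candidate, "frozen" if in_change_freeze(candidate) else "open")
```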
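
The service-level negotiation mentioned under Infrastructure ultimately comes down to a downtime budget. The sketch below, using illustrative availability figures rather than the project's actual commitments, shows the arithmetic: 99.9% availability over a 30-day month leaves roughly 43 minutes for outages.

```python
# Hedged sketch: convert a committed availability percentage into the
# downtime budget it implies over a reporting period.
def downtime_budget_minutes(availability_pct: float, period_days: int = 30) -> float:
    """Minutes of allowed downtime for a given availability over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1.0 - availability_pct / 100.0)

if __name__ == "__main__":
    for sla in (99.0, 99.9, 99.99):  # illustrative SLA targets
        print(f"{sla}% over 30 days -> {downtime_budget_minutes(sla):.1f} minutes of downtime")
```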
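
The license catalog recommended under Software can start as something very simple: one record per licensed product per site, kept in a form that can be diffed and reviewed as the environment grows. The field names and CSV format below are assumptions chosen for illustration; any asset-management tool or spreadsheet serves the same purpose.

```python
# Hedged sketch: a minimal software license catalog persisted as CSV.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class LicenseRecord:
    product: str         # licensed software product (hypothetical names below)
    vendor: str
    entitlement_id: str  # license key, contract, or entitlement identifier
    site: str            # data center / country where it is deployed
    licensed_units: int  # seats, cores, or sockets covered
    expires: str         # ISO date of renewal or expiry

def write_catalog(path: str, records: list) -> None:
    """Persist the catalog as CSV so changes can be tracked over time."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(LicenseRecord)])
        writer.writeheader()
        for rec in records:
            writer.writerow(asdict(rec))

if __name__ == "__main__":
    write_catalog("license_catalog.csv", [
        LicenseRecord("HypervisorX", "VendorA", "ENT-0001", "US-East", 64, "2016-06-30"),
        LicenseRecord("BackupSuite", "VendorB", "ENT-0002", "DE-Frankfurt", 16, "2016-01-15"),
    ])
```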
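
For the Data Flow item: nothing substitutes for watching the system run in production, but a small connectivity smoke test can at least confirm before cut-over that every required path through the firewalls is open and give a rough feel for round-trip latency. The hostnames and ports below are hypothetical placeholders.

```python
# Hedged sketch: TCP reachability and handshake-latency check for the
# network paths a distributed deployment depends on. It does not measure
# throughput or behavior under load -- that still has to be observed in
# production, as noted above.
import socket
import time

PATHS = [  # hypothetical endpoints; substitute the real integration points
    ("app-gateway.example.internal", 443),
    ("storage-head.example.internal", 8443),
    ("mgmt-proxy.example.internal", 22),
]

def check_path(host: str, port: int, timeout: float = 5.0):
    """Attempt a TCP connection; return (reachable, seconds_elapsed)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

if __name__ == "__main__":
    for host, port in PATHS:
        ok, elapsed = check_path(host, port)
        print(f"{host}:{port} {'reachable' if ok else 'UNREACHABLE'} ({elapsed * 1000:.0f} ms)")
```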

Most importantly, and I cannot stress this enough, you must build a team environment to accomplish the mission. Transforming a concept into a production-ready operational system requires a large number of people working cooperatively to address the hurdles. For the reasons stated above, the solution as designed on paper will hardly ever match perfectly what is deployed in the field. This project relies heavily on a Program Management Organization with representatives from engineering, managed services, field services, product and executive leadership to stay on track. Developing the sense of team within this group is critical to providing the appropriate leadership to the project as a whole. We also formed an Architecture Review Board (ARB) composed of key technical individuals for each aspect of the solution to address major technical issues that emerged throughout the project. In this way we ensured the responses were holistic in nature: not just focused on the specific problem, but also providing alternatives that would work within the scope of the entire project.

More Stories By JP Morgenthal

JP Morgenthal is an internationally renowned thought leader in the areas of IT transformation, modernization, and cloud computing. JP has served in executive roles within major software companies and technology startups. His areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. He routinely advises C-level executives on the best ways to use technology to derive business value. JP is the author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. He holds both a Master’s and a Bachelor’s of Science in Computer Science from Hofstra University.
