Looking at “Real World” Windows Azure Scenarios

Migrating a Classic 3-Tier Application to Windows Azure with Don Noonan from Skylera


I wrote this article about Don Noonan, a Cloud Architect from Skylera, and his overview of the Windows Azure "Infrastructure as a Service" (IaaS) platform. Don and I met at TechEd 2012 in Orlando, where I interviewed him about the newest technologies around Windows Azure. Don has worked at Microsoft and Boeing, and his background spans storage technologies, virtual machines, workloads, and desktop client deployment using cloud services instead of the usual on-premises infrastructure.

We start by discussing the working components of a cloud deployment in a real customer scenario. His current customer had a future mobile application on .NET but wanted to sell more of their current classic products. The customer had many servers to manage, with their IT staff on call to maintain the on-premises infrastructure. Given the new technology, Don's customer decided to look at Windows Azure to scale their applications and workloads on Microsoft's infrastructure cloud services.

So they started with a set of functional groups within IaaS, separating their virtual machines by role, such as Active Directory and other core services. This was a basic implementation of Windows Azure availability sets: at the datacenter level, the platform promises that at least one member of a group of virtual machines will remain available while updates are being made to the Windows Azure platform.

You should use a combination of availability sets and load-balancing endpoints to make sure that your application is always available and running efficiently. For more information about using load-balanced endpoints, see Load Balancing Virtual Machines.
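To illustrate what a load-balanced endpoint looks like in script form, here is a sketch using the classic (Service Management) Azure PowerShell cmdlets from this era — not today's Az module. The service, VM, and load-balancer set names are made-up examples, not taken from the video, and running this requires an Azure subscription.

```powershell
# Illustrative sketch: classic Azure Service Management cmdlets (circa 2012-2013).
# "myservice", "web1", and "WebLB" are example names.

# Repeating the same -LBSetName on each web-tier VM places them behind one
# load-balanced endpoint; the probe decides which instances receive traffic.
Get-AzureVM -ServiceName "myservice" -Name "web1" |
    Add-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 `
        -LBSetName "WebLB" -ProbePort 443 -ProbeProtocol tcp |
    Update-AzureVM
```

Running the same command against the second web VM with the same `-LBSetName` would complete the balanced pair.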

This task includes the following steps, taken from the Windows Azure website:

· Step 1: Create a virtual machine and an availability set

· Step 2: Add a virtual machine to the cloud service and assign it to the availability set during the creation process

· Step 3: (Optional) Create an availability set for previously created virtual machines

· Step 4: (Optional) Add a previously created virtual machine to an availability set
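Steps 3 and 4 — putting previously created virtual machines into an availability set — can also be scripted. A minimal sketch with the classic Service Management cmdlets, assuming example names ("myservice", "sql1", "sql2", "SqlAvSet") that are not from the demo:

```powershell
# Illustrative only: classic Azure Service Management cmdlets.
# Moves two existing VMs into one availability set.
foreach ($vmName in "sql1", "sql2") {
    Get-AzureVM -ServiceName "myservice" -Name $vmName |
        Set-AzureAvailabilitySet -AvailabilitySetName "SqlAvSet" |
        Update-AzureVM   # note: the VM may be restarted when moved into the set
}
```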

Don wanted to make sure that the cloud services and hypervisor host the appropriate virtual machines and that the compute resources remain available. In this project, they built availability sets around their SQL virtual machines, with the goal that the platform understands one of the SQL instances is always highly available. Even with availability sets, you still have to implement failover at the database level, either using a witness or the new AlwaysOn capability in SQL Server 2012. They also have a custom management service specific to their mobile solution, so their customers can look at logs and activities, as well as a custom C++ sync service application used to sync data between the mobile phone application and the backend database.

Don explains that from a Windows Azure Mobile Services context, he likes to group the virtual machines, define what roles they will play, and lay out the networking specifically, such as load balancers and endpoints. In the IT Time Radio interview, Don brings up the Windows Azure portal and shows virtual machines within an availability set: two Domain Controllers paired up and running. He configures the DC availability set that runs Active Directory; AD Domain Services itself has built-in replication, which gives it high availability.

The demo in the video also shows setting up affinity groups, and we explain how they are used in the Windows Azure datacenter. An affinity group keeps your resources close together, like a high-level container in which compute and storage can be provisioned near each other. For instance, since we're here on the East Coast, we would pick East US and build out affinity groups close to where we are physically located. Datacenters are large, so you would first set up an affinity group, and then within it you can build out your storage and virtual networks.
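Creating an affinity group and then placing storage inside it, as described above, looks roughly like this with the classic Service Management cmdlets (the group and account names here are illustrative, not the ones used in the demo):

```powershell
# Illustrative only: classic Azure Service Management cmdlets.
# Create the affinity group first, pinned to a region near your users...
New-AzureAffinityGroup -Name "EastAffinity" -Location "East US"

# ...then create storage inside it, so compute and storage end up
# provisioned close together in the datacenter.
New-AzureStorageAccount -StorageAccountName "mystorageacct" -AffinityGroup "EastAffinity"
```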
For security reasons, within virtual networking you may want to subnet the virtual networks so that services are segregated and only certain ports can talk to each other, which is common within public cloud services. For example, you could define Windows Firewall rules that allow external servers to talk to a machine only on port 443, or allow SQL traffic only from the middle tier to the database tier.
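Inside the guest OS, the kind of Windows Firewall rules described above could be sketched like this using the NetSecurity module that ships with Windows Server 2012. These rules run on the VM itself, and the middle-tier subnet address is hypothetical:

```powershell
# Illustrative Windows Firewall rules (Windows Server 2012 NetSecurity module).
# On the web tier: allow inbound HTTPS from anywhere.
New-NetFirewallRule -DisplayName "Allow HTTPS in" -Direction Inbound `
    -Protocol TCP -LocalPort 443 -Action Allow

# On the database tier: allow SQL Server traffic only from the
# (hypothetical) middle-tier subnet 10.0.2.0/24.
New-NetFirewallRule -DisplayName "SQL from middle tier only" -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -RemoteAddress 10.0.2.0/24 -Action Allow
```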

So the nice part about IaaS is that each customer can have their own management network with their own virtual machine instances, letting you segregate customers and services. I had a chance to explain the overall picture of segregating the workloads: first Directory Services, then Database Services, Management Services, and Sync Services, with an Affinity Group wrapped around the whole thing and the virtual networking around that. We took a look at building this out in the video, and Don showed how to use PowerShell scripts and the Windows Azure IaaS cmdlets that make the actual application work. He likes to break the scripts out into chunks: the core infrastructure and back-end management servers, such as the Active Directory Domain Controllers, and the middleware tier in the front end, in this case SharePoint Server. Similar to how he segmented the network, Don shows the scripts he uses to provision objects in Windows Azure with PowerShell, including how to script out an Affinity Group so that, for performance reasons, the resources are not a football field away from each other. XML drives many of the functions in the portal; you can create these definitions from scratch, or you can find pre-canned management scripts on http://www.windowsazure.com. Don has been working with the Windows Azure team to get more scripts posted after they have had time to test these "real world" proofs of concept.

After showing the foundation, including the networking, affinity groups, and storage, Don shows how to create a virtual machine. He creates the management service layer, which contains two Domain Controllers. Using the same header information, he points the default storage account so that new objects, for instance five virtual machines, land in that same storage account. Don explains which cmdlets perform which functions, such as setting up instance variables for his two domain controllers so they end up in the same availability set. While the DCs are being configured, he explains the beauty of Windows Azure: it has an existing gallery of pre-built virtual machine images, so he builds from the Windows Server 2008 R2 SP1 image, tells it which subnet to use, and then shows the New-AzureVMConfig cmdlet to create the first and second virtual machines, adding both to the same availability set name. If he did not include them in the set, they would be independent and might be serviced at the same time, which would not give you high availability.

The last thing he configures is the cloud service for the management network. This is where you would open ports and configure the connection to the virtual machines so you can service them via RDP. He finishes the overview of the real-world Windows Azure application by covering computing power, administrative privileges, and adding a set of disks to the database tier, such as a 100GB LUN for data and a 50GB LUN for log files. You can add many disks: up to 16 data disks at 1TB apiece, which gives you room for expansion. There are over 2,400 PowerShell cmdlets in Windows Server 2012, and you can get the Windows Azure PowerShell cmdlets from the manage area on http://www.windowsazure.com.
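Putting the pieces of this walkthrough together, the two-DC pattern plus the database-tier data disks might be scripted roughly as follows with the classic Service Management cmdlets. The image name, passwords, subnet names, and service names are placeholders, not the values from the demo:

```powershell
# Illustrative only: classic Azure Service Management cmdlets.
# $img would be a Windows Server 2008 R2 SP1 image name from the gallery;
# it is left as a placeholder here on purpose.
$img = "<gallery image name>"

# Two domain controllers in the same availability set, on the AD subnet.
$dc1 = New-AzureVMConfig -Name "dc1" -InstanceSize Small -ImageName $img -AvailabilitySetName "AdAvSet" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "<password>" |
    Set-AzureSubnet -SubnetNames "Subnet-AD"
$dc2 = New-AzureVMConfig -Name "dc2" -InstanceSize Small -ImageName $img -AvailabilitySetName "AdAvSet" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "<password>" |
    Set-AzureSubnet -SubnetNames "Subnet-AD"

# A database-tier VM with the 100GB data LUN and 50GB log LUN described above.
$sql = New-AzureVMConfig -Name "sql1" -InstanceSize Large -ImageName $img -AvailabilitySetName "SqlAvSet" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "<password>" |
    Set-AzureSubnet -SubnetNames "Subnet-DB" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "Data" -LUN 0 |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 50 -DiskLabel "Logs" -LUN 1

# Create everything inside one cloud service, virtual network, and affinity group.
New-AzureVM -ServiceName "mgmt-svc" -VMs $dc1, $dc2, $sql -VNetName "myvnet" -AffinityGroup "EastAffinity"
```

Because both DC configurations share `-AvailabilitySetName "AdAvSet"`, the platform will not service them at the same time, which is the high-availability guarantee discussed earlier.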
The last piece is the web tier on a newly created, public-facing subnet with two web front ends; he explains that setup at the end of (Part 1 of 5) Real World Azure – Migrating a Classic 3-Tier Application to Windows Azure, IT Time Radio – TechNet Episode.

Catch the previous episodes of “IT Time Radio” below -

TechNet Radio: IT Time – (Part 2 of 5) Real World Azure - Deploying a Custom SharePoint Application to Windows Azure

TechNet Radio: IT Time – (Part 3 of 5) Real World Azure – Moving an All-In-One Server from Co-location to Windows Azure

TechNet Radio: IT Time – (Part 4 of 5) Real World Azure – Implementing RemoteApp for Client / Server Applications on Windows Azure

TechNet Radio: IT Time – (Part 5 of 5) Real World Azure – Migrating a Classic 3-Tier Application to Windows Azure

Try Windows Azure http://aka.ms/try-azure – (the free account requires a credit card, but it is not charged)

Get your Microsoft Trial Products at http://aka.ms/msproducts

In case you missed any of the series here is a list to all of the articles: http://aka.ms/31azure

More Stories By Blain Barton

Raised in Spokane Washington, Blain Barton has been with Microsoft for 20 years and has held many diverse positions. His career started in 1988 as Team Leader in Manufacturing and Distribution, progressed to PSS Team Manager for Visual Basic Product Support, Product Consultant for Microsoft Word Division, OEM Systems Engineer and currently serves as a Senior IT Pro Evangelist.

Blain has organized and delivered a wide array of technical events, presenting at over 1,000 live events and receiving more than six "top presenter" speaking awards. He has traveled around the world three times delivering OEM training sessions on pre-installing Microsoft Windows on new PCs.

He attended Washington State University, graduating with a Bachelor's degree in English/Business and a minor in Computer Science. After college, Blain taught snow skiing professionally in the Cascade Mountains before starting his career with Microsoft. Blain currently resides in Tampa, Florida.
