Microsoft Cloud: Blog Post

Looking at “Real World” Windows Azure Scenarios

Migrating a Classic 3-Tier Application to Windows Azure with Don Noonan from Skylera

I wrote this article about Don Noonan, a Cloud Architect at Skylera, and his overview of the Windows Azure “Infrastructure as a Service” (IaaS) platform. Don and I met at TechEd 2012 in Orlando last year, where I interviewed him about the newest technologies around Windows Azure. Don has experience working at Microsoft and Boeing, and has been working with storage technologies, virtual machines, workloads, and desktop client deployment using cloud services instead of the usual on-premises infrastructure.

We start by discussing the working components of a cloud deployment in a real customer scenario. His current customer had a future mobile application built on .NET, but also wanted to sell more of their existing classic products. The customer had many servers to manage, with IT staff on call to support their on-premises infrastructure. Given the new technology, Don’s customer decided to look at Windows Azure to scale their applications and workloads on Microsoft’s IaaS cloud services.

So they started with a set of functional groups within IaaS. They separated their virtual machines by role, such as Active Directory and other core services. This was a basic implementation of Windows Azure availability sets: at the datacenter level, there is a promise that at least one member of a group of virtual machines will remain available while updates are being made to the Windows Azure platform.

You should use a combination of availability sets and load-balanced endpoints to make sure that your application is always available and running efficiently. For more information about using load-balanced endpoints, see Load Balancing Virtual Machines.

This task includes the following steps, from the Windows Azure website:

· Step 1: Create a virtual machine and an availability set

· Step 2: Add a virtual machine to the cloud service and assign it to the availability set during the creation process

· Step 3: (Optional) Create an availability set for previously created virtual machines

· Step 4: (Optional) Add a previously created virtual machine to an availability set
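As a rough sketch of steps 1 and 2, assuming the classic Azure Service Management PowerShell cmdlets (the service name, availability set name, credentials, and image name here are placeholders — list real gallery images with Get-AzureVMImage, and note that parameter names varied across early module versions):

```powershell
# Placeholder gallery image name -- substitute a real one from Get-AzureVMImage
$image = "Windows-Server-2008-R2-SP1-placeholder"

# Step 1: create the first VM and assign it to an availability set
New-AzureVMConfig -Name "DC1" -InstanceSize Small -ImageName $image `
    -AvailabilitySetName "DCAvSet" |
  Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "P@ssw0rd!" |
  New-AzureVM -ServiceName "mgmt-svc" -Location "East US"

# Step 2: add a second VM to the same cloud service and the same availability set
New-AzureVMConfig -Name "DC2" -InstanceSize Small -ImageName $image `
    -AvailabilitySetName "DCAvSet" |
  Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "P@ssw0rd!" |
  New-AzureVM -ServiceName "mgmt-svc"
```

Because both VMs name the same availability set, the platform will not take them down at the same time during host updates.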

Don wanted to make sure that the cloud services and hypervisor have the appropriate virtual machines and that the compute resources will remain there. In this project, they had availability sets around their SQL virtual machines, and the goal was that the system understands that one of the SQL instances is always highly available. Even with availability sets, you still have to implement failover at the database level, either using a witness or the new AlwaysOn capability in SQL Server 2012. They also have a custom management service specific to their mobile solution, so their customers can look at logs and activities, as well as a custom C++ sync service application used to sync data between the mobile phone application and the backend database.

Don explains that, from a Windows Azure Mobile Services context, he likes to group the virtual machines, define what roles they will be playing, and lay out how the networking will work, such as load balancers and endpoints. In the IT Time Radio interview, Don shows the Windows Azure portal interface with virtual machines in an availability set: two Domain Controllers paired up and running. Don configures the DC availability set that runs Active Directory; AD Domain Services itself has built-in replication, giving it high-availability capabilities.

The demo in the video shows setting up affinity groups, and we explain how they are used in the Windows Azure datacenter. An affinity group keeps your resources close together, like a high-level container in which compute and storage can be provisioned near each other. So for instance, since we’re here on the East Coast, we would pick East US and build out affinity groups close to where we are physically located. Datacenters are large, so you would first set up an affinity group, and then within the affinity group you can build out your storage and virtual networks.
For security reasons, within virtual networking you may want to subnet out the virtual networks so that services are segregated and only certain ports can talk to each other, which is common within public cloud services. For example, you could have Windows firewall rules that say “I only want external servers to talk to me on port 443,” or only allow SQL traffic to go from the middle tier to the database tier.
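As an illustrative sketch of guest-level rules like those, assuming the NetSecurity cmdlets available on Windows Server 2012 (the rule names and the middle-tier subnet address are made up for this example):

```powershell
# Allow only HTTPS (443) in from external clients (rule name is a placeholder)
New-NetFirewallRule -DisplayName "Allow-External-443" -Direction Inbound `
    -Protocol TCP -LocalPort 443 -Action Allow

# Allow SQL Server traffic (1433) only from the middle-tier subnet (address is hypothetical)
New-NetFirewallRule -DisplayName "Allow-MidTier-SQL" -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -RemoteAddress "10.0.2.0/24" -Action Allow
```

On Windows Server 2008 R2 guests, the same rules would be expressed with netsh advfirewall instead.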

The nice part about IaaS is that each customer can have their own management network with their own instances of virtual machines, so you can segregate customers and services. I had a chance to explain the overall picture of segregating the workloads, first discussing Directory Services, Database Services, Management Services, and Sync Services, then wrapping the whole thing in an affinity group, and around that the virtual networking. We took a look at building this out in the video, and Don shows how to use PowerShell scripts and the Windows Azure IaaS cmdlets that make the actual application work. What he likes to do is break them out into chunks: core infrastructure and back-end management servers, such as the Active Directory Domain Controllers, and the middleware tier in the front end, in this case SharePoint Server. This mirrors how he segmented the network, and Don shows the scripts he uses to provision objects in Windows Azure with PowerShell. He shows how to script out an affinity group so that, for performance reasons, the resources are not a football field away from each other. XML can drive many of the functions within the portal; you can create these scripts from scratch, or you can find pre-canned management scripts on http://www.windowsazure.com. Don has been working with the Windows Azure team to get more scripts published after they have had time to test these “real world” proofs of concept.
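A minimal sketch of scripting that foundation, again assuming the classic Service Management cmdlets (the group, subscription, and storage account names are placeholders, and the storage-account parameter name varied across module versions):

```powershell
# Create an affinity group in the East US region so compute and storage stay close together
New-AzureAffinityGroup -Name "EastAG" -Location "East US"

# Create a storage account inside the affinity group
New-AzureStorageAccount -StorageAccountName "realworldstorage01" -AffinityGroup "EastAG"

# Make it the default storage account for new VM disks in this subscription
Set-AzureSubscription -SubscriptionName "MySubscription" `
    -CurrentStorageAccountName "realworldstorage01"
```

With the affinity group in place, the virtual network and storage built inside it are provisioned near each other in the datacenter.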

With the foundation in place, including the networking, affinity groups, and storage, Don then shows how to create a virtual machine. He creates the management service layer, which contains two Domain Controllers. Using the same header information, he points new objects at the subscription’s default storage account, so that, for instance, five virtual machines all land in that storage account. Don explains which cmdlets perform which functions, such as setting up instance variables for his two domain controllers so they belong to the same availability set. As the DCs are being configured, he explains the beauty of Windows Azure: it has an existing gallery of pre-built virtual machine images, so he builds from the Windows Server 2008 R2 SP1 image, tells it which subnet to use, and then uses the New-AzureVMConfig cmdlet to create the first and second virtual machines, adding them to the same availability set name. If he did not include them in the set, they would be independent and might therefore be serviced at the same time, which would not give you high availability. The last thing he configures is the cloud service for the management network. This is where you would open ports and configure the connection to the virtual machines so you can service them via RDP. He finishes the overview of the real-world Windows Azure application by covering computing power, administrative privileges, and adding a set of disks to the database tier, such as a 100GB LUN for data and a 50GB LUN for log files. You can add many disks, up to 16 data disks at 1TB apiece, which gives you room for expansion. There are over 2,400 PowerShell cmdlets in Windows Server 2012, and you can get the Windows Azure PowerShell cmdlets from the management area on http://www.windowsazure.com.
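A hedged sketch of that database-tier build-out, assuming the classic cmdlets (the subnet, availability set, service, network, and credential values are placeholders, and $image stands in for a real gallery image name):

```powershell
# Configure a SQL VM in its own availability set, on the database subnet,
# with a 100GB data disk and a 50GB log disk attached as separate LUNs
$sql1 = New-AzureVMConfig -Name "SQL1" -InstanceSize Large -ImageName $image `
        -AvailabilitySetName "SQLAvSet" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "P@ssw0rd!" |
    Set-AzureSubnet -SubnetNames "DBSubnet" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "Data" -LUN 0 |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 50 -DiskLabel "Logs" -LUN 1

# Create the VM in a new cloud service, inside the affinity group and virtual network
New-AzureVM -ServiceName "db-svc" -VMs $sql1 -VNetName "CorpNet" -AffinityGroup "EastAG"
```

Keeping data and log files on separate disks mirrors the on-premises SQL Server practice the article describes.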
The last piece is the web tier on a newly created, public-facing subnet with two web front ends; he explains that setup at the end of (Part 1 of 5) Real World Azure – Migrating a Classic 3-Tier Application to Windows Azure, an IT Time Radio – TechNet episode.
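As an illustrative sketch, a load-balanced HTTPS endpoint shared by those two web front ends (the endpoint, load-balancer set, service, and VM names are made up for this example) might be added like this:

```powershell
# Add the same load-balanced endpoint to each web front end
foreach ($vmName in "WEB1", "WEB2") {
    Get-AzureVM -ServiceName "web-svc" -Name $vmName |
        Add-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 `
            -LBSetName "WebLB" -ProbePort 443 -ProbeProtocol tcp |
        Update-AzureVM
}
```

Because both endpoints share the same load-balancer set name, Windows Azure distributes incoming port 443 traffic across both VMs, the combination of availability set plus load-balanced endpoint the article recommends.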

Catch the previous episodes of “IT Time Radio” below -

TechNet Radio: IT Time – (Part 2 of 5) Real World Azure - Deploying a Custom SharePoint Application to Windows Azure

TechNet Radio: IT Time – (Part 3 of 5) Real World Azure – Moving an All-In-One Server from Co-location to Windows Azure

TechNet Radio: IT Time – (Part 4 of 5) Real World Azure – Implementing RemoteApp for Client / Server Applications on Windows Azure

TechNet Radio: IT Time – (Part 5 of 5) Real World Azure – Migrating a Classic 3-Tier Application to Windows Azure

Try Windows Azure http://aka.ms/try-azure – (the free account requires a credit card, but it is not charged)

Get your Microsoft Trial Products at http://aka.ms/msproducts

In case you missed any of the series here is a list to all of the articles: http://aka.ms/31azure

More Stories By Blain Barton

Raised in Spokane, Washington, Blain Barton has been with Microsoft for 20 years and has held many diverse positions. His career started in 1988 as a Team Leader in Manufacturing and Distribution, progressed to PSS Team Manager for Visual Basic Product Support, Product Consultant for the Microsoft Word division, and OEM Systems Engineer; he currently serves as a Senior IT Pro Evangelist.

Blain has organized and delivered a wide array of technical events, presenting at over 1,000 live events, and has received over six “top-presenter” speaking awards. He has traveled around the world three times delivering OEM training sessions on pre-installing Microsoft Windows on new PCs.

He attended Washington State University, graduating with a Bachelor’s Degree in English/Business and a Minor in Computer Science. After college, Blain taught snow skiing professionally in the Cascade Mountains before starting his career with Microsoft. Blain currently resides in Tampa, Florida.
