Understanding Remote Desktop Services (RDS)

The backbone of Microsoft's VDI solution

In Windows Server 2008 R2 (WS2008R2), Terminal Services (TS) was expanded and renamed Remote Desktop Services (RDS). RDS is the backbone of Microsoft's VDI solutions, and in Windows Server 2012 it is further enhanced with a scenario-based configuration wizard. Still, the concept and architecture remain very much the same as in WS2008R2. The new and enhanced architecture takes advantage of virtualization and makes remote access a much more flexible solution with new deployment scenarios. To realize the capabilities of RDS, it is essential to understand the functions of the key architectural components and how they complement one another to process an RDS request. There are many new terms and acronyms to get familiar with in the context of RDS. For the remainder of this post, note that RDS refers to the server platform of WS2008R2 and later, while TS refers to WS2008.

There are five main architectural components in RDS, and all require an RDS licensing server. Each component includes a set of features designed to achieve particular functions. Together, the five form a framework for accessing Terminal Services applications, remote desktops, and virtual desktops. Essentially, WS2008R2 offers a set of building blocks with the essential functions for constructing an enterprise remote access infrastructure.

To start, a user accesses an RDS webpage by specifying a URL where RDS resources are published. This interface, provided by Remote Desktop Web Access (RDWA) and configured with a local IIS with SSL, is the web access point to RemoteApp and VDI. The URL stays consistent regardless of how resources are organized, composed, and published from multiple RDS session hosts behind the scenes. By default, RDS publishes resources at https://the-FQDN-of-a-RDWA-server/rdweb, and this URL is the only information a system administrator needs to give a user for accessing authorized resources via RDS. A user is authenticated with his or her AD credentials when accessing the URL, and the RemoteApp programs presented by this URL are trimmed with an access control list. Namely, an authenticated user will see, and be able to access, only authorized RemoteApp programs.

Remote Desktop Gateway (RDG) is optional and functions very much the same as it does in TS. An RDG is placed at the edge of a corporate network to filter incoming RDS requests by referencing criteria defined in a designated Network Policy Server (NPS). With a server certificate, RDG offers secure remote access to the RDS infrastructure. As far as a system administrator is concerned, RDG is the boundary of an RDS network. There are two policies in NPS relevant to an associated RDG (a minimal sketch of how the two gate a connection follows the list):

  • One is the Connection Authorization Policy, or CAP. I call it a user authorization list: it specifies who can access an associated RDG.
  • The other is the Resource Authorization Policy, or RAP. In essence, this is a resource list specifying which devices a CAP user can connect to via an associated RDG.
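
To make the relationship between the two policies concrete, here is a minimal Python sketch of how an RDG conceptually checks an incoming request against CAP (who may connect) and RAP (what they may reach). This is only an illustration; NPS performs the evaluation internally, and the class, group, and host names below are hypothetical.

```python
# Hypothetical illustration of RD Gateway policy evaluation; NPS does this
# internally -- there is no such Python API in Windows Server.

from dataclasses import dataclass


@dataclass
class ConnectionRequest:
    user: str          # AD account making the request
    user_groups: set   # AD groups the user belongs to
    target_host: str   # RDSH/RDVH the user wants to reach


# CAP: which users (by group) may connect through this RD Gateway.
CAP_ALLOWED_GROUPS = {"Domain Users", "RDS-RemoteWorkers"}

# RAP: which internal devices a CAP-authorized user may reach via the gateway.
RAP_ALLOWED_HOSTS = {"rdsh01.contoso.com", "rdsh02.contoso.com"}


def gateway_allows(request: ConnectionRequest) -> bool:
    """Return True only if both CAP (who) and RAP (what) authorize the request."""
    passes_cap = bool(request.user_groups & CAP_ALLOWED_GROUPS)
    passes_rap = request.target_host in RAP_ALLOWED_HOSTS
    return passes_cap and passes_rap


print(gateway_allows(ConnectionRequest(
    user="alice", user_groups={"RDS-RemoteWorkers"},
    target_host="rdsh01.contoso.com")))   # True: CAP and RAP both satisfied
```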

In RDS, applications are installed and published on a Remote Desktop Session Host (RDSH), similar to a TS Session Host, or simply a Terminal Server, in a TS solution. An RDSH loads applications, crunches numbers, and produces results. It is our trusted and beloved workhorse in an RDS solution. Digital signing can be easily enabled on an RDSH with a certificate. Multiple RDSHs can be deployed along with a load-balancing technology, which requires every RDSH in a load-balancing group to be configured identically with the same applications.

A noticeable enhancement in RDSH (as compared with a TS Session Host) is the ability to trim the presence of a published application based on the access control list (ACL) of the application. An authenticated user will see, and hence have access to, only the published applications for which the user is authorized in the ACL. By default, the Everyone group is included in a published application's ACL, so all connected users have access to that published application.
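
As an illustration of this trimming behavior, the following Python sketch shows how a list of published RemoteApp programs might be reduced to only those a signed-in user is authorized to see. RDWA and RDSH do this filtering themselves; the application names, groups, and function below are hypothetical.

```python
# Hypothetical sketch of RemoteApp "trimming": a user sees only the published
# programs whose ACL includes that user (or a group the user is in).

published_apps = {
    # app name -> groups allowed by the app's ACL ("Everyone" is the default)
    "Calculator": {"Everyone"},
    "Accounting Suite": {"Finance"},
    "HR Portal": {"HR", "Managers"},
}


def visible_apps(user_groups: set) -> list:
    """Return the RemoteApp programs this user is authorized to see and launch."""
    effective_groups = user_groups | {"Everyone"}  # every authenticated user is in Everyone
    return [app for app, acl in published_apps.items() if acl & effective_groups]


print(visible_apps({"Finance"}))   # ['Calculator', 'Accounting Suite']
print(visible_apps({"Sales"}))     # ['Calculator'] -- only the Everyone-scoped app
```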

Remote Desktop Virtualization Host (RDVH) is a new component that serves requests for virtual desktops running in virtual machines, or VMs. An RDVH server is a Hyper-V-based host, for instance a Windows Server with the Hyper-V server role enabled. When serving a VM-based request, an associated RDVH automatically starts the intended VM if the VM is not already running, and a user is always prompted for credentials when accessing a virtual desktop. However, an RDVH does not directly accept connection requests; it uses a designated RDSH as a "redirector" for serving VM-based requests. The pairing of an RDVH and its redirector is defined in Remote Desktop Connection Broker (RDCB) when adding the RDVH as a resource.
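
The redirection flow can be pictured with a small Python sketch: the redirector asks the RDVH to make sure the target VM is running before the client is handed off to it. This is purely conceptual; the logic lives inside the RDSH/RDVH roles, and the VM and function names below are hypothetical.

```python
# Conceptual sketch of a VM-based request: the redirector RDSH ensures the
# target VM is running on the RDVH before the client is pointed at it.

import time

vm_state = {"win7-desktop-042": "saved"}   # hypothetical Hyper-V VM inventory


def ensure_vm_running(vm_name: str) -> None:
    """RDVH starts (or resumes) the VM if it is not already running."""
    if vm_state.get(vm_name) != "running":
        print(f"RDVH: starting {vm_name} ...")
        time.sleep(0.1)                    # stand-in for the actual VM start-up
        vm_state[vm_name] = "running"


def redirect_to_virtual_desktop(user: str, vm_name: str) -> str:
    """Redirector RDSH hands the client the running VM; the user still signs in."""
    ensure_vm_running(vm_name)
    return f"{user}: connect to {vm_name} (credentials will be prompted)"


print(redirect_to_virtual_desktop("alice", "win7-desktop-042"))
```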

Remote Desktop Connection Broker (RDCB), an expansion of the Terminal Services Session Broker in TS, provides a unified experience for setting up user access to traditional TS applications and virtual machine (VM)-based virtual desktops. Here, a virtual desktop can run in either a designated VM or a VM dynamically picked, based on load balancing, from a defined VM pool. A system administrator uses the RDCB console, called Remote Desktop Connection Manager, to include RDSHs, TS Servers, and RDVHs so that the applications published by the RDSHs and TS Servers, and the VMs running on RDVHs, can later be composed and presented to users at a consistent URL by RDWA. And with this consistent URL, authenticated users can access authorized RemoteApp programs and virtual desktops.
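
A rough Python sketch helps show the difference between a designated (personal) VM and a dynamically picked pooled VM. RDCB makes this decision internally; the assignment table, pool, and function below are hypothetical.

```python
# Conceptual sketch of RD Connection Broker picking a virtual desktop:
# a user either owns a designated (personal) VM or gets the least-loaded VM
# from a defined pool.

personal_vms = {"bob": "win7-personal-bob"}            # designated assignments
vm_pool_sessions = {"pool-vm-01": 3, "pool-vm-02": 1}  # VM -> current session count


def broker_virtual_desktop(user: str) -> str:
    """Return the VM the broker connects this user to."""
    if user in personal_vms:
        return personal_vms[user]                      # designated VM
    # Otherwise load-balance across the pool: pick the least-loaded VM.
    vm = min(vm_pool_sessions, key=vm_pool_sessions.get)
    vm_pool_sessions[vm] += 1
    return vm


print(broker_virtual_desktop("bob"))     # win7-personal-bob
print(broker_virtual_desktop("carol"))   # pool-vm-02 (least loaded)
```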

A Remote Desktop (RD) client gets connection information from the RDWA server in an RDS solution. If an RD client is outside of the corporate network, the client connects through an RDG. If an RD client is internal, it can connect directly to the intended RDSH or RDVH once RDCB provides the connection information. In both cases, RDCB plays the central role in making sure a client gets connected to the correct resource. With certificates, a system administrator can configure digital signing and single sign-on among RDS components to provide a great user experience with high security.
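
The routing decision itself is simple enough to express as a tiny Python sketch. In practice the RDP client follows the connection settings supplied via RDWA/RDCB; the gateway and host names below are hypothetical.

```python
# Conceptual sketch of where an RD client sends its connection:
# external clients traverse the RD Gateway; internal clients go straight to
# the RDSH/RDVH that RDCB resolved for them.

def connection_path(client_is_external: bool, target_host: str,
                    gateway: str = "rdg.contoso.com") -> str:
    if client_is_external:
        return f"client -> {gateway} (RDG) -> {target_host}"
    return f"client -> {target_host}"


print(connection_path(True,  "rdsh01.contoso.com"))  # via the gateway
print(connection_path(False, "rdsh01.contoso.com"))  # direct internal connection
```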

Conceptually, RDCB is the chief intelligence and operations officer of an RDS solution: it knows which resource is where, whom to talk to, and what to do with an RDS request. Before a logical connection can be established between a client and a target RDSH or RDVH, RDCB acts as a go-between, passing and forwarding pertinent information to and from the associated parties while serving an RDS request. From a 50,000-foot view, a remote client uses RDWA/RDG to obtain access to a target RDSH or RDVH, while RDCB connects the client to a session on the target RDSH, or to an intended VM configured on a target RDVH. An RDS architecture poster presenting how all the pieces flow together, along with a number of free e-books on WS2008R2 Active Directory, RDS, and other components, is available at http://aka.ms/free.

The configuration in WS2008R2 is a bit challenging, with many details easily overlooked. Windows Server 2012 greatly improves the user experience by facilitating the configuration process with a scenario-based wizard. Stay tuned; I will discuss this further in an upcoming blog post series.

Recommended additional reading on RDS/VDI/App-V, cloud essentials, and private cloud

[This is a cross-posting from http://blogs.technet.com/yungchou.]

More Stories By Yung Chou

Yung Chou is a Technology Evangelist at Microsoft. Within the company, he has served customers in the areas of support account management, technical support, technical sales, and evangelism. Prior to Microsoft, he built expertise in system programming, application development, consulting services, and IT management. His recent technical focus has been on virtualization and cloud computing, with strong interests in hybrid cloud and emerging enterprise computing architecture. He is a frequent speaker at Microsoft conferences, roadshows, and TechNet events.
