The Myth of .NET Purity

There is an increasing amount of discussion around the topic of ".NET Purity" in development circles. When selling an application, the question often arises: "Is your application 100% .NET?" or "How much of your application is .NET?" There is an implied qualitative judgment behind these questions, and it is usually pejorative.

The implication is that an application that is entirely written in .NET, presumably without any interoperation with COM or direct calls to the Win32 API, is superior to an application that is a combination of technologies.

Certainly .NET represents a fantastic leap in developer productivity and puts a clean, consistent face on the services that the Windows Platform provides. For many years the set of interfaces provided by the Windows OS Platform - collectively known as the Windows SDK - has been exposed to developers as exported "C"-style functions in DLLs and, in recent years, via the Component Object Model (COM).

Common Language Runtime or Virtual Machine?
Often the .NET Common Language Runtime, or CLR, is directly compared to the Java Virtual Machine. Initially, there are many clear parallels: both are "managed" environments that provide a component container, both consume a "partially chewed" intermediate language, both provide low-level services like garbage collection and threading conveniences.

While these parallels are superficially compelling, these two implementations differ fundamentally in philosophy. Comparing the CLR to the VM is reasonable only to a certain point - their architectural goals are ultimately different.

Sun promotes a marketing program called 100% Pure Java, which is certainly appropriate if code portability and underlying operating system transparency are desirable endpoints. However, many third-party Java Application Servers create a competitive advantage by judicious use of "C" function calls directly down (via the Java Native Interface, or JNI) into their host Operating System's value-added services - services that are not exposed by the Java Application Platform (the Java Class Library). Calling into the core platform is the only way to make use of base functionality that is only presented via a native interface!

The Java VM is truly a "virtual machine" whose ultimate goal is to abstract (virtualize) away the underlying Operating System and provide an idealized (not necessarily ideal, but idealized) environment for development. The Java Virtual Machine is also intimately united with its API - the Java Application Platform - whose services are provided by the VM implementation. Regardless of where you run your compiled Java code, you will run within the context of the Virtual Machine and ostensibly link with the supplied Java Platform APIs.

The .NET Common Language Runtime is aptly named, as it is used more as a language runtime than as a virtual machine. While it successfully abstracts away aspects of the underlying hardware through its use of an Intermediate Language, when the CLR is combined with the .NET Framework Library of APIs it is married to the underlying platform, which is Windows. The CLR provides all the facilities of the Windows Platform to any .NET-enabled language.

.NET Framework Library
The Windows Platform has dozens and dozens of high-level system services that are exposed by thousands of APIs. This large library of functionality encompasses various levels of richness. A low-level API may open a file off a disk, while a high-level one might play an audio file. The designers of the .NET Framework wanted to create a consistent object-oriented face on a rich legacy of platform functionality. The CLR and .NET Framework work together to expose the capabilities within the Windows Platform, including those that may have previously been hidden away in difficult or little known APIs.

While the CLR provides a new paradigm for application development, it does not close the door on existing libraries. The CLR provides interop services to the developer, but the biggest consumers of these services are the .NET Class Libraries themselves, which unlock existing Windows Platform abilities via a .NET API!

For example, when sending email using the .NET Framework Library class System.Web.Mail.SmtpMail, the Class Library uses a helper class that wraps the existing CDO (Collaboration Data Objects) COM library. This is just one example where a .NET Library developer chose to rely on a production-ready, reliable existing library rather than write something from scratch. This example and dozens of others within the Library notwithstanding, the Common Language Runtime still needs, at some point, to work with the Windows internal APIs.
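
To make this concrete, here is a minimal sketch of what that looks like from the developer's side - assuming .NET Framework 1.x, a reference to System.Web.dll, an SMTP relay on the local machine, and placeholder addresses. The single managed call is serviced, under the covers, by the existing CDO COM library.

using System.Web.Mail; // .NET Framework 1.x namespace that wraps CDO

class MailExample
{
    static void Main()
    {
        // Assumption: an SMTP relay is listening on the local machine.
        SmtpMail.SmtpServer = "localhost";

        // One managed call; no SMTP stack was rewritten in .NET -
        // the work is delegated to the existing COM library via interop.
        SmtpMail.Send("from@example.com", "to@example.com",
                      "Hello from .NET", "Sent via System.Web.Mail.SmtpMail");
    }
}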

If Microsoft were to truly virtualize the machine, they would have marginalized their investment in the Windows platform. Certainly it behooved the designers to make transitions to existing libraries as painless as possible. They have enabled this with NET » COM Interop via both Runtime- and COM-Callable Wrappers, the ability to tap into standard Win32 Platform APIs via a technology called P/Invoke (short for Platform Invoke) as well as other options. When writing code that is hosted in the CLR the vast resources of platform are just sitting under the developer - the runtime is transparent rather than virtual! This marks a fundamentally different view of the platform that other virtualizing machine implementations.
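
As a brief sketch of what P/Invoke looks like in practice, the fragment below declares the well-known MessageBox export from user32.dll and calls it from C#; error handling is omitted for brevity.

using System;
using System.Runtime.InteropServices;

class PInvokeExample
{
    // Declare a Win32 API exported from user32.dll so the CLR can marshal the call.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        // The runtime loads user32.dll, marshals the strings, and calls straight
        // into the platform - the machine underneath is transparent, not virtual.
        MessageBox(IntPtr.Zero, "Hello from managed code", ".NET and Win32", 0);
    }
}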

While creating a fresh application using only .NET may offer some benefits in the arenas of deployment or marketing, these benefits may not be realized when weighed against the cost of rewriting non-.NET components in .NET when those legacy components could have been leveraged. A "pure" .NET solution can only make use of either those pieces of functionality that can be achieved entirely within the runtime, or those functions that have been exposed by the Base Class Library - which itself uses COM Interop and P/Invoke!

The .NET Framework Library itself isn't "pure .NET," as it takes every opportunity to take full advantage of the underlying platform primitives. Seen in this light, the concept of .NET Purity is rendered specious. The .NET Framework is the best way to create business components on the Windows Platform, but any application built with it is lifted only as high as the underlying Windows OS services.

"Hybrid" Solutions provide Real Solutions
Many large existing applications are written in Visual C++ and COM. They are written "close to the metal" to take full advantage of native Windows multi-threading and fine-grained memory management. However, new business components may also be written in a .NET language such as C# or VB.NET. The existing system then hosts the .NET Common Language Runtime within its process space and interoperates with it. The interface is usually COM interop, which incurs only a minimal overhead of between 10 and 40 processor instructions per in-process call.
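
As a rough sketch of the managed half of such a hybrid, the C# class below is exposed to COM so that an existing C++ host could create it through a COM-Callable Wrapper. The interface name, class name, and GUIDs are purely illustrative, and registration of the assembly (for example, with regasm) is assumed.

using System;
using System.Runtime.InteropServices;

// Illustrative names; the GUIDs are placeholders and must be unique in practice.
[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555555")]
public interface IOrderCalculator
{
    double CalculateTotal(double subtotal, double taxRate);
}

// Once the assembly is registered, the legacy C++/COM application can
// CoCreateInstance this component; the CLR supplies the COM-Callable Wrapper.
[ComVisible(true)]
[Guid("66666666-7777-8888-9999-000000000000")]
[ClassInterface(ClassInterfaceType.None)]
public class OrderCalculator : IOrderCalculator
{
    // New business logic lives in managed code, while the legacy host
    // continues to provide process hosting and its existing services.
    public double CalculateTotal(double subtotal, double taxRate)
    {
        return subtotal * (1.0 + taxRate);
    }
}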

.NET components hosted within the legacy application can take advantage of that application's existing services. Lower-level developer features such as memory management, object lifetime, and object orientation are provided by the CLR, while higher-level, vertical-specific business functionality is exposed via the legacy application.

This "hybrid" can provide a best-of-breed solution on the Windows Platform exploiting both the highly performant low-level APIs via C++ and the highly componentized and object oriented features of the .NET Framework. These solutions can work very successfully while companies migrate their existing code bases to the .NET Framework.

More Stories By Scott Hanselman

Scott Hanselman will be starting a new job at Microsoft as a senior program manager in the developer division. His blog is at http://www.hanselman.com.

Most Recent Comments
Tim Huckaby 07/25/03 07:07:00 PM EDT

Your comments on System.DirectoryServices are interesting. ADSI is simply a COM wrapper, so technically it's a "wrapped wrapper" of the native LDAP API which, of course, is C++ only. That being said, the DirectoryEntry class you are referring to is a "wrapped, wrapped wrapper." Ultimately, the big disappointment of the .NET Framework 1.1, and the hope for 2.0, is more native framework classes.

Derek Ferguson 07/18/03 10:12:00 AM EDT

I would never suggest that COM Interop should be gotten rid of or is in any way, shape, or form "evil." However, as a developer who spends more than 90% of my coding time working with the System.DirectoryServices and System.Management namespaces, let me tell you -- MS could have saved developers a lot of grief by having written some managed protocol handlers here, rather than just wrapping up the old, troubled APIs.

As one example of this, the DirectoryEntry class in System.DirectoryServices allows you to pass a username and password to its constructor. However, when you use the WinNT ADSI provider, these parameters are sometimes ignored. Why is this? Because of a limitation in the existing APIs that were wrapped!

Similar problems abound in the System.Management namespace -- where I recently managed to prove that Impersonation (a native API) interacts differently with EnablePrivileges (a wrapped API) under ASP.NET than it does under the Console. In working through this with MS, I have been passed around to 10 different people in their support infrastructure. Why? Because the old, obscure APIs that have been wrapped are a "dark art" known only to a few individuals within the Redmond infrastructure.

Once again: it would've been better to have recreated the whole thing in C#.

Dean Guida 07/25/03 04:00:00 PM EDT

There is a lot to be said for purity for purity's sake. I have never subscribed to this type of thinking. At the end of the day we all want to build dependable software that solves the business problem at hand. Everything should always be taken in the context of a solution, with a sense of practicality. I think most of the software development community has this maturity.

Patrick Hynds 07/17/03 10:16:00 PM EDT

I think this article is right on, but I felt that we should confront the issue of why this kind of rebuttal is needed (and it is needed). We find people who are earnest only insofar as they can justify their existence; therefore they brand something heresy as soon as they abandon the practice themselves. Let's assume that COM interop were a horrible waste of resources - it still wouldn't justify discarding, in such a wholesale manner, a tool and the wealth of existing functionality the last generation always holds. I have seen people in ASP circles a while back declare that "Session State is bad." Like hybrid applications, Session State in ASP is a tool: use it or don't, but needing a hammer doesn't make the saw evil.
