The Myth of .NET Purity

There is an increasing amount of discussion in development circles around the topic of ".NET Purity." When selling an application, the question often arises: "Is your application 100% .NET?" or "How much of your application is .NET?" There is an implied qualitative judgment behind these questions, and it's usually pejorative.

The implication is that an application that is entirely written in .NET, presumably without any interoperation with COM or direct calls to the Win32 API, is superior to an application that is a combination of technologies.

Certainly .NET represents a fantastic leap in developer productivity and puts a clean, consistent face on the services that the Windows Platform provides. For many years the set of interfaces provided by the Windows OS Platform - collectively known as the Windows SDK - has been exposed to developers as exported "C"-style functions in DLLs and, in recent years, via the Component Object Model (COM).

Common Language Runtime or Virtual Machine?
Often the .NET Common Language Runtime, or CLR, is directly compared to the Java Virtual Machine. At first glance there are many clear parallels: both are "managed" environments that provide a component container, both consume a "partially chewed" intermediate language, and both provide low-level services like garbage collection and threading conveniences.
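
For instance, a trivial C# method is compiled not to native code but to IL; the sketch below shows approximate ildasm.exe output (the exact listing varies with compiler and build settings):

// C# source:
static int Add(int a, int b)
{
    return a + b;
}

// Approximate IL, as viewed with ildasm.exe:
// .method private hidebysig static int32 Add(int32 a, int32 b) cil managed
// {
//   ldarg.0
//   ldarg.1
//   add
//   ret
// }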

While these parallels are superficially compelling, these two implementations differ fundamentally in philosophy. Comparing the CLR to the VM is reasonable only to a certain point - their architectural goals are ultimately different.

Sun promotes a marketing program called 100% Pure Java, which is certainly appropriate if code portability and underlying operating system transparency are the desired endpoints. However, many third-party Java Application Servers create a competitive advantage by judicious use of "C" function calls directly down (via the Java Native Interface, or JNI) into their host Operating System's value-added services - services that are not exposed by the Java Application Platform (the Java Class Library). Calling into the core platform is the only way to make use of base functionality that is presented solely via a native interface!

The Java VM is truly a "virtual machine" whose ultimate goal is to abstract (virtualize) away the underlying Operating System and provide an idealized (not necessarily ideal, but idealized) environment for development. The Java Virtual Machine is also intimately united with its API - the Java Application Platform - whose services are provided by the VM implementation. Regardless of where you run your compiled Java code, you will run within the context of the Virtual Machine and ostensibly link with the supplied Java Platform APIs.

The .NET Common Language Runtime is well named, as it is used more as a language runtime than as a virtual machine. While it successfully abstracts away aspects of the underlying hardware through its use of an Intermediate Language, when the CLR is combined with the .NET Framework Library of APIs it is married to the underlying platform, which is Windows. The CLR provides all the facilities of the Windows Platform to any .NET-enabled language.

.NET Framework Library
The Windows Platform has dozens and dozens of high-level system services that are exposed by thousands of APIs. This large library of functionality encompasses various levels of richness. A low-level API may open a file off a disk, while a high-level one might play an audio file. The designers of the .NET Framework wanted to create a consistent object-oriented face on a rich legacy of platform functionality. The CLR and .NET Framework work together to expose the capabilities within the Windows Platform, including those that may have previously been hidden away in difficult or little known APIs.
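
As a hedged illustration of that range (a sketch assuming .NET 1.x and hypothetical file paths): opening a file is covered directly by the Framework Library, while playing a sound still meant dropping down to the platform's winmm.dll via P/Invoke - a managed wrapper arrived only in later Framework versions.

using System;
using System.IO;
using System.Runtime.InteropServices;

class PlatformDemo
{
    // PlaySound is exported by winmm.dll; this declaration mirrors the native signature.
    [DllImport("winmm.dll", CharSet = CharSet.Auto)]
    static extern bool PlaySound(string sound, IntPtr hmod, uint flags);

    const uint SND_FILENAME = 0x00020000; // interpret 'sound' as a file name

    static void Main()
    {
        // A low-level platform service, exposed through a clean managed face.
        using (FileStream fs = File.OpenRead(@"C:\example\data.txt")) // hypothetical path
        {
            Console.WriteLine("First byte: " + fs.ReadByte());
        }

        // A higher-level service, reached by calling the platform directly.
        PlaySound(@"C:\example\chime.wav", IntPtr.Zero, SND_FILENAME); // hypothetical path
    }
}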

While the CLR provides a new paradigm for application development, it does not close the door on existing libraries. The CLR provides interop services to the developer, but the biggest consumers of these services are the .NET Class Libraries themselves, which unlock existing Windows Platform abilities via a .NET API!

For example, when sending email using the .NET Framework Library class System.Web.Mail.SmtpMail, the Class Library uses a helper class that abstracts the existing CDO (Collaboration Data Objects) COM library. This is just one example where a .NET Library developer chose to rely on a production-ready, reliable existing library rather than write something from scratch. This example, and dozens of others within the Library, notwithstanding, the Common Language Runtime still needs to work with the internal Windows APIs at some point.
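
To the developer, the mail example above is just ordinary managed calls; the delegation to CDO is invisible. A minimal sketch, assuming .NET 1.x with a reference to System.Web.dll, and hypothetical addresses and relay server:

using System.Web.Mail;

class MailDemo
{
    static void Main()
    {
        MailMessage message = new MailMessage();
        message.From = "sender@example.com";      // hypothetical address
        message.To = "recipient@example.com";     // hypothetical address
        message.Subject = "Hello from .NET";
        message.Body = "Delivered via a managed API that wraps the CDO COM library.";

        SmtpMail.SmtpServer = "smtp.example.com"; // hypothetical relay
        SmtpMail.Send(message);
    }
}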

If Microsoft had truly virtualized the machine, they would have marginalized their investment in the Windows platform. Certainly it behooved the designers to make transitions to existing libraries as painless as possible. They have enabled this with .NET-to-COM interop via both Runtime-Callable and COM-Callable Wrappers, with the ability to tap into standard Win32 Platform APIs via a technology called P/Invoke (short for Platform Invoke), and with other options besides. When writing code that is hosted in the CLR, the vast resources of the platform are sitting just under the developer - the runtime is transparent rather than virtual! This marks a fundamentally different view of the platform than that of other virtualizing machine implementations.
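
As a concrete sketch of the Runtime-Callable Wrapper direction (late-bound here for brevity, with a hypothetical file path), the CLR will happily wrap a classic COM component such as Scripting.FileSystemObject:

using System;
using System.Reflection;

class RcwDemo
{
    static void Main()
    {
        // The CLR builds a Runtime-Callable Wrapper around the COM object;
        // reference counting and IDispatch plumbing are handled for us.
        Type fsoType = Type.GetTypeFromProgID("Scripting.FileSystemObject");
        object fso = Activator.CreateInstance(fsoType);

        object exists = fsoType.InvokeMember(
            "FileExists", BindingFlags.InvokeMethod, null, fso,
            new object[] { @"C:\example\data.txt" }); // hypothetical path

        Console.WriteLine("FileExists returned: " + exists);
    }
}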

While creating a fresh application using only .NET may offer some benefits in the arenas of deployment or marketing, these benefits may not be realized when weighed against the cost of rewriting non-.NET components in .NET when those legacy components could have been leveraged. A "pure" .NET solution can make use only of those pieces of functionality that can be achieved entirely within the runtime, or of those functions that have been exposed by the Base Class Library - which itself uses COM interop and P/Invoke!

The .NET Framework Library itself isn't "pure .NET," as it takes every opportunity to exploit the underlying platform primitives. In this light, the concept of .NET Purity is rendered specious. The .NET Framework is the best way to create business components on the Windows Platform, but applications built on the .NET Framework can be lifted only as high as the underlying Windows OS services.

"Hybrid" Solutions provide Real Solutions
Many large existing applications are written in Visual C++ and COM. They are written "close to the metal" to take full advantage of native Windows multi-threading and fine-grained memory management. However, new business components may be written in a .NET language such as C# or VB.NET. The existing system then hosts the .NET Common Language Runtime within its process space and interoperates with it. The interface is usually COM interop, which incurs a minimal overhead of between 10 and 40 processor instructions per in-proc call.

.NET components hosted within the legacy application can take advantage of that application's existing services. Lower-level developer features such as memory management, object lifetime, and object orientation are provided by the CLR, while higher-level, vertical-specific business functionality is exposed via the legacy application.
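
A minimal sketch of the .NET side of such a hybrid, with hypothetical names and placeholder GUIDs: interop attributes mark a managed business component as COM-visible so that the legacy C++ host can reach it through a COM-Callable Wrapper once it is registered (e.g., with regasm.exe).

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555501")] // placeholder GUID
public interface IOrderCalculator
{
    decimal Total(decimal subtotal, decimal taxRate);
}

[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555502")] // placeholder GUID
[ClassInterface(ClassInterfaceType.None)]
public class OrderCalculator : IOrderCalculator
{
    // New business logic lives in managed code; the CLR supplies memory
    // management and object lifetime, while the legacy host supplies context.
    public decimal Total(decimal subtotal, decimal taxRate)
    {
        return subtotal * (1 + taxRate);
    }
}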

This "hybrid" approach can provide a best-of-breed solution on the Windows Platform, exploiting both the highly performant low-level APIs via C++ and the highly componentized, object-oriented features of the .NET Framework. These solutions can work very successfully while companies migrate their existing code bases to the .NET Framework.

More Stories By Scott Hanselman

Scott Hanselman will be starting a new job at Microsoft as a senior program manager in the developer division. His blog is at http://www.hanselman.com.

Most Recent Comments
Tim Huckaby 07/25/03 07:07:00 PM EDT

Your comments on System.DirectoryServices are interesting. ADSI is simply a COM wrapper, so technically it's a "wrapped wrapper" of the native LDAP API which, of course, is C++ only. That being said, the DirectoryEntry class you are referring to is a "wrapped, wrapped wrapper." Ultimately, the big disappointment of the .NET Framework 1.1, and the hope for 2.0, is more native framework classes.

Derek Ferguson 07/18/03 10:12:00 AM EDT

I would never suggest that COM interop should be gotten rid of or is in any way, shape, or form "evil." However, as a developer, I spend more than 90% of my coding time working with the System.DirectoryServices and System.Management namespaces, and let me tell you -- MS could have saved developers a lot of grief by writing some managed protocol handlers here, rather than just wrapping up the old, troubled APIs.

As one example of this, the DirectoryEntry class in System.DirectoryServices allows you to pass a username and password to its constructor. However, when you use the WinNT ADSI provider, these parameters are sometimes ignored. Why is this? Because of a limitation in the existing APIs that were wrapped!

Similar problems abound in the System.Management namespace -- where I recently managed to prove that Impersonation (a native API) interacts differently with EnablePrivileges (a wrapped API) under ASP.NET than it does under the Console. In working through this with MS, I have been passed around to 10 different people in their Support infrastructure. Why? Because the old, obscure APIs that have been wrapped are a "dark art" known only by a few individuals within the Redmond infrastructure.

Once again: it would've been better to have recreated the whole thing in C#.

Dean Guida 07/25/03 04:00:00 PM EDT

There is a lot said about purity for purity's sake. I have never subscribed to this type of thinking. At the end of the day we all want to build dependable software that solves the business problem at hand. Everything should always be taken in the context of a solution, with a sense of practicality. I think most of the software development community has this maturity.

Patrick Hynds 07/17/03 10:16:00 PM EDT

I think this article is right on, but felt that we should confront the issue of why this kind of rebuttal is needed (and it is needed). We find people who are earnest only insofar as they can justify their existence; they brand something heresy as soon as they abandon the practice themselves. Let's assume that COM interop were a horrible waste of resources - it still wouldn't justify discarding, in such a wholesale manner, a tool and the wealth of existing functionality that the previous generation holds. A while back I saw people in ASP circles declare that "Session State is bad." Like hybrid applications, Session State in ASP is a tool: use it or don't, but if you happen to need a hammer, that doesn't make the saw evil.
