The Myth of .NET Purity

There is an increasing amount of discussion around the topic of ".NET Purity" in development circles. When selling an application, the question often arises: "Is your application 100% .NET?" or "How much of your application is .NET?" There is an implied qualitative judgment behind these questions, and it is usually pejorative.

The implication is that an application that is entirely written in .NET, presumably without any interoperation with COM or direct calls to the Win32 API, is superior to an application that is a combination of technologies.

Certainly .NET represents a fantastic leap in developer productivity and puts a clean, consistent face on the services that the Windows Platform provides. For many years the set of interfaces provided by the Windows OS Platform - collectively known as the Windows SDK - has been exposed to developers as exported "C"-style functions in DLLs and, in recent years, via the Component Object Model (COM).

Common Language Runtime or Virtual Machine?
Often the .NET Common Language Runtime, or CLR, is directly compared to the Java Virtual Machine. At first glance there are many clear parallels: both are "managed" environments that provide a component container, both consume a "partially chewed" intermediate language, and both provide low-level services such as garbage collection and threading conveniences.

While these parallels are superficially compelling, these two implementations differ fundamentally in philosophy. Comparing the CLR to the VM is reasonable only to a certain point - their architectural goals are ultimately different.

Sun promotes a marketing program called 100% Pure Java, which is certainly appropriate if code portability and underlying operating system transparency are the desired endpoints. However, many third-party Java Application Servers create a competitive advantage by judicious use of "C" function calls directly down (via the Java Native Interface, or JNI) into their host Operating System's value-added services that are not exposed by the Java Application Platform (the Java Class Library). Calling into the core platform is the only way to make use of base functionality that is presented solely via a native interface!

The Java VM is truly a "virtual machine" whose ultimate goal is to abstract (virtualize) away the underlying Operating System and provide an idealized (not necessarily ideal, but idealized) environment for development. The Java Virtual Machine is also intimately united with its API - the Java Application Platform - whose services are provided by the VM implementation. Regardless of where you run your compiled Java code, you will run within the context of the Virtual Machine and ostensibly link with the supplied Java Platform APIs.

The .NET Common Language Runtime is aptly named, as it is used more as a Language Runtime than as a Virtual Machine. While it successfully abstracts away aspects of the underlying hardware through its use of an Intermediate Language, when the CLR is combined with the .NET Framework Library of APIs it is married to the underlying platform, which is Windows. The CLR provides all the facilities of the Windows Platform to any .NET-enabled language.

.NET Framework Library
The Windows Platform has dozens and dozens of high-level system services that are exposed by thousands of APIs. This large library of functionality encompasses various levels of richness: a low-level API may open a file off a disk, while a high-level one might play an audio file. The designers of the .NET Framework wanted to put a consistent, object-oriented face on a rich legacy of platform functionality. The CLR and the .NET Framework work together to expose the capabilities of the Windows Platform, including those that may previously have been hidden away in difficult or little-known APIs.
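
To illustrate that consistent face, here is a minimal sketch (the file path is purely a placeholder) of reading a file through the System.IO classes; under the covers the Framework ultimately calls down to the same Win32 file services a C developer would use directly:

```csharp
using System;
using System.IO;

class FrameworkFaceDemo
{
    static void Main()
    {
        // One consistent, object-oriented pattern (construct, use, dispose)
        // instead of raw Win32 handles and error codes.
        using (StreamReader reader = new StreamReader(@"C:\temp\notes.txt"))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```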

While the CLR provides a new paradigm for application development, it does not close the door on existing libraries. The CLR provides interop services to the developer, but the biggest consumers of these services are the .NET Class Libraries themselves, which unlock existing Windows Platform abilities via a .NET API!

For example, when sending email using the .NET Framework Library class System.Web.Mail.SmtpMail, the Class Library uses a helper class that wraps the existing CDO (Collaboration Data Objects) COM library. This is just one case in which a .NET Library developer chose to rely on a production-ready, reliable existing library rather than write something from scratch. This example and dozens of others within the Library notwithstanding, the Common Language Runtime still, at some point, needs to work with the Windows internal APIs.
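
A minimal sketch of that wrapper in use (the server name and addresses are placeholders, and a reference to System.Web.dll is required); the Send call below ends up in the COM-based CDO library by way of the runtime's interop layer rather than in freshly written managed SMTP code:

```csharp
using System.Web.Mail;   // these classes delegate to the COM-based CDO library

class MailDemo
{
    static void Main()
    {
        // Placeholder server and addresses; SmtpMail hands the message off
        // to CDO through COM interop instead of speaking SMTP itself.
        SmtpMail.SmtpServer = "mail.example.com";
        SmtpMail.Send("from@example.com", "to@example.com",
                      "Test message", "Sent via the .NET wrapper over CDO.");
    }
}
```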

If Microsoft had truly virtualized the machine, they would have marginalized their investment in the Windows platform. Certainly it behooved the designers to make transitions to existing libraries as painless as possible. They have enabled this with .NET-to-COM interop via both Runtime-Callable and COM-Callable Wrappers, with the ability to tap into standard Win32 Platform APIs via a technology called P/Invoke (short for Platform Invoke), and with other options besides. When writing code hosted in the CLR, the vast resources of the platform are sitting just beneath the developer - the runtime is transparent rather than virtual! This marks a fundamentally different view of the platform than that of other virtualizing machine implementations.
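
For instance, a minimal P/Invoke sketch that calls the Win32 MessageBox function exported from user32.dll looks like this:

```csharp
using System;
using System.Runtime.InteropServices;

class PInvokeDemo
{
    // Declare the native MessageBox entry point; the runtime loads user32.dll
    // and marshals the managed strings to native ones at call time.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        MessageBox(IntPtr.Zero, "Hello from managed code", "P/Invoke", 0);
    }
}
```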

While creating a fresh application using only .NET may offer some benefits in the arenas of deployment or marketing, these benefits may not be realized when weighed against the cost of rewriting non-.NET components in .NET when those legacy components could have been leveraged. A "pure" .NET solution can only make use of functionality that can be achieved entirely within the runtime, or that has been exposed by the Base Class Library - which itself uses COM interop and P/Invoke!

The .NET Framework Library itself isn't "pure .NET"; it takes every opportunity to take full advantage of the underlying platform primitives. In this light, the concept of .NET Purity is rendered specious. The .NET Framework is the best way to create business components on the Windows Platform, but applications, along with the .NET Framework itself, can rise only as high as the underlying Windows OS services lift them.

"Hybrid" Solutions provide Real Solutions
Many large existing applications are written in Visual C++ and COM. They are written "close to the metal" to take full advantage of native Windows multi-threading and fine-grained memory management. New business components, however, may be written in a .NET language such as C# or VB.NET. The existing system then hosts the .NET Common Language Runtime within its process space and interoperates with it. The boundary is usually crossed via COM interop, which incurs only a minimal overhead of roughly 10 to 40 processor instructions per in-process call.
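
As a rough sketch of that arrangement, a new business component written in C# can be made visible to the existing C++/COM host through a COM-Callable Wrapper (the interface name, class name, and GUIDs below are purely illustrative). After the assembly is registered with regasm.exe, the unmanaged host creates and calls the component just as it would any other COM object:

```csharp
using System;
using System.Runtime.InteropServices;

// A .NET business component exposed to an unmanaged C++/COM host through a
// COM-Callable Wrapper. The names and GUIDs here are illustrative only.
[ComVisible(true)]
[Guid("9A8B7C6D-1234-4EFA-9B0C-112233445566")]
public interface IOrderCalculator
{
    double CalculateTotal(double subtotal, double taxRate);
}

[ComVisible(true)]
[Guid("0F1E2D3C-5678-4ABC-8D9E-665544332211")]
[ClassInterface(ClassInterfaceType.None)]
public class OrderCalculator : IOrderCalculator
{
    public double CalculateTotal(double subtotal, double taxRate)
    {
        return subtotal * (1.0 + taxRate);
    }
}
```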

.NET components hosted within the legacy application can take advantage of that application's existing services. Lower-level developer features such as memory management, object lifetime, and object orientation are provided by the CLR, while higher-level, vertical-specific business functionality is exposed via the legacy application.

This "hybrid" can provide a best-of-breed solution on the Windows Platform exploiting both the highly performant low-level APIs via C++ and the highly componentized and object oriented features of the .NET Framework. These solutions can work very successfully while companies migrate their existing code bases to the .NET Framework.

More Stories By Scott Hanselman

Scott Hanselman will be starting a new job at Microsoft as a senior program manager in the developer division. His blog is at http://www.hanselman.com.


Most Recent Comments
Tim Huckaby 07/25/03 07:07:00 PM EDT

Your comments on System.DirectoryServices are interesting. ADSI is simply a COM wrapper, so technically it's a "wrapped wrapper" of the native LDAP API, which, of course, is C++ only. That being said, the DirectoryEntry class you are referring to is a "wrapped, wrapped wrapper." Ultimately, the big disappointment of the .NET Framework 1.1, and the hope for 2.0, is more native framework classes.

Derek Ferguson 07/18/03 10:12:00 AM EDT

I would never suggest that COM Interop should be gotten rid of, or that it is in any way, shape, or form "evil." However, as a developer who spends more than 90% of my coding time working with the System.DirectoryServices and System.Management namespaces, let me tell you -- MS could have saved developers a lot of grief by writing some managed protocol handlers here, rather than just wrapping up the old, troubled APIs.

As one example of this, the DirectoryEntry class in System.DirectoryServices allows you to pass a username and password to its constructor. However, when you use the WinNT ADSI provider, these parameters are sometimes ignored. Why is this? Because of a limitation in the existing APIs that were wrapped!

Similar problems abound in the System.Management namespace -- where I recently managed to prove that Impersonation (a native API) interacts differently with EnablePrivileges (a wrapped API) under ASP.NET than it does under the Console. In working through this with MS, I have been passed around to 10 different people in their support infrastructure. Why? Because the old, obscure APIs that have been wrapped are a "dark art" known by only a few individuals within the Redmond infrastructure.

Once again: it would've been better to have recreated the whole thing in C#.

Dean Guida 07/25/03 04:00:00 PM EDT

A lot is said in favor of purity for purity's sake. I have never subscribed to that type of thinking. At the end of the day, we all want to build dependable software that solves the business problem at hand. Everything should always be taken in the context of a solution, with a sense of practicality. I think most of the software development community has this maturity.

Patrick Hynds 07/17/03 10:16:00 PM EDT

I think this article is right on, but I felt we should confront the issue of why this kind of rebuttal is needed (and it is needed). We find people who are earnest only insofar as it justifies their own existence; they brand something heresy as soon as they abandon the practice themselves. Even if we assume that COM interop were a horrible waste of resources, it still wouldn't justify discarding, in such a wholesale manner, a tool and the wealth of existing functionality the last generation always holds. I have seen people in ASP circles a while back declare that "Session State is bad." Like hybrid applications, Session State in ASP is a tool: use it or don't, but needing a hammer doesn't make the saw evil.
