
How Java's Built-In Garbage Collection Will Make Your Life Better (Most of the Time)
By Kirk Pepperdine

“No provision need be made for the user to program the return of registers to the free-storage list.”

This line (along with the dozen or so that followed it) is buried in the middle of John McCarthy’s landmark paper, “Recursive Functions of Symbolic Expressions and Their Computation by Machine,” published in 1960. It is the first known description of automated memory management.

In specifying how memory would be managed in Lisp, McCarthy was able to exclude explicit memory management, relieving developers of the tedium of manual memory management. What makes this story truly amazing is that these few words inspired others to incorporate some form of automated memory management, otherwise known as garbage collection (GC), into more than three quarters of the more widely used languages and runtimes developed since then. This list includes the two most popular platforms, the Java Virtual Machine (JVM) and .NET’s Common Language Runtime (CLR), as well as the up-and-coming Go language from Google. GC exists not just on big iron but on mobile platforms such as Android’s Dalvik and Android Runtime (ART), and in Apple’s Swift. You can even find GC running in your web browser, as well as on hardware devices such as SSDs. Let’s explore some of the reasons why the industry prefers automated over manual memory management.

Automatic Memory Management’s Humble Beginnings
So, how did McCarthy devise automated memory management? First, the Lisp engine decomposed Lisp expressions into sub-expressions, and each S-expression was stored in a single-word node in a linked list. Nodes were allocated from a free list, but nothing had to be returned to the free list until that list ran empty.

Once the free list was empty, the runtime traced through the linked list and marked all reachable nodes. Next, it scanned through the buffer containing all nodes and returned the unmarked ones to the free list. With the free list refilled, the application would continue on.
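To make those mechanics concrete, here is a minimal sketch of that scheme, written in Java purely for illustration. The names (TinyCollector, Node, allocate, collect, mark) are invented for this example; this is a toy model of McCarthy's design, not how any production collector is built:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy model of McCarthy-style mark-and-sweep over fixed-size nodes. */
public class TinyCollector {
    static final class Node {
        Node car, cdr;       // the two pointer fields of a Lisp cons cell
        boolean marked;      // set during the trace phase
        boolean free = true; // true while the node sits on the free list
    }

    final Node[] heap;                       // the single space: all nodes, uniform size
    final Deque<Node> freeList = new ArrayDeque<>();
    final Node root;                         // stand-in for the registers/stack

    TinyCollector(int size) {
        heap = new Node[size];
        for (int i = 0; i < size; i++) { heap[i] = new Node(); freeList.push(heap[i]); }
        root = allocate();                   // a dummy root to hang live data off
    }

    /** Allocation is just popping the free list; collect only when it runs dry. */
    Node allocate() {
        if (freeList.isEmpty()) collect();
        if (freeList.isEmpty()) throw new OutOfMemoryError("heap exhausted");
        Node n = freeList.pop();
        n.free = false;
        n.car = n.cdr = null;
        return n;
    }

    private void collect() {
        mark(root);                          // trace: mark everything reachable
        for (Node n : heap) {                // sweep: return the rest to the free list
            if (!n.marked && !n.free) { n.free = true; freeList.push(n); }
            n.marked = false;                // reset for the next cycle
        }
    }

    private void mark(Node n) {
        if (n == null || n.marked) return;
        n.marked = true;
        mark(n.car);
        mark(n.cdr);
    }

    public static void main(String[] args) {
        TinyCollector gc = new TinyCollector(8);
        gc.root.car = gc.allocate();                      // keep one node reachable
        for (int i = 0; i < 100; i++) gc.allocate();      // churn triggers collections
        System.out.println("survived 100 allocations from an 8-node heap");
    }
}
```

Note that allocation is nothing more than popping the free list; all of the reclamation work is deferred until the list runs dry, exactly as in McCarthy's design.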

Today, this is known as a single-space, in-place, tracing garbage collector. The implementation was quite rudimentary: it only had to deal with a directed acyclic graph in which all nodes were exactly the same size, and only a single thread ran, alternating between application code and the garbage collector. In contrast, today’s collectors in the JVM must cope with a directed graph that contains cycles and nodes that are not uniformly sized, in a multi-threaded runtime running on multi-core, possibly multi-socket hardware. Consequently, today’s implementations are far more complex, to the point that GC experts struggle to predict performance in any given situation.

Slow Going: Garbage Collection Pause Time
When the Lisp garbage collector ran, the application stalled. In the initial versions of Lisp it was common for the collector to take 30 to 40 percent of the CPU cycles, and on 1960s hardware this could stall the application, in what is known as a stop-the-world pause, for several minutes. The benefit was that allocation had barely any impact on application throughput (the amount of useful work done). This implementation highlighted the constant battle between pause time and impact on application throughput, a battle that persists to this day.

In general, the better the pause-time characteristics of the collector, the more impact it has on application throughput. The current implementations in Java all come with this pause-time/overhead trade-off: the parallel collectors come with long pause times and low overheads, while the mostly concurrent collectors have shorter pause times but consume more computing resources (both memory and CPU).

The goal of any GC implementer is to maximize the minimum amount of processor time that mutator threads are guaranteed to receive, a concept known as minimum mutator utilization (MMU). Even so, current GC overheads can run well under 5 percent, versus the 15 to 20 percent overhead you will experience in a typical C++ application.
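If you want to see where your own application sits, the JVM exposes its collectors through the standard GarbageCollectorMXBean management API. Below is a rough, hedged sketch that churns garbage and then reports cumulative collection counts and pause times; the class name and churn loop are invented for illustration, and note that reported pause time understates the true cost of mostly concurrent collectors, which also consume CPU outside of pauses:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

/** Rough GC-overhead report using the standard management beans. */
public class GcOverhead {
    public static void main(String[] args) {
        long start = System.nanoTime();

        // Churn some garbage so the collectors have work to do.
        long checksum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            String s = "temporary-" + i;   // allocates a fresh String each pass
            checksum += s.hashCode();
        }

        long wallMs = (System.nanoTime() - start) / 1_000_000;
        long gcMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            gcMs += Math.max(gc.getCollectionTime(), 0);   // -1 means "unavailable"
        }
        System.out.printf("checksum=%d; GC pauses: %d ms of %d ms elapsed%n",
                checksum, gcMs, Math.max(wallMs, 1));
    }
}
```

Running it under different collectors (for example, -XX:+UseParallelGC versus -XX:+UseG1GC) makes the pause-time/throughput trade-off described above directly observable.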

So why don’t you feel this overhead in a C/C++ application the way you do in a Java application? Because the overhead is spread evenly throughout the C/C++ runtime, it is perceptibly invisible to end users. In fact, the biggest complaint about managed memory is that it pauses your application at unpredictable times for an unpredictable amount of time.

Garbage Collection Advancements
Sun Java’s initial garbage collector did nothing to improve the image of garbage collection. Its single-threaded, single-spaced implementation stalled applications for long periods of time and created a significant drag on allocation rates. It wasn’t until Java 2 that a generational memory-pool scheme, along with parallel, mostly concurrent, and incremental collectors, was introduced. While these collectors offered improved pause-time characteristics, pause times continue to be problematic. Moreover, these implementations are so complex that it’s unlikely most developers have the experience necessary to tune them. To further complicate the picture, IBM, Azul, and Red Hat each have one or more garbage collectors of their own, each with its own history, advantages, and quirks. In addition, a number of companies including SAP, Twitter, Google, and Alibaba maintain their own internal JVM teams with modified versions of the garbage collectors.

Costs and Benefits of Modern-Day Garbage Collection

Over time, the addition of alternate, more complex allocation paths led to huge improvements in the allocation-overhead picture. For example, a fast-path allocation in the JVM is now approximately 30 times faster than a typical allocation in C/C++. The complication: only data that can pass an escape-analysis test is eligible for fast-path allocation. (Fortunately, the vast majority of our data passes this test and benefits from this alternate allocation path.)
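As a hedged illustration of what does and does not pass that test, consider the sketch below (the Point class and method names are invented for this example): the temporary in sumOfSquares never leaves its method, so HotSpot's escape analysis (the -XX:+DoEscapeAnalysis optimization, on by default in modern JVMs) may elide or scalar-replace the allocation entirely, while the object published to a field in leaky escapes and must take the ordinary heap-allocation path.

```java
/** Objects that pass vs. fail the escape-analysis test. */
public class EscapeDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static Point captured;   // publishing to this field makes an object "escape"

    /** The Point never leaves this frame: a candidate for scalar replacement. */
    static long sumOfSquares(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i);              // the JIT may elide this allocation
            sum += (long) p.x * p.x + (long) p.y * p.y;
        }
        return sum;
    }

    /** The same object stored to a static field escapes: a real heap allocation. */
    static void leaky(int i) {
        captured = new Point(i, i);                 // visible to other code, so it must allocate
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1_000_000));
        leaky(42);
        System.out.println(captured.x);
    }
}
```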

Another advantage lies in the reduced costs and simplified cost models that come with evacuating collectors. In this scheme, the collector copies live data to another memory pool, so there is no cost to recover short-lived data. This isn’t an invitation to allocate ad nauseam: every allocation carries a cost, and high allocation rates trigger more frequent GC activity and accumulate extra copy costs. While evacuating collectors help make GC more efficient and predictable, there are still significant resource costs.
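Here is a small, hedged sketch of the allocation pattern evacuating collectors reward (the class name and sizes are invented for illustration): the loop keeps only a four-slot window of arrays strongly reachable, so nearly everything it allocates is dead by the next young-generation collection, and the collector pays only to copy the few survivors.

```java
/** An allocation pattern an evacuating (copying) young generation handles cheaply. */
public class DieYoung {
    public static void main(String[] args) {
        int[][] window = new int[4][];   // the only survivors at any moment
        long checksum = 0;
        for (int i = 0; i < 20_000_000; i++) {
            int[] temp = new int[32];    // becomes garbage four iterations later
            temp[0] = i;
            window[i & 3] = temp;        // briefly reachable, then overwritten
            int[] prev = window[(i + 1) & 3];
            checksum += (prev == null) ? 0 : prev[0];
        }
        // Dead arrays are never visited: the collector copies the live window out
        // and reuses the whole region, so reclaiming garbage costs nothing per object.
        System.out.println(checksum);    // keep the loop's work observable
    }
}
```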

That leads us to memory. Managed memory demands that you retain at least five times more memory than manual memory management needs. There are also times when the developer knows for certain that data should be freed; in those cases it is cheaper to free it explicitly than to have a collector reason through the decision. It was these costs that originally caused Apple to choose manual memory management for Objective-C. In Swift, Apple chose reference counting instead, adding annotations for weak and unowned references to help developers break the circular references that reference counting cannot reclaim on its own.
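Java has an analogous, if differently motivated, mechanism in java.lang.ref.WeakReference: a weak back-pointer does not keep its target alive, which is the same role Swift's weak annotation plays in breaking retain cycles under reference counting. A minimal sketch, with the Parent/Child classes invented for this example:

```java
import java.lang.ref.WeakReference;

/** Parent/child with a weak back-reference: the Java analog of the pattern
 *  Swift's `weak` annotation exists for under reference counting. */
public class WeakBackRef {
    static final class Parent {
        final Child child = new Child(this);
    }

    static final class Child {
        // Weak: does not keep the parent reachable on its own, so the pair
        // cannot keep itself alive the way a cycle would under naive
        // reference counting.
        final WeakReference<Parent> parent;
        Child(Parent p) { parent = new WeakReference<>(p); }
    }

    public static void main(String[] args) {
        Child orphan = new Parent().child;  // the Parent is now only weakly reachable
        System.gc();                        // a hint only; collection is not guaranteed
        System.out.println(orphan.parent.get() == null
                ? "parent collected" : "parent still reachable");
    }
}
```

In Java the tracing collector would reclaim a parent/child cycle anyway; under Swift's reference counting, marking the back-pointer weak is what makes reclamation possible at all.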

There are other intangible or difficult-to-measure costs that can be attributed to design decisions in the runtime. For example, the loss of control over memory layouts can result in application performance being dominated by L2 cache misses and cache line densities. The performance hit in these cases can easily exceed a factor of 10:1. One of the challenges for future implementers is to allow for better control of memory layouts.
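A concrete, hedged example of that loss of control: Java gives you no way to declare an array of inline structs, so an array of objects is really an array of pointers to wherever the allocator and collector last placed each object, while hand-flattening the fields into primitive arrays restores a contiguous, cache-friendly layout. The sketch below contrasts the two; it is a crude microbenchmark (real measurements call for JMH, and freshly allocated objects often remain contiguous until the collector moves them), with the class and field names invented for illustration.

```java
/** Two layouts for the same data: pointer-chasing vs. contiguous primitives. */
public class LayoutDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        final int n = 5_000_000;

        // Layout A: an array of references; each Point lives wherever the
        // allocator/collector put it, so a scan chases pointers and is at
        // the mercy of cache-line density.
        Point[] points = new Point[n];
        for (int i = 0; i < n; i++) points[i] = new Point(i, -i);

        // Layout B: "structure of arrays"; the ints are contiguous, so a
        // scan streams through memory a cache line at a time.
        int[] xs = new int[n], ys = new int[n];
        for (int i = 0; i < n; i++) { xs[i] = i; ys[i] = -i; }

        long a = 0, b = 0;
        long t0 = System.nanoTime();
        for (Point p : points) a += p.x + p.y;
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) b += xs[i] + ys[i];
        long t2 = System.nanoTime();

        System.out.printf("references: %d ms, primitive columns: %d ms (sums %d, %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, a, b);
    }
}
```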

Looking back at how poorly GC performed when first introduced into Lisp and the long and often frustrating road to its current state, it’s hard to imagine why anyone building a runtime would want to use managed memory. But consider that if you manually manage memory, you need access to the underlying reference system—and that means the language needs added syntax to manipulate memory pointers.

Languages that rely on managed memory consistently lack the syntax needed to manage pointers because of the memory-consistency guarantee. That guarantee states that all pointers will point where they should, with no dangling pointers waiting to blow up the runtime should you happen to step on them. The runtime can’t make this guarantee if developers are allowed to directly create and manipulate pointers. As an added bonus, removing pointers from the language removes indirection, one of the more difficult concepts for developers to master. Quite often, bugs are the result of a developer engaging in the mental gymnastics required to juggle a multitude of competing concerns and getting it wrong. When that mix includes reasoning through application logic alongside manual memory management and different memory access modes, bugs are all the more likely to appear in the code. In fact, bugs in systems that rely on manual memory management are among the most serious we face, and the largest source of security holes in our systems today.

To prevent these types of bugs, the developer always has to ask, “Do I still have a viable reference to this data that prevents me from freeing it?” Often the answer is, “I don’t know.” If a reference to that data was passed to another component in the system, it’s almost impossible to know whether the memory can safely be freed. As we all know too well, pointer bugs lead to data corruption or, in the best case, a SIGSEGV.

Removing pointers from the picture tends to yield code that is more readable and easier to reason through and maintain. GC knows when it can safely reclaim memory, and this attribute allows projects to safely consume third-party components, something that rarely happens in languages with manual memory management.

Conclusion
At its best, memory management can be described as a tedious bookkeeping task. If memory management can be crossed off the to-do list, developers tend to be more productive and produce far fewer bugs. We have also seen that GC is not a panacea: it comes with its own set of problems. But thankfully, the march toward better implementations continues.

Go’s new collector uses concurrent tracing to reduce overheads and minimize pause times. Azul claims to have solved the GC pause problem by driving pause times down dramatically. Oracle and IBM keep working on collectors that they claim are better suited for very large heaps containing significant amounts of data. Red Hat has entered the fray with Shenandoah, a collector that aims to eliminate pause times from the runtime altogether. Meanwhile, Twitter and Google continue to improve the existing collectors so that they remain competitive with the newer ones.
