Memory Monitoring and Limiting with LXC

Original article can be found here: Memory Monitoring with LXC

For a long time we didn't limit the amount of memory that a build could use on Codeship. That left open the possibility of a bad build eating up all of our memory.

A few weeks ago that bad build happened, using up so much memory that it degraded performance and eventually killed the test server. Although we measure the memory usage of the whole test server, we didn't have the data to figure out exactly which build caused the trouble.

Combined maximum and minimum memory usage of Amazon EC2 Instances.

How to avoid builds eating up all of your memory
We couldn't risk this problem happening again and threatening other builds on that test server. We didn't have enough data about memory limits at that point, so we had to take an educated guess. My first assumption was the most conservative one: each test server has 60GB of memory and we run 22 builds on each instance, so each build can get a maximum of 2.5GB of memory. As LXC manages memory with cgroups, it is simple to set a memory limit for each container.

Setting lxc.cgroup.memory.limit_in_bytes in the container config:

lxc.cgroup.memory.limit_in_bytes = 2560M
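
If you want to double-check that the limit actually applies to a running container, you can read memory.limit_in_bytes from the container's cgroup on the LXC host. The following Ruby sketch is only an illustration, not our actual tooling; the container name is a placeholder.

# Read the effective memory limit of a running container from its cgroup
# on the LXC host and compare it to the configured value (2560M).
LIMIT_IN_BYTES = 2560 * 1024 * 1024

def effective_limit(container)
  File.read("/sys/fs/cgroup/memory/lxc/#{container}/memory.limit_in_bytes").to_i
end

puts effective_limit("name_of_running_container") == LIMIT_IN_BYTES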

This worked; the bad build wasn't a threat anymore.

We got a few support requests from people asking about their builds being stuck. While 2.5GB was enough for 95% of the builds, the other 5% were hitting the limit. We needed to increase the memory limit to ensure that all builds run fine. After two incremental steps, Codeship is sailing smoothly with a 10GB memory limit for each build.

Monitoring in the Future
We wanted more data to improve the memory limit in the future, so we started measuring the maximum memory usage for each build. We export it to Librato Metrics and save it as metadata of the build in our database. This allows us to inspect the memory usage easily. We plan to show the memory usage to our users in the future.

LXC tracks the memory usage of each container and exposes it, along with many other values, in the cgroup. Right before we shut down the build, we read the memory usage from the cgroup. On the LXC host you can read it from

/sys/fs/cgroup/memory/lxc/name_of_running_container/memory.max_usage_in_bytes

and send the data to Librato Metrics:

Metriks.timer('build.memory_usage').update(metrics[:max_memory])
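
Put together, this step looks roughly like the following Ruby sketch. It assumes the Metriks/Librato reporter is configured elsewhere; the container name and the metrics hash are placeholders rather than our actual code.

require 'metriks'

# Read the peak memory usage of the build container from its cgroup
# on the LXC host, right before the container is shut down.
def max_memory_usage(container)
  File.read("/sys/fs/cgroup/memory/lxc/#{container}/memory.max_usage_in_bytes").to_i
end

metrics = { max_memory: max_memory_usage("name_of_running_container") }

# Report the value to Librato Metrics via Metriks, as in the snippet above.
Metriks.timer('build.memory_usage').update(metrics[:max_memory])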

Building a Metrics Infrastructure
It is important to back your actions with data; data is your best argument. If you don't have any useful data to solve a problem, it is important that you can easily add more metrics to your infrastructure. Sit down with your coworkers and think about good metrics for your product. Sometimes they are not easy to spot at first glance. We often talk about our metrics.

We discuss new metrics, but we also talk about removing redundant ones. It is very exciting to talk about data and what your coworkers conclude from it. We are able to add new metrics to our infrastructure in minutes. By building this metrics infrastructure you can handle new challenges by quickly adding measurements that help you decide on the next steps. This infrastructure can be the difference between life and death for your service, so make sure you have it in place.

Do you monitor? Which tools and services do you use for it? Let us know in the comments!

Further Information on Linux Containers


Download Efficiency in Development Workflows: A free eBook for Software Developers. This book will save you a lot of time and make you and your development team happy.

Go ahead and try Codeship for Continuous Integration and Continuous Deployment! Setup for your GitHub and Bitbucket projects takes only 3 minutes. It's free!

More Stories By Manuel Weiss

I am the cofounder of Codeship – a hosted Continuous Integration and Deployment platform for web applications. On the Codeship blog we love to write about Software Testing, Continuous Integration and Deployment. Also check out our weekly screencast series 'Testing Tuesday'!
