Top Six Ruby on Rails Deployment Methods in AWS: Pros & Cons

I’ll examine various deployment choices in detail, walk through a thorough analysis and then provide recommendations

Setting up a deployment process in the cloud involves a variety of choices, and most likely you're prepared to make some tradeoffs. But getting a clear view across those potential tradeoffs can be difficult. Here are six popular deployment methods and advice for making the best choice for your organization's needs.

Let's assume you want a deployment for a small startup with fewer than 20 developers, hosting a web app that's gaining traction and for which rapid growth is expected. Its requirements are as follows:

  • Autoscaling support to handle expected surges in demand
  • Maximizing developer efficiency by automating tedious tasks and improving dev flow
  • Encouraging mature processes for building a stable foundation as the codebase grows
  • Maintaining flexibility and agility to handle hotfixes of a relatively immature codebase
  • Relying on as few external sources as possible, because any one of them failing can cause a deployment to fail - imagine GitHub going down or a required plugin becoming unavailable

Narrowing the focus a bit more, let's assume the codebase is using Ruby on Rails, as is often the case. We'll examine various deployment choices in detail, walk through a thorough analysis and then provide recommendations for anyone that fits our sample client profile.

1. The Plain Vanilla AMI Method
Amazon OpsWorks: This is the standard recommendation of Amazon OpsWorks and a proven, well-tested deployment method. Each new node comes up fresh and runs all Chef recipes from scratch. Cloud-init is used to automate this process, running scripts that handle the code and environment updates that occur while nodes are running.
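
By way of illustration, a stripped-down Chef recipe of the kind every fresh node would run under this method might look like the sketch below. The package list and the /srv/rails_app path are placeholders, not an OpsWorks default.

    # Hypothetical Chef recipe run on every fresh node: install the Ruby
    # toolchain and prepare a directory for the Rails app. Code checkout and
    # bundle install would follow, which is why bring-up is slow.
    %w(git build-essential ruby ruby-dev).each do |pkg|
      package pkg
    end

    gem_package 'bundler'

    directory '/srv/rails_app' do
      owner     'deploy'
      group     'deploy'
      mode      '0755'
      recursive true
    end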

Pros: This approach requires no AMI management. The process is straightforward, self-documenting and brings up a clean environment every time. Updates and patches are applied very quickly.

Cons: Bringing up new instances is extremely slow, there are many moving parts, and there's a high risk of failure.

Bottom Line: While this is a clean solution, the high failure rate and the time needed to bring up new instances make the Plain Vanilla AMI method impractical for a use case with autoscaling.

2. The Bake-Everything AMI Method
This deployment option is proven to work at Amazon Video and Netflix. All Chef recipes are run once, the codebase is fetched, and then an AMI is baked and used. Every change, whether to code or environment, requires a new AMI and a replacement of the ASG behind the ELB.

Keep in mind that the environment and configuration management parts of the deployment still need automation using tools like Chef and Puppet. Lack of automation can otherwise make AMI management a nightmare, as one tends to lose track of how the environment actually looks within the AMI.
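
As a rough sketch of the bake-and-swap step, the snippet below uses the AWS SDK for Ruby to create an AMI from a fully configured instance, attach it to a new launch configuration and swap that into the Auto Scaling group. The instance ID, names and instance type are placeholders; a real pipeline, such as Netflix's Aminator, does considerably more.

    require 'aws-sdk'   # AWS SDK for Ruby; all identifiers below are hypothetical

    ec2 = Aws::EC2::Client.new(region: 'us-east-1')
    asg = Aws::AutoScaling::Client.new(region: 'us-east-1')

    # Bake an AMI from an instance that already has the environment and code on it.
    image_id = ec2.create_image(
      instance_id: 'i-0123456789abcdef0',          # the fully configured "golden" instance
      name:        "rails-app-#{Time.now.to_i}",
      no_reboot:   true
    ).image_id
    ec2.wait_until(:image_available, image_ids: [image_id])

    # Point a fresh launch configuration at the new AMI and swap it into the ASG.
    lc_name = "rails-app-lc-#{Time.now.to_i}"
    asg.create_launch_configuration(
      launch_configuration_name: lc_name,
      image_id:                  image_id,
      instance_type:             'm3.medium'
    )
    asg.update_auto_scaling_group(
      auto_scaling_group_name:   'rails-app-asg',
      launch_configuration_name: lc_name
    )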

Pros: Provides the fastest bringup, requires no installation, and includes the fewest moving parts, so error rates are very low.

Cons: Each code deployment requires baking a new AMI, so considerable effort is needed to keep the bake process fast enough to avoid developer bottlenecks. This setup also makes it harder to deploy hotfixes.

Bottom Line: This is generally a best practice, but requires a certain level of codebase maturity and a high level of infrastructure sophistication. For example, Netflix has spent a lot of time speeding up the process of baking AMIs by using their Aminator project.

3. A Hybrid Method Using Chef to Handle Complete Deployment
This method strikes a balance between the Plain Vanilla AMI and the Bake-Everything AMI. An AMI is baked with the configuration and environment set up by Chef, but the codebase is not checked out and the app is not deployed; Chef does both once the node is brought up.
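
The code-deployment half of this hybrid can be as small as a Chef recipe that pulls the repository and starts the app once the node is up. A minimal sketch follows; the repository URL, paths and app-server command are hypothetical.

    # Hypothetical Chef recipe run at boot on a pre-baked AMI: packages are
    # already installed, so only the code checkout and app start happen here.
    git '/srv/rails_app' do
      repository 'git@github.com:example/rails_app.git'   # placeholder repo
      revision   'master'
      user       'deploy'
      action     :sync   # this is the step that fails if the repository is unreachable
    end

    execute 'bundle install' do
      cwd     '/srv/rails_app'
      user    'deploy'
      command 'bundle install --deployment --without development test'
    end

    execute 'start app server' do
      cwd     '/srv/rails_app'
      user    'deploy'
      command 'bundle exec unicorn -c config/unicorn.rb -D -E production'
    end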

Pros: Since all packages are pre-installed, this method is significantly faster than using a Plain Vanilla AMI. Also, since the code is pulled once a node is commissioned, the ability to provide hotfixes is improved.

Cons: Because we're relying on Chef in production, there's a dependency on the repository, and pulling from the repository may fail.

Bottom Line: We consider this to be a medium-risk implementation due to its reliance on Chef.

4. A Hybrid Method Using Capistrano to Handle Code Deployment
This is similar to the hybrid Chef deployment approach, but with code deployed through Capistrano. Capistrano is a mature platform for deploying Rails code that includes several features and fail-safe mechanisms that make it better suited than Chef for this task. In particular, if a pull from the repository fails, Capistrano deploys an older revision from its backups.
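
A minimal, Capistrano 3-style config/deploy.rb for this setup might look like the sketch below; the application name, repository and paths are assumptions.

    # config/deploy.rb -- hypothetical Capistrano configuration for the Rails app
    lock '3.2.1'

    set :application, 'rails_app'
    set :repo_url,    'git@github.com:example/rails_app.git'   # placeholder repo
    set :deploy_to,   '/srv/rails_app'
    set :keep_releases, 5    # retained releases are what make quick rollbacks possible

    # Files and directories shared across releases
    set :linked_files, %w(config/database.yml)
    set :linked_dirs,  %w(log tmp/pids tmp/cache public/system)

With a configuration along these lines, a release is pushed with cap production deploy, and reverting to the previous release is a single cap production deploy:rollback.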

Pros: The same as for the Chef hybrid, except that Capistrano is more mature than Chef, especially in handling repository failures.

Cons: It requires two tools instead of one, which increases management overhead even though they're tied together. In addition, the gap between environment and code is wider, and managing the tools separately is difficult.

Bottom Line: Capistrano is a better Rails solution for code deployment than Chef, and the ability to apply fixes quickly may make it the best solution.

5. The AMI-Bake and CRON-Based Chef-Client Method
This deployment method resembles the hybrids. However, it allows changes to propagate automatically, because each node runs chef-client via cron every N minutes; new AMIs are baked only for major changes. It can provide continuous deployment, but continuous deployment is an aggressive tactic that requires excellent continuous integration behind it.
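
The recurring chef-client run is usually just a cron entry that Chef lays down for itself. A minimal sketch, with an arbitrary 15-minute interval and log path:

    # Hypothetical Chef resource that schedules chef-client every 15 minutes,
    # so running nodes pick up code and environment changes on their own.
    cron 'chef-client' do
      minute  '*/15'
      user    'root'
      command 'chef-client >> /var/log/chef-client-cron.log 2>&1'
    end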

Pros: Allows continuous code deployment.

Cons: It's prone to errors if Continuous Integration is not stable. In addition, Chef re-bootstraps aren't reliable and may fail.

Bottom Line: Not recommended unless CI is solid.

6. The Cloud-Init and Docker Method
All indications are that Docker is the best choice for this use case. It comes close to a bake-everything solution while avoiding bake-everything's biggest drawbacks. The AMI is baked once and rarely changes after that. Both the environment and the app code live inside an LXC container, with each AMI hosting one container. On each code deployment, a new container is simply pushed, which keeps the deployment process flexible.
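
Under this method, the per-node work on each deployment is just replacing the running container. A hedged sketch of that step, written as a small Ruby script that shells out to the Docker CLI, is shown below; the image name, tag and ports are placeholders.

    #!/usr/bin/env ruby
    # Hypothetical per-node deploy step: pull the new container image and
    # swap it in for the old one.
    image = 'registry.example.com/rails_app'
    tag   = ARGV.fetch(0, 'latest')

    system("docker pull #{image}:#{tag}") or abort('image pull failed')

    # Stop and remove the previous container, if any, then start the new one.
    system('docker stop rails_app')
    system('docker rm rails_app')
    system("docker run -d --name rails_app -p 80:3000 #{image}:#{tag}") or abort('container start failed')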

Pros: Docker containers provide a history, so containers can be compared with one another, which helps with the problem of undocumented steps in image creation. Code and environment are tied together. The repository structure of containers leads to faster deployment than baking a new AMI. Docker also helps to create a local environment similar to the production environment.

Cons: Docker is still in early phases of development and suffers from some growing pains, including a few bugs, a limited tools ecosystem, some app compatibility issues and a limited feature set.

Bottom Line: If you adopt this approach, you'll be doing considerable trailblazing. There's little information available, so comparing notes with other pioneers will be helpful.

Conclusion
While there are many options for deploying Ruby on Rails in AWS environments, there isn't a single best solution. Taking the time to review the options and tradeoffs can save headaches along the way. Talk to peers and experienced consultants about their experiences before making the final decisions.

What are your thoughts on using these deployment methods?

More Stories By Ali Hussain

Ali Hussain is CTO & Co-Founder of Flux7 Labs. He has been designing scalable and distributed systems for the last decade and is an AWS Certified Solutions Architect, Associate Level, earning this recognition with a score of 95%.

He began his career at Intel as part of the performance modeling team for Intel’s Atom microprocessor where he focused on benchmarking, power usage and workload optimization. Ali spent four years focused on performance modeling at ARM, Inc. At ARM he optimized the latency and throughput characteristics of systems, modeled performance, and brought a data-driven methodology to performance analyses. Ali acquired his passion for distributed systems while earning his MS at the University of Illinois at Urbana-Champaign. His Bachelor of Science (High Honors) in Computer Engineering was obtained from the University of Texas at Austin.

His current interests at Flux7 are in enterprise migration and configuration management.
