Java Concurrency and Scalability Platform Akka Celebrates Fifth Anniversary

Akka Raised the Standard for Handling Scale and Failure on the JVM; Today Its Developer Community Continues to Grow Across Data Streaming and Internet of Things Use Cases

SAN FRANCISCO, CA -- (Marketwired) -- 07/10/14 -- Typesafe, provider of the world's leading Reactive platform, today announced that July 12 will mark the five-year anniversary of Akka, the popular runtime and toolkit for concurrency and scalability on the JVM ("Java Virtual Machine"), supported through the years by developers at high-growth and blue-chip companies like Amazon, BBC, Cisco, Credit Suisse, eBay, Groupon, Huffington Post and many more.

The Akka Creation Story:

Akka was originally created by Swedish programmer Jonas Bonér -- who had built compilers, runtimes and open source frameworks for distributed applications at vendors like BEA and Terracotta. He'd experienced the scale and resilience limitations of CORBA, RPC, XA, EJBs, SOA, and the various Web Services standards and abstraction techniques that Java developers used to approach the overall problem set over the last 20 years. He'd lost faith in those ways of doing things.

This time he looked outside of the Java and enterprise space for answers. He spent some time with the Oz and Erlang programming languages. He saw a lot that he liked in how Erlang managed failure for services that simply could not go down (things like telecom switches for emergency calls), and in how principles from Erlang and Oz could be applied to the concurrency and distributed computing challenges facing mainstream enterprises. In particular he saw the Actor Model -- which emphasizes loose coupling, dataflow concurrency, and embracing failure in software systems -- as the bridge to the future.

After months of intense thinking and hacking, Bonér shared his vision for the Akka Actor Kernel (now simply "Akka") on the Scala mailing list, and about a month later (on July 12, 2009) shared the first public release of Akka 0.5 on GitHub. Today Akka is the open source platform that major financial institutions use to handle billions of transactions, and that massively trafficked sites like Walmart and Gilt use to scale their services for peak usage. A full interactive timeline of the history of Akka (including a list of contributors) may be viewed here.

Recent Akka Highlights

As the Akka community has grown, the platform has been leveraged to power highly trafficked web sites, data and analytics workloads, large-scale data movement, batch processing, real-time processing, and other distributed computing use cases where success means achieving low latency and high throughput. In recent years, several key growth areas have emerged for Akka:

Akka Cluster
In July 2013, version 2.2 of Akka shipped under the code name "Coltrane" and included full support for clustering. Akka Cluster provides a fault-tolerant, decentralized, peer-to-peer cluster membership service with no single point of failure or bottleneck. It does this using gossip protocols and an automatic failure detector. It also ships with a suite of high-level modules built on top, providing features such as clustered Pub/Sub, cluster singleton, cluster sharding and more.
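
To give a feel for how an application observes this membership service, here is a minimal sketch in Scala against the classic Akka Cluster API; the actor, system and configuration details are illustrative assumptions rather than anything from the release.

    import akka.actor.{Actor, ActorLogging, ActorSystem, Props}
    import akka.cluster.Cluster
    import akka.cluster.ClusterEvent._

    // Sketch: an actor that subscribes to cluster membership events and logs
    // nodes joining, becoming unreachable (flagged by the failure detector),
    // or leaving the cluster.
    class ClusterListener extends Actor with ActorLogging {
      private val cluster = Cluster(context.system)

      override def preStart(): Unit =
        cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
          classOf[MemberUp], classOf[UnreachableMember], classOf[MemberRemoved])

      override def postStop(): Unit = cluster.unsubscribe(self)

      def receive: Receive = {
        case MemberUp(member)          => log.info("Member is up: {}", member.address)
        case UnreachableMember(member) => log.info("Member unreachable: {}", member.address)
        case MemberRemoved(member, _)  => log.info("Member removed: {}", member.address)
      }
    }

    object ClusterListenerApp extends App {
      // Assumes clustering (cluster actor provider, seed nodes) is enabled in application.conf
      val system = ActorSystem("ClusterSystem")
      system.actorOf(Props[ClusterListener], "clusterListener")
    }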

Akka Persistence
Predictably handling failure across distributed systems is Akka's calling card. But what happens to an Actor's state when things start failing? In October 2013, Akka Persistence was introduced to allow stateful actors, which hold their state in memory, to recover that state after a JVM crash or restart. The key concept in Akka Persistence is Event Sourcing: instead of storing an actor's state directly, you persist the state-changing events produced by the messages sent to the Actor. These changes are immutable facts that are appended to a journal (backed by pluggable durable storage), which allows for very high transaction rates, efficient replication, migration, replay, auditing, and another powerful layer of failure management.
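
As a rough sketch of that event-sourcing style, here is a minimal persistent actor written against the classic Akka Persistence PersistentActor API (a later refinement of the original experimental module); the command, event and identifier names are hypothetical.

    import akka.actor.{ActorSystem, Props}
    import akka.persistence.PersistentActor

    // Hypothetical command and event types for illustration
    final case class Deposit(amount: Long)
    final case class Deposited(amount: Long)

    class AccountActor extends PersistentActor {
      override def persistenceId: String = "account-1"

      private var balance: Long = 0 // in-memory state, rebuilt from the journal on recovery

      // During recovery, previously persisted events are replayed to rebuild state
      override def receiveRecover: Receive = {
        case Deposited(amount) => balance += amount
      }

      // Commands arrive here; the resulting event is appended to the journal,
      // and only after it is durably stored is the in-memory state updated
      override def receiveCommand: Receive = {
        case Deposit(amount) =>
          persist(Deposited(amount)) { event =>
            balance += event.amount
            sender() ! balance
          }
      }
    }

    object AccountApp extends App {
      // Assumes a persistence journal plugin is configured in application.conf
      val system = ActorSystem("PersistenceSystem")
      val account = system.actorOf(Props[AccountActor], "account")
      account ! Deposit(100)
    }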

Akka Streams
Historically, stream-based processing on the JVM ("Java Virtual Machine") has been perilous for both developers and operations, because when data is streamed at higher rates than recipients can handle, it builds up in the system until no space is left, leading to system failures in production. In April 2014, Typesafe announced the release of Akka Streams -- designed to help developers more easily achieve truly asynchronous, non-blocking data streaming on the JVM.
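
As a rough illustration of the kind of bounded, non-blocking streaming Akka Streams targets, here is a minimal sketch using the later, stabilized Akka Streams API (the 2014 preview API differed in detail); the names are illustrative.

    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.Source

    object StreamsDemo extends App {
      implicit val system = ActorSystem("StreamsSystem")
      implicit val materializer = ActorMaterializer()

      // Backpressure is built in: downstream stages signal demand, so a fast
      // producer cannot flood a slow consumer and exhaust memory.
      Source(1 to 1000000)
        .map(_ * 2)
        .runForeach(i => if (i % 200000 == 0) println(i))
        .onComplete(_ => system.terminate())(system.dispatcher)
    }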

Akka HTTP
In June 2013, Typesafe acquired Spray.io, one of the best performing REST / HTTP libraries on the JVM. Then in June 2014, Typesafe announced the first preview of the core module of Akka HTTP -- a suite of lightweight Scala libraries providing client and server RESTful support on top of Akka. It fully embraces the Actor-, Future-, and Stream-based programming models used by the underlying platform. This lets developers build high-performance, scalable RESTful applications with idiomatic Java and Scala code, without having to wrap other Java libraries.
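
As a rough sketch of the high-level routing DSL, here is a minimal server written against a later, stabilized Akka HTTP release (the June 2014 preview API differed); the route, host and port are illustrative assumptions.

    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.server.Directives._
    import akka.stream.ActorMaterializer

    object HelloServer extends App {
      implicit val system = ActorSystem("HttpSystem")
      implicit val materializer = ActorMaterializer()

      // A single route: GET /hello returns a plain-text greeting
      val route =
        path("hello") {
          get {
            complete("Hello from Akka HTTP")
          }
        }

      // Bind the route as an HTTP server on top of Akka Streams
      Http().bindAndHandle(route, "localhost", 8080)
      println("Server online at http://localhost:8080/hello")
    }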


About Typesafe
Typesafe (Twitter: @Typesafe) is dedicated to helping developers build Reactive applications on the JVM. With the Typesafe Reactive Platform, you can create modern, event-driven applications that scale on multicore and cloud computing architectures. Typesafe Activator, a browser-based tool with reusable templates, makes it easy to get started with Play Framework, Akka and Scala. Backed by Greylock Partners, Shasta Ventures, Bain Capital Ventures and Juniper Networks, Typesafe is headquartered in San Francisco with offices in Switzerland and Sweden. To start building Reactive applications today, download Typesafe Activator!

Image Available: http://www2.marketwire.com/mw/frame_mw?attachid=2636189

