By Nicos Vekiarides
January 3, 2012 05:30 AM EST
It’s time to make a few predictions for 2012 in the cloud data space. 2011 was a year of adoption, during which many companies started to leverage the cloud, enjoying the economies of scale, security and ease in managing their growing data needs. Those successes promise even greater cloud adoption in 2012. With that in mind, here are 10 predictions for hot trends to watch for in the cloud data space:
- Hybrid data storage environments combining cloud storage with existing storage. For most companies, moving all of their data to the cloud is simply not feasible. However, continuously expanding data storage needs are fueling demand for more capacity. What better way to address this need than with cloud storage? The benefits include access to a secure, virtually limitless pool of storage capacity, no future need for upgrades or replacements, and reduced capital expenses. Look for auto-tiering technologies that seamlessly combine cloud and on-premise storage into a hybrid environment that works with existing applications.
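The tiering decision at the heart of such technologies can be sketched in a few lines. The threshold and object metadata below are illustrative assumptions, not any vendor's actual policy:

```python
from dataclasses import dataclass

# Illustrative threshold: real auto-tiering engines derive this from
# observed access patterns rather than a fixed constant.
COLD_AFTER_DAYS = 30

@dataclass
class StoredObject:
    name: str
    days_since_access: int

def choose_tier(obj: StoredObject) -> str:
    """Keep hot data on-premise; push cold data to cloud storage."""
    return "cloud" if obj.days_since_access >= COLD_AFTER_DAYS else "on-premise"

def plan_migration(objects: list) -> dict:
    """Partition objects into the tier each should live on."""
    plan = {"on-premise": [], "cloud": []}
    for obj in objects:
        plan[choose_tier(obj)].append(obj.name)
    return plan
```

Running `plan_migration` over a mix of recently used and stale objects yields a per-tier migration plan that an auto-tiering layer could act on transparently, without the application knowing where its data lives.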
- Private cloud environments in enterprise companies. Enterprises looking to leverage the economies, efficiencies and scale of cloud providers are adopting cloud models in-house, such as OpenStack, for both compute and storage environments. These private clouds offer scale, agility and price/performance typically unmatched by traditional infrastructure solutions and can reside inside a company’s firewall. In the storage space, look for technologies that can combine existing SAN infrastructure and private cloud storage into a unified Cloud SAN.
- Disaster recovery to the cloud as a viable option. Traditionally, companies that need disaster recovery (DR) and business continuity (BC) have relied on dedicated, replicated infrastructure at an offsite location to recover from a physical disaster. This means paying for idle hardware that sits waiting for a disaster. DR in the cloud, on the other hand, means not paying for this infrastructure except when it is needed. The tradeoff? Cloud DR is not necessarily a zero-downtime solution, but look for offerings with recovery time objectives (RTOs) measured in hours.
- Disaster recovery from the cloud as a new need. What happens to business data stored by SaaS applications in the case of a disaster? The truth is that most SaaS providers do have a DR strategy, but many businesses will demand a recovery strategy under their own control. Look for the emergence of solutions that back up SaaS data either locally or to an alternate provider as an extra level of protection.
- Simplified onboarding of applications to the cloud. Certain business applications can move entirely to the cloud, thereby saving the administrative and maintenance headaches of their hardware/software platforms onsite. Many IT-strapped businesses can benefit from tools to make this migration viable. Look for robust tool sets that can migrate applications to a choice of cloud providers – and also bring those applications back on-premise should the need arise.
- Non-relational databases for big data. NoSQL databases, like Apache CouchDB, enable tremendous scalability to meet the needs of terabytes and petabytes of data accessed by millions of users. Big data will force many companies to consider these alternatives to traditional databases, and cloud deployment models will simplify the roll-out. Look for vendors providing supported NoSQL solutions.
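The document model behind stores like CouchDB can be illustrated with a toy in-memory sketch. This is not the CouchDB API itself, just the core idea: schemaless JSON-like documents, each addressed by an `_id` and versioned with a `_rev`:

```python
import uuid

class TinyDocStore:
    """Toy in-memory document store: schemaless documents,
    each addressed by an _id and versioned with a _rev counter."""

    def __init__(self):
        self._docs = {}

    def put(self, doc: dict) -> dict:
        doc = dict(doc)  # no fixed schema: any JSON-like dict is accepted
        doc.setdefault("_id", uuid.uuid4().hex)
        previous = self._docs.get(doc["_id"], {})
        doc["_rev"] = previous.get("_rev", 0) + 1  # bump revision on each update
        self._docs[doc["_id"]] = doc
        return doc

    def get(self, doc_id: str) -> dict:
        return self._docs[doc_id]
```

Because every document is self-contained and addressed by a single key, real stores can shard documents across many nodes by `_id`, which is what makes the horizontal scalability the article describes possible.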
- Use of the cloud for analytics. Analytics tend to require a scalable compute and storage environment as well as rather expensive software. Similar to idle hardware for disaster recovery purposes, analytics for many businesses may represent a seasonal need that only runs in short bursts and may not justify purchasing a dedicated software/hardware environment. Analytic environments in the cloud can turn the expense into a “pay-per-use” bill, meeting business goals at a far lower price point.
- SSD tiers of storage in the cloud. Moving higher-performance applications into the cloud doesn't always guarantee that they will get the level of performance they need from their data storage. By offering high-performance tiers of storage that are SSD-based (i.e., flash), cloud providers will be able to address the need for predictable, faster application response times.
- Improvements in data reduction technology. With cloud storage commanding a per-GB operating expense, deduplication and compression technologies have become nearly ubiquitous as ways to minimize costs. While some may argue the capacity optimization game has played out, there is still the challenge of optimizing capacity on a more global scale, across multiple tenants, as well as the challenge of rich media content, which does not fare particularly well with today's reduction technologies. Look for the introduction of new data reduction technologies that address both needs.
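A toy sketch of the underlying idea, content-addressed deduplication plus compression, built on Python's standard library rather than any vendor's actual technology, shows why repeated blocks are cheap to store:

```python
import hashlib
import zlib

def store_deduped(chunks: list) -> dict:
    """Content-addressed store: identical chunks are kept only once,
    and each unique chunk is compressed before being stored."""
    store = {}
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()  # identical data -> identical key
        if key not in store:
            store[key] = zlib.compress(chunk)
    return store

def reduction_ratio(chunks: list) -> float:
    """Raw bytes received divided by bytes actually stored."""
    raw = sum(len(c) for c in chunks)
    stored = sum(len(v) for v in store_deduped(chunks).values())
    return raw / stored
```

Repetitive business data both dedupes and compresses dramatically, while chunks of already-compressed rich media hash to distinct keys and barely shrink under `zlib`, which is exactly the gap the prediction points to.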
- “Cloud-envy” from cloud laggards. While many companies have already adopted the cloud and many more will adopt in 2012, others may still wait and ponder well past 2012. Regardless of which category a company falls into, the economics and efficiencies of the cloud have become irrefutable. As a result, some of the laggards will likely seek ways to leverage cloud methodologies that improve IT efficiency on-premise. Undoubtedly, some will fall prey to cloudwashing by purchasing traditional IT infrastructure named “cloud” in an attempt to satisfy their “cloud-envy.”
Bottom line? Cloud deployments are becoming simpler and more secure and the economics continue to improve. Which of these trends will your business follow in 2012?