
Big Dollars from Big Data

How to reduce costs and increase performance in the data center

Cloud computing has given birth to a broad range of online services. To maintain a competitive edge, service providers are taking a closer look at their Big Data storage infrastructure in an earnest attempt to improve performance and reduce costs.

Large enterprises hosting their own cloud servers are seeking ways to scale and improve performance while maintaining or lowering expenditures. If the status quo of scaling users and storage infrastructure is upheld, it will become increasingly difficult to maintain low-cost cloud services such as online account management or data storage. Service providers will face higher energy consumption in their data centers overall, and many are loath to begin charging for online account access.

Costs vs. Benefits
In response to the growth in online account activity, many service providers are transitioning their data centers to a centralized environment in which data is stored in a single location and made accessible from anywhere via the Internet. Centralizing the equipment lets service providers keep costs down while delivering better Internet connections to their online users and realizing gains in performance and reliability.

Yet these performance improvements come at a price: scaling becomes more arduous and cost-prohibitive. Improving functionality within a centralized data center requires the purchase of additional high-performance, specialized equipment, driving up costs and energy consumption in ways that are difficult to control at scale. In an economy where large organizations are seeking cost-cutting measures from every angle, these added expenses are unacceptable.

More Servers, More Problems?
Once a telco moves into providing cloud-based services for its users, such as online account access and management, the demands on its data centers spike dramatically. While the typical employee on a telco's or service provider's internal network expects high performance, these internal systems serve relatively few users, who access files directly over the local network. Those employees are also typically opening, sending and saving low-volume files such as documents and spreadsheets, which consumes little storage capacity and keeps the performance load light.

Outside the internal network environment, however, the service provider's cloud servers are accessed simultaneously over the Internet by far more users, and that shared point of access quickly becomes a performance bottleneck. Providers, telcos and other large enterprises offering cloud services therefore not only have to scale their storage systems for each additional user, but must also sustain performance across the combined user base. Because far more users are working with online account tools at any given time, cloud services place a much greater strain on data center resources.
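A rough back-of-envelope calculation makes the shape of the problem clearer. The figures below are entirely hypothetical and only illustrate how aggregate cloud load grows with the subscriber base; they do not describe any particular provider's hardware.

```python
# Hypothetical capacity sketch: internal network vs. public cloud service.
# All numbers below are illustrative assumptions, not measurements.

requests_per_active_user = 0.2          # req/s per active user (assumed)

# Internal network: a few hundred employees accessing files directly.
internal_users = 500
internal_load = internal_users * requests_per_active_user               # 100 req/s

# Cloud service: millions of subscribers, a small fraction active at once.
subscribers = 2_000_000
active_fraction = 0.01                                                  # 1% concurrently active
cloud_load = subscribers * active_fraction * requests_per_active_user   # 4,000 req/s

# Capacity a single entry point can sustain (assumed).
single_entry_capacity = 1_500                                           # req/s

print(f"internal load: {internal_load:,.0f} req/s")
print(f"cloud load:    {cloud_load:,.0f} req/s")
print(f"a single entry point saturates once load exceeds {single_entry_capacity:,} req/s")
```

The exact numbers are beside the point; what matters is that aggregate cloud load scales with the subscriber base, so any fixed-capacity entry point is eventually saturated no matter how powerful the hardware behind it is.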

Combining Best Practices
To remain competitive, cloud service providers must find a way to scale rapidly to accommodate the proliferating demand for more data storage. Service providers seeking data storage options should look for an optimal combination of performance, scalability and cost-effectiveness. The following best practices can help maximize data center ROI in an era of IT cutbacks:

  1. Pick commodity components: Low-energy hardware can make good business sense. Commodity hardware not only costs less to buy, it also uses far less energy, cutting both setup and operating costs in one move.
  2. Look for distributed storage: Distributed storage is the best way to build at scale, even though the data center trend has been moving toward centralization. There are now ways to increase performance at the software level that counterbalance the performance advantage of a centralized data storage approach.
  3. Avoid bottlenecks at all costs: A single point of entry easily becomes a performance bottleneck. Adding caches to alleviate the bottleneck, as most data center architectures presently do, adds cost and complexity to a system very quickly. A horizontally scalable system that distributes data among all nodes, by contrast, avoids the bottleneck and delivers a high level of redundancy (see the sketch below).
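A common way to realize points 2 and 3 is consistent hashing: every storage node owns a slice of the key space, objects are routed by hashing their keys, and each object is kept on a few successive nodes for redundancy. The sketch below is a minimal illustration of that general technique, not any vendor's actual implementation; the node names, virtual-node count and replication factor are all assumptions.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hash ring mapping object keys to storage nodes."""

    def __init__(self, nodes, vnodes=100, replicas=3):
        self.replicas = replicas
        self.ring = []                              # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):                 # virtual nodes smooth the distribution
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def nodes_for(self, key):
        """Return the primary node for `key` plus (replicas - 1) distinct successors."""
        start = bisect_right(self.ring, (self._hash(key),)) % len(self.ring)
        chosen, i = [], start
        while len(chosen) < self.replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in chosen:
                chosen.append(node)
            i += 1
        return chosen

# Hypothetical cluster of eight low-cost commodity nodes.
ring = HashRing([f"node-{n}" for n in range(8)])
print(ring.nodes_for("user-42/statement-2016-01.pdf"))
```

Because any node can locate and serve the keys it owns, there is no single entry point to saturate, and adding another commodity node simply claims a new slice of the ring, which is how capacity and performance can scale together.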

Conclusion
Big Data storage today consists mainly of high-performance, vertically scaled storage systems. Because these architectures can only scale to a single petabyte and are expensive, they are not cost-effective or sustainable in the long run. Moving to a horizontally scaled data storage model that distributes data evenly onto low-energy hardware can reduce costs and increase performance in the cloud. With these insights, providers of cloud services can take steps to improve the performance, scalability and efficiency of their data storage centers.

More Stories By Stefan Bernbo

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, he has designed and built numerous enterprise-scale data storage solutions designed to store huge data sets cost-effectively. From 2004 to 2010 Stefan worked in this field at Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Before that, Stefan worked on system and software architecture on several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed-network operators.
