
Book Excerpt: Java Application Profiling Tips and Tricks

Part 1 - Common types of performance issues

Chapter 5 of the book Java Performance, "Java Application Profiling," presented the basic concepts of using a modern Java profiler such as the Oracle Solaris Studio Performance Analyzer and the NetBeans Profiler. It did not, however, offer specific tips and tricks for using the tools to identify performance issues, or approaches to resolving them. That is the purpose of this excerpt: to show how to use the tools to identify performance issues and take corrective actions to resolve them. This excerpt looks at several of the more common types of performance issues the authors have observed through many years of working as Java performance engineers.

Performance Opportunities
Most Java performance opportunities fall into one or more of the following categories:

•  Using a more efficient algorithm. The largest gains in the performance of an application come from the use of a more efficient algorithm, which allows an application to execute with fewer CPU instructions, also known as a shorter path length. An application that executes with a shorter path length generally executes faster. Many different changes can lead to a shorter path length. At the highest level of the application, using a different data structure or modifying its implementation can lead to a shorter path length. Many applications that suffer from performance issues use inappropriate data structures. There is no substitute for choosing the proper data structure and algorithm. As profiles are analyzed, take notice of the data structures and algorithms used; optimal performance is realized when the best data structures and algorithms are utilized (see the sketch following this list).

•  Reduce lock contention. Contending for access to a shared resource inhibits an application's capability to scale to a large number of software threads and across a large number of CPUs. Changes that make lock contention less frequent and lock hold times shorter allow an application to scale better.

•  Generate more efficient code for a given algorithm. Clocks per CPU instruction, usually referred to as CPI, is the ratio of the number of CPU clock ticks used per CPU instruction. CPI is a measure of the efficiency of the generated code produced by a compiler. A change in the application, JVM, or operating system that reduces the CPI for an application will improve its performance, since it takes advantage of better, more optimized generated code.
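To make the first category concrete, here is a minimal, hypothetical sketch (the class and data are illustrative, not from the profiled application) of how a data structure choice shortens path length: replacing a linear scan of a list with a hash-based set lookup reduces the work per membership query from O(n) to near O(1).

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MembershipCheck {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < 1000000; i++) {
            list.add(i);
            set.add(i);
        }

        // O(n) per query: walks the list until a match is found.
        long start = System.nanoTime();
        boolean inList = list.contains(999999);
        long listTime = System.nanoTime() - start;

        // Near O(1) per query: a hash probe, far fewer instructions.
        start = System.nanoTime();
        boolean inSet = set.contains(999999);
        long setTime = System.nanoTime() - start;

        System.out.printf("list: %b in %d ns, set: %b in %d ns%n",
                inList, listTime, inSet, setTime);
    }
}

A single timing like this is only a rough illustration; a profiler, as used throughout this excerpt, is the reliable way to confirm where path length is actually being spent.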

There is a subtle difference between path length, which is closely tied to the algorithm choice, and cycles per instruction (CPI), which is the notion of generating more efficient code. In the former, the objective is to produce the shortest sequence of CPU instructions based on the algorithm choice. The latter's objective is to reduce the number of CPU clocks consumed per CPU instruction, that is, to produce the most efficient code from a compiler.

To illustrate with an example, suppose a CPU instruction, such as a load instruction, results in a CPU cache miss. It may take several hundred CPU clock cycles for that load instruction to complete, because the cache miss forces the data to be fetched from memory rather than found in a CPU cache. However, if a prefetch instruction is inserted upstream in the sequence of instructions generated by a compiler to prefetch from memory the data being loaded by the load instruction, the load will likely require fewer clock cycles, since the prefetch can be done in parallel with the other CPU instructions ahead of the load. When the load instruction occurs, it can then find the data to be loaded in a CPU cache. However, the path length, that is, the number of CPU instructions executed, is longer as a result of the additional prefetch instruction. Therefore, it is possible to increase path length yet make better use of available CPU cycles.

The following sections present several strategies to consider when analyzing a profile and looking for optimization opportunities. Generally, optimization opportunities for most applications fall into one of the general categories just described.

System or Kernel CPU Usage
Chapter 2, "Operating System Performance Monitoring," suggests one of the statistics to monitor is system or kernel CPU utilization. If CPU clock cycles are spent executing operating system or kernel code, those are CPU clock cycles that cannot be used to execute your application. Hence, a strategy to improve the performance of an application is to reduce the amount of time it spends consuming system or kernel CPU clock cycles. However, this strategy is not applicable in applications that spend little time executing system or kernel code. Monitoring the operating system for system or kernel CPU utilization provides the data as to whether it makes sense to employ this strategy.

The Oracle Solaris Performance Analyzer collects system or kernel CPU statistics as part of an application profile. This is done by selecting the View > Set Data Presentation menu in the Performance Analyzer, choosing the Metrics tab, and setting the options to present system CPU utilization statistics, both inclusive and exclusive. Recall that inclusive metrics include not only the time spent in a given method, but also the time spent in the methods it calls. In contrast, exclusive metrics report only the amount of time spent in the given method itself.

Tip
It can be useful to include both inclusive and exclusive metrics when first analyzing a profile. Looking at the inclusive metrics provides a sense of the path the application executes. Looking at the general path an application takes, you may identify an opportunity for an alternative algorithm or approach that offers better performance.

Figure 1 shows the Performance Analyzer's Set Data Presentation form with options selected to present both inclusive and exclusive System CPU metrics. Also notice that the selected options report both the raw time value and the percentage of System CPU time.

Figure 1: Set system CPU data presentation


Figure 2: Exclusive system CPU

After clicking the OK button, the Performance Analyzer displays the profile's System CPU inclusive and exclusive metrics in descending order. The arrow in the metric column header indicates how the data is presented and sorted. In Figure 2, the System CPU data is ordered by the exclusive metric (notice the arrow in the exclusive metric column header and the icon indicating an exclusive metric).

Figure 2 shows a profile from an application that exhibits high system or kernel CPU utilization. You can see this application consumed about 33.5 seconds of System CPU in the java.io.FileOutputStream.write(int) method and about 11.6 seconds in a method called __write(), or about 65% and 22.5%, respectively. You can also get a sense of how significant an improvement can be realized by reducing the System CPU utilization of this application. The ideal situation for an application is 0% System CPU utilization, but for some applications that goal is difficult to achieve, especially if I/O is involved, since I/O operations require a system call. In applications that require I/O, the goal is to reduce the frequency of system calls. One approach to reduce the call frequency of an I/O system call is to buffer the data so that larger chunks of data are read or written during I/O operations.

In the example shown in Figure 2, you can see the file write (output) operations are consuming a large amount of time, as illustrated by the java.io.FileOutputStream.write(int) and __write() entries. To identify whether the write operations are buffered, you can use the Callers-Callees tab to walk up the call stack and see which methods call the FileOutputStream.write(int) method and the __write() method. You walk up the call stack by selecting one of the callers in the upper panel and clicking the Set Center button. Figure 3 shows the Callers-Callees of the FileOutputStream.write(int) method.

The callers of FileOutputStream.write(int) are ExtOutputStream.write(int) and OutImpl.outc(int). 85.18% of the System CPU attributed to FileOutputStream.write(int) comes from its use in ExtOutputStream.write(int) and 14.82% of it from OutImpl.outc(int). A look at the implementation of ExtOutputStream.write(int) shows:

Figure 3: FileOutputStream.write(int) callers and callees

public void write(int b) throws IOException {
    super.write(b);
    writer.write((byte)b);
}
A look at the implementation of super.write(b) shows it is not a call to FileOutputStream.write(int):
public void write(int b) throws IOException {
    crc = crc * 33 + b;
}

But the writer field in ExtOutputStream is declared as a FileOutputStream:

private FileOutputStream writer;

And it is initialized without any type of buffering:

writer = new FileOutputStream(currentFileName);

currentFileName is a field declared as a String:

private String currentFileName;

Hence, an optimization to be applied here is to buffer the data being written to FileOutputStream in ExtOutputStream using a BufferedOutputStream. This is done rather quickly and easily by chaining or wrapping the FileOutputStream in a BufferedOutputStream in an ExtOutputStream. Here is a quick listing of the changes required:

// Change FileOutputStream writer to a BufferedOutputStream
// private FileOutputStream writer;
private BufferedOutputStream writer;

Then chain a BufferedOutputStream and FileOutputStream at initialization time:

// Initialize BufferedOutputStream
// writer = new FileOutputStream(currentFileName);
writer = new BufferedOutputStream(
        new FileOutputStream(currentFileName));

Writing to the BufferedOutputStream, instead of the FileOutputStream, in ExtOutputStream.write(int b) does not require any update, since BufferedOutputStream has a write() method that buffers the bytes written to it. The ExtOutputStream.write(int b) method is shown here:

public void write(int b) throws IOException {
    super.write(b);
    // No update required here;
    // automatically uses BufferedOutputStream.write()
    writer.write((byte)b);
}

The other uses of the writer field must be inspected to ensure the use of BufferedOutputStream operates as expected. In ExtOutputStream, there are two additional uses of the writer field, one in a method called reset() and another in checkResult(). These two methods are as follows:

public void reset() {
    super.reset();
    try {
        if (diffOutputStream != null) {
            diffOutputStream.flush();
            diffOutputStream.close();
            diffOutputStream = null;
        }
        if (writer != null) {
            writer.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public void checkResult(int loopNumber) {
    try {
        writer.flush();
        writer.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    check(validiationProperties.getProperty(propertyName));
    outProperties.put(propertyName, "" + getCRC());
    reset();
}

The uses of writer as a BufferedOutputStream work as expected. Note that the API specification for BufferedOutputStream.close() indicates it calls the BufferedOutputStream.flush() method and then calls the close() method of its underlying output stream, in this case the FileOutputStream.close() method. As a result, the FileOutputStream is not required to be explicitly closed, nor is the flush() call in ExtOutputStream.checkResult(int) required. A couple of additional enhancements worth considering are:

1. A BufferedOutputStream can also be allocated with an optional buffer size. The default buffer size, as of Java 6, is 8192 bytes. If the application you are profiling writes a large number of bytes, you might consider specifying an explicit size larger than 8192. If you specify an explicit size, consider a size that is a multiple of the operating system's page size, since operating systems efficiently fetch memory in multiples of the page size. On Oracle Solaris, the pagesize command with no arguments reports the default page size. On Linux, the default page size can be obtained with the getconf PAGESIZE command. Windows on x86 and x64 platforms defaults to a 4K (4096-byte) page size.

2. Change the ExtOutputStream.writer field from the explicit BufferedOutputStream type to the OutputStream type, that is, OutputStream writer = new BufferedOutputStream(...) instead of BufferedOutputStream writer = new BufferedOutputStream(...). This allows additional flexibility in the type of OutputStream used, for example, ByteArrayOutputStream, DataOutputStream, FilterOutputStream, FileOutputStream, or BufferedOutputStream. Both enhancements are combined in the sketch following this list.
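A minimal sketch combining both enhancements, assuming a 4096-byte operating system page size (the field names follow the earlier snippets):

// Enhancement 2: declare the field using the more general OutputStream type
private OutputStream writer;

// Enhancement 1: construct with an explicit buffer size that is a
// multiple of the operating system page size, here 8 x 4096 = 32768
writer = new BufferedOutputStream(
        new FileOutputStream(currentFileName), 32768);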

Looking back at Figure 3, a second method, org.w3c.tidy.OutImpl.outc(int), also calls FileOutputStream.write(int); it comes from a third-party library used in the profiled application. To reduce the amount of system CPU utilization consumed in a third-party supplied method, the best approach is to file a bug or enhancement request with the third-party library provider and include the information from the profile. If the source is accessible under an open source license with acceptable terms, you may consider investigating further and including additional information in the bug or enhancement request report.

After applying the changes identified in ExtOutputStream, using the BufferedOutputStream with its default constructor (not including the two additional enhancements just mentioned), and collecting a profile, the amount of system CPU utilization drops substantially. Comparing the profile in Figure 4 to the one in Figure 2, you can see the amount of inclusive System CPU time spent in java.io.FileOutputStream has dropped from 45.182 seconds to 6.655 seconds (exclusive System CPU time is the second column).

Executing this application workload outside the profiler in a performance testing environment prior to the modifications, the application took 427 seconds to run to completion. In contrast, the modified version using the BufferedOutputStream runs to completion in the same performance testing environment in 383 seconds. In other words, this application realized about a 10% improvement in its time to completion.

In addition, looking at the Callers-Callees tab for java.io.FileOutputStream.write(int), only the call from org.w3c.tidy.OutImpl.outc(int) remains as a significant consumer of the FileOutputStream.write(int) method. The Callers-Callees of FileOutputStream.write(int) are shown in Figure 5.

Figure 4: Reduced system CPU utilization

Figure 5: Callers-Callees after changes

Comparing the Callers-Callees in Figure 5, after the changes to ExtOutputStream, with those in Figure 3, prior to the changes, you can see that the amount of attributable time spent in org.w3c.tidy.OutImpl.outc(int) stays about the same. This should not be a surprise, since the changes made to ExtOutputStream now use BufferedOutputStream. But recall that the BufferedOutputStream invokes a FileOutputStream method when its underlying buffer becomes full, when the BufferedOutputStream.flush() method is called, or when the BufferedOutputStream.close() method is called. If you look back at Figure 4, you see a FileOutputStream.writeBytes(byte[], int, int) method. This is the method that the BufferedOutputStream in ExtOutputStream calls. Figure 6 shows the Callers-Callees tab for FileOutputStream.writeBytes(byte[], int, int).

Selecting the java.io.FileOutputStream.writeBytes(byte[], int, int) method in the upper panel and clicking the Set Center button shows that BufferedOutputStream.flushBuffer() is its caller; see Figures 6 and 7.

Figure 6: Callers-Callees of FileOutputStream.writeBytes(byte[],int,int)

Figure 7: Callers-Callees of FileOutputStream.writeBytes(byte[], int, int)

Figure 8: Callers-Callees of BufferedOutputStream.flushBuffer()

Figure 9: Callers-Callees of BufferedOutputStream.write(int)

Selecting the BufferedOutputStream.flushBuffer() method in the upper panel and clicking the Set Center button shows that the caller of java.io.BufferedOutputStream.flushBuffer() is BufferedOutputStream.write(int). The Callers-Callees of BufferedOutputStream.flushBuffer() are shown in Figure 8.

Selecting the BufferedOutputStream.write(int) method in the upper panel and clicking the Set Center button shows that the caller of java.io.BufferedOutputStream.write(int) is ExtOutputStream.write(int), the method that has been modified. The Callers-Callees of BufferedOutputStream.write(int) are shown in Figure 9.

As mentioned earlier, the next step in reducing System CPU utilization for this application requires a modification to a third-party library, the library that holds the implementation of org.w3c.tidy.OutImpl.outc(int). It may be possible for the maintainers of that library to implement a modification to OutImpl.outc(int) similar to the one just described and implemented for ExtOutputStream.write(int). However, the performance improvement will likely not be as significant, since the profile suggests more System CPU utilization is attributed to the call path of ExtOutputStream.write(int) than to OutImpl.outc(int); refer to Figure 3 for the attributable System CPU utilization of the callers of FileOutputStream.write(int). In addition, the System CPU utilization consumed in OutImpl.outc(int), about 6.6 seconds, is rather small compared to the total application runtime of 383 seconds, about 1.7%. Hence, a modification to reduce the System CPU utilization spent in OutImpl.outc(int) would likely not yield more than a 1% to 2% improvement.

Tip
Applications that perform network I/O can employ a similar approach to reduce system CPU utilization as that just described in this section: buffer the data in both the input and output streams used to read and write data over the network.
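As a minimal sketch of that tip (the host, port, and request are illustrative assumptions, not from the profiled application), a socket's streams can be wrapped the same way the file stream was wrapped earlier:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class BufferedSocketExample {
    public static void main(String[] args) throws IOException {
        // Illustrative endpoint; substitute a real host and port.
        Socket socket = new Socket("example.com", 80);
        try {
            // Buffer both directions so each read() and write() call
            // does not necessarily cross into the kernel.
            OutputStream out = new BufferedOutputStream(socket.getOutputStream());
            InputStream in = new BufferedInputStream(socket.getInputStream());

            out.write("GET / HTTP/1.0\r\n\r\n".getBytes("US-ASCII"));
            out.flush(); // push the buffered request in one system call

            int b;
            while ((b = in.read()) != -1) {
                System.out.write(b); // reads are served from the buffer
            }
            System.out.flush();
        } finally {
            socket.close();
        }
    }
}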

An additional strategy to reduce system CPU utilization for applications performing large amounts of network I/O is to use Java NIO nonblocking data structures. Java NIO was introduced in Java 1.4, with many runtime performance improvements added in Java 5 and Java 6. Java NIO nonblocking data structures allow reading or writing as much data as possible in a single call to a network I/O (read or write) operation. Remember that every network I/O call eventually results in the invocation of an operating system system call, which consumes system CPU.

The challenge with Java NIO nonblocking data structures is that they are more difficult to program than blocking Java NIO or the older, more traditional Java SE blocking data structures such as java.net.Socket. In a Java NIO nonblocking output operation, you can write as many bytes as the operating system allows, but you have to check the return value of the operation to determine whether all the bytes you asked to be written were indeed written. In a Java NIO nonblocking input operation, where you read as many bytes as are available, you have to check how many bytes were read. You also have to implement some complex programming logic to deal with partially read protocol data units, or multiple protocol data units; that is, you may not be able to read enough bytes in a single read operation to construct a meaningful protocol data unit or message. With blocking I/O, you simply wait until the specified number of bytes constituting a full protocol data unit or message has been read.

Whether to migrate an application to nonblocking network I/O operations should be decided by the application's performance needs. If you want to take advantage of the additional performance promised by nonblocking Java NIO, consider using a general Java NIO framework to make the migration easier. Several popular Java NIO frameworks are available, such as Project Grizzly and Apache MINA.
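A minimal sketch of the partial-write bookkeeping described above, using java.nio.channels.SocketChannel in non-blocking mode (the endpoint is an illustrative assumption); note the loop that re-checks how many bytes each write() call actually consumed:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingWriteExample {
    // Write the whole buffer, tolerating partial writes. In non-blocking
    // mode, write() may consume fewer bytes than requested, even zero.
    static void writeFully(SocketChannel channel, ByteBuffer buffer)
            throws IOException {
        while (buffer.hasRemaining()) {
            int written = channel.write(buffer);
            if (written == 0) {
                // Kernel send buffer is full; a real application would
                // register OP_WRITE with a Selector instead of spinning.
                Thread.yield();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        SocketChannel channel = SocketChannel.open();
        try {
            // Illustrative endpoint; connect in blocking mode, then switch.
            channel.connect(new InetSocketAddress("example.com", 80));
            channel.configureBlocking(false);
            writeFully(channel, ByteBuffer.wrap(
                    "GET / HTTP/1.0\r\n\r\n".getBytes("US-ASCII")));
        } finally {
            channel.close();
        }
    }
}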

Another area where high System CPU utilization may show up is in applications experiencing heavy lock contention. Identifying lock contention in a profile and approaches to reduce lock contention are discussed in Part 2.

•   •   •

This excerpt is from the book Java Performance by Charlie Hunt and Binu John, published in the Addison-Wesley Professional Java Series, ISBN 0137142528, September 2011, copyright 2012 Oracle America, Inc. For more information, please visit: www.informit.com/title/0137142528

More Stories By Charlie Hunt

Charlie Hunt is the JVM performance lead engineer at Oracle. He is responsible for improving the performance of the HotSpot JVM and the Java SE class libraries. He has also been involved in improving the performance of Oracle GlassFish Server and Oracle WebLogic Server. A regular JavaOne speaker on Java performance, he also coauthored NetBeans™ IDE Field Guide (Prentice Hall, 2005).

More Stories By Binu John

Binu John is a senior performance engineer at Ning, Inc., where he focuses on improving the performance and scalability of the Ning platform to support millions of page views per month. Before that, he spent more than a decade working on Java-related performance issues at Sun Microsystems, where he served on Sun's Enterprise Java Performance team. John has contributed to developing industry-standard benchmarks such as SPECjms2007 and SPECjEnterprise2010, published several performance whitepapers, and contributed to java.net's XMLTest and WSTest benchmark projects.

