Case Study: Accelerate - Academic Research | @CloudExpo @DDN_limitless #Cloud #Storage

UCL transforms research collaboration and data preservation with scalable cloud object storage appliance from DDN

University College London (UCL), consistently ranked among the top five universities in the world, is London's leading multidisciplinary university, with more than 10,000 staff, over 26,000 students and more than 100 departments, institutes and research centers. With 25 Nobel Prize winners and three Fields Medalists among UCL's alumni and staff, the university has attained a world-class reputation for the quality of its teaching and research across the academic spectrum.

As London's premier research institution, UCL has 5,000 researchers committed to applying their collective strengths, insights and creativity to overcome problems of global significance. The university's innovative, cross-disciplinary research agenda is designed to deliver immediate, medium and long-term benefits to humanity. UCL Grand Challenges, which encompass Global Health, Sustainable Cities, Intercultural Interaction and Human Wellbeing, are a central feature of the university's research strategy.

According to Dr. J. Max Wilkinson, Head of Research Data Services for the UCL Information Services Division, sharing and preserving project-based research results is essential to the scientific method. "I was brought in to provide researchers with a safe and resilient solution for storing, sharing, reusing and preserving project-based data," he explains. "Our goal is to remove the burden of managing project data from individual researchers while making it more available over longer periods of time."

The Challenge
The opportunity to improve the sharing of and access to project-based research presented several unique technical and cultural challenges. On the technical side, the team had to accommodate many different types of data, growing in both volume and velocity. In some cases, a small amount of data is so valuable to a research team that six discrete copies are kept on separate USB drives or removable hard drives stored in different locations. In other instances, UCL researchers produce copious amounts of very well-defined data that pass between the compute algorithms underpinning their research.

In addition to solving technical problems, the research data services team had the opportunity to support researchers in a new 'data-intensive' world by making it safe and easy to follow best practices in data management and to use best-in-class storage solutions. "We discovered the valuable data underpinning most research projects were stuck on a hard drive or disk, never to be seen again," adds Wilkinson. "If we could provide a framework over which people could share and preserve data confidently, we could minimize this behavior and improve research by making the scholarly record more complete."

To accomplish this, UCL needed to provide an enterprise-class foundation for data manipulation that met the needs of its diverse user community. While some researchers considered 100GB a large amount of data, others clamored for more than 100TB to support a particular project. There was also an expectation that up to 3,000 individuals from UCL's total base of 5,000 active researchers and collaborators would require services within the next 18 to 24 months.

"We had a simple services proposition that would eliminate the need for research teams to manage racks of servers and data storage devices," says Wilkinson. "Of course, this meant we'd need a highly scalable storage infrastructure that could grow to 100PB without creating a large storage footprint or excessive administrative overhead."

Additionally, the team had to address long-term data retention needs that extended well beyond the lifetime of individual research projects. UCL, like many other research-intensive UK institutions, faces increasingly stringent requirements from the UK Research Councils and other funding bodies for the management of project data outputs. As grant funding in the UK supports best practice, it was critical to have a proven data management plan documenting how UCL would preserve data, sometimes for decades, while ensuring maximum appropriate access and reuse by third parties.

The Solution
In seeking a scalable, resilient storage foundation, UCL issued an RFP to solicit insight on different approaches for consolidating the university's research data storage infrastructure. Each of the 21 RFP respondents was asked to provide examples of large-scale deployments, which produced far-ranging answers, including how providers addressed sheer data volume, reduced increasingly complex environments or delivered overarching data management frameworks.

UCL's RFP covered a diverse set of requirements to determine each potential solution provider's respective strengths and limitations. "We asked for more than we thought possible from a single vendor, from synchronous file sharing to a high-performance parallel file system, all on highly scalable, resilient storage that would be simple to manage," notes Daniel Hanlon, Storage Architect for Research Data Services at University College London. "We wanted to cover our bases while determining what was practical and doable for researchers."

Recommendations encompassed a broad storage spectrum, including NAS, SAN, HSM, object storage, asset management solutions and small amounts of spinning disks with lots of back-end tape. "Because we had such broad requirements, we omitted any vendor that was bound to a particular hardware platform," explains Wilkinson. "It was important to be both data and storage agnostic so we would have the flexibility to support all data and media types without being locked into any particular hardware platform."

With its ability to support virtually unlimited scalability, object storage appealed to UCL, especially since it also would be much easier to manage than alternatives. Still, object storage was seen as a relatively new technology and UCL lacked hands-on experience with large-scale deployments within the university's ecosystem. In addition to evaluating the different technologies, UCL also assessed each provider's understanding of their environment, as it was critically important to accommodate UCL's researcher requirements in order to drive acceptance. "Some of the RFP respondents didn't understand the difference between the corporate and academic worlds, and the fact that universities by nature generally have to avoid being tied into particular closed technologies," adds Hanlon. "Many of the RFP respondents were eliminated, not because of their technical response, but because they didn't really get what we were trying to do."

As a result, the universe of prospective solutions was reduced to a half-dozen recommendations. As the team took a closer look at the finalists, they considered each vendor's academic track record, ability to scale without overburdening administrators and experience with open-source technology. "We wanted to work with a storage solutions provider that took advantage of open-source solutions," Hanlon notes. "This would enable us to partner with them and also with other academic institutions trying to do similar things."

In the final analysis, UCL wanted a partner with equal enthusiasm for freeing researchers from the burden of data storage so they could maximize the impact of their projects. "We were very interested in building a relationship with a strong storage partner to fill our technology gap," says Wilkinson. "After a thorough assessment, DataDirect™ Networks (DDN) met our technical requirements and shared our data storage vision. In evaluating DDN, we agreed that their solution had a simple proposition, high performance and low administration overhead."

The proposed solution, which included the GRIDScaler massively scalable parallel file system and Web Object Scaler (WOS), also provided the desired scalability and management simplicity. Another plus for WOS storage was its tight integration with the integrated Rule-Oriented Data System (iRODS). This open-source solution is ideally suited to research collaboration, making it easier to organize, share and find collections of data stored in local and remote repositories.
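As a rough illustration of what iRODS-style data management can look like from a researcher's side, the sketch below uses the open-source python-irodsclient package to upload a result file, tag it with project metadata so collaborators can find it, and read it back. The hostname, zone, account, paths and metadata values are placeholders for illustration only, not details of UCL's deployment.

```python
# Illustrative sketch using python-irodsclient (pip install python-irodsclient).
# All connection details, paths and metadata below are hypothetical placeholders.
from irods.session import iRODSSession

with iRODSSession(host="irods.example.ac.uk", port=1247,
                  user="researcher", password="secret",
                  zone="researchZone") as session:
    # Register a local result file into a shared project collection
    local_file = "results/simulation_run42.csv"
    irods_path = "/researchZone/home/researcher/project_x/simulation_run42.csv"
    session.data_objects.put(local_file, irods_path)

    # Attach descriptive metadata so the data can be found and reused later
    obj = session.data_objects.get(irods_path)
    obj.metadata.add("project", "grand-challenge-health")
    obj.metadata.add("instrument", "sequencer-03")

    # A collaborator with access can retrieve the same object by its path
    with session.data_objects.get(irods_path).open("r") as f:
        contents = f.read()
```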

"It was important that DDN's solution gave us multiple ways to access the same storage, so we could be compatible with existing application codes," says Hanlon. "The tendency with other solutions was to give us bits of technology that had been developed in different spaces and that didn't really fit our problem."

The Benefits
During a successful pilot implementation involving a half-petabyte of storage, UCL gained first-hand insight into the advantages of DDN's turnkey distributed storage and collaboration solution. "The main attraction of DDN WOS is the combination of an efficient object store with edge appliances to ease integration with other storage infrastructure," says Hanlon. Another big plus for UCL is DDN's high-density storage, which lets the university fit far more disks into its existing racks. That density is crucial to growing capacity while maintaining a small footprint in UCL's highly congested, expensive central London location.

As researchers are often reluctant to give up control of their data storage solutions, the team also has been pleased to discover early adopters who see the value of using the new service to protect and preserve current data assets. In fact, the new research data service already is getting high marks for performance reliability, data durability, data backup and disaster recovery capabilities.

UCL predicts that as traction for the new service increases, there will be greater interest in leveraging it to further extend how current research is reused and exploited to drive more impactful outcomes. By taking this innovative approach, the UCL Research Data Services team is embracing the open data movement while enlisting leading-edge technologies to deliver reliable, flexible data access that maximizes appropriate sharing and re-use of research data.

Additionally, by planning to add a scalable archive to its dynamic storage service offering, UCL is taking the worry of meeting increasingly stringent funder expectations out of the storage equation for researchers. "We'll be able to tell researchers that if they use our services, they'll be compliant with UCL, UK Research Council and other UK and international funding bodies' policies and requirements," Wilkinson says. "They won't have to worry about it because we will."

By providing a framework over which UCL researchers can store and share data confidently, UCL expects to achieve significant bottom-line cost savings. Early projections for the initial phase of the infrastructure build-out put savings at upwards of hundreds of thousands of pounds, simply by eliminating the need for thousands of researchers to procure and maintain their own storage hardware. "DDN is empowering us to deliver performance and cost savings through a dramatically simplified approach; in doing so we support UCL researchers, their collaborators and partners to maintain first-class research at London's global university," concludes Wilkinson. "Add in the fact that DDN's resilient, extensible storage solution provided evidence of seamless expansion from a half-petabyte to 100PB, and we found exactly the foundation we were looking for."
