By Dana Gardner
December 3, 2012 06:45 AM EST
Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how regional healthcare services provider Lake Health in Ohio has matured from deploying security technologies to running a comprehensive internal risk-reduction practice for its own consumers.
We learn how Lake Health's Information Security Officer has been expanding the breadth and depth of risk management there to a more holistic level -- and we also discuss how they've gone about deciding which risk and compliance services to seek from outside providers, and which to keep on-premises.
Here to explore these and other security-related enterprise IT issues, we're joined by our co-hosts for this sponsored podcast, Chief Software Evangelist at HP, Paul Muller, and Raf Los, Chief Security Evangelist at HP.
And we also welcome our special guest, Keith Duemling, Information Security Officer at Lake Health. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: Many people are practicing IT security and they're employing products and technologies. They're putting in best practices and methods, of course.
But you have a different take. You've almost abstracted this up to information assurance -- even quality assurance -- for knowledge, information, and privacy. Tell me how that higher abstraction works, and why you think it's more important or more successful than just IT security?
Duemling: If you look at the history of information security at Lake Health, we started like most other organizations. We were very technology focused, implementing one or two point solutions to address specific issues. As our program evolved, we started to change how we looked at it and considered it less of a pure privacy issue and more of a privacy and quality issue.
Go back to the old tenets of security, with confidentiality, integrity, and availability. We started thinking that, of those three, we really focused on the confidentiality. But as an industry, we haven't focused that much on the integrity -- and the integrity is closely tied to the quality.
So we wanted to transform our program into an information-assurance program, so that we could allow our clinicians and other caregivers to have the highest level of assurance that the information they're making decisions based on is accurate and is available, when it needs to be, so that they feel comfortable in what they are doing.
As background, Lake Health is a not-for-profit healthcare system. We’re about 45 minutes outside of Cleveland, Ohio. We have two freestanding hospitals and approximately 16 satellite sites of different sizes that provide healthcare to the citizens of the county that we’re in and three adjacent counties.
We have three freestanding 24×7 emergency rooms (ERs), which treat all kinds of injuries, from simple broken fingers to severe car accidents, heart attacks, and things of that nature.
We also have partnerships with a number of very large healthcare systems in the region, and organizations of that size. We send some of our more critically injured patients to those providers, and they will send some of their patients to us for more localized, smaller care closer to their place of residence.
We’ve grown from a single, small community hospital to the organization that we have now.
Typically, when I started, an individual was assigned a set of projects to work on, and I was assigned a series of security projects. I had a security background that I came to the organization with. Over time, those projects congealed into the security program that we have now, and if I am not mistaken, it's in its third iteration right now. We seem to be on a three-year run for our security program, before it goes through a major retrofit.
So it's not just protecting information from being disclosed, but it's protecting information so that it's the right information, at the right time, for the right patient, for the right plan of care.
From a high level, the program has evolved from simple origins to more of a holistic type of analysis, where we look at the program and how it will impact patient care and the quality of that patient care.
Gardner: It sounds like what I used to hear -- and it shows how long I have been around -- in the manufacturing sector. I covered that 20 years ago. They talked about a move toward quality, and rather than just looking at minute or specific parts of a process, they had to look at it in total. It was a maturity move on behalf of the manufacturers, at that time.
Raf Los, do you see this as sort of a catching up time for IT and for security practices that are maybe 20 years behind where manufacturing was?
Los: Where Keith's group is going, and where many organizations are evolving to, is a practice that focuses less on "doing security" and more on enabling the enterprise and keeping quality high. After all, security is simply a function -- one of the three pillars -- of quality. We look at: does it perform, does it function, and is it secure?
So it's a natural expansion of this, sort of a Six Sigma-esque approach to the business, where IT is catching up, as you’ve aptly put it. So I tend to agree with it.
Gardner: Of course, compliance is really important in the healthcare field. Keith, tell us how your approach may also be benefiting you, not just in the quality of the information, but helping you with your regulatory and compliance requirements too?
Duemling: In the approach that we've taken, we haven't tried to change the dynamics of that significantly. We've just tried to look at the other side of the coin when it comes to security. We find that a lot of the controls we put in place for security also deliver benefits from an assurance standpoint, and the controls we put in place for assurance also benefit security.
As long as we align what we're doing to industry-accepted frameworks, whether it be NIST or ISO, and then add the healthcare-specific elements on top of that, we find that gives us a good architecture to continue our program and to be mindful of the assurance aspect as well as the security side.
One of the other benefits of the approach is that we look at the data itself, or the business function, and try to understand the risks associated with it, the importance of those functions, and the availability of the data. When we put the controls and protective measures around that, we typically find that if we look specifically at what the target is when we implement the control, our controls hold up better and defend against multiple threats.
So we're not putting in a point solution to protect against the buzzword of the day. We're trying to put in technologies and practices that will improve the process and make it more resilient from both what the threats are today and what they are in the future.
Muller: A couple of observations. The first is that we need to be really careful when we think about compliance. It's something of a security blanket. InfoSec executives understand the role of compliance, but it can give business leaders a false sense of security to say, "Hey, we passed our audit, so we're compliant."
There was a famous case of a very large financial-services institution that had been through five separate audits, all of which gave it a clean bill of health. But it was clear, from some of the honeypots they had put in place around certain data, that they were leaking data to a market-based adversary. In other words, somebody was selling their data, and it wasn't until the sixth audit that they uncovered the source of the problem.
So we need to be really careful. Compliance is actually the low bar. We're dealing with a market-based adversary. That is, someone will make money from your data. It's not the nation-state that we need to worry about so much as the people who are looking to exploit the value of your information.
Of course, once money and profit enter the equation, there are a lot of people very interested in automating and mechanizing their attack against your defense, and that attack surface is obviously constantly increasing.
The challenge, particularly in examples such as the one that Keith is talking about, comes in the mid-sized organizations. They've got all of the compliance requirements, the complexity, and the fascinating, or interesting, data from the point of view of a market-based adversary. They have all of that great data, but don't necessarily have the scale and the people to be able to protect it.
It's a question of how you balance the needs of a large enterprise with the resources of a mid-sized organization. I don't know, Keith, whether you've had any experience of that problem.
Duemling: I have all too many times experienced that problem that you’re defining right there. We find that technology that helps us to automate our situational awareness is something that's key for us. We can take the very small staff that we have and make it so that we can respond to the threats and have the visibility that we need to answer those tough questions with confidence, when we stand in front of the board or senior management. We're able to go home and sleep at night and not be working 24×7.
Los: Keith, let me throw a question at you, if you don't mind. We mentioned automation, and everybody I have this conversation with tends to -- I don't want to say oversimplify -- but can have an over-reliance on automation technology.
In an organization of your size, you're right smack in the middle of that: too big not to be a target, too small to have all the resources you've ever wanted to defend yourself. How do you keep from being overrun by automation -- too many dashboards, too many red lights blinking at you -- so that you can actually make sense of any of it?
Duemling: That's actually one of the reasons we selected HP's ArcSight. We had too many dashboards for our very small staff to manage, and we didn’t want Monday to be the dashboard for Product A, Tuesday for Product B, and things of that nature.
So we figured we would aggregate them and create the master dashboard, which we could use to have a very high-level, high-altitude view, drill down into the specific events, and then start referring them to subject-matter experts. We wanted to have just those really sensitive events bubble up to the surface, so that we could respond to them and they wouldn’t get lost in the maze of dashboards.
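The pattern Duemling describes -- normalizing feeds from several point products into one view and letting only the most severe events "bubble up" -- can be illustrated with a minimal sketch. This is a hypothetical example, not ArcSight's actual API; the event fields, sources, and severity threshold are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class SecurityEvent:
    source: str      # e.g., "ids", "antivirus", "firewall" (illustrative sources)
    severity: int    # normalized scale, 0 (informational) to 10 (critical)
    message: str

def normalize(raw_events: Iterable[dict], source: str) -> List[SecurityEvent]:
    """Map one product's native alert format onto a common event shape."""
    return [SecurityEvent(source, int(e.get("severity", 0)), e.get("msg", ""))
            for e in raw_events]

def bubble_up(events: Iterable[SecurityEvent], threshold: int = 8) -> List[SecurityEvent]:
    """Keep only events severe enough to demand an analyst's attention."""
    return sorted((e for e in events if e.severity >= threshold),
                  key=lambda e: e.severity, reverse=True)

# Feeds from several point products merge into one prioritized work queue.
all_events = (normalize([{"severity": 9, "msg": "malware beacon detected"}], "ids")
              + normalize([{"severity": 3, "msg": "signature update complete"}], "antivirus"))
for event in bubble_up(all_events):
    print(f"[{event.severity}] {event.source}: {event.message}")
```

With this kind of normalization in place, a small team looks at one prioritized queue instead of a different console for each product.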
Gardner: How did you unify all of these different elements under what you call a program for security? What were some of the steps you needed to take? We heard a little bit about the dashboard issue, but I'm trying to get a larger perspective on how you unified culture around this notion of information assurance?
Duemling: We started within the information and technology department, where we had to really evaluate what technologies we had in place, what different individuals were responsible for, and who they reported to. Once we found that there was this sprinkling of technology and responsibilities throughout the department, we had to put together a plan to unify it all into one program that has one set of objectives, is under one central leadership, and has its clear marching orders.
Then once we accomplished that, we started to do the same thing across the entire organization. We improved our relationship within IT, not just with sub-departments within IT, but then we also started to look outside and said, "We have to improve our relationship with compliance and we have to improve our relationship with physical security."
So we're unifying our security program under the mantra of risk, and that's bringing all the different departments that are related to risk into the same camp, where we can exchange notes and drive towards a bigger, enterprise-focused set of objectives.
Los: At the end of the day, what security is chartered with, along with most of the rest of IT, as I said earlier, is empowering the organization to do its work. Lake Health does not exist for the sole purpose of security, and clearly they get that.
That's step one on this journey of understanding what the purpose of an IT security organization is. Along the broader concept of resiliency, one of the things that we look at in terms of security and its contribution to the business is, can the organization take a hit and continue, get back up to speed, and continue working?
Not if, but when
Most organizations' technologists by now know it's not a question of if you're going to be hacked or attacked, but a question of when -- and of how you're going to respond, through the intelligent use of automation, alignment with business goals, and an understanding of the organization and what's critical within it.
They rely on critical systems -- critical patient-care systems. That goes straight to the enterprise-resiliency angle. If you get hacked and your network goes down, IT security is going to be fighting that hack. At the same time, we need to figure out how to separate the bad guys from the patients and the critical-care systems, so that our doctors, nurses, and support professionals can go back to saving lives and making people's lives better, while we contain the issue and eradicate it from our systems.
It's more than just about security, and that's a fantastic revelation to wake up to every morning.
Gardner: Are there some other returns on investment (ROI), maybe it's a softer return like an innovation benefit or being able to devote more staff to innovation?
Duemling: I'd put forward two paybacks. One is about some earlier comments I heard. We, as an organization, did suffer a specific event in our history, where we were fighting a threat, while it was expected that our facilities would continue operating. Because of the significant size of that threat, we had degraded services, but we were able to continue -- patients were able to continue coming in, being treated, things of that nature.
That happened early in our program's life, but not before we had a program in place. So, as an organization, we were able to wage that war, for lack of a better term, while the business continued to function.
Although those were some challenging times for us, and luckily there was no patient data directly or indirectly involved with that, it was a good payoff that we were able to continue to fight the battle while the operations of the organization continued. We didn't have to shut down the facilities and inconvenience the patients or potentially jeopardize patient safety and/or care.
A second payoff is, if we fast forward to where we are now, lessons learned, technologies put in place, and things of that nature. We have a greater ability to answer those questions, when people put them to us, whether it's a middle manager, senior manager, or the board. What are some of the threats we're seeing? How are we defending ourselves? What is the volume of the challenge? We're able to answer those questions with actual answers as opposed to, "I don't know," or "I'll get back to you."
So we can demonstrate more of an ROI through an improvement in situational awareness and security intelligence that we didn't have three, four, or five years earlier in the program's life. And tools like ArcSight and some of the other technologies we have, which aggregate that for us, get rid of the noise, and let us home in on the crown jewels of the information, are really helpful for answering those questions.
System of record
Gardner: How about looking at this through the lens of a system of record -- an architectural term, perhaps? Has that single view, that single pane of glass, allowed you to gain the sense that you have a system of record, or systems of record? Has that been your goal, or has it perhaps been an unintended consequence?
Duemling: It's actually kind of both. One, it retains information that sometimes you wish you didn't retain, but that's simply what the device and the technology in the solution do -- it's meeting its objective.
But it is nice to have that historical system of record, to use your term, where you can see the historical events as they unfold and explain to someone, via one dashboard or one image, as a situation evolves.
Then you can use that for forensic analysis, documentation, presentation, or legal purposes, to show the change in the threat landscape related to a specific incident -- or, at a higher level, related to a specific technology that's feeding its statistical information into ArcSight, which you can then run trending and analysis on.
It is also good to get towards a single unified dashboard where you can see all of the security events occurring in the environment, or outside the environment that you're pulling in, such as feeds from a disaster recovery (DR) site. You have that single dashboard where, if you think there's a problem, you can go to it, start drilling down, and answer that question in a relatively short period of time.
Muller: I'll go back to Keith's opening comments as well. Let's not underestimate the value of confidence -- of not having to second-guess the integrity of your systems and applications, or the value of your information. It's one thing when we're talking about the integrity of a customer's bank balance. Let's be clear that that's important, but it can also be corrected just as easily as it can be modified.
When you're talking about confidence in patient data, medical imaging, drug dispensations, and so forth, that’s the sort of information you can't afford to lack confidence in, because you need to make split-second decisions that will obviously have an impact on somebody’s life.
Duemling: I would add to that. As you were saying, you can undo an incorrect or a fraudulent bank transfer, but you cannot undo something such as the integrity of your blood bank. If your blood bank has values that randomly change, or if you put the wrong type of blood into a patient, you cannot undo that without a decidedly negative patient outcome.
Los: Keith, along those lines, do you have separate critical systems that you have different levels of classifications for that are defended and held to a different standard of resilience, or do you have a network wide classification? I am just curious how you figure out what gets the most attention or what gets the highest concentration of security?
Duemling: The old model of security in healthcare environments was to have a very flat type of architecture, from a networking, support, and security standpoint. As healthcare continues to modernize for multiple reasons, there's a need to build islands or castles. That's the term we use internally, "castles," to describe it. You put additional controls, monitoring, and integrity checks in place around specific areas where the data is the most valuable and the integrity is the most critical, because there are systems in a healthcare environment that are more critical than others.
Obviously, as we talked about earlier, the ones that are used for clinical decision making are technically more critical than the ones that are used for financial reimbursement for treating patients. So although it's important to get paid, it's more important that patient safety is maintained at all times.
We can't necessarily defend all of our vast resources with the limited set of tools that we have. So we've tried to pick the ones that are the most critical to us and that's where we've tried to put all the hardening steps in place from the beginning, and we will continue to expand from there.
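As a rough illustration of that tiering idea, the sketch below maps hypothetical systems to criticality tiers and each tier to a heavier or lighter set of controls. The tier names, system names, and control lists are assumptions for illustration, not Lake Health's actual classification scheme.

```python
# Hypothetical "castle" tiers: the most critical clinical systems get the
# heaviest controls, lower tiers get a lighter baseline.
CONTROL_TIERS = {
    "clinical-critical": ["network segmentation", "integrity monitoring",
                          "24x7 alerting", "strict change control"],
    "business-critical": ["network segmentation", "daily log review"],
    "general":           ["baseline hardening", "periodic scanning"],
}

SYSTEM_CLASSIFICATION = {
    "blood-bank":    "clinical-critical",
    "ehr":           "clinical-critical",
    "billing":       "business-critical",
    "intranet-wiki": "general",
}

def controls_for(system: str) -> list:
    """Look up the control set implied by a system's criticality tier."""
    tier = SYSTEM_CLASSIFICATION.get(system, "general")
    return CONTROL_TIERS[tier]

print(controls_for("blood-bank"))     # heaviest controls
print(controls_for("intranet-wiki"))  # baseline only
```

The point of the classification is simply that hardening effort follows criticality rather than being spread evenly across a flat network.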
We look at every security project with the mindset of how we can do this most effectively and with the least amount of resources diverted from the clinical environment to the information security program. That being said, security as a service, cloud-based technology, outsourcing, whatever term you would like to use, is definitely something that we consider on a regular basis when it comes to different types of controls or processes that we have to be responsible for -- or professional services in the event of things like forensics, where you don't do it on a regular basis, so you may not consider yourself an expert.
We tend to do an evaluation of the likelihood of the threat materializing, our dependence on the technology, what offerings are out there, both as a service and premises-based, and what it would take from an internal resource standpoint to adequately support and use a technology. Then we try to articulate that into a high-level summary of the different options, with the cost, pros, and cons related to each.
Then, typically, our senior management will discuss all of those, and we'll try to come to the decision that we think makes the best sense for our organization, not just at that point, but for the next three to five years. So some initiatives have gone premises-based and some have gone security-as-a-service based. We are kind of a mix.
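That evaluation can be thought of as a simple weighted scorecard across the criteria Duemling mentions (cost, internal effort, coverage, multi-year fit). The sketch below is purely illustrative; the criteria weights and option scores are made-up numbers, not Lake Health's actual analysis.

```python
# Hypothetical weighted scoring of delivery options for one control area
# (for example, spam filtering). Higher per-criterion scores (0-10) are better.
WEIGHTS = {
    "cost": 0.30,
    "internal_effort": 0.30,   # how little internal staff time it consumes
    "threat_coverage": 0.25,
    "fit_3_to_5_years": 0.15,
}

OPTIONS = {
    "security-as-a-service": {"cost": 8, "internal_effort": 9,
                              "threat_coverage": 7, "fit_3_to_5_years": 8},
    "on-premises":           {"cost": 5, "internal_effort": 4,
                              "threat_coverage": 8, "fit_3_to_5_years": 7},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single comparable number."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in OPTIONS.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```

The output is only an input to the conversation with senior management; the decision still weighs factors a scorecard cannot capture.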
Gardner: It's interesting that a common thread for successful organizations is knowing yourself well. It's also an indicator of maturity, of course.
You have had a good opportunity to know yourself and then to track your progress. Is that helping you make these decisions about what's core or context in the design of your risk-mitigation activities?
What you do well
Duemling: Yes, it is. You have to know what you do well, and you also have to know the areas where you, as an organization, are not going to be able to invest the time or the resources to get to the comfort level that you feel is adequate for what you're trying to achieve. Those are some of the areas where we look to use security as a service.
We don't want to necessarily become experts on spam filtering, so we know that there are companies that specialize in that. We will leverage their investment, their technology, and their IP to help defend us from email-borne threats and things of that nature.
We're not going to try to get into the business of building our own event-correlation engine. That's why we go out and look for the best-of-breed technologies out there to do it for us.
We'll pick those different technologies, whether as a service or premises-based, and we'll implement them. That will allow us to invest in the people who know our environment best and most intimately, and who can make decisions based on what those tools and managed services tell them.
They can be the boots on the ground, for lack of a better term, making the decisions that are effective at the time, with all the situational awareness that they need to resolve the problem right then and there.
Gardner: For those of our listeners who are perhaps juggling quite a few security products or technologies and they would like to move into this notion of a program, and would like to have a unified view -- any thoughts about getting started, any lessons learned that you could share?
Duemling: I would say just a couple of bullet points. Security is more than just technology. It really is the people, the process, and the technology. You have to understand the business that you are trying to protect. You have to understand that security is there to support the business, not to be the business.
Probably most importantly, when you want to evolve your security and set up projects into an actual security program, you have to be able to talk the language of the business to the people who run the business, so that they understand that it’s a partnership and you are there to support them, not to be a drain on their valuable resources.
Los: I think he has put it brilliantly just now. IT security is a resource and also a potential drain on resources. So the goal, ultimately, is to take as little as possible away from everything else the organization is doing, while enabling it to be better, deliver better, deliver smarter, save more lives, and make people healthier.
If there's nothing else that anybody takes away from a conversation like this, IT security is just another enabler in the business and we should really continue to treat it that way and work towards that goal.
Gardner: All right, last word to you today, Paul Muller. What sort of lessons learned or perhaps perceptions from the example of Lake Health would you amplify or extend?
Muller: I will just go back to some of my earlier comments, which is, let’s remember that our adversary is increasingly focused on the market opportunity of exploiting the data that we have inside our organizations -- data in all of its forms. Where there is profit, as I said, there will be a drive for automation and best practices. They are also competing to hire the best security people in the world.
But as a result of that, mixed with the fact that we have this ever-increasing attack surface, the vulnerabilities are increasing dramatically. The statistic I saw from just this past October is that the cost of cybercrime has risen by 40 percent and attack frequency has doubled in the last 12 months. This is very real proof that these market forces are at work.
The challenge that we have is educating our executives that compliance is important, but it is the low bar. It is table stakes, when we think about information and security. And particularly in the case of mid-sized enterprises, as Raf pointed out, they have all of the attractiveness as a target of a large enterprise, but not necessarily the resources to be able to effectively detect and defend against those sorts of attacks.
You need to find the right mix of services, whether we call it hybrid, whether we call it cloud or managed services, combined with your own on-premises services to make sure that you're able to defend yourself responsibly.
Gardner: I'd like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue with Paul Muller through the Discover Performance Group on LinkedIn, and also to follow Raf on his popular blog, Following the White Rabbit.
You can also gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance.
And you can always access this and other episodes in our HP Discover Performance Podcast Series at hp.com and on iTunes under BriefingsDirect. Thanks!
You may also be interested in:
- HP Discover Performance Podcast: McKesson Redirects IT to Become a Services Provider That Delivers Fuller Business Solutions
- Investing Well in IT With Emphasis on KPIs Separates Business Leaders from Business Laggards, Survey Results Show
- Expert Chat with HP on How Better Understanding Security Makes it an Enabler, Rather than Inhibitor, of Cloud Adoption
- Expert Chat with HP on How IT Can Enable Cloud While Maintaining Control and Governance
- Expert Chat on How HP Ecosystem Provides Holistic Support for VMware Virtualized IT Environments