
Coordinating Security Information

What happens when an agency finds a better point solution than one currently in place?

A recent article in Government Computer News raised the topic of FISMA reporting, specifically describing the "pessimism" of many USG agencies over meeting the September 2012 deadline for "using continuous monitoring to meet Federal Information Security Management Act reporting requirements." The article cites a survey of over 200 government IT professionals, conducted by RedSeal Networks, in which 55% of respondents felt they won't be ready, or don't know if they will be ready, by the deadline. One can certainly debate the significance of the number of agencies expressing concern, and the reasons given would likely drag the conversation into an argument over the validity of a government-set deadline for something far more complex than "flipping a switch." But set that aside for the moment.

More interesting is the fact that, when you break the responses down by role, "53 percent of security managers, administrators and auditors expected to meet the Sept. 30 deadline, while only 28 percent of CIOs and chief information security officers expected to." Mike Lloyd, RedSeal's CTO, said, "This is an interesting finding, not what a cynic might expect." That cynic would expect the typical (over-)confidence of an executive, the one telling folks "no problem, we're right on track," while the IT managers, the ones actually tasked with the design, deployment, and operation of the relevant systems, scramble feverishly to find the right tools, the right people, and the right data to meet the reporting requirement.

In fact, the opposite is the case. The IT managers believe they have the right point solutions to do the monitoring, analyze the data, and process the relevant compliance reports. They aren't worried about trying to figure out how they're going to perform the continuous monitoring, primarily because today's IT vendors are creating products that provide the capabilities to meet these requirements. So why don't these CIOs and CISOs share the confidence of their IT staff?

The answer is both simple ... and not so simple. In discussing this survey and resulting article, the editors at SANS described the lack of C-level confidence this way (emphasis added): "Agencies need to find ways to bring together information from various systems to provide the necessary set of data." Bring information together? That's easy, just get a bunch of good developers to build custom integration points between all these systems that the IT managers feel really good about (rightly so), and then the data will flow! Sounds great...until you look a little closer at what this entails: a group of good developers is expensive, not to mention hard to find. Assuming you can find all these good developers (and afford to pay them), can they knock this effort out in, say, 6 months? 9 months? Factor in the unique and often proprietary formats and data structures of these various solutions, and now what, 12 months? Remember that September deadline?

What happens when the agency finds a better point solution than one currently in place? Bring back those good, expensive developers (or retain them) to build new integration points between the existing solutions and this new one? Not so simple anymore, is it?

This approach is not timely, cost-effective, or scalable. A better approach is to build a foundation that allows these best-of-breed point solutions to share data in a common format, providing each solution with the ability to use only that data that is relevant to it.
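The scaling argument can be made concrete with some back-of-envelope arithmetic: connecting n point solutions pairwise requires on the order of n*(n-1)/2 custom integrations, while a shared data format needs only one adapter per system. This is an illustrative sketch, not drawn from the article or the IF-MAP specification:

```python
# Why point-to-point integration doesn't scale: connecting n systems
# pairwise needs n*(n-1)/2 custom integrations, while a common data
# format (a hub such as an IF-MAP server) needs only n adapters.

def pairwise_integrations(n: int) -> int:
    """Custom integration points needed to connect every pair of systems."""
    return n * (n - 1) // 2

def hub_integrations(n: int) -> int:
    """Adapters needed when every system speaks one common format."""
    return n

for n in (5, 10, 20):
    print(f"{n} systems: {pairwise_integrations(n)} pairwise "
          f"vs {hub_integrations(n)} hub adapters")
# 20 systems: 190 pairwise vs 20 hub adapters
```

Swapping one point solution for a better one then means writing one new adapter rather than rebuilding an integration to every other system.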

Over the last four years, the Trusted Computing Group (trustedcomputinggroup.org) has developed and published a set of open specifications called IF-MAP (or "Interface to Metadata Access Points"). IF-MAP is a protocol specifically designed to allow disparate systems from different vendors to share information. The IF-MAP open standard makes it possible for any authorized device or system to publish information to an IF-MAP server, to search that server for relevant information, and to subscribe to any updates to that information. This "sharing" is done in a standardized way, eliminating the need for costly custom integration points between these disparate systems. Through the use of IF-MAP, agencies would have the ability to enable data and information sharing between systems in an automated and continuous manner.

With IF-MAP in place, agencies can:

- Share data among logs, records/databases, firewalls, provisioning systems, switches, and more, without allowing unauthorized access.

- Track devices and their owners on the network.

- Track and monitor network traffic.

- Control the activity and access of devices operating inappropriately.

- Tie legacy systems (e.g., SCADA) into the global enterprise.

- Validate endpoints and grant access (standard managed endpoint security).

- Share security data among devices and have those devices act on the collective available data.
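The publish/search/subscribe pattern that IF-MAP standardizes can be sketched in miniature. The real protocol is an XML-over-SOAP binding defined by the Trusted Computing Group; the in-memory model below only illustrates the data flow, and every name in it is illustrative rather than taken from the specification:

```python
# Conceptual sketch of the publish/search/subscribe pattern behind
# IF-MAP. The actual protocol is XML over SOAP as specified by the
# Trusted Computing Group; this toy model shows only the data flow.

from collections import defaultdict
from typing import Callable

class MetadataAccessPoint:
    """Toy stand-in for an IF-MAP server (illustrative names only)."""

    def __init__(self):
        self._metadata = defaultdict(list)     # identifier -> metadata records
        self._subscribers = defaultdict(list)  # identifier -> callbacks

    def publish(self, identifier: str, metadata: dict) -> None:
        """Attach metadata to an identifier and notify subscribers."""
        self._metadata[identifier].append(metadata)
        for callback in self._subscribers[identifier]:
            callback(identifier, metadata)

    def search(self, identifier: str) -> list:
        """Return all metadata currently attached to an identifier."""
        return list(self._metadata[identifier])

    def subscribe(self, identifier: str, callback: Callable) -> None:
        """Register for notification of future publishes on an identifier."""
        self._subscribers[identifier].append(callback)

# A firewall subscribes to updates about a device; a sensor publishes.
mapserver = MetadataAccessPoint()
mapserver.subscribe("device:10.0.0.5",
                    lambda ident, md: print(f"firewall saw {ident}: {md}"))
mapserver.publish("device:10.0.0.5", {"event": "policy-violation"})
```

The point is that the sensor and the firewall never integrate with each other directly; each speaks only to the metadata access point, in one shared format.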

And the best part - many government agencies already have solutions in place that support IF-MAP. Vendors including Lumeta, Juniper, Enterasys, and Infoblox already offer products supporting IF-MAP. Numerous government agencies and system integrators have labs dedicated to using IF-MAP and similar open standard specifications to tackle the biggest cyber-security challenges out there - such as real-time configuration management databases, the integration of physical and network security, and policy-based remote access - all using IF-MAP and COTS products.

IF-MAP alone won't necessarily help those agencies meet the September deadline, but one thing is certain - not using open standards and specifications such as IF-MAP will make the effort more costly, more time-consuming, and less flexible. If you can show me a government agency that has extra money and extra time, I'd love to see it.

More Stories By Steve Hanna

Steve Hanna is co-chair of the Trusted Network Connect Work Group in the Trusted Computing Group and co-chair of the Network Endpoint Assessment Working Group in the Internet Engineering Task Force. An inventor or co-inventor of 30 issued U.S. patents, he holds an A.B. in Computer Science from Harvard University.

