By Don MacVittie
November 17, 2012 09:00 AM EST
We’re having all of our sidewalks redone right this instant. In fact, I’ll include a picture of the “pavers” – which is the fancy new word for the stones used to build the sidewalk. If the construction and design team does something wrong, it will cost them a pretty penny to come back out, rip up the pavers (and the columns or knee wall they’re putting in with the pavers on the patio), and move things around or replace pavers to make it right. We hired a great company that has done good work for us in the past, so I’m not terribly worried about this possibility. It happens in construction, but it happens a lot less with a reputable installer.
It does offer a solid introduction to Field Programmable Gate Arrays (FPGAs), though. Before there were FPGAs, most hardware out there shipped with a well-defined, unchangeable logic path. It did what it did, and if the hardware designers made a mistake in an increasingly complex product, you were stuck with the results. Some devices shipped with reprogrammable EEPROMs, but the vast majority of hardware had no way to be updated. If a bug appeared, you lived with it, or the vendor took the very expensive step of replacing the hardware. Much like what happens when pavers are installed incorrectly. The difference, of course, is that you can look at pavers and judge whether the work is right, while hardware needs to be run – and run a lot – before weaknesses show. Kind of like the case where pavers are laid down but the material underneath them is not properly prepared: the next spring you can expect a jungle to grow up between the pavers, but until then they look nice.
EEPROMs (Electrically Erasable Programmable Read-Only Memory) and then FPGAs brought the ability to fix bugs in the field into the realm of hardware. As FPGAs progressed and became more complex, even real-time (as in on-the-fly) updating became a possibility. At this point, a large FPGA packs hundreds of thousands to millions of logic elements, and FPGAs are used in a wide variety of devices. If you’ve ever “Flashed the ROM” or “Updated Firmware,” there is a good chance you’ve been updating an FPGA in the device (though of course, these terms are vague enough that it could be other things you’re updating too).
But the power of updating on-the-fly is huge, if for nothing else than prototyping and training. Need to teach people hardware design? How better than on a device that you can program, test, reprogram, test again… Indeed, for at-home use (having nothing to do with F5, just one of my many geek toys), I use an Actel FPGA to set up complex circuits. Actel is now Microsemi, but I haven’t dealt with them since the change, so I don’t know any details there. But for designing circuits, you can’t beat it. I’ve abused mine, and it still does what I tell it to. Note I said “what I tell it to,” not “what I expect it to”… I’m not a professional at FPGA programming, but it is a lot of fun.
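To make that “program, test, reprogram, test again” loop a little more concrete, here is a minimal sketch in plain Python (not real HDL, and nothing to do with any vendor’s toolchain) of the idea at the heart of an FPGA: a logic element is essentially a lookup table whose contents can be rewritten at any time. All of the names here are mine, purely for illustration.

```python
# Minimal software analogy of an FPGA logic element (a lookup table, or LUT).
# Purely illustrative -- real FPGAs are described in an HDL and configured
# with vendor tools, not Python. All names here are hypothetical.

class LUT4:
    """A 4-input lookup table: 16 stored output bits, indexed by the inputs."""

    def __init__(self, truth_table):
        self.program(truth_table)

    def program(self, truth_table):
        # "Reprogramming" is just replacing the stored truth table.
        assert len(truth_table) == 16, "a 4-input LUT needs 16 output bits"
        self.table = list(truth_table)

    def evaluate(self, a, b, c, d):
        # The four inputs select one of the 16 stored bits.
        index = (a << 3) | (b << 2) | (c << 1) | d
        return self.table[index]


def test_as_and_gate(lut):
    """Check that the LUT behaves as a 4-input AND gate."""
    for i in range(16):
        bits = [(i >> shift) & 1 for shift in (3, 2, 1, 0)]
        expected = 1 if all(bits) else 0
        assert lut.evaluate(*bits) == expected


# Program the LUT as a 4-input AND gate and test it...
and_table = [0] * 15 + [1]
lut = LUT4(and_table)
test_as_and_gate(lut)

# ...then reprogram the same "hardware" as a 4-input OR gate and test again.
or_table = [0] + [1] * 15
lut.program(or_table)
assert lut.evaluate(0, 0, 0, 1) == 1
assert lut.evaluate(0, 0, 0, 0) == 0
print("program, test, reprogram, test again: all passed")
```

On a real part, of course, the table lives in configuration memory, there are hundreds of thousands of these elements plus the routing between them, and you describe the design in an HDL rather than Python, but the reprogram-and-retest rhythm is the same.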
But in a professional setting, the power is even greater. Not only can you train staff in FPGA programming and prototype solutions with FPGAs, you can also ship with FPGAs installed. Having FPGAs installed means that a huge percentage of the logic that makes a device go can be updated as needed. This helps the vendor by giving them a path to fixing logic errors that were not discovered before ship time (say, because the error is not obvious until the device is under massive load for a long period of time). It helps the customer by giving them an obsolescence-resistant product. If the logic of the hardware can be updated, then the device is much more forward-compatible than one that cannot be. When an FPGA can have 500,000 to millions of logic elements on it, the level of re-programmability becomes amazing. No support for the newest standard that impacts your device? Download the update, and BAM! You’ve got support for a standard that might not have even existed when your device was originally designed.
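As a rough idea of what sits between “download the update” and “BAM,” here is a hedged sketch of a generic field-update flow: verify the new image before touching the device, and keep the old image around so a bad update can be rolled back. Every path, file name, and function here is hypothetical; this is not F5’s process (or any particular vendor’s), just the general shape of the idea.

```python
# Conceptual sketch of a field update for reprogrammable hardware.
# All paths and helper names are hypothetical; real vendors ship their own
# tooling and typically use cryptographic signatures, not just a hash check.

import hashlib
import shutil
from pathlib import Path

CURRENT_IMAGE = Path("/opt/device/fpga_image.bin")   # hypothetical path
BACKUP_IMAGE = Path("/opt/device/fpga_image.bak")    # hypothetical path


def verify_image(image_path: Path, expected_sha256: str) -> bool:
    """Refuse to apply an image whose hash doesn't match the published value."""
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return digest == expected_sha256


def apply_update(new_image: Path, expected_sha256: str) -> None:
    if not verify_image(new_image, expected_sha256):
        raise ValueError("update image failed integrity check; not applying")

    # Keep the known-good image so a bad update can be rolled back.
    shutil.copy2(CURRENT_IMAGE, BACKUP_IMAGE)

    try:
        shutil.copy2(new_image, CURRENT_IMAGE)
        # A real flow would now reload the device and run self-tests here.
    except Exception:
        # Restore the previous image if anything goes wrong mid-update.
        shutil.copy2(BACKUP_IMAGE, CURRENT_IMAGE)
        raise
```

Real vendors layer much more on top of this (signed images, staged reloads, self-tests), which is exactly the territory the next two posts will get into.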
This does of course come with some risks. A part of your system that was stable forever now has changes introduced to it dynamically, but most reputable vendors have tools/steps/security in place to protect their customers from hardware problems bringing down the entire system. I can’t speak for everyone; in fact, at this instant I can’t even authoritatively speak for F5. But this next week I’ll be talking to the hardware folks about what we do, and the next two installments of this blog will cover both what we do with FPGAs and how we protect our customers.