25 Years of PC Week

The scene is a deserted office park in Los Angeles after hours. I am driving around, trying to find the spot that my IT manager friend left an envelope for me. Inside the envelope is a disc with a secret IBM software program that is about to give me one heck of a scoop for PC Week, c. 1987.

It has been a week of memories. Last week marked the 40th anniversary of the real beginning of the Internet, and this week marks 25 years since PC Week (regrettably now called eWeek) began publishing its weekly commentary on our industry.

While I didn’t start writing for the publication until 1987, I remember those times very well: back in the early 1980s I was working for a private software developer, and we were porting our programs from the Apple to the newfangled IBM PC and trying to make them work. Given that we were charging electric utilities several thousand dollars for these products, it was my job to do the quality control and make sure the code was written properly.

I eventually went on to work in various end-user computing departments in government and private industry before getting the job at PC Week as a writer and analyst. I worked there for more than three years, at a time when the PC industry was rapidly expanding and corporations were buying truckloads of PCs. Back then we didn’t have networks other than the ones that connected our PCs to our IBM mainframes; I had installed the first network at my company before becoming a tech journalist, and networking became my specialty.

Wayne Rash called me last month to catch up and get some input on a story that he has written for the publication about those early days. It made me go back and actually find some of the articles that I wrote and recall some fond memories.

For those of you who were born after that era and don’t remember a world without computers, it is worth taking a moment to recall that we had 80386 computers with barely more than a megabyte of RAM that ran at 10 MHz clock speeds. Most machines back then had character-mode displays (except for Macs, which were rare on corporate desktops), and Windows and Linux hadn’t yet been invented. IBM and Microsoft were working together on OS/2, and Novell’s NetWare was the most popular network operating system because it could run on 80286s and use all of the machine’s memory. Hard disks were rarely larger than 20 MB, and floppies had just increased to store 1 MB of data. The Internet was used mostly by academic researchers, and few corporations had email, let alone email connections to the outside.

In a story that I wrote in May 1990, I talk about what corporate IT folks need to think about when upgrading to the latest OS – which at the time was Windows 3 or OS/2 1.2. Some of those issues are still with us as we wrestle with Windows 7 and Snow Leopard.

Here are a few memories from that era. You can see scans of various magazine covers and articles that I mention here.

My first story for PC Week (Jan 1987) was about a little-known company in the PC-to-mainframe market called Attachmate and how it planned on unseating the then-champion Digital Communications Associates, makers of the popular Irma boards. Attachmate went on years later to purchase DCA, and is still around in the terminal emulation space, having also bought network analysis company NetIQ.

What really got the IBM PC started in corporate computing circles was a spreadsheet called 1-2-3 from upstart Lotus Development Corporation. For some people, it was the only application that they ran on their desktops. Lotus 1-2-3 wasn’t the first spreadsheet and indeed, here is a brief post on the original spreadsheet called Visicalc.

Years before IBM ironically purchased Lotus, it started a skunk works project to use spreadsheets as a front-end to its mainframe databases, something that was very sophisticated at the time. The sole programmer behind the project was Oleg Vishnepolsky, who spent about 18 months writing the software, simply called S2. The code was used for internal purposes. I spoke to Vishnepolsky last week, and rather than be mad at me for blowing his cover, he recalled that when my article ran, his status as a lowly programmer was immediately elevated and he got to talk to the big brass about his project. “I got to rub shoulders with people at the top layers of management, and remember, this was when IBM had about 10 or 12 managers between me and the CEO.” Still, the S2 project was one of the best of his career, and the code was used by tens of thousands of IBMers.

At the time this was being developed – say 1987 or so – there were a variety of people who were trying to clone 1-2-3 using the exact same command syntax, most notably Adam Osborne. There were legal challenges going back and forth about intellectual property and Osborne, being the roué that he was, only brought more attention to the whole thing.

Somehow, I got hold of a copy of S2 from one of IBM’s customers, the setting for my cloak-and-dagger black-ops mission at the top of this essay. I wrote the story about S2 and coincidentally saw Osborne a few weeks later at an industry event. Much as I wanted to give him a copy, I didn’t. But you can see the screen caps of S2 that I found in my archives.

Back then, IBM was very secretive about its new products and had all sorts of established protocols for dealing with the press. One place where it gave out advance information about its plans was at its user group meetings. Since I had come from IT, I knew how easy it was to attend these meetings under somewhat false pretenses. I called up the IT manager for Ziff Davis and found out that we indeed had an IBM mainframe squirreled away in New Jersey. I asked the manager if he could give me their customer number, which was pretty much all you needed to register for the IBM user conference. When I reassured him that it wasn’t going to come out of his budget (some things never change), I signed up and brought home several scoops from the meeting, much to the dismay of my fellow PC Week news hounds. But they were quick learners, and when it came time for the next meeting, several of us attended as “Ziff Davis IT managers.” When we came back from the third meeting with even more scoops, Infoworld, which at the time was our main competitor, started putting together the pieces, called up the president of the user group, and got us banned from further meetings. But it was fun while it lasted.

Speaking of fun scoops, one of our younger and more eager reporters was Gina Smith. Gina was out to dinner with her boyfriend (who later married her) at a Cambridge, Mass., restaurant. Sitting at the next table were two Germans talking rapidly. Little did they know that Smith was fluent in German, and as she listened, it turned out they were from Lotus’ German office, telling each other the company’s future product plans. Lotus never knew how we got that story, and Smith went on to write a few books and run a couple of companies in Silicon Valley.

One of my early columns (July 1987) was about how hard it was to use a laptop in a hotel room. Back then modems were the main remote access devices, and they ran at 2400 bps, which was slow enough that you could read the text as it was being transmitted. Most hotels had hard-wired their phones, so you couldn’t attach a modem without unscrewing the wall plates and pulling out the two wires you needed to connect the modem to the phone system. How far we have come, with nearly universal wireless everywhere.
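To put that 2400 bps figure in perspective, here is a back-of-the-envelope sketch, assuming the common 8-N-1 serial framing of the era (10 bits on the wire per character), of how long a single 80×25 character-mode screen took to arrive:

```python
# Rough throughput of a 2400 bps modem with 8-N-1 framing:
# 8 data bits + 1 start bit + 1 stop bit = 10 bits per character.
bps = 2400
bits_per_char = 10
chars_per_sec = bps / bits_per_char          # 240 characters per second

screen_chars = 80 * 25                       # one full character-mode screen
seconds_per_screen = screen_chars / chars_per_sec

print(f"{chars_per_sec:.0f} chars/sec, {seconds_per_screen:.1f} s per screen")
# → 240 chars/sec, 8.3 s per screen
```

At roughly eight seconds per screenful of text, the display really did fill slowly enough to read as it arrived.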

Another of my favorite columns (March 1988) was written as if I were Judith Martin, answering questions of network etiquette. I considered it a successful parody when I got a cease-and-desist letter from Miss Manners’ law firm!

In October 1988, I was promoted to run a major portion of PC Week. That same week, I was visiting one of my friends, Cheryl Currid, who ran the IT organization of Coke Foods (Minute Maid et al.) in Dallas. One of Cheryl’s staffers had baked a cake in my honor, iced with a simulated PC Week front page with various “stories” in icing. Currid went on to write many columns for me at various publications, and is still consulting in the industry.

Yes, those were interesting and fun times. I hope you enjoyed some of these memories too.

More Stories By David Strom

David Strom is an international authority on network and Internet technologies. He has written extensively on the topic for 20 years for a wide variety of print publications and websites, such as The New York Times, TechTarget.com, PC Week/eWeek, Internet.com, Network World, Infoworld, Computerworld, Small Business Computing, Communications Week, Windows Sources, c|net and news.com, Web Review, Tom's Hardware, EETimes, and many others.
