In April 1964, IBM announced the System/360, an entirely new line of mainframes. It was a complete replacement for their existing hardware, and all mainframe users were expected to move to the new platform. To ease the transition, IBM implemented various emulators, but it was still a massive undertaking for both IBM and their customers.
I spent about 50 years working on IBM mainframes, and I wish the Wintel world were as easy to master as the mainframe world. Today’s CISC mainframes have a much larger instruction set than Intel systems. This means there’s a lot more to learn, but it also allows the code to be much cleaner, more elegant and coherent; by reducing the number of instructions needed, it reduces the logical complexity of programs and thus reduces errors. Coding in Assembler does require knowledge of the hardware as well as the principles of programming and logic. I’d venture to say that if all the people coding in high-level or scripting languages had to understand their hardware at the instruction level, most would be doing something else for a living.
One of the pleasures of working with mainframes was the extraordinarily high level of documentation IBM provided. Back in the ’60s they were the world’s 2nd-biggest publisher, right behind the US Government Printing Office. When I worked in the Time/Life Datacenter, our library filled a 15-by-30 room: three walls of floor-to-ceiling shelves, plus waist-high shelves down the middle of the room. We had a clerk whose sole job was to apply updates to the manuals, inserting or replacing pages all day long. And we did not have anywhere near the entire IBM library. Today, most people use digital manuals, since keeping paper would require a huge space and constant updates. (I wonder how many people IBM has keeping their digital documentation updated…) IBM’s software and hardware have grown so much since the ’60s that they may well surpass the GPO today.
There were three types of manuals commonly used by programmers and Systems Programmers:
SRLs (System Reference Library) told how to use the various components and functions of OS. Primarily a guide to coding applications. The Intel world has comparable documents for various software.
PLMs (Program Logic Manuals) described in great detail the logic flow and data used in each component of the OS. Primarily used by Systems Programmers. I haven’t seen the equivalent of these in the Intel world.
Redbooks discussed implementation of OS and its various pieces of software, written by users in the field who were actually doing the work. These were initially an ad hoc effort from IBM World Trade. Primarily used by Systems Programmers. There is some similar material available for Intel-based software, but it’s usually part of install & customize instructions and not always very thorough.
In addition to the official manuals, the source code for the OS is available for the cost of the media. (Source for their encryption may not be publicly available.) I learned Assembler from reading the source code of the OS, which also taught me the internals of the OS. There was an Education Center in the Time/Life building, and I taught the last week of their Assembler classes, since they didn’t really understand macros very well. I learned macro syntax and functions by doing SysGens – the process by which a customer uses a stripped-down version of OS to generate his desired target system. I probably did 2-3 SysGens a week for many months, since customers didn’t have the expertise in the early years. The entire SysGen process was driven by a collection of macros which generated a jobstream to build a customized version of OS.
The early OS didn’t have accounting routines to track and report the resources used by jobstreams, and many customers needed that in order to charge costs to various projects or departments. The last thing I did before I left IBM was write macros to generate accounting routines based on the data the customer needed, how he needed it presented, and the version of OS he was running. The macros generated the routines themselves and the jobstreams to build and insert the routines into the OS. The macros ran to about 3000 lines of code.
In the Wintel world, it’s not uncommon for a new release of the operating system to ‘break’ programs that previously worked. Given that users are running both Microsoft and third-party code, it’s understandable that problems crop up. IBM’s mainframe OS, on the other hand, has a reputation for being 100% backward compatible. A program that ran on OS PCP 1.0 will run on the latest z/OS version. This is because IBM puts a LOT of effort into making sure new code is compatible with old code.
In 50+ years of working with the OS, I’ve only known of one upgrade that broke anything, when the logic of Write-To-Operator and Write-To-Operator-Reply functions changed. And that problem could be fixed by a simple re-assemble or re-compile, as long as the programmers were using IBM’s coding tools.
If you coded using IBM’s macros and libraries, you didn’t worry about updates breaking your programs, and you coded pretty much independently of the hardware. There might, however, be instances where a programmer really needed to code down to device specifics. If one coded disk I/O with logic that depended on a fixed track length, a specific number of tracks on a cylinder, or a specific number of cylinders, one was limited to files on specific devices. However, if one went the extra mile and queried the OS for the layout, capacity and characteristics of a device, and coded using the values the OS returned, the program would continue to work on devices with different layouts.
I once coded a Checkpoint/Recovery package that bypassed the software the OS provided. I was programming ‘right down to the metal’, building my Channel Programs on the fly and doing my I/O at the SIO & TIO level. The system was using 3330-11 disks: 13030 bytes/track; 19 tracks/cylinder; 808 cylinders. I had to know when to switch tracks and cylinders. I could have coded based on the 3330-11 values. Instead, I queried the OS for the values for those disks and used what it returned: in this case, 13030, 19, 808. If the software were later run on (for example) 3390-9 disks, the OS would have returned 56663, 15, 3339, and my program would have used those values and continued to work.
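The geometry-driven approach can be sketched in modern terms. Here is a minimal Python sketch, where `query_geometry()` is a hypothetical stand-in for the OS service that returned a device’s characteristics at run time (the real program queried the OS, not a table); the device values are the ones given above:

```python
# Sketch of geometry-driven track/cylinder switching. query_geometry()
# is a hypothetical stand-in for the OS query; in the real program the
# values came from the OS at run time, never from constants in the code.

def query_geometry(device):
    # Values as given in the text: bytes/track, tracks/cylinder, cylinders.
    geometries = {
        "3330-11": (13030, 19, 808),
        "3390-9":  (56663, 15, 3339),
    }
    return geometries[device]

def locate(byte_offset, device):
    """Map an absolute byte offset to (cylinder, track, offset-in-track)."""
    bytes_per_track, tracks_per_cyl, cylinders = query_geometry(device)
    track, offset = divmod(byte_offset, bytes_per_track)
    cylinder, track_in_cyl = divmod(track, tracks_per_cyl)
    if cylinder >= cylinders:
        raise ValueError("offset beyond end of volume")
    return cylinder, track_in_cyl, offset

# The same call works unchanged when the volume is a different device:
print(locate(100_000, "3330-11"))  # -> (0, 7, 8790)
print(locate(100_000, "3390-9"))   # -> (0, 1, 43337)
```

Because the geometry is fetched rather than hardcoded, nothing in `locate()` changes when the underlying device does – which is exactly why the Checkpoint/Recovery code kept working.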
If you did things the easy way and used IBM’s tools, you never had to worry about backward compatibility. And even if you had to stray outside the ‘easy way’, IBM gave you the tools to make sure you could still maintain compatibility.
Compatibility is more than just a practice or policy at IBM. It is a philosophy, a mindset. 😀
I began working for IBM in 1963, on Unit Record accounts, and there was as much fun and challenge in wiring a 604 or 403 board as in trying to stuff a 13K program into a 12K 1401 or debug the various releases of software for the S/360. I spent 5 years at IBM, then 5 more years at a software house, a year freelancing, then 24 years at NYSE and its subsidiary SIAC. From 2000 to 2013 I worked at HealthQuest.
From 1968-1972, I worked for Complex Systems, a software house specializing in TP code. I did application programming but spent most of my time doing Systems Programming, Systems Engineering and Systems Design.
NYSE was one of our primary clients and I consequently spent a lot of time on the 1/2 floor overlooking the machine room at 11 Wall St. I admit to being a bit of a maverick, including my attire. I was particularly noted for a dark blue ‘western-cut’ suit, navy blue Stetson and black cowboy boots (which got me christened ‘Midnight Cowboy’).
Some equipment arrived in bubblewrap and I laid out a stretch of it about 2 feet wide and 15 feet long, just outside the control center and the offices of the VP in charge. I did my best impression of a flamenco dancer, pounding my way poppingly down the corridor, ending with a grand flourish right outside the VP’s door.
My ‘ta da!’ moment was somewhat dimmed by the discovery that the VP was entertaining an Executive VP whose jaw dropped in shock at my dramatic performance. Right then I was very glad to be a consultant rather than an employee. 😀
While working on a software package to move Odd Lot orders/reports between brokers and the Odd Lot dealer, we were doing our usual Q/A, including [what we thought was] every possible bad input from brokers and the Odd Lot dealer.
Aside from the usual OS-MVT consoles, the system was controlled by operators manning 83B3 teletypes. One of these operators said she could crash the system any time. When challenged, she planted her elbow firmly on the keyboard of an 83B3, sending utter garbage into the system. She did indeed crash the system and we realized that garbage input could come from unexpected places.
From 1981-1988, I ran the Market Data II system at NYSE.
From about 1974 until the late 1980s, I was responsible for maintaining Market Data System II, the mainframe system which gathered and reported trades on the New York Stock Exchange. This system was based on OS MVT Release 11 and was heavily customized/modified by IBM, with the promise to support it forever. In addition to a modified OS, IBM coded components to handle non-standard hardware and memory, as well as the infrastructure to support ‘application’ modules.
In 1987, the NYSE had several computer systems in operation. The oldest and most critical one was the Market Data System known as MDS-II. This system gathered trading information from the Trading Floor and drove the traditional ticker tape as well as distributing the trade data over higher speed synchronous lines. Of the various systems, MDS-II was the only one which was absolutely critical to trading. While failure of the other systems was an inconvenience, failure of MDS-II halted trading completely, resulting in a lot of lost revenue for NYSE and its member firms.
Copyright © 2017 Steeleweed - All Rights Reserved