
Quiz #1 Topics: Operating Systems, IT Infrastructure, Range of Platforms, Storage

This is a collection of topics including some history and description of the Operating Systems most likely to be encountered in the wild, the wide range of computers they operate, and storage infrastructure. Networking is equally or more important and would also be a good starting point; it will come along soon. But OS-Operating Systems come earlier in the 'begats', having been around since the 1950s, so we start with them:

Operating Systems

(1/17) 'What _is_ an operating system?' is a good question at this point, and googling it will get back several pithy results. Here's Wikipedia's definition: An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function.

This definition acknowledges that some computers only run one program, which has been burned into a ROM or the CPU as 'firmware'. Many computers embedded in appliances and other devices have only one function: the program starts when the power is turned on and runs until the device is powered off.

Examples of familiar operating systems for personal and mobile computers are Windows and Mac, iOS and Android. Linux, unix, and proprietary OS like IBM's AIX, i5/OS, and z/OS run servers from small through mainframes. An NXP chip is likely running under the hood of your car.

Operating systems enable computers to run application software. Examples of applications range from Pokemon Go, which exercises most of the features on a smartphone, through QuickBooks for small business, and PeopleSoft or Oracle for enterprise.

Modern Operating Systems manage these 7 essential functions:

  • Interfaces: With Users, Devices, Application Programs, and other computers
  • Devices: Block and Stream I/O, DMA and SMP, RAID, Graphics, Network media of several types.
  • File Systems: Locally Attached HDD/SSD, NFS, NAS, SAN, 'RAM Disks'...
  • Memory: RAM is limited to 4 GigaBytes (2^32 bytes) for a 32-bit CPU and is practically unlimited at 16 ExaBytes (2^64 bytes) for 64-bit CPUs; 'Virtual Memory' is managed using DMA and swap areas on disk, hopefully seldom used, to handle more data and code than the RAM will hold. A Windows user needs 4 or 8+ GigaBytes to avoid swapping; a $7 Android tablet with 256 MegaBytes swaps all the time.
  • Processes and CPUs: Put OS and application code to the CPU, or CPUs, for execution.
  • Networks: Security, Firewalling, Intrusion Detection and Response are important, along with data communications hardware and protocols
  • AAA: User Authentication, Authorization, and Accounting for use of system resources

These 7 functions were demo'd in class using Windows and Linux, will be discussed in some detail later...
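For a taste of those services from a program's point of view, here's a minimal C sketch, assuming a unix-like system (Linux or Mac OS X) and a C compiler; it touches the file system, memory, AAA, and process management through ordinary system calls. The filename demo.txt is just an example:

    /* A minimal sketch of a program leaning on OS services on a
     * unix-like system. It exercises several of the 7 functions above:
     * file system, memory, AAA, and process management. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* File System: the OS resolves the name and checks permissions */
        FILE *f = fopen("demo.txt", "w");
        if (f) { fprintf(f, "hello from the file system\n"); fclose(f); }

        /* Memory: the OS supplies the virtual memory behind malloc() */
        char *buf = malloc(1024);
        if (buf == NULL) return 1;
        free(buf);

        /* AAA: the OS knows who is running this process */
        printf("running as user id %d\n", (int)getuid());

        /* Processes: ask the OS to clone this process */
        pid_t pid = fork();
        if (pid == 0) {
            printf("child process %d says hi\n", (int)getpid());
            return 0;
        }
        wait(NULL);  /* parent waits for the child to finish */
        return 0;
    }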

Booting Up

Usually, operating system software is loaded into RAM automatically when a computer is powered on. This is commonly called 'the bootup process', as in the adage 'pick yourself up by your bootstraps'. We're used to waiting several seconds or longer while our PCs, notebooks, and smartphones boot up. Larger machines like servers, mid-range, and mainframes, and even smaller computers embedded in routers, household appliances, TVs, or DVD players, all 'boot up' when the power's turned on.

After the computer's booted, it's ready to run our applications.

Prior generation computers' bootup processes were manual, with no bootstrap loader. They involved flipping switches to set the address of the magnetic or paper tape drive, card reader, or disk holding the 'boot image' necessary to run the application at hand. Then, we'd press the Run button and see FLAB - Flashing Lights Acting Busy until the beast finished the boot process and the console sprang to life...

When a modern computer is switched on, its circuitry runs it through a POST - Power On Self Test to make sure the hardware's capable of booting. If the POST runs cleanly, the BIOS - Basic Input/Output System is usually loaded into RAM from a ROM - Read Only Memory mounted on the mainboard or built into a SoC - System on a Chip.

The BIOS code instructs the CPU to initialize and test the 'basic' I/O devices like keyboard, mouse, monitor, disk, USB, and network. If the BIOS devices are connected without error, the boot process continues to seek the bootable device configured into the BIOS, which may be a hard disk or solid-state drive, USB, or network interface. When the OS on the bootable device has been loaded, our computer's ready to run applications.

Entering the BIOS

Occasionally network managers and techs, or PC owners, need to get directly to the BIOS to make changes in hardware configuration or security settings. Most PC/server BIOSes are accessed by tapping some key during bootup. The F12 and F2 keys are common, but aren't the only keys used for accessing the BIOS. Most computers momentarily flash their BIOS access key on the monitor or command line early in the bootup sequence. Googling something like 'how to enter bios on my dell xps' often gets a usable response.

An example of this would be changing the bootup device, so that a computer is booted from the USB or DVD instead of its HDD-Hard Disk Drive or SSD-Solid State Drive.

Please enter the BIOS on a couple of the notebook and desktop computers at your fingertips and consider what's there.

The BIOS provides a simple menu to check and change settings. If changes are made they are flashed into the BIOS ROM and used whenever the machine is rebooted.

This ability to alter the bootup process must be secured and managed carefully! Otherwise, some crook can load their own rogue OS and pillage the machine. An employee who can reboot a machine and enter the BIOS has a whole 'nother rich vector to attack the employer's networks, servers, and co-workers' machines.

Dual Booting, Live Distros, and Virtualization

One may 'dual boot' their system to choose the OS that will boot and run the machine. GParted - the GNOME Partition Editor and GRUB - the Grand Unified Bootloader are open source tools. Apple's Boot Camp and the Windows Disk Management utility are other options for 'dual booting' a notebook or desktop computer. Hyper-V, available free with Windoze Server from the MSDNAA, is gaining huge market share in Windows environments.

Dual-booting software provides a 'boot menu' that appears whenever the computer is restarted. Several differently configured OSs may be on the machine and you pick the one to run after power up. The machine runs one OS at a time, with the only communication between them being their shared file system.

Booting from a 'Live Distro' is a good way to see if a machine will run a Linux flavor like Ubuntu or Fedora. A DVD is prepared for the live distro, and the BIOS set to boot from a DVD first instead of the HDD. The new OS will boot and run from the DVD, and may be used to mount the file system on the machine's HDD or SSD. This circumvents any file security that might be provided by Windows, and all un-encrypted data is available to whoever boots the live distro...

A better option, and a more salable skill, is to 'virtualize' your machine using a free product like Sun/Oracle's VirtualBox or Microsoft's Hyper-V, or spring $79 for VMWare Workstation -- VMWare still has lots of market share. Virtualizing your computer allows you to run two or more OSs simultaneously, depending on how many cores and gigabytes you've got, and switch between them as easily as switching between browser windows.

If your machine has 8+ Gigs of RAM and Quad-cores or better it is a good target for virtualization. The OS installed on the machine as purchased becomes the 'host OS' and is able to run as many 'guest OS' as will fit in its RAM and share the CPU cores available.
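One hedged way to check whether a Linux machine's CPU offers the hardware virtualization support that VirtualBox and friends like to see is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A minimal sketch, Linux-only by assumption:

    /* Scan /proc/cpuinfo for the CPU flags hardware virtualization
     * needs: vmx for Intel VT-x, svm for AMD-V. Windows and Mac users
     * would check with their vendors' tools instead. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        char line[4096];
        if (f == NULL) {
            puts("no /proc/cpuinfo -- not Linux?");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (strstr(line, "vmx")) { puts("Intel VT-x present"); break; }
            if (strstr(line, "svm")) { puts("AMD-V present");      break; }
        }
        fclose(f);
        return 0;
    }

From the shell, egrep 'vmx|svm' /proc/cpuinfo does the same check in one line.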

Mac users in the School of Business need to be able to run Windows apps like Visio and Visual Studio so this is a good time to get skills with virtualization, or be stuck working in the 2nd floor lab. Parallels is an easy option but only works with Windows and Mac and doesn't provide VLANs and other features of 'real virtualization software' like VMWare or VirtualBox.

Older Macs with slow processors, and low-powered new ones like the MacBook Air, do not virtualize well, and dual-boot might be the best option for them. Recent MacBook Pros with 8+ Gigs and Core technology are fine if you've got the money...

The instructor can relate lots of experiences where the old MacBook running VirtualBox or Parallels takes several _minutes_ to accomplish what a native Windows machine or MacBook Pro can do in _seconds_.

Recent History of Operating Systems

By about Y2K we were past a violent shake-out of computer manufacturers that began in the mid-80s when anti-monopoly legislation and customers demanding open systems shut down dozens of computer manufacturers that could not sustain business in a competitive environment.

Prior to this time, most computer manufacturers made hardware, operating systems, and programming languages that were _not_ compatible, sometimes wildly incompatible, with other manufacturers' systems or languages. For a few decades computer manufacturers' customers were locked into their computing platforms because the application software that was critical for their operations and accounting wouldn't run on any other manufacturer's platform. Many of them had spent millions of dollars through the '60s - '80s making the perfect application for their enterprise. They couldn't afford to re-write it all for some other 'platform', but were forced to when their computer system's manufacturer went bust.

The manufacturers and software houses that survived the shake-out were those whose applications could be easily 'ported to' or 'rehosted on' unix and could easily be integrated with Microsoft applications on desktops and The Internet. Those of us who were working in the field during those years made a good living traveling around, taking applications and data home on tape from dying platforms like McDonnell Douglas Reality, Sequel or Pegasus, Progress, ADDs, Data General, or Ultimate, Monolith, or DEC. In many cases it was easy to re-deploy the applications and databases on Unix running on a 'generic' Intel built by DFI, Acer, Dell, or whoever was good and cheap. Others were re-deployed on commodity-priced RISC servers built by companies like Motorola, IBM, or HP.

In either case, a modest desktop or rack-mounted workstation/server-class computer would replace a mid-range computer the size of a refrigerator near or past its expected service-life of 5+ years. Usually, the ratio was like a $6,000 server replacing a $60,000 mid-range machine, for something like $12,000 to $20,000 of 'consulting' to rehost the system. The new, smaller computers were better-performing by factors of dozens or hundreds, and they cost thousands of dollars _less_ per year in electricity to operate and keep cool. In most cases they just dropped into whatever kind of network was running before, and all the printers, terminals, and PCs stayed where they were.

Legacy Providers: Several computer manufacturers and software houses survived the shake-out and more-or-less prospered into 2016 in spite of FOSS. Customers of VARs-Value Added Resellers associated with the likes of IBM, HP, SGI, Sun/Oracle, Tandem, Hitachi, Fujitsu, and a few others were able to ride out the shake-out with little expense. They're proud of their Legacy Systems, can demonstrate how cost-effective they are, and use them to service and fleece the market. The VARs for these legacy providers are experts in the 'vertical market' and add value to the machines they sell. In many cases, their customers don't need any 'systems people' on their payroll, maybe just somebody they designate as their IT guy.

There are talented consultants, managers of departments, presidents, owners, and other officers or C-level execs who can engineer a solution at a fraction of the cost of a legacy provider's VAR! Many organizations are pleased with their 'grown here' systems. Some aren't, and may seek out one of the VARs above.

With today's cheap hardware for internet and ethernet, and techniques like 'web services' for exchanging documents, most systems communicate freely and securely with one another. It doesn't make much difference who built the hardware or the operating system these days; Internet and Ethernet make most networks compatible. Today's computers, software, and networks are less expensive, faster, and more reliable than ever before. These are good times for IS and IT infrastructure!

Although some OS are relatively 'free' or greatly discounted these days, there is very little 'free _application software_' to run a small business or complex organization as well as QuickBooks or SAP. Times are good for application developers that can adapt their business model to use all 'the cloud' or 'their cloud' can offer, and get a fair price or better.

SaaS-Software as a Service and other markets are thriving on The Internet and there are more and more options for Vertical Market software providers with a proven application to earn their livings in the land of 'free Operating Systems'. Google and Microsoft both offer SaaS, Google Docs and Office 365 are good examples.

Examples of OS and Applications

Some examples of Operating Systems are: DOS, Windows XP or Vista, Windows Server 2012, Mac OS9 & OSX for desktops and macbooks, Apple iOS, Linux of several flavors, proprietary Unices like HP/UX & IBM's AIX, Sun's Solaris, or SGI's Irix, Google's Android and ChromeOS, and IBM's i5/OS and z/OS. Windows 10 and Apple OSX are current personal computer OS. RedHat Linux or CentOS, Solaris, Windows Server 2012, and IBM's z/OS & i5/OS are examples of server OS.

Non-OS software like Angry Birds, MS Office Word or Open Office Writer, FireFox or Internet Explorer, iTunes, or Waze is 'application software' (apps) that a _PC or personal device_ can run after it's booted.

When a _server_ boots, the applications it enables are the likes of: Apache or IIS web server, Sendmail or Exchange mail server, MySQL or SQL Server 2012 database management system, SharePoint Server, ERP or other business application software, ZoneMinder CCTV, WordPress, or other 'server-based' applications that users access via a 'client' app.

Hardware Platform = CPU + OS

(8/30) A 'hardware platform' is the combination of a CPU and an Operating System. There is a wide range of CPUs, from those embedded in our cars and appliances through personal computers, servers, mid-range, and mainframe. There is a range of Operating Systems, too, specialized for the CPU where it will run. Common server hardware platforms are Linux on x86, Windows on x86, AIX on Power, Linux on Power, i5 on Power, Solaris on Sparc, zOS on zApp...
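To see a platform's two halves from a program, POSIX systems offer the uname(2) call (and the uname command). A minimal sketch, assuming a unix-like system:

    /* Ask the OS to name the platform: OS + CPU.
     * uname(2) is POSIX, so this compiles on Linux, Mac OS X, AIX, etc. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        if (uname(&u) == -1) return 1;
        printf("OS:  %s %s\n", u.sysname, u.release);
        printf("CPU: %s\n", u.machine);   /* e.g., x86_64, ppc64, armv7l */
        return 0;
    }

On a Linux-on-x86 box this prints something like "Linux 4.x" and "x86_64"; on Linux-on-Power the same source reports ppc64, which is exactly the CPU + OS pairing the term 'hardware platform' names.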

CPU-Central Processing Unit

OSs are written for a particular CPU, or 'family' of CPUs. These days there are two major families of CPU: CISC-Complex Instruction Set Computers and RISC-Reduced Instruction Set Computers. For personal devices and computers, both these families are divided into smaller, less powerful versions for smaller, battery-powered devices and larger, more powerful versions for desktop and larger systems that run on 'house current' or larger power circuits.

CISC is mostly 'x86', the Intel & AMD CPUs that have run IBM PCs and clones since 1981. Today's low-end CISC is exemplified by Intel's Atom at about $29 per chip, or the similarly priced, low-tech Core2 or Celeron. These will power a cheap tablet, notebook, desktop, or thin client for a user with few demands, web-browsing and email. Intel's high-end Core i7, or Xeon, go from a few hundred dollars for 8 cores through thousands for 24 cores. Intel recently released the Curie SoC priced less than $10, to compete with the low-end of RISC.

Modern RISC chips all derive from the Motorola 68000, or 68K, chip that ran Macs in the early 1980s. There are about three grades of RISC chips:

  • Low-powered and cheap NXP/Freescale/Qualcomm chips are direct descendants of Motorola's, priced from less than a dollar to several dollars to be embedded in automobiles and many appliances, including the IoT - Internet of Things.
  • Mid-powered RISC is a competitive market using mostly ARM-licensed technology, exemplified by Apple's A7 and A8, Snapdragon, OMAP or other 'ARM' type CPU which may be packaged as a SoC-System on a Chip optimized for mobile or tablet devices.
  • The high-end of RISC these days is exemplified by IBM/GlobalFoundries' POWER chips used in IBM's pSeries small and mid-range servers and the Watson supercomputers, and by Sun/Oracle's SPARC found in Sun's server-class, mid-range, and mainframe machines.
  • In 2017 these legacy manufacturers' high-end RISC chips _and_ high-end CISC are being challenged by emerging 64-bit ARM technology, with multi-core 64-bit CPUs deployed as a SoC that includes interfaces with ethernet and ethernet fabric for storage area networks. Google and Facebook both have this technology in their data centers.

The market for really, really low-end CPUs has been dominated by RISC and a few manufacturers like NXP/Freescale Semiconductor/Qualcomm whose catalogs start at the less-than-a-dollar range and go through several dollars. These companies merged through 2015 and 2016 and continue to specialize in small CPUs found in radios, microwave ovens, our automobiles, and the IoT.

Of CISC and RISC, CISC is the more proprietary family; the details of the architecture are trade secrets, mostly of Intel and AMD and their licensees. RISC is a relatively open family with few proprietary features, held as intellectual property by a few companies that tend not to be very greedy or stingy. It's easier to manufacture RISC, and these CPUs are 'commodity priced' in a competitive market that maximizes returns for all involved. Today, the head of the technical consortium for mid-range RISC is ARM Holdings, a UK-based company recently acquired by SoftBank for $32 Billion.

OS-Operating Systems

In 2015, there are several families of OS in the marketplace: QNX, Windows, Windows CE, unix/Linux including Mac OS X, iOS, Android, embedded Linux, and IBM's proprietary server operating systems i5/OS for midrange platforms and z/OS for their mainframes. Operating systems are also scaled from smaller to larger depending on the application at hand. Of this list, QNX would be the smallest and z/OS as big as they come.

Operating systems range in size and complexity from something less than a few kilobytes to handle an embedded computer, a megabyte or few to handle simple routing or wifi snooping, through a gigabyte or more for a fully-featured PC or server. Linux, for example, can be pared down to a couple dozen components in less than a megabyte to handle simple networking or firewalling tasks. Or, it can be built up with a thousand+ components in hundreds of megabytes to run a workstation or server. White Dwarf or Damn Small Linux can be configured as small as a megabyte with only a few components to run an embedded computer. A RedHat or other Linux server might be a couple Gigabytes and contain 1500+ components to run an application or web server.

Some key operating system components must be distributed as 'binary machine code' that will only run on one CPU family or specific CPU. This limits the 'portability' of one OS to other CPUs.

Some operating systems have been adapted to more than one platform. For example, RHE-RedHat Enterprise is available for x86 32- & 64-bit servers, POWER midrange, and zAPP mainframe.
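Here's a hedged C sketch of why source code can be portable while binaries are not: the preprocessor selects code by CPU and OS at compile time, using macros commonly predefined by gcc, clang, and MSVC, and the compiler then emits machine code for exactly one CPU family:

    /* One source file, many targets: the #if lines are resolved before
     * compilation, so each build bakes in one CPU family and one OS. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__) || defined(_M_X64)
        puts("compiled for 64-bit x86 (CISC)");
    #elif defined(__aarch64__)
        puts("compiled for 64-bit ARM (RISC)");
    #elif defined(__powerpc64__)
        puts("compiled for 64-bit Power (RISC)");
    #else
        puts("compiled for some other CPU");
    #endif

    #if defined(_WIN32)
        puts("targeting Windows");
    #elif defined(__linux__)
        puts("targeting Linux");
    #elif defined(__APPLE__)
        puts("targeting Mac OS X / iOS");
    #endif
        return 0;
    }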

Platform Independence

In the old days, programs were written in proprietary languages for proprietary databases on proprietary platforms connected by proprietary networks. Since nothing was compatible, it could be very expensive to move an application from one machine to another. As times progressed and we standardized on a few computing and networking technologies, several methods emerged for making application software and databases 'portable' among different hardware platforms. Virtual machines, compilers, and interpreters allow application software to be economically deployed across multiple platforms.

Platform Independent applications suffer some loss of efficiency relative to applications compiled as executable code for a particular hardware platform. In many cases of single-user or server applications where performance isn't an issue they run acceptably well.

Middleware, Virtual Machines

Any discussion of computing platforms needs to include 'middleware' like Java Virtual Machine & Runtime Environment, Windows .NET Framework, IBM's SLIC and VM, and 'virtualization software' like VMWare, Sun's VirtualBox, or Microsoft's HyperV. These are ways of achieving 'platform independence', where an application or operating system can be run anywhere the middleware's VM-Virtual Machine has been adapted.

Without middleware, for example, a software house pursuing a notebook/desktop market would have to write two versions of their application, one for Windows and another for Mac. Or, they could elect to write one version of the application for Java and it will run anywhere Java is installed, which is almost everywhere.

Here's a schematic of the Java Virtual Machine. A talented programmer who is well aware of the capabilities built into the virtual machine can write efficient application software. Windows' .NET Framework offers a similar approach and has been included with Windows operating systems since XP and Server 2000. It has also been adapted, through the Mono Project, to run in other environments like Linux, iOS, and Android.

The benefit of middleware is that operating systems or applications will run on any platform where the middleware has been adapted. It lets you run Windows on a Mac, Linux on a PC, or a network manager might deploy several guest operating systems on a host rather than setting up several separate machines.

The cost of these VM approaches is always degraded performance relative to native binary code running on real machines. But, most real machines these days are capable of running virtual machines that are quick enough to satisfy the managers and users.

For the web, 'write once and deploy anywhere' can allow a developer to economically provide customers software that will run across platforms, like Windows, Mac, and Linux. The next generations of Microsoft's Mono Project, Android Studio, Swift, Unity, and a couple other promising middlewares will likely run across Windows, Mac, Ubuntu, Droid tablets and phones, iPad or iPhone.

Network managers find lots of reasons virtual machines are better than 'real machines'. These 'virtual server instances' can reap economies beyond sheer speed of operation. Where the old advice for securing systems was to 'run each service on a separate server' now the advice is 'run each service on a separate virtual machine'.

Compilers

Another way of making applications portable across platforms is to write them using a programming language with compilers that write binary code for each target platform. C, C++, Objective-C, Lisp, Pascal, COBOL, Ada, Python, some BASICs and other languages have compilers for OSs like Windows, Linux, Android, iOS, and Unices like QNX that will compile for CPUs like x86, Power, ARM, OMAP, and others.

Compilers' developers optimize the compiler for each platform so that it produces binary code appropriate for the target platform.
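As a sketch of what that looks like in practice, here's a trivially portable C program; the compile commands in the comment are a typical native gcc invocation and typical Debian/Ubuntu cross-compiler names, given as examples rather than a recipe:

    /* hello.c -- one portable source file; each compiler run below
     * emits a different, mutually incompatible binary:
     *
     *   gcc -O2 -o hello_x86 hello.c                        # native x86
     *   arm-linux-gnueabihf-gcc -O2 -o hello_arm hello.c    # ARM
     *   powerpc64-linux-gnu-gcc -O2 -o hello_ppc hello.c    # Power
     *
     * Running 'file hello_arm' on the x86 box reports an ARM executable
     * that the x86 CPU can't execute directly. */
    #include <stdio.h>

    int main(void)
    {
        puts("same source, different binaries");
        return 0;
    }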

Compilers allow VARs and other developers to distribute their applications as binary code so they can keep their source code proprietary. Binary distributions also put a customer's operations at risk should their developer's business fail and the source code become unavailable.

Interpreters

In open source and other applications where there is no motivation to keep the source code private, interpreted languages are an option for portability of an application across platforms. Interpreters work for a system's web server and also from its command line for administrative scripting. PHP and Perl are commonly used interpreted languages that have no compiler. Python, C, Ruby, BASIC, and Lisp may be either interpreted _or_ compiled.

Compilation can produce binaries that execute orders of magnitude faster than the interpreted version of the same script. Batch processes or other jobs that affect a large number of records, where performance is a big issue, will likely be quicker if compiled. They may be written in C and compiled for the target platform.
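To see where the interpreter's overhead comes from, here's a toy C sketch of the classic fetch-and-dispatch loop at the heart of most interpreters. It's nobody's real engine, just the shared pattern: every operation pays a dispatch cost that compiled code doesn't:

    /* A toy interpreter: fetch an opcode, branch on it, repeat.
     * Compiled code would just be the adds and subtracts themselves. */
    #include <stdio.h>

    enum op { ADD, SUB, PRINT, HALT };

    int main(void)
    {
        /* a tiny "program": acc = 0 + 7 + 7 - 3, then print it */
        int code[] = { ADD, 7, ADD, 7, SUB, 3, PRINT, HALT };
        int acc = 0, pc = 0;

        for (;;) {
            switch (code[pc++]) {          /* dispatch: overhead per op */
            case ADD:   acc += code[pc++]; break;
            case SUB:   acc -= code[pc++]; break;
            case PRINT: printf("%d\n", acc); break;
            case HALT:  return 0;
            }
        }
    }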

The Software Wars: FOSS vs. Microsoft, Proprietary Unices & the Legacy

  • ca. Y2K: softwareWars.gif
  • 2002: softwarewar.jpg
  • 2003: 2003softwarewar.gif
  • 2006: Software Wars 2006
  • 2011: Software Wars 2011

These diagrams are _not_ about Microsoft dominating every corner of the software market place! They are about the influence of FOSS - Free and Open Source Software. Besides gaining tremendous share of several markets, FOSS has helped drive down the cost of proprietary operating systems and application software.

Over the time the diagrams represent, the price for MS Server has dropped from about $179 per CAL - Client Access License on a primary domain controller to about $29. MS Office sold for decades at something like $300 per copy and is now $4 per month per user with Office 365, competing head on with Google Docs, which is bundled with gmail for $5 per user per month.

In 2016, Microsoft greatly reduced the price of SQL Server and released a version of it to run on Linux. SQL Server has matured nicely and can scale up to run enterprise databases, meets ANSI standards, supports ACID-compliant transaction processing, and can be configured to provide seamless rollover of services in the event of a server fault.

The Software Wars series appears humorous, with predatory monopolista Bill in Borg headgear at the center of the empire, but these drawings are full of truth about what's in the legacy and what's emerging in operating systems and application software in the years since FOSS came on the scene. In class we looked at the 2000, 2003, and 2011 versions. 2011 - 2015 hasn't seen a lot of changes in the battle lines; it's been constant trench warfare, where Windows is maturing nicely and FOSS is surging at the trenches...

Placing Microsoft at the center of these diagrams makes it easy to think that Microsoft has won all its battles, but it has not. Microsoft is the undisputed leader of the market only for desktop, gamers, power-users like engineers, and notebook operating systems. Microsoft does not dominate markets of embedded processors, mobile and tablets, server-class, mid-range, or mainframe.

Apple's recent rise in market share for notebook and desktop computers isn't reflected in the Software Wars series. It should appear prominently as a rival of the Microsoft Empire if updated to 2017!

The diagrams show a constant struggle by Microsoft to get into every kind of marketplace, including the Server Rooms which continue to be dominated by proprietary and free unix running on Intel, HP RISC, and IBM Power chips.

In 2017: Microsoft has about half share of the server marketplace. Windows Server runs about half of the workstation/server class machines, but none of the mid-range and mainframe computers where legacy, unix-based applications and databases go back to the '70s and '80s. Considered as a whole, the legacy providers IBM, HP, and Sun/Oracle continue to dominate this business server marketplace worldwide. Microsoft has a huge 70% of the desktop and notebook market, but had about 95% of it in 2005, having lost 25+ points, mostly to Apple, in recent years. Microsoft has about zilch in tablets and mobile although they have had attractive products there.

For our IS majors, this means it makes good sense to learn about other OSs along with Microsoft Windows and Apple's Mac! It makes very good sense to be skilled with Windows on a personal computer, even if you like Mac better yourself. Windows folks probably should cross-train on Macs. iOS, Windows, Android, and a couple flavors of unix should all be familiar to anybody heading into IT.

Becoming familiar with IBM zSeries is a good idea, too! A large percent of Fortune 100 through 1000 use IBM zSeries mainframes or iSeries mid-range. There are always thousands of managers looking to hire application developers and analysts for this environment. This is something you can do for free.

Watch out for IBM's Mainframe Challenge in the fall. It is an excellent way to learn zOS, COBOL, CICS, and how to integrate Windows, Mac, or Linux with an IBM eServer zSeries mainframe. Getting past the first challenge gets you a Tee-Shirt, getting past the third gets you multiple job offers the next day...

The 'hardware wars' began back in the '80s as anti-trust and other legislation rendered computing into a highly competitive environment, where in the few decades prior computing was highly proprietary and organizations were usually 'locked into' the computing platform they chose because it was too expensive to try to change platforms.

When Windows NT emerged in 1993, Bill Gates immediately declared that Micro$oft had won the war, and that Mid-Range and Mainframes were dead with a client/server victory certain very soon. We all mourned the passing of the mainframe when IBM renamed it as the HESS-High End Super Server division in the mid-90's and spoke of its passing. But, by about 2010 everybody who could migrate off mainframes had done so, and the mainframe market has stabilized. IBM calls them mainframes now, and their mainframe sales have been flat for years, with a recent upturn in zSeries hardware to be used for Linux.

In 2017 the battles and skirmishes continue, with Microsoft having claimed most of the desktop and notebook marketplace years ago and holding onto it. Microsoft's share of personal computers is slipping away, but it has recently claimed nearly 50% of the workstation/server farm market. Microsoft has zero percent of the mid-range and mainframe market. Microsoft has moved into the hardware market with their Surface line, where they are responsible for the entire machine.

Notes about Software Wars

Familiarize yourself with the organizations involved in the IT marketplace, their software, and their hardware. (There is a 'software wars re-mastered' added in 2011, that is geared more to the personal computing/smart devices market. It entirely ignores the enterprise-class computer marketplace that is less fun...)

(1/26) Current Platforms:

These days, there are lots of options for server and personal operating systems:

  • Red Hat Enterprise Linux is a popular server OS that has binaries for Intel, Xeon, and IBM's Power & zAPP CPUs. Most applications written with languages supported by RedHat will be interpreted, or compiled, and run on CPUs from desktop and server through mainframe. If developers have avoided using assembly code and special features of a particular language, CPU, or OS it's a good chance that an application written for one family of CPU will compile and run on the other -- this allows some degree of platform independence.
  • Suse and Ubuntu Linux also have excellent commercial support, from Novell and Canonical Corp respectively, and are popular choices in the EU.
  • CentOS is a version of Red Hat gently touched up for support by the CentOS community and provided free. Like RedHat, they promise long-term support, very slow obsolescence, and no surprises between releases. Fedora is also based on Red Hat and is touched up to be 'bleeding edge' by its community, but it surprises somebody every release, releases a new version two or three times a year, promises no long-term support, pretty much guarantees obsolescence sometime next year, and isn't a good choice for a stable server OS.
  • Android is Linux optimized by Google for tablets and phones, mostly low-end RISC CPUs. Google's ChromeOS runs on very low-end RISC notebooks and also on higher-end Intel CISC notebooks.
  • Other flavors of Linux or other free unix power Google, Facebook, lots of other social networks, massively-online games, hundreds of thousands of WordPress sites, &c... This accounts for a huge share of the server-class and virtual-server market!
  • Mint is Linux optimized for Intel-based desktop and notebook computers; so is Ubuntu, which is also developing an 'ecosystem' that works with phones, tablets, and desktop systems...
  • Yellow Dog Linux is optimized for multi-core Power chips manufactured by IBM/GlobalFoundries and can handle the huge RAM these machines are capable of handling. Yellow Dog has replaced lots of IBM's AIX on their eServer pSeries machines. It was also an open-source alternative for the older, Power-based Macs, and could make a Mac desktop into a very capable web server. Yellow Dog provides excellent commercial support for Power-based systems.
  • White Dwarf Linux and dozens of other Linux Distributions too numerous to mention are Linux adapted to a manufacturer's environment, or to the whims of this or that individual or team of developers. They are mostly derivative of the major Linux distributions: Debian, RedHat Enterprise, and Slackware. There are dozens of other distributions that have little in common with the major distros except the Linux Kernel.
  • Mac OSX and iOS are _BSD Unix_ adapted to Apple Mac desktops, notebooks, and portable devices. OSX (X means unix & 10) started its life on IBM/Freescale Power CPUs and 'jumped platforms' to Intel since BSD runs equally well on either RISC or CISC. iOS is optimized for RISC.
  • Windows desktop and notebook through 7 and Server 2012 are for CISC: Intel and AMD. So were XP, Vista, 98, 95 and before.
  • Windows 8 added the 'Metro' interface for touch screens and also has the 'RT' version for the RISC CPUs in smart phones, its Surface line, and other portable devices. Windows Server 2000-2012 are for CISC/Intel/AMD web servers and application servers. When Windows NT was first released it ran on several RISC chips as well as Intel, but Microsoft has been mostly CISC since. Recently, Microsoft has adapted a Windows 10-like OS for RISC that runs on Raspberry Pi and other versions of RISC.
  • IBM's zOS runs on the zApp CPU on their zSeries mainframes; i5 runs their iSeries midrange on 'Power' CPUs. Either CPU, mainframe or mid-range, will also boot and run Linux 'native', making these fault-tolerant, energy-efficient alternatives to *ix-based server farms. One mainframe can handle the workload of thousands of workstation/server class machines, making them a money-saving, high-performance, fault-tolerant option for hosting virtual linux platforms. In the summer of 2015 IBM announced an initiative for even lower-cost zSeries mainframes, designed without the zSeries-specific architectural features, to run Ubuntu Server, RedHat, or Suse.

Mobile Quickly Eclipsed Notebooks and Desktops

Here's a 'share chart' for PC (Personal Computing) devices over the years thru 2011: Rise and Fall of the PC. Here's another look at Market Share of Personal Computing Platforms, showing that 'WinTel' is less often the #1 choice for 'personal computers', and 'has a real keyboard' is less important, at the end of '12. There's no similar upheaval in the Server-class marketplace yet, only slow & steady growth by Windoze to about half the share.

Practically, this is why there's the huge emphasis on 'mobile friendly and responsive' websites and apps. 'Mobile First' is probably the best way to go.

Legacy Systems

For an intern or recent graduate in IS, whatever's there when you get your new desk, or other working arrangement, is 'the legacy'. Evaluate what's there and think about how to extend the value of the legacy, and also think about what tech's emerging to replace some costly components.

Some hear 'legacy system' and think 'it needs to be retired'. Some legacy systems should be shot in the heart and replaced, quickly, before they cost even more and deliver even less of what's needed for eBusiness these days. Maybe historical data can be salvaged and taken into a new system? Maybe there's nothing of worth?

Other legacy systems are truly valuable and provide considerable competitive advantage at very little cost. Their value can be extended through webfacing, data mining, and other modern techniques. Such a system has proven scalable and may handle 100s or 1000s of times more customers than when it was put in place. It's not uncommon to find a legacy system that has every keystroke by every system user since the '70s or '80s and now gathers 100s of times more data from mobile-friendly, responsive websites that get excellent customer reviews.

Some legacy systems were developed and supported by a consultant, an IS department, maybe by a C-level board exec or accountant.

Other organizations bought their system from a VAR-Value Added Reseller associated with an equipment manufacturer who has decades of experience in one or more diverse vertical markets like veterinary clinics, pediatricians, fast lube, hospitals, auto dealers, parts dealers, barber shops, billiard parlors, landscaping, not-for-profit, collection agencies, hardware stores, garment, hardgoods, electronics, and other retail product lines, &c, &c... Practically every 'industry group' has a selection of software from seasoned VARs and as many new VARs with good systems to consider. VARs of IBM, Sun, HP, and Tandem have been providing the best solutions at reasonable prices since the '70s. Microsoft's VARs have been shaken out for a decade or so and service lots of vertical markets with WinTel solutions. Typical costs for these systems are $3500+ per CAL and 12% annually for support.

Today's market also includes considerable options for SaaS, where fees are typically per CAL or user per month. Where 'horizontal' SaaS like gmail + Google Docs is $5 per user per month, SaaS providing integrated business systems for vertical markets may cost several times more. Compared to the high, up front purchase and support costs for vertical market application software SaaS may be an attractive option, where a business can start with a few users at a reasonable rate and add more as it grows.
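As a back-of-the-envelope comparison, here's a short C sketch using the ballpark figures above; the $50-per-user-per-month SaaS rate is a made-up example of 'several times more' than the $5 horizontal rate, not a quote from any vendor:

    /* Rough 5-year cost comparison: purchased vertical-market software
     * at $3500 per CAL plus 12% annual support, versus a hypothetical
     * $50 per user per month vertical-market SaaS. */
    #include <stdio.h>

    int main(void)
    {
        int users = 10, years = 5;

        double bought = users * 3500.0                 /* up-front CALs  */
                      + users * 3500.0 * 0.12 * years; /* annual support */
        double saas   = users * 50.0 * 12 * years;     /* monthly fees   */

        printf("purchased: $%.0f over %d years\n", bought, years);
        printf("SaaS:      $%.0f over %d years\n", saas, years);
        return 0;
    }

Under these made-up numbers the purchased system runs $56,000 over five years against $30,000 for SaaS, and SaaS requires no up-front outlay, which is the appeal for a business starting small.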

(2/2) Operating Systems in the IT Legacy→

This discusses what you're most likely to find back in the server rooms of enterprise today. The best is a mix of software from the past two score of years that's integrated seamlessly with hardware that emerged in the last year or two.

Here's a link to a VNC Viewer download page that pretty much sums up today's Desktop/Workstation platforms of interest to a network administrator using VNC Viewer.

The Red Hat Store shows the relatively few Server and Midrange Platforms that dominate this market:

  • x86: Multi-core (2 thru 24) Intel/AMD CISC, 4 Gig thru maybe a TeraByte or two of RAM, not Fault Tolerant (also runs SCO & Windoze)
  • Power RISC: 4-16 Gig thru 16 TeraBytes of RAM, can be Fault Tolerant (also runs i5/OS and AIX)
  • zApp Mainframe: Fault Tolerant, lots of 7+ GHz cores, 192 Gig thru many TeraBytes of RAM; also happens to run RHE Linux, Suse, Ubuntu...

Here's an article about Linux on IBM zSeries. Here's IBM's Product Page adding Ubuntu to the mix. Far from being 'dead', IBM's zSeries/zOS sales have been at least 'flat' for the past several years and are now trending up with the increasing demand for zSeries/Linux.

Operating System Concepts & Marketplace

This intro amplifies Ch 1 in the text + web resources. Class includes demos of OS functions using Windows and Linux command lines: Interfaces: GUI, CUI, API, CGI; Device Management; File System; Memory Management; Processor Management; User AAA-Authentication, Authorization, & Accounting; Networking.

(2/2) Introduction to Operating Systems & Overview of the OS Marketplace →

(2/7) Range of Computer Platforms

There is a wide range of computer platforms involved in eBusiness, on-line media, and our PAN-Personal Area Network. While we're intimately familiar with the devices in our PAN, it's only the IT Managers and Technicians who get close to the servers and storage devices in the network rooms or data centers of business, enterprise, and government.

The times, they are a-changing, and so is the equipment used in each range of computing. 2016 has seen more activity involving RISC across the range of platforms than the prior decade.

Range of Computer Platforms →

Study Questions

Quiz #1 Study Questions: Quiz #1 will be about ten to twelve questions about content in class and the text, mostly from this list of Study Questions.

Email about the study questions is repeated: Here


G Saunders,
Dept of Information Systems
VCU School of Business
