G Saunders' Home Page

IT Infrastructure: Computers

IT-Information Technology Infrastructure these days is made up of three basic technologies: Computers, Networks, and Storage. Each is equally important for Information Systems. We're starting the discussion with Computers and will take up the other two technologies later...

Wikipedia's current definition for Computer gives three paragraphs about functionality before it rattles off the main components of a Computer as CPU, Memory, and Peripherals. It's a quick romp through the topic in general, including history and technology, hardware, software, and other features that make up computers as we know and use them. It's a good introduction and is recommended reading. In INFO300, we need to consider computers as they're used in business information systems, and in enough depth and detail to be valuable for future IT managers as you learn and make decisions about computers.

Computer hardware doesn't do anything on its own. It needs software, aka scripts or code, to do anything useful. What we think of as 'a computer' is usually 'a computer and its operating system'. When we say PC, or Mac, or Droid, or iPhone we're combining a CPU and operating system like Intel/Windows, Intel/Mac OS, ARM/Android, or ARM/iOS. We seldom use or refer to a computer without its operating system and application software.

In Information Systems, this combination of computing hardware and operating system software is termed a 'Platform'. In modern times we further qualify the term as 'Hardware Platform' or 'Software Platform' because they are both important in our increasingly 'virtualized' computer platforms. Often, one hardware platform hosts another or several other software platforms, called 'virtual machines'. We see this kind of virtualization in our classes where lots of Mac users will virtualize their MacBook and run Windows on it so they can run software required for classes, including Visio and Visual Studio...

A 'computer system' usually includes application software, whether it's for personal, business, enterprise, or government use. The operating system, programming languages, application software, database management and storage, and networks are all inter-dependent and are often referred to as the OE-Operating Environment or Application Environment. The OE is what determines the skillset required of the people who work in it.

Hardware Platforms: CPU + OS

A Computer Platform aka Hardware Platform is the combination of a CPU - Central Processing Unit and an OS - Operating System. An OS is System Software written for a CPU that manages the computer and runs Application Software written for that hardware platform.
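
One quick way to see both halves of the platform at hand is to ask the machine itself. Here's a minimal sketch in Python 3 (assuming an interpreter is installed; the standard platform module does the work) that reports the OS and the CPU family it finds itself running on:

    import platform

    # The OS half of the platform: 'Windows', 'Linux', or 'Darwin' (Mac OS)
    print("Operating system:", platform.system(), platform.release())

    # The CPU half of the platform: 'x86_64' or 'AMD64' on Intel/AMD machines,
    # 'arm64' or 'aarch64' on ARM machines like a Raspberry Pi or most phones and tablets
    print("CPU architecture:", platform.machine())

Run it on a WinTel notebook and again on a Raspberry Pi or a phone's Python app and the output shows two different CPU/OS pairs -- two different platforms.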

Of these components, the ordinary computer user interacts mostly with the application software, occasionally dropping to the OS 'settings' menu to reset the network or connect to a new WiFi. This is what we use at work or leisure, furnished by our employer, purchased, or funded by adware. Seldom do we need to do anything with the CPU, we just make our decisions whether it's Mac, Windows, Droid, or iPhone and use the computer until we replace it.

A system administrator likely spends more time with the platform's operating system and hardware. They'll have a collection of application software that helps manage security, software version control, performance, and other aspects of the system's operations.

In this diagram, the 'platform' is represented by the blocks for the Hardware and the Operating System. The application sits on the platform. The user has access to the applications but can't get directly to the OS. The system administrator works directly with the hardware and the OS.

[Diagram: User / Application / OS / Hardware stack]

Bootup: BIOS Loads & Runs OS

The bootup process engages the CPU to load and run the OS. It gets the platform ready to launch our apps. For all but the smallest of platforms, the bootup process ends by running apps in the Startup folder, then authenticating the user to provide the interface they're authorized to use.

Plan to reboot your notebook or desktop computer and get into its BIOS-Basic I/O System. Figure out how to work it, check out the options it provides for configuration of the bootup and the machine. Take care not to brick it. Learn how to recover it if you do.

Bootup is a security issue since anybody who has time, access to the machine, and the inclination can boot your machine off their thumbdrive and pwn it in several ways.

Use your Task Manager to examine processes in the Startup folder or tab, or the equivalent for iOS or Android. Be curious about what each does, what it might tell others about you and your activities, and how much available 'bandwidth' it's sapping off your machine. (I've been watching what Google gets off my Pixel and I wonder who else sees it? )

Here is a discussion about Booting Up.

Stuff like this is part of the subject matter of the CompTIA A+ Certificate and is important for Cyber Security. Seeing a few BIOSes at work is a good way to get started. I _think_ I can tap F12 when the logo flashes up, or is it F2, maybe? Google can be your friend on this, or the manufacturer's customer service pages...

Managing Platforms

Decisions about computer platforms are important in the strategy of business and other organizations, large and small. Choice of computer platform limits the options for application software, which is typically developed for a specific platform, so many decisions are determined by the software developer's expertise. Choice of platform may also limit or facilitate 'scalability' of an application environment to manage a growing business or enterprise.

[[[ Add case study or two about platform crisis, success ]]]

Operating systems and most application software will only run on the platform or machine for which they were developed and compiled. This is called 'Platform Dependence', or 'Machine Dependence'.

It's easy to see platform dependence on a personal computer, where different versions of software are provided depending on the platform you've got at hand. Some software, like Text Wrangler for Macs or video games for Windows or a PlayStation, just aren't available for the other platforms and don't run as well if we 'virtualize' or 'dual boot' trying to accommodate them.

Platform dependence is also an issue for business application software. Application developers typically choose a platform for development and that's what they support. Applications that run on Windows servers won't run on unix or other midrange or mainframe systems.

Platform dependence arises because different families of CPUs, like Intel and ARM, have different Instruction Sets, and the binary code in the application software must use the instruction set for the CPU where it runs. Different operating systems, like Windows or Mac OS, also expect application code to be assembled appropriately for the operating system's application program interface.
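
A concrete way to see this dependence is that a compiled program is wrapped in a file format stamped for its platform, with machine code inside that targets one instruction set. Here's a small Python sketch (the paths are only examples) that peeks at the 'magic number' at the front of an executable file:

    # Linux/unix binaries start with the ELF magic bytes, Windows executables
    # with 'MZ' -- a binary built for one platform won't load on the other.
    def binary_format(path):
        with open(path, 'rb') as f:
            magic = f.read(4)
        if magic.startswith(b'\x7fELF'):
            return 'ELF executable (Linux/unix)'
        if magic.startswith(b'MZ'):
            return 'PE executable (Windows)'
        return 'some other format: ' + repr(magic)

    print(binary_format('/bin/ls'))                       # on a Linux box
    # print(binary_format(r'C:\Windows\notepad.exe'))     # on a Windows box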

Choice of a small platform when starting out may limit the 'scalability' of application software as a business grows...

Scalability of Platforms & Business Application Environments

Scalability of an application environment is a key issue wherever a business wants to grow. Many/most businesses are dependent on the application software they run to satisfy their customers, and manage their workflow and operations. The ability to handle lots of branches or locations, maybe with lots of languages, is important for growth. It's wise for any organization to investigate scalable solutions as they start up and choose one that suits their business model now and can support growth in the future.

It's also important that the system be 'well-integrated' so that it can handle all aspects of the business, from taking on partners, taking orders, and handling fulfillment or service delivery, to accounting for it all with a detailed audit trail. Any system that can't do this, unobtrusively and efficiently, shouldn't be used.

Changing application software is never a cheap or trivial matter and an organization that finds itself with poorly integrated applications that cannot 'scale up' to handle growth is likely to face great expense or failure.

An ordinary example is a service company that has run on MYOB-Mind Your Own Business or QuickBooks since startup, gets good at what it does, hires on new crew to handle more business, then buys a competitor, then loses control because the single-user app can't handle the business, or doesn't have adequate accounting controls, or can't do scheduling or service delivery...

The price difference between a 'single user' application and a 'multi user' one is often like this: A fine single-user application for landscapers can be loaded from a CD or downloaded for $349 and has everything to manage two guys in a truck who bid jobs and install them and do residential and commercial landscape maintenance. The multi-user application needed to run a landscaping business with multiple crews and branches costs more like $3,500 _per user_, has everything needed for dozens or more managers to tend to business and account for the activity of dozens of crews, and even spits out the tax returns at year end. If a business isn't expecting this kind of expense, tens of thousands of dollars, to obtain a well-integrated and scalable system it can be a rude awakening when exponential growth happens and 'the system' needs to be changed!

One slogan that's prominent at IBM COMMON conferences, attended by thousands of IBM's zealous users' tech managers, developers, and VARs, is 'Expect Exponential Growth!'. Growth is often exponential as well as incremental. Of course, any IBM VAR likely represents a very safe choice for business application software. So would VARs associated with HP or Sun/Oracle, who also have product lines featuring a range of servers that can grow to handle hundreds of thousands or millions of customers and employees.

Unless a business has a stated goal not to grow, it's got to be prepared to grow safely. The exception might be the two guys in a truck who have seen the problems that come with managing burgeoning growth and are now happy to be pulling in lots of $ and keeping most of them. Most businesses would like to grow without losing most of those $.

Lots of exponential growth happens when an organization _with_ a well-integrated, scalable system acquires competitors who are failing because their system was _not_ scalable or wasn't well-integrated with all aspects of the business.

It's possible through bad decision making for an organization to buy or build an application environment on a platform that cannot scale up to meet needs of a company that's growing by leaps and bounds, incrementally and exponentially. The risk is that an expensive 'platform change' or change of application software may be required at a time when resources are stretched because of meeting explosive market demands, or acquiring a competitor's business.

Lots of businesses have failed or missed opportunities to expand because their systems could not be scaled up to manage business that grew 100%, 200%, or more in a short period of time. Not all growth is incremental, increasing by a few percent per year. Lots of businesses grow exponentially, doubling or quadrupling their activity through acquisition of competitors or explosive demand for their products or services.

Most companies want growth, and maturing organizations anticipate and prepare for growth by buying or building scalable systems that meet standards for their industry group, security, confidentiality, integrity, and availability.

The traditional way of ensuring scalability is to buy an application system from a VAR-Value Added Reseller with experience in the industry at hand, a 'Vertical Market', who develops for a line of computers that can grow to handle a huge load. Resellers of IBM, HP, and Sun/Oracle support applications that can launch on small servers, maybe handling a dozen users, in a line that can grow to handle tens of thousands to millions of users. For example, a small IBM i5 server has all the features needed for enterprise-scale computing with a price-tag of several thousand dollars. If/as the business grows, the applications can very easily be moved to larger servers without disrupting the business. Enterprise and Government are using PeopleSoft, SAP, JD Edwards -- these are proven-reliable, almost infinitely scalable systems that cost in the range of $4,500++ per user to implement.

Examples of vertical markets are: hardware distributors, barber shops, pharmaceutical manufacturers, junkyards, road builders, pool halls, not-for-profit organizations, grocery stores, insurance companies, auto dealers, &c... Whatever the business, there are likely a couple or a few software houses that specialize in well-integrated systems for it. The systems come loaded with everything needed to start up and run that business. The software house is usually a VAR who can buy computers 'wholesale' from the manufacturer and add their software and system management services to them.

The traditional way is always challenged by talented developers who make systems scale by adding servers to accommodate growth. In some cases, a 'server farm' can rival the capacity of a midrange or mainframe system.

PaaS - Platform as a Service is another way toward scalability that has emerged in recent years. This is where a small company, or a large one like Expedia, contracts with a 'cloud based' company like AWS-Amazon Web Services to provide their computing platform 'as a service'. AWS, RackSpace Cloud, Digital Ocean, IBM, Microsoft, Oracle and a dozen other companies with excellent track records for reliability and scalability offer public cloud services where a business or developer can get started for a few dollars a month and scale up to huge size at reasonable rates for compute, storage, and network. These companies can provide solutions including geographically dispersed redundancy, seamless failover if a server or data center goes offline, load balancing, huge bandwidth, and expertise to handle growth. PaaS isn't free, but economies of scale make it competitively priced.

If a 'Public Cloud' won't work, maybe because of security concerns, companies can operate a private cloud instead. Economies of scale are very real in a data center environment, so the rates for PaaS can be _very_ competitive with owning and operating the equivalent hardware platform.

Software Platforms: Several Variations

Platform Independence or Cross-Platform Software is desirable for many applications. It lets the developer 'develop once and deploy anywhere'. With hardware server platforms, WinTel, LinTel, and i5/Power for example, a developer who wants to sell an application for all these fine platforms would have to develop and maintain three sets of code.

Middleware, Virtual Machines

One way for one set of code to run on all three of these hardware platforms is to develop the software for a 'Software Platform' or 'Virtual Machine' like Java. 'Middleware' is another term for this kind of software, as it sits in the middle, between the application code and the operating system.

Java and .NET Framework are today's best-known middlewares. Here are schematics of the Java Virtual Machine and Windows' .NET Framework.

Instead of compiling source code into binary executables for the CPU to execute, middlewares compile it into 'byte code' for the virtual machine. The virtual machine reads the byte code and translates it 'on the fly' into binary code suitable for the CPU at hand.
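
Byte code is easiest to appreciate by looking at some. Python, another language that compiles to byte code for a virtual machine, can show the same idea as Java and .NET with its standard dis module -- a minimal sketch:

    import dis

    def add_tax(price, rate=0.06):
        """Return price plus sales tax."""
        return price + price * rate

    # Print the byte code the Python virtual machine executes. Instructions
    # like LOAD_FAST and RETURN_VALUE are addressed to the VM, not to any
    # particular CPU; the VM carries them out on whatever CPU and OS it was
    # built for.
    dis.dis(add_tax)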

These Virtual Machines are rich with methods programmers use to code application software. If programmers are familiar with the methods built into these virtual machines they can write efficient code for either platform. Microsoft's Visual Studio and Java's NetBeans are IDEs-Integrated Development Environments that help guide programmers to effective use of the features of the virtual machines.

Sun/Oracle's JVM - Java Virtual Machine was released in the mid-'90s. Microsoft's .NET Framework was released with Windows XP in about 2002 and has been part of all Windows personal and server operating systems since.

Software written for a virtual machine will run on any hardware platform where the virtual machine has been adapted to fit. Java has been adapted to practically every hardware platform known to man -- most of us have it on our personal computers and dutifully update it a couple times a year.

Microsoft's .NET will run on most platforms in use today and will likely adapt to those of tomorrow. For nearly a decade, the .NET Framework has been adapted by Microsoft's Mono Project to run on most common platforms. Although it hasn't been as widely accepted as Java, the .NET Framework is poised for wide-scale adoption.

Developers of software for these virtual machines hope that their software will run well on any platform, and with Java already installed on almost all hardware platforms many developers have found this is true. Mobile devices, with different manufacturers using slightly different controls (think Apple & Droid), can complicate the programmer's job by requiring tweaks to applications intended for each device -- but this is seldom as big a burden as re-developing from scratch for each hardware platform.

Programs running on virtual machines always take a hit in performance vs. the same code compiled for a specific hardware platform. Java applications were visibly sluggish on PCs until we got fast Pentiums and Core technology to run them.

Examples of this performance hit are seen in gaming, video editing, or graphics software where performance and high-quality graphics are key. Such software is almost always developed and distributed for a specific hardware platform like WinTel, Mac, XBox, or PlayStation.

Where portability of application software is more important than blazing performance, as is often the case in business applications running for one user on a desktop or other personal device, it makes sense to develop for a virtual machine.

Where efficiency is more important, as in games or server-based applications, performance is an issue and any lack of efficiency can result in more servers to handle the load or disappointed customers.

Cross-Platform Languages and Development Environments

Another approach to platform independent, or cross-platform, software involves writing the software in an Interpreted Language or Compiled Language that has interpreters or compilers for more than one platform.

Although interpreted languages may not result in the absolutely most efficient performance, the result may be good enough for many practical applications. Well-structured code in PHP, for example, will run on the smallest and largest web-servers we have today, from embedded through mainframe. JavaScript running on a modern browser these days enables applications like Google Docs to rival the performance of desktop software and compete favorably with MSOffice.
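
The same point is easy to demonstrate with Python, another interpreted language from the list below: this little web server from the standard library runs unchanged on a Windows notebook, a Mac, a Linux server, or an embedded board (assuming Python 3; the port number is arbitrary):

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve the current directory over HTTP; the interpreter built for each
    # platform handles the platform-dependent details, the script doesn't care.
    server = HTTPServer(('0.0.0.0', 8000), SimpleHTTPRequestHandler)
    print("Serving the current directory at http://localhost:8000 ...")
    server.serve_forever()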

Code written in interpreted languages ships as source, so it is effectively 'open source' -- the source code cannot be hidden from the end users. Software compiled for a hardware or software platform can be 'closed source', distributed only as binary machine code or byte code (.dll, .exe, .jar...) without the source code so it can remain a trade secret.

Distributing open source code isn't an option for a developer who wants to sell their software, so proprietary software like Excel and games are usually distributed as compiled binary or byte code.
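
Python offers a small-scale version of the same trade-off: a script can be byte-compiled and the .pyc shipped without its .py source, though byte code of any flavor can be decompiled, so it's a thinner shield than compiled C++ binaries. A sketch, assuming a script named invoice.py sits in the current directory:

    import py_compile

    # Compile the (hypothetical) invoice.py into byte code for the Python VM;
    # the resulting invoice.pyc can be distributed without the source file.
    pyc_path = py_compile.compile('invoice.py', cfile='invoice.pyc')
    print("byte code written to", pyc_path)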

Today's nearly ubiquitous and very reliable Internet lets FOSS developers offer their SaaS - Software as a Service instead of distributing the software, and this can be very competitive with traditional methods of delivering software. Lots of VARs with decades of experience in their vertical market now provide their SaaS.

These languages work cross-platform for Windows, Mac, and Linux and most can be adapted to mobile platforms like Android & iOS:

  • C++ is an 'industrial strength' compiled language that produces efficient binary code for practically every computer platform in current use over the past decades. It's the latest in a lineage that runs from C to C++. A skilled C programmer can make very efficient code, so it is used where performance is the goal. These are abstract, tedious languages with a steep learning curve. Any lapse of attention or logic by the coder can make for sluggish software, apps that 'freeze up', or BSOD-Blue Screen of Death experiences for the application's users. Windows is written with C and C++, and so are Linux and most other flavors of unix. The compilers for C++ have been optimized to produce machine instructions efficient for RISC and CISC-Intel, so code written in C++ often performs well enough when compiled and deployed on either platform. Applications coded in C++ are usually easy for developers to port from one platform to another. (Note: C++ is not C#, a popular Microsoft language with C-like syntax which depends on the .NET framework to run.)
  • PHP is a popular scripting language built for database programming on the web that works with the web server on Windows, Linux, or IBM i5 midrange and z/OS mainframe platforms. It largely automates the CGI-Common Gateway Interface and facilitates form handling and file uploads. Later versions of PHP also have a CLI-Command Line Interface that makes it popular for administrative scripting on servers.
  • Python is another popular language that runs on Windows, Mac, or Linux servers, and it also has applications on personal computers. Although it wasn't developed specifically for the web, there are several libraries and frameworks, such as Django and Flask, that make Python good for mobile and responsive web development.
  • JavaScript is a browser-side, interpreted language that runs in the browser of any personal computer and most mobile devices able to browse the web. Recent developments in HTML5, CSS3, and JavaScript are replacing Flash, ActiveX, Silverlight, and other proprietary technologies for browser programming. Adobe, for example, has deprecated their Flash product and now provides an excellent IDE for developing in HTML5, CSS3, JavaScript, and jQuery -- it's not free, but is good value.
  • Microsoft's Visual Studio is an IDE-Integrated Development Environment that supports several languages including Visual Basic and C#, plus web frameworks like ASP and its Model View Controller. Objects and scripts compile to 'byte code', in .dll files, for the .NET Framework to convert to binary machine code for the platform at hand and present to the CPU. This is not an open source environment; it allows shipping obfuscated byte code to run customers' applications so they can't see the source code for the apps. Visual Studio includes GUI tools for developing web applications, web-services for B2B EDI-Electronic Data Interchange, and desktop applications for business and enterprise. Microsoft's Mono Project, a decade-long reach into the FOSS community, is gaining momentum to run Visual Studio-developed applications on a range of computers from portable through servers. Skills with these tools are very valuable. Recently, Microsoft released the Community Version of Visual Studio (Don't get it for INFO350 or 400-level programming!) and has integrated the Unity environment for gaming and mobile apps.
  • Java skills are in demand. The programming language is Java, then there are NetBeans and Enterprise NetBeans to make a powerful IDE with GUI tools for form layout for desktop and web-deployed applications. Java classes and objects are compiled into byte code that will run anywhere the JRE-Java Runtime Environment and JVM-Java Virtual Machine have been adapted, which is pretty much everywhere. Sun/Oracle provides NetBeans for free and it is an excellent way to introduce yourself to these valuable tools. Enterprise NetBeans is not free, but it is well supported by Oracle engineers and can be used to build extremely scalable applications from server through midrange and mainframe. Sun midrange and mainframe machines run Java the best; it's a bandwidth hog deployed anywhere else, although there's loads of it running on IBM mainframes that can pitch JVM apps into the mix with lots of other 'virtual machines'.
  • WebSphere is IBM's cross-platform IDE. It is based on the open-source IDE Eclipse, and does what Visual Studio and NetBeans do for desktop and web forms. Plus, WebSphere supports development in IBM's proprietary midrange and mainframe programming languages and databases: RPG, COBOL, DB/2, IMS, and others that provide support for any of IBM's platforms since the '70s. IBM provides WebSphere Application Server for their customers to deploy applications on Linux, AIX, i5, zOS, and Windows platforms. One strength of IBM's legacy midrange and mainframe systems today is the ease of integrating these large platforms with Windows and Mac desktops, Linux servers or native PHP, the Internet, and browsers on the World Wide Web. WebSphere lets developers see all this within a powerful GUI IDE.

TIOBE provides this Index for August 2017, ranking programming languages over the years. It's important to be able to demonstrate some familiarity with a couple or a few languages if network management or application development are in your plans. With so many languages, nobody will be impressed if you only know one. Include code in your professional portfolio!

The Platform's CPU - Central Processing Unit

The CPU, aka 'the chip', is often referred to as the brain of a computer. It takes a lot more components to make a computer that is useful. The other components may be mounted on a circuit board along with the CPU, as in a notebook computer, tablet or phone. Or, in a desktop or server the other components may be in the same chassis. Another option these days is a SoC-System on a Chip that puts the other essential components on the same chip with the CPU, making a very small computer that may be embedded in a phone or other small appliance.

A Little CPU History

The CPUs involved in computers for personal use, business, and enterprise today all derive from the computer technology of the late 1940s, just after Mauchly and Eckert rolled out the war machine ENIAC, which was the first entirely Electronic Numerical Integrator And Computer. Prior data processing machines since the 1890s were electro-mechanical, clockwork tabulators that processed data on punched cards using relays and other mechanical and electrical components. ENIAC was one of the first purely electronic digital computers that operated thousands of times faster than tabulators, and had a richer instruction set than the venerable old Hollerith, IBM, and Siemens machines it out-performed. But, it inherited their decimal clockwork, so it was not a binary machine as we use today. The programs that ENIAC ran were a combination of Plugboards with some elements of RAM and register-based machine code similar to modern assembly code.

With lessons learned from building and operating this ENIAC beast, Mauchly and Eckert continued on to engineer one of the next machines in this first generation, EDVAC. This machine and a few notable others at the time used _binary_ circuits instead of the _decimal_ circuits that emulated old clockwork. Binary circuits and memories are much easier to build than decimal, hastening commercial applications for these early mainframe computers and those that came to the market in the 1960s.

Nearby, at Princeton, John von Neumann championed and documented the efforts of Mauchly and Eckert along with other advances in the field. Soon this technology was termed von Neumann Architecture. Of several architectures being pushed at the time, von Neumann's was accepted as the best, was manufactured and commercialized through the 1950s, and continues in use today in practically all of our computers. It beat the Harvard Architecture and other designs for computing machinery, which gained little commercial acceptance and died on the vine.

[[[Google on von neumann machine schematic to see an assortment of schematics]]]

A CPU built with von Neumann's architecture has these components and features:

  • Clock - at each clock tick an instruction is fetched by the ECU and executed. Early computers ticked off hundreds or thousands of instructions per second, today's tick off billions.
  • Input Unit - gets data input from a keyboard, network, disk or other input devices and delivers it to the CPU
  • ALU - Arithmetic and Logic Unit - this unit is a powerful calculator that can do math, calculus, operations on character data, and make comparisons like equal, not equal, smaller, or larger
  • Registers - are a limited number of storage locations in the CPU where instructions are placed and data is manipulated
  • RAM - Random Access Memory - fast storage attached to the CPU where data may be read or written directly
  • ECU - Executive Control Unit - directs operation of the CPU, moves data from RAM to registers, engages the ALU and moves results back to RAM
  • Output Unit - moves data processed by the CPU to output devices like the monitor, network, speakers, or disk
  • CPU and Components are connected on a bus
  • Stored Program Concept - data and instructions are binary and are stored in the same RAM
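
To see how those pieces work together, here's a toy sketch in Python of the fetch-execute cycle -- a made-up four-instruction machine, not any real CPU -- where instructions and data share the same RAM (the Stored Program Concept), a register holds the work in progress, and the control logic plays the part of the ECU:

    # Toy von Neumann machine: instructions and data live in the same RAM.
    # Invented instructions: ('LOAD', addr), ('ADD', addr), ('STORE', addr), ('HALT',)
    ram = [
        ('LOAD', 5),    # 0: copy RAM[5] into the accumulator register
        ('ADD', 6),     # 1: the ALU adds RAM[6] to the accumulator
        ('STORE', 7),   # 2: copy the accumulator back to RAM[7]
        ('HALT',),      # 3: stop
        None,           # 4: unused
        40, 2, 0,       # 5-7: data stored right alongside the instructions
    ]

    accumulator = 0     # a register
    pc = 0              # program counter: address of the next instruction

    while True:         # each pass through the loop stands in for a clock tick
        op = ram[pc]    # fetch
        pc += 1
        if op[0] == 'LOAD':
            accumulator = ram[op[1]]
        elif op[0] == 'ADD':
            accumulator = accumulator + ram[op[1]]   # the ALU's job
        elif op[0] == 'STORE':
            ram[op[1]] = accumulator
        elif op[0] == 'HALT':
            break

    print("RAM[7] =", ram[7])   # 40 + 2 = 42
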
[[[Need graphics to show progression from circuit boards to IC to VLSI]]]

Four Generations of CPUs

This topic is confused by CPU manufacturers, like Intel, that put labels like Core i5 or Core i7 on recent examples of their product line. These numbers roughly map the development of Intel CPUs from the 8086 through Core i7. But they are all '4th generation machines'.

CPUs for personal and business use today are 4th Generation 'von Neumann machines' built on this basic architecture from the 1940s. Technology for von Neumann machines has progressed through four generations of technology:

  • 1st Generation computers arrived in the late 1940s and used vacuum tubes for the CPU and memory. These computers were the size of a warehouse with a CPU the size of a room. They were notoriously unreliable, used lots of power, and generated lots of heat. ENIAC, regarded as the first electronic computer, was a war machine used to calculate ballistics tables for battleships and ammunition.
  • 2nd Generation computers arrived in the '50s and used discrete transistors the size of a finger tip, soldered onto circuit boards. A 2nd Generation CPU fit in a large chassis and the computer filled a large room. 'Solid-state' components were much more reliable than fragile vacuum tubes. These computers were suited for 'batch processing' of data on punched cards and tape.
  • 3rd Generation computers arrived in the 1960s and used ICs-Integrated Circuits. ICs were built with photographic and chemical processes that integrated transistors with CPU and component circuits, a great improvement over transistors soldered to circuit boards. A 3rd Generation CPU fit on a circuit board and a whole computer fit in a large chassis.
  • 4th Generation computers have featured VLSI-Very Large Scale Integration since the late '70s. Further advances in chip manufacturing allow combining several other components on the die with the CPU, leading to personal computers that sit on a desk or lap, and others that we can carry around in a pocket. 4th Generation midrange and mainframe computers fit in a relatively small chassis the size of a filing cabinet. A 4th Generation SoC-System on a Chip places a whole computer on a chip the size of a button, poised to power the IoT-Internet of Things...

Although the materials have changed and circuits and components have been miniaturized to be incredibly small, the overall architecture for CPUs has been the same for all these decades.

A 5th Generation hasn't happened quite yet. There may be some more discoveries in materials or manufacturing techniques that will qualify as a 5th Generation. Or, we may see a whole 'nother architecture emerge from computer science in this new millennium. Quantum Computing or IBM and DARPA's SyNAPSE technology might transcend von Neumann's CPU architecture and may become successful for commercial and scientific use in the years to come.

[[[Quiz, CPU Components, Techs for 1st thru 4th generations]]]

Modern CPU Architectures -- RISC & CISC

From hundreds of CPU designs from the '40s through the '70s, two variations of von Neumann's architecture have emerged in today's CPUs. Both are important for modern computing platforms: RISC - Reduced Instruction Set Computers and CISC - Complex Instruction Set Computers. These two technologies make up nearly 100% of CPUs in use today.

The CISC line derives from the Intel 8086 that shipped with the first IBM PCs in 1981. RISC derives from the Motorola 68000 in Apples from the late 70's. (Apple used RISC until about 2005, starting with Motorola's '68K' and developing through IBM's Power before switching to Intel.)

We'll look at these more closely in a later topic, but here's what's important for the hardware platform topic at hand:

RISC - Reduced Instruction Set Computers

RISC chips have prevailed in sheer numbers since the '70s because they have been embedded in appliances, radios, automobiles, and almost any device that uses a computer. RISC chips descend from Motorola's 68000 chip, aka 68K, which was a relatively powerful CPU at the time. Motorola 68000 chips ran personal computers by Apple, Commodore, Radio Shack, Atari and others in the several years before IBM released their Intel powered PC in 1981.

RISC CPUs were used extensively in the mid-range computers, aka mini-computers, of the '70s through today. Dozens of manufacturers, called OEMs-Original Equipment Manufacturers, purchased Motorola's RISC CPUs, mainboards and other components to be sold with their operating system and brand.

Few companies advertised or touted their RISC processors, only their brand. The ordinary customer or computer user was unaware of the manufacturer of the RISC CPUs around them.

As the IBM PC came on the market it was 'cloned' by several other manufacturers, and so were the Intel 8086 and 8088 CPUs. AMD-Advanced Micro Devices, Zenith, ITT, DFI, and a few others sold CPUs compatible with Intel's product line. Of these, AMD continues to thrive in the PC market with unit prices a bit lower than Intel's, and sometimes it offers a better-performing option.

Intel's enthusiastic marketing along with an explosion of demand for PCs and clones led the market to believe that CISC was the predominant architecture and that others were not as good!

RISC Low End - Embedded Computers

Low and mid-performance RISC chips are sometimes called 'Motorola' by oldsters because they inherit the Motorola 68K instruction set. Motorola abruptly abandoned plans to build a factory West of River City, and sold their venerable product line to FreeScale Semiconductor in about 2004. FreeScale continued to advance their RISC line to higher clock speeds, multi-cores, and 64-bit while continuing to service the market for embedded CPUs in automobiles, and all kinds of appliances and portable devices. Their catalog lists CPUs for less than a dollar thru a couple hundred. These chips are used where performance isn't an issue and an 8 or 16 bit CPU is all that's needed. In 2017 Freescale, NXP, and Qualcomm provide most of these low-performance RISC chips.

Today the family of CPUs for our mobile devices is generally ARM - Advanced RISC Machine. ARM Holdings is a UK-based company that owns the intellectual property for this class of relatively low-powered CPUs. It derives from the Acorn CPU that was somewhat successful outside of the US market, so many people refer to ARM as 'Acorn RISC Machine'.

RISC Mid-range - Mobile Devices, Phones & Tablets

RISC, as ARM, has exploded into dominance in the markets for small computers, most of our smartphones, and tablets. Higher-end RISC CPUs are used in midrange, mainframe, and supercomputers. There are more RISC chips added to the cell and IT infrastructure since smartphones arrived on the market than there ever have been CISC. Low power consumption and low heat generation make RISC the ideal architecture for small, battery-powered devices and also for big machines with lots of CPUs. Today, RISC is absent or rare in notebooks, desktops, workstations, and small server-class computers, but that is changing...

RISC chips are manufactured by a consortium of companies. The high end of RISC is dominated by IBM's Power chips which are used today in server-class, midrange, and supercomputers built by IBM and other manufacturers. Surprisingly, Sony PlayStations used IBM's Power technology in the form of the 8-core, 64-bit Cell chip, and Apple Macs used the related PowerPC line, until Sony and Apple adopted Intel a few years back.

Since 2014, IBM's Power line of CPUs is manufactured by GlobalFoundries following a deal where IBM paid them $1.5 Billion to take over manufacturing of Power chips from IBM's Rochester, MN factory and move it to their semiconductor factories in New York. IBM continues to develop this line which includes 64-core models in production and has prototypes with 1,000+ cores. RISC's low power consumption and heat dissipation favors multi-core technology and IBM has been the leader in high-performance RISC.

Recently, IBM's Power technology has seen renewed interest for cloud computing and massive virtualization. Google and RackSpace are using IBM's Power 9 processors, discussed in IBM Power Chips Hit the Big Time. Where running Windows isn't a requirement, it's easy to demonstrate that RISC delivers excellent performance and lower cost of operations. We can expect to see more RISC and Power in servers. This is likely a temporary development...

In 2017, IBM's Power series' dominance in high-performance RISC is impacted by ARM64 technology licensed through ARM Holdings. Where IBM's Power chips cost a couple or several hundred dollars depending on how many cores, we're expecting the new ARM64 chips to be a fraction of that. An Intel with 16 dual-threaded cores is about $1,700 in 2017, their Core i9 and X technologies are state of the art, wrangling with AMD's Ryzen and EPYC line in this high-performance market. We expect ARM64 to shake up the multi-core CISC market, too.

Sun/Oracle's Sparc & UltraSPARC are high-end RISC chips used across Sun's product line of workstations, midrange, and mainframe computers. Sparc CPUs have not been used much outside of Sun computers, but they crunch lots of very important numbers for enterprises and governments running Oracle and other ERP software on Solaris.

ARM has been primarily a 32-bit environment, limiting memory and other data structures to 4 billion addresses, which made these CPUs unsuited for enterprise-scale computing, desktops, or notebooks. New ARM64 technology lets this new class of hardware work at enterprise scale at a fraction of the cost and may push CISC out of the desktop and notebook market.
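
The '4 billion addresses' is just the arithmetic of a 32-bit address; widening the address to 64 bits raises the ceiling astronomically. A quick check:

    # Address space reachable with 32-bit vs. 64-bit addresses
    print(2 ** 32)   # 4,294,967,296 -- about 4 billion, hence the 4 GB RAM limit
    print(2 ** 64)   # 18,446,744,073,709,551,616 -- effectively unlimited for RAM today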

ARM CPUs made up the middle of the RISC market for nearly a decade. These 32-bit single thru quad-core RISC chips are used for smartphones and tablets. They are manufactured for a very competitive market by several companies like TSMC-Taiwan Semiconductor, QualComm, and Foxconn who build components for 'fabless' companies like Apple and Microsoft who lack chip-building and hardware fabrication facilities of their own. TI-Texas Instruments is notable for being a US manufacturer in the top ten worldwide. (Richmond was once a haven for semiconductor manufacturing!) These companies license the technology from ARM Holdings and manufacture the chips. With much of the manufacturing in China, there are lots of conflicts with intellectual property and patent rights...

In late 2016, UK-based ARM Holdings was acquired for $30+ Billion by the Japanese conglomerate SoftBank who is poised to capitalize on the surge of RISC in the marketplace for personal devices and the IoT - Internet of Things. ARM Holdings doesn't build anything but prototypes, but they own key patents for higher-performance ARM CPUs. Recently, renewed interest in RISC for server-class applications will result in more RISC in this market, which has been dominated by CISC - Intel & AMD in recent years. Facebook and Google have used generic Intel/AMD machines in their data centers since their startup -- In 2016 they both announced a move to RISC...

ARM Holdings' recent advances with 64-bit RISC, traditionally built by IBM, will attract new manufacturers and competition for high-end RISC for servers, tablets, and phones. One recently announced 64-bit RISC SoC - System on a Chip has all the processors needed for Hyper Converged Architecture built onto one chip -- multi-core computer, RAM, cache, Ethernet for networking, and Ethernet fabric for storage. These promise to reduce the cost for 'a generic server' as deployed by Facebook, AWS, or Google from a few hundred dollars to less than a hundred. Mid-powered RISC SoCs for mobile devices have already reduced the cost of building these smaller devices and it appears this next wave of ARM64 may do the same for servers.

Recently, smaller 64-bit versions of ARM have emerged that will be important to help our smartphones and tablets become better at multi-tasking and providing a truly windowing interface rather than a list of 'recent apps'. The ability of 64-bit CPUs to use more than 4 Gigabytes of RAM allows these hand-held platforms to run applications that used to require a notebook or desktop computer.

At the low end of the range of RISC CPUs, we find chips built for decades to be embedded in appliances, automobiles, and many machines and devices. These are mostly built by Freescale Semiconductor, who spun off the venerable 68K line from Motorola in about 2004, allowing Motorola to focus on its core of radio and phones. In 2016, Freescale merged with NXP, and is expected to continue building CPUs for embedded applications. Some of these are 8-bit or 16-bit CPUs that cost a few dollars or less but are hardened for industrial or automotive applications. The modern CAN-Controller Area Network bus favors embedded computers to control components vs. old switches and electronic components, so there is steady demand for this product line, which also fits into the IoT.

In years way past, electric windows, mirrors, and seats were a luxury -- now they are the least expensive way to open a window or adjust a mirror or a seat.

In 2017, Freescale/NXP was acquired by QualComm and it remains to be seen how this affects fierce competition for these small CPUs.

RISC is an open technology compared to CISC. ARM Holdings, IBM, Motorola, and others with intellectual property for RISC are not as stingy as the proprietors of CISC technology and make it relatively inexpensive for a semiconductor company to tool up and produce RISC chips. Even with the Freescale/NXP/QualComm big gorilla, we expect this market will remain very competitive.

Direct costs for manufacturing a prior generation smartphone with a modest graphics display in 2016 are something like $10 or $12 per phone. Some Single Board Computers without cellular tech but equipped for WiFi and/or Bluetooth cost less than $5 to make and several retail for $10 or more. Using FOSS software components, these little computers can be a fine web server or client, or both, with interfaces from buttons through Alexa, Siri, or OK Google including video, voice response, or a barcode scanner. This is taking us to stuff like this swarm and the IoT with 340 undecillion, practically limitless, unique addresses available for the IANA and RIRs to dole out.

CISC - Complex Instruction Set Computers

CISC chips predominate in notebook, desktop, and workstation/server/gamer platforms. Most CISC chips are descended from the Intel 8086 CPUs that powered IBM's original PC and the multitude of 'PC Clones' that IBM licensed at very favorable rates. This cloning helped to standardize the ISA, EISA, and PCI busses and components, and drove the prices down from about $3,000 for an original IBM PC in 1981 to several hundred dollars for a desktop system a few years later.

The CISC family is generally referred to as 'x86'. Where it's important, x86-64 or x64 refers to 64-bit x86 technology which is compatible with 32-bit x86. It's remarkable that the x86 family of CPUs has maintained backward compatibility as it has grown from the original 16-bit 8086 introduced in the late 1970's through the 32-bit Pentium in 1993, 64-bit with Pentium 4 in 2000, 64-bit Xeon a couple years later, and into today's 64-bit Core technology in about 2006. This backward compatibility has allowed people and companies to upgrade to faster hardware without ditching their investment in application software.

Most CISC chips today are manufactured by Intel and AMD - Advanced Micro Devices. Several other companies have licensed the technology, including some here in River City, but Intel and AMD account for most CISC chips in the marketplace. Sometimes AMD is considered a cheaper brand than Intel, making less-powerful CPUs. But, AMD has at times made CPUs that are more powerful than Intel's. The two companies play leap-frog with performance and prices for the latest CISC technology. In 2017, AMD's Ryzen and Intel's Kaby Lake are the dueling pair...

Where RISC technology is relatively open, CISC has been relatively proprietary and closed. It is expensive to tool up to build CISC and pay royalties for the intellectual properties held by Intel. There are very few CISC manufacturers compared to the number that build RISC, even with the folding of Freescale/NXP/QualComm.

Until recently, CISC technology has focused on speed and not so much on reducing power consumption and heat. CISC chips have not been easy on batteries and have required cooling systems involving heat sinks and fans or liquid to blow or wick heat away from the CPU.

In 2016, Intel announced its intention to focus on inexpensive, low-powered CISC CPUs to compete with RISC in the burgeoning market for mobile and portable devices. With some success for their Atom line, they are working on their Quark CPUs with products like the Curie SoC - System on a Chip and their Compute Stick, which can run Windows.

In the past decades, Intel has had tremendous success with CPUs that cost a couple or a few hundred dollars each. Now they're setting out to compete in a market where CPUs can cost as low as a few dollars.

RISC vs. CISC, Power Consumption and Dissipation, & Other Considerations

It's hard to answer the question "What's better? CISC or RISC?" The top response from Google today is from Quora.Com. Here's another article explaining The Difference Between CISC and RISC. For a decade or more I've pointed to this article from a CS course at Stanford: RISC vs. CISC.

Oldsters like the instructor might say that the only thing that CISC is better at is _marketing_ and that RISC has always been the better technology and will likely rise again. Prior to ARM and mobile applications, RISC has had very little advertising or marketing. RISC chips in all kinds of applications from embedded through midrange servers have always outnumbered CISC, but few people knew of them. When MicroSoft's maniacal marketeer Bill Gates and DOS/Windows got involved with Intel/CISC the public became very aware of Intel and it quickly took over major market share for 'personal computers' as they became popular for homes and essential for business. As other companies, like DFI-Dew Flower International, Acer, and Zenith began building 'PC Clones' they used Intel CPUs and the 'Intel Inside' stickers were on most desktop machines. As notebook/laptop computers became note-book sized and affordable in the '90s they also used Intel/AMD.

Apple's Mac used high-performance RISC from Motorola, IBM, and Freescale through OS9 and switched to Intel after they released OSX. At that time Apple only held a couple percent of the personal computer market. Although Macs are superior in some ways, or all ways according to their aficionados, they had only a tiny share of the personal computer market. Their workstations were on the desks of graphics artists, video editors, publishers, and power users and these workstation Macs, IMHO, are/were excellent machines that sold for a premium price. Many experts pointed to these Macs and PlayStations as strong evidence of RISC superiority for most classes in the range of computers.

In the '80s, when clock speeds were in the MegaHertz range, it was easy to demonstrate that RISC was better for database and server-related tasks with no GUI, and CISC may have been better for desktop applications, including the GUI that came along with Windows. The Mac Classic GUI was notably slow compared to the Windows GUI and that was easy to see.

Now that clock speeds are measured in GigaHertz it's harder to point out the performance differences for many applications. RISC appears to be gaining favor again.

Some Intel CPUs are actually built as RISC and use firmware, 'microcode', to emulate the Intel instruction set so they'll be compatible with 'x86' applications.

Higher-end CISC and RISC processors have been somewhat hybridized in recent years for better performance. Intel CPUs have added RISC-like features to their instruction sets (Xeon) and some RISC CPUs can process complex instructions. Some people say the differences between RISC and CISC don't matter much any more. These are exciting times for RISC vs. CISC where developments in both camps in 2016 and 2017 are beginning to settle into the marketplace.

SGI-Silicon Graphics' powerful CPU, known as the geometry engine, is a mostly RISC CPU with powerful CISC features that can do calculus, trigonometry, and manipulate multi-dimensional arrays very well. This made SGI ideally suited for 3D graphics and character animation, so they dominated in studios like Pixar, ILM, and Lucasfilm. SGI is also well-suited for statistical analysis, where computations for factor and Fourier analysis benefit since these complicated instructions are part of the CPU's instruction set. The instructor can relate crunches of large datasets that took hours on an ordinary RISC midrange machine that ran in minutes on an SGI.

GPUs are mostly RISC: For some years gamers, engineers, and graphic artists have favored CISC/Intel for the CPU of their Windows or Mac workstations and gaming consoles. These machines' CPUs are super-charged with at least several of the latest Intel or AMD cores to run the gaming, graphics, CAD, engineering, animation and statistical applications that satisfy their passions. The best of these workstations offload graphics processing from the CPU to RISC GPUs-Graphics Processor Units, usually deployed on a PCIe-PCI Express bus, dedicated to presentation of graphics. GPUs have multiple RISC cores and an instruction set that includes VLIW - Very Long Instruction Words engineered to handle parallel and repetitive operations for rendering high definition graphics, complex textures, and increasingly more life-like animations.

Network managers have favored RISC for their servers for decades because of the volume of transactions and number of users they can satisfy with their midrange machines. These machines don't need circuitry dedicated to graphics because they mostly handle input from users or networks, databases and numbers, and leave the presentation to the user's computer.

It can be argued in 2016 that Intel/CISC is better suited for powering personal computers that handle graphics on large, high-definition displays where power consumption and heat are not an issue.

RISC is better suited for smaller devices where power consumption affects battery life, and for servers that handle databases and transactions for large numbers of users and don't handle any graphics.

These observations of Power Dissipation point out a key difference between RISC and CISC. In general, a CISC chip is more powerful and generates a lot of heat. The current generation of CISC chips uses from 30 to 130+ watts and needs big heat-sinks, fans, or liquid cooling systems. RISC chips have slower clock speeds and generate a lot less heat, running from less than a watt to something like 30 watts.

An IBM Supercomputer like BlueGene, RoadRunner, or Watson uses RISC. These machines have 1024 through 8192 cores per chassis, can be built up to 128 chassis, and use lower-powered versions of IBM's Cell Chip. These beasts literally sit in cooling ducts and need something like a quarter acre of air-conditioning compressors outside to cool them while they run. It wouldn't be feasible to build them with CISC, which generates much more heat, because of the cost of cooling.

Clock speed isn't the only factor in how fast data can be processed. In business database applications without a GUI component relatively slow RISC machines, maybe running 1.3 GigaHertz, can outperform faster CISC machines running 2+ GigaHertz.

For many business applications it makes sense to deploy a lot of lower-powered RISC chips or cores to handle a lot users or transactions. Programming for a multi-CPU environment can be very challenging and it was seldom done prior to the mid-'80s. High-end RISC chips have supported SMP - Symmetrical Multi Processing since the 1980s and Intel/CISC since the early '90s. SMP removes most of the burden of programming for multiple CPUs from the developers and largely automates the task.
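
As a small illustration of what SMP-style scheduling now automates, the Python sketch below (Python 3, standard library only) spreads a CPU-bound chore across however many logical CPUs the machine reports, without the programmer assigning work to particular cores:

    from multiprocessing import Pool, cpu_count

    def crunch(n):
        """A stand-in for one CPU-bound chunk of work."""
        return sum(i * i for i in range(n))

    if __name__ == '__main__':
        jobs = [2_000_000] * 16                          # sixteen chunks of work
        with Pool(processes=cpu_count()) as pool:        # one worker per logical CPU
            results = pool.map(crunch, jobs)             # the OS schedules the workers
        print(len(results), "chunks done on", cpu_count(), "logical CPUs")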

In some applications, such as powerful desktops or notebooks, fewer CISC chips do a better job. Intel/CISC didn't bring SMP technology to the marketplace until the mid-90s, lagging some years behind IBM's Power technology. But, Intel's multi-core technology has matured quickly and server-class machines now have multi-socket mainboards with multi-core CPUs: 24 cores per CPU in 2016 and 32 in 2017. An Intel-based mainboard can have as many as 4 CPU sockets, so 128 cores is today's practical limit, soon to be eclipsed.

We expect the core counts to increase in the next few years. IBM and other RISC manufacturers have blown past 1000 cores in prototypes and are looking for applications!

A common automotive metaphor for CISC vs. RISC goes like this: CISC/Intel is like a Ferrari, RISC is like a school bus. If you want to get one person from point A to point B the fastest, take the Ferrari. If you need to get 50 people from A to B the fastest, use the school bus, not 50 trips in a Ferrari.

Operating Systems

The operating system is the software component of a hardware platform. Operating system components are compiled for a specific CPU. Some software houses make different versions of their operating system for more than one platform. For example, Red Hat Linux is a server operating system available for x86, Power, and IBM z Systems CPUs.

A Little History of Operating Systems

OS - Operating Systems emerged in the 1960's. Prior to that time, computers didn't have operating systems. They ran jobs as 'batches'. We rolled our jobs, on punched cards and tapes, to the computer room on hand-trucks and left them, along with a signed authorization, to be run. A computer operator put the punched cards in a hopper, mounted tapes, loaded a program, put paper in the printer, flipped switches, and pushed buttons to run the job. When the job was complete and all the printing, card-punching, and tapes were done, the operator took down the job, put the cards and tapes back on a hand-truck, then set up the next job and ran it. There was considerable down time between jobs when the computer was idle.

Mainframes Replaced Tabulators & Unit-Record Equipment

IBM's System/360 is an example of these 2nd generation machines. They mostly ran batch jobs using punched cards and magnetic tape. Many of them added magnetic disk storage and IBM's DOS-Disk Operating System although it was prohibitively expensive for many others.

As disk storage became feasible for business computers and jobs required less setup, Operating Systems automated the process. Early mainframe operating systems were 'single tasking' and could run one job at a time without wasting time between them. JCL - Job Control Language was included in batches and computer operators could set up the next job while a job was running, and the computer would run the next immediately when the prior job completed. Where early computers output print directly to printers so we had to wait until all printing was done to take down a job, newer computers with operating systems could 'spool' printed output so that print from one job could continue while other jobs were running.

As cards and tapes were replaced with disk storage and terminals were placed on desks, the 'time-sharing', 'resource-sharing', and 'multi-tasking' features of operating systems were developed so that several jobs, including interactive terminals using the TSO-Time Sharing Option, could run concurrently without much attention from an operator. Data was processed in 'batches' from the 1890s through the 1960s, and moved 'on-line' through the '70s as magnetic disk storage became affordable and operating systems were adapted to handle networks of computer terminals for OLTP - On Line Transaction Processing. The phrases 'batch update and on-line query' and 'on-line data entry with batch controls' were commonly used as computers replaced unit-record equipment and interactive terminals replaced punched cards for many updates and paper for many reports.

Desktop Computers Exploited IC and VLSI

The Altair 8800 is an example of the first small computers on the market. These only ran one program at a time and had only the most rudimentary operating system. We loaded the program from punched tape and pressed the Run button.

Small computers for business, word-processing, and hobby use that had keyboards and monitors came into the marketplace in the 1970s, but were very limited by their 8-bit CPUs. CP/M - Control Program for Microcomputers was popular for these machines and ran on dozens of manufacturers' computers built around the Intel 8080, Zilog Z80, and compatible 8-bit CPUs. Atari and some other early gaming computers had their own operating systems. Most of these computers used cassette tapes for storage, and floppy disks were added at the end of the '70s.

Microsoft's Bill Gates secured the contract for IBM's PC-DOS and also released the nearly identical MS-DOS for the 'PC Clones' that exploded into the marketplace -- DOS was modeled on CP/M and supported the 16-bit Intel 8088 & 8086 CPUs in these early PCs. Microsoft's DOS was joined by MS Windows in the mid-'80s and grew to support later Intel and other compatible CPUs.

Today's computers of many types are multi-tasking, including our phones and tablets, allowing a user to run and switch between several or dozens of applications almost seamlessly. Large servers and server farms are massively multi-tasking and can run applications for hundreds through millions of users simultaneously, giving each user sub-second response time.

Today's Operating Systems

Wikipedia maintains a huge List of Operating Systems that includes operating systems from the way past and current times. In 2016 there are about a dozen operating systems likely to be encountered in personal, business, enterprise, and government computer platforms. Many of them are descendants of the survivors of a great shakeout of platforms in the 1980s that followed anti-monopoly legislation and customer demands for 'open systems'. In the decades prior, most companies were 'locked into' their hardware platform, unable to bear the expense of redeveloping their application software and moving their data to another manufacturer's platform. Then standards emerged for unix: ANSI, System V (System Five), and POSIX prevailed, and the surviving manufacturers were forced to compete in new ways. Since their customers were no longer locked into their platforms, many of them were able to move their valuable applications and data to whichever platform made sense in their area.

Today, unix and Windows are examples of this 'open systems' concept, where either OS may be deployed on hardware built by any of several manufacturers. Some unix is FOSS, and some is proprietary. Windows is proprietary.

A few proprietary platforms compete with these today, notably IBM's i5 and z/OS along with the 'not free' unices: HP's HP/UX, Sun's Solaris, and SGI's IRIX. Together these proprietary systems run a large share of enterprise and government systems.

Unix Pedigree - Timeline

Here is a Unix Pedigree showing how unix & linux have evolved over the decades since it was let loose as UNIX from Bell Labs. Developed at Bell Labs as the telephone system was going digital, Unix was adopted by dozens and dozens of computer companies, most of which have gone defunct, leaving us with several 'unices' today. (We say 'unices', 'unix', or 'Unix' since UNIX is a registered trademark, disputed in recent years, and it's impolite to use it generically when discussing the family of unix operating systems.)

Most unices today are some 'flavor' of Linux. The Finnish graduate student Linus Torvalds gifted the 'kernel' of an operating system he developed as a student to the open source community in the early '90s, and it caught on through the decade to become an option for commercial operating systems by the late '90s. He also gifted 'Git', the version control system popularized by GitHub.com, which has become the world's most-used tool for source code and configuration management.

Keep in mind that Torvalds mostly controls the 'Linux Kernel' and it takes dozens or hundreds of other components to make a usable OS for a personal computer or server. The kernel is only the 'heart' of the platform; it requires lots of other software components to make the platform do any valuable work. There are literally hundreds of Linux Distributions that package Torvalds' kernel along with the other components for this or that platform or application. Some Linux distributions derive from the distributions with commercial support: Red Hat, SUSE, and Ubuntu. Slackware and Debian are community developed and maintained by people who make their livings working with and supporting the code. The open source kernel and components are tweaked into hundreds of 'minor' distributions. Here is an Infographic Explaining Linux Distros.

For example: CentOS is a community-supported rebuild of Red Hat's enterprise distribution for servers, desktops, and notebooks, and Fedora is the fast-moving community project that feeds new technology into it. Red Hat Enterprise Linux and CentOS are very stable products that promise LTS - Long Term Support, with new major releases every few years and 'patches' as they arise. Network administrators like the stability, don't want their systems to ever be down, and don't want to be changing OS software since it's seldom trivial. Fedora is 'bleeding edge', always trying new things, with new releases every few months independent of RHEL, and some releases break applications written for the prior version. Fedora users like trying new stuff and enjoy the challenge of making apps work on the new distro, even if it takes a few days to make it happen. Red Hat packages and provides support for RHEL - Red Hat Enterprise Linux for $79 through about $20,000 per year, depending on the level of support, for desktops through mainframes. (Check out RedHat.com near the bottom of the page: Purchase, Red Hat Store.) They don't 'sell Linux', they support it and use it for their cloud and other enterprise-level offerings. When Red Hat releases a new version of RHEL, including the open source code, the CentOS team rebuilds it into their distribution and provides it for free a few weeks later, with support from 'the community'.
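As a concrete illustration of the kernel/distribution split, here is a minimal sketch that assumes a Linux machine (the program and file name are just for illustration): uname(2) reports the kernel that Torvalds' team maintains, while /etc/os-release is a plain-text file supplied by the distribution (Red Hat, CentOS, Fedora, Debian, ...) that packages everything around that kernel. Most modern distributions provide the file, but it's a distro convention, not part of the kernel.

    /* kernel_vs_distro.c - the kernel and the distribution are separate layers. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;

        /* The kernel identifies itself through the uname(2) system call. */
        if (uname(&u) == 0)
            printf("Kernel:       %s %s on %s\n", u.sysname, u.release, u.machine);

        /* The distribution identifies itself in a file it installs, outside the kernel. */
        FILE *f = fopen("/etc/os-release", "r");
        if (f != NULL) {
            char line[256];
            while (fgets(line, sizeof line, f) != NULL) {
                if (strncmp(line, "PRETTY_NAME=", 12) == 0) {
                    printf("Distribution: %s", line + 12);
                    break;
                }
            }
            fclose(f);
        }
        return 0;
    }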

Torvalds isn't the only originator of free OS software:

  • The University of California at Berkeley gifted us with BSD - the Berkeley Software Distribution. Berkeley's computer science department for years licensed 'Berkeley Unix' to companies like DEC and Sun for use on their hardware. The open source FreeBSD, OpenBSD, and NetBSD projects descend from it. BSD code has found its way into Android, iOS, Mac OS X, and other unices.
  • GNU-Gnu's Not Unix provides GIMP-Gnu Image Manipulation Program as FOSS and a host of unix-based software such as their venerable editor and developer tool Emacs. Originally for unix platforms, GNU also has lots of software for Windows.
  • Sun/Oracle provides free versions of Solaris and NetBeans. The enterprise versions of these products are not free, but the free versions let students and developers use and learn them.
  • Blender3D, a powerful 3D animation tool that was freely available for personal use and had good commercial support, was handed over to the FOSS community after its primary developer, Ton Roosendaal, raised roughly $100,000 on-line to purchase the source code from his employer and turn it over to the community for further development and enhancement. Blender3D works for 3D printing with MakerBot, and mobile gaming with the Unity toolset and Visual Studio's Community version.
  • Recently, Microsoft released a free Community edition of Visual Studio and has open-sourced much of .NET through the .NET Core and Mono projects.

What's not free? Application software and proprietary developers' tools like Visual Studio Enterprise or enterprise NetBeans, plus Google Docs, accounting, payroll, &c, &c... Many SaaS - Software as a Service companies use FOSS for their servers and web apps, and provide their software as a service. Google is $5 per month for gmail with a company domain and docs... So is Office 365...

To get your FREE Linux OS, check out mirrors.vcu.edu. There you can find several current Linux distributions, including the 'Live' versions of Ubuntu and Fedora. The live versions let you see how the OS will run without installing it on your computer. On-campus, with wired ethernet if you have it, the DVDs download in a few minutes, maybe longer off-campus. You can burn the ISOs to CD, DVD, or a USB drive and use them to build a server, real or virtual. Installing Linux on any machine lying around is much more valuable experience than installing it on none, and even more if you can actually make the networking, web, and mail servers work.

Windows Development Timeline

Windows' development has been less chaotic since it's built in Microsoft's facilities by Microsoft engineers, and doesn't have the equivalent of 'open source developers'. Wikipedia's List of Microsoft Operating Systems includes a relatively tame time-line starting with MS-DOS (a more open version of IBM's PC-DOS) through Windows, NT, and Windows Server.

To get your FREE Microsoft Software, use the MSDNAA, where they make thousands of $$$ of Microsoft Software available for free. Current desktop and server operating systems plus SQLServer, SharePoint, Project and other applications are also available and can get you on your way toward Microsoft certifications. Check with Technology Services on the 2nd Floor for details...

Operating Systems in Common Use

  • QNX is a unix-like, real-time operating system for embedded computers that run our automobiles, cellphones, and other appliances.
  • Several flavors of unix and Linux operate computers embedded in appliances of all kinds.
  • Android is Google's open source OS for smartphones, tablets, and wearable devices by several manufacturers
  • iOS is Apple's unix for their iPhones, iPads, and iPods. It is based on BSD unix.
  • Windows has XP through 10 for PCs and notebooks, plus several releases of Windows Server from 2000 through 2016 that follow Windows NT.
  • OS X is Apple's unix graphical operating system for notebook and desktop systems, OS X Server runs Apple workstations and server-class machines
  • Linux and other open source unix-like operating systems power a wide range of servers from server-class through midrange and supercomputers
  • HP/UX is Hewlett Packard's proprietary unix for their line of midrange servers. HP is the leader of the midrange computer market, having emerged from the great shakeout intact with a commodity-priced product line that was hard to compete with.
  • AIX is IBM's proprietary unix for their pSeries midrange unix platforms. While some of their customers continue to use AIX, the pSeries will also run Linux and IBM is pleased to ship one either way.
  • i5/OS or IBM i is IBM's proprietary mid-range OS, descended from their System/3x mid-range machines and the AS/400's OS/400. OS/400 and i5 continue to hold a large share of the worldwide business and enterprise computing market. Currently, IBM's i5 OS runs on their pSeries hardware, either natively or virtually.
  • Solaris is a proprietary unix operating system optimized for the proprietary Sparc CPU. It runs across Sun/Oracle's product line from thin clients through server-class, midrange, and mainframe machines. In September 2017 Oracle laid off most of its Solaris and Sparc developers, leaving the platform's future in doubt.
  • z/OS is IBM's proprietary mainframe OS, descended from the OS/360 line of mainframe operating systems IBM has shipped since the 1960s, by way of MVS and OS/390. z/OS contains entirely modern features that help IBM's zSeries mainframes integrate with today's complex environments, and it maintains complete backward compatibility with their customers' core databases and applications developed in and since the 1970's.

FOSS - Free & Open Source Software

    FOSS continues to be an important feature of IT Infrastructure these days. In the 1990's, groups of FOSS developers released the Linux kernel, distributions like Slackware and Red Hat, free unices like OpenBSD, GNU's Hurd, Apache, SSL libraries, and lots of other OS components for web and mail servers and GUI and touch interfaces for personal and mobile devices. Much of FOSS continues to be well-maintained and supported today and is regarded as stable and safe by many IT managers.

    Free availability of reliable operating systems and application software has driven down the costs of proprietary operating systems and applications. At Y2K, as FOSS was becoming an option for managers, a CAL - Client Access License for Windows NT or Server or SCO UNIX was about $179 per user. Now, a CAL is more like $29. Proprietary operating systems like IBM's i5 or HP/UX offer some advantages, have a large share of business and enterprise servers, and are here to stay at reasonable prices.

    Open Source means that anybody who wants to can modify the source code of the operating or application system and recompile or interpret it on their platform.

    Closed Source means that the developer distributes their application as binary machine code or byte code, unreadable by people, only usable on the hardware or software platform the developer intended, perhaps subject to licensing fees.

    Examples of closed source are MS Windows, Word, and almost any software that's purchased or licensed. The customer only gets the compiled .exe, .dll, .jar, .class or other binary or byte code. They don't get the human-readable source code for their application, only the binary or byte code for the hardware or software platform.

    Open Source examples are Red Hat Linux, LibreOffice, GIMP, WordPress, OpenSSL, Apache, Sendmail, and Postfix, and thousands of open source projects that are available at github and other repositories. These softwares are distributed with their source code so, of course, there's little money to be made selling licenses. People and companies don't profit from software sales; they profit by supporting the software and using it to develop and run application software. Or, they wrap their proprietary components with an open source operating environment and charge a fee to distribute and support the package.

    Eric Raymond's classic and evolving essay The Cathedral and the Bazaar explains FOSS.

    FOSS developers believe that software should be freely available and that people should make their livings by _supporting_ the software rather than selling it. They believe that software developed in an open environment, the bazaar, seen by hundreds or thousands of eyes, is more likely to be more reliable and more secure than software developed in a lab like Microsoft, a cathedral. If the stuff put in the bazaar is any good, it'll gain supporters and they'll use and continue to support it.

    This revolution led to 'The Software Wars' and the effect on the marketplace has been dramatic. In 1999, proprietary unices or Windows NT cost about $175 per CAL - Client Access License. The operating system for a RISC or CISC machine used as a primary domain controller for 100 employees cost $15,000 or more. In 2017, Windows Server is not yet free except to students, but the price has dropped to about $25 per CAL and the server has matured nicely after a difficult childhood and adolescence.

    One reason Microsoft has been slow to gain traction in the mobile device field is that they insisted, until about 2016, on about $79 per user for their mobile OS -- this didn't play well against Android, which Google has been practically giving away for free. It's hard for a hardware manufacturer like Samsung, Nokia, or Lenovo to provide a smartphone or tablet for a couple hundred dollars when the OS developer wants $79 per unit, so Windows was avoided by mobile hardware and application developers. Who has a Windows phone? It's rare for anybody to raise their hand to this question...

    The Software Wars: FOSS vs. Microsoft, Proprietary Unices & the Legacy

    • ca. Y2K: softwareWars.gif
    • 2002: softwarewar.jpg
    • 2003: 2003softwarewar.gif
    • 2006: Software Wars 2006
    • 2011: Software Wars 2011

    These diagrams are _not_ about Microsoft dominating every corner of the software marketplace, as they'd like to but haven't done! They _are_ about the influence of FOSS - Free and Open Source Software!

    Over the period the diagrams represent, the price for MS Server has dropped from about $179 per CAL - Client Access License on a primary domain controller to about $29. MS Office Pro sold for decades at something like $500 per copy and is now $5 per month per user with Office 365, competing head-on with Google Docs, which is bundled with gmail for $5 per user per month.

    As FOSS has gained tremendous share of several markets it has helped drive down the cost of proprietary operating systems and application software that compete with it. Microsoft continues to dominate the desktop and notebook markets, which are huge, and they have some of the small server market. These 'battle lines' are on-going, and Microsoft has real products to pitch into the fray. But FOSS-aligned technologies dominate in the markets for platforms both smaller and larger than personal computers. The blue arrows are a very important feature in the series, and so are Gates and Ballmer in Borg headgear, but they haven't won all the battles.

    In 2016, Microsoft greatly reduced the price of SQL Server and has released a version of it to run on Linux. SQL Server has matured nicely and can scale up to run enterprise databases, meets ANSI standards, supports ACID-compliant transaction processing, and can be configured to provide seamless rollover of services in the event of a server fault. (MySQL can't do all this, btw.) Early versions of SQL Server cost upwards of $1,000 with a CAL of about $179 per user, competing with other commercial DBMS. Now, SQL Server is about $200 for the first user and $50 per CAL after the first.

    The Software Wars series appears humorous, with predatory monopolist Bill in Borg headgear at the center of the empire, but these drawings are full of truth about what's in the legacy and what's emerging in operating systems and application software in the years since FOSS came on the scene. In class we looked at the 2000, 2003, and 2011 versions. 2011 - 2015 hasn't seen a lot of changes in the battle lines; it's constant trench warfare, with Windows maturing nicely and FOSS surging at the trenches...

    Placing Microsoft at the center of these diagrams makes it easy to think that Microsoft has won all its battles, but it has not. Microsoft is the undisputed leader only in desktop and notebook operating systems, including gamers and power-users like engineers. Microsoft does not dominate the markets for embedded processors, mobile and tablets, server-class, mid-range, or mainframe. And, since Apple released OS X on Intel, Microsoft has lost something like 30% of the notebook and desktop computer market to Apple, who has grown from a steady 2% of these markets to something like 30% in recent years.

    Apple's recent rise in market share for notebook and desktop computers isn't reflected in the Software Wars series. It should appear prominently as a rival of the Microsoft Empire if updated to 2017!

    The diagrams show a constant struggle by Microsoft to get into every kind of marketplace, including the Server Rooms which continue to be dominated by proprietary and free unix running on Intel, HP RISC, and IBM Power chips.

    In 2017: Microsoft has only about a 50% share of the small server marketplace. Windows NT and Server run about half of the workstation/server class machines, but none of the mid-range and mainframe computers where legacy, unix-based applications and databases go back to the '70s and '80s. Considered as a whole, the legacy providers IBM, HP, and Sun/Oracle continue to dominate this business server marketplace worldwide. Microsoft has a huge 70% of the desktop and notebook market, but had about 95% of it in 2005 and has lost 30+% to Apple in recent years. Microsoft has about zilch in tablets and mobile although they have had attractive products there.

    For our IS majors, this means it makes good sense to learn about Microsoft Windows _and_ Apple's Mac -- including the keyboard shortcuts for both. It makes very good sense to be skilled with Windows on a personal computer, even if you like Mac better yourself. Windows folks probably should cross-train on Macs to be the most useful. iOS, Android, and a couple flavors of unix like Red Hat Enterprise and Debian should all be familiar to anybody heading into IT.

    Becoming familiar with IBM zSeries is a good idea, too! A large percent of Fortune 100 through 1000 use IBM zSeries mainframes or iSeries mid-range. There are always thousands of managers looking to hire application developers and analysts for this environment. This is something you can do for free.

    Watch out for IBM's Master the Mainframe Challenge in the fall. It is an excellent way to learn zOS, COBOL, CICS, and how to integrate Windows, Mac, or Linux with an IBM eServer zSeries mainframe. Getting past the first challenge gets you a Tee-Shirt, getting past the third can get you multiple job offers North of $70K the next day...

    The 'hardware wars' began back in the '80s as anti-trust and other legislation rendered computing into a highly competitive environment, where in the few decades prior computing was highly proprietary and organizations were usually 'locked into' the computing platform they chose because it was too expensive to try to change platforms.

    When Windows NT emerged in 1993, Bill Gates immediately declared that Micro$oft had won the war and that mid-range and mainframes were dead, with a client/server victory certain very soon. We all mourned the passing of the mainframe when IBM renamed it the HESS - High End Super Server division in the mid-90's and spoke of its demise. But by 2010 or so everybody who could migrate from mainframes had done it, and the mainframe market has stabilized. IBM calls them mainframes now, and their mainframe sales have been flat for years, with a recent upturn in zSeries hardware to be used for Linux.

    In 2017 the battles and skirmishes continue, with Microsoft having claimed most of the desktop and notebook marketplace years ago and still holding onto it. In the 20-teens Microsoft's share of personal computers started slipping away, but their share of the workstation/server farm market has recently grown to nearly 50%. Microsoft has zero percent of the mid-range and mainframe market. Microsoft has moved into the hardware market with their Xbox and Surface lines, where they are responsible for the entire machine, and we expect their offerings in the near future to blur the line between 'gaming console' and 'gaming PC'.

    Notes about Software Wars

    Familiarize yourself with the organizations involved in the IT marketplace, their software, and their hardware.

    Here is another take on Software Wars, Software Wars Remastered, with more of a 'personal computing' view. The original, by SittingDuckBE, can be found at Deviant Art.

    Seven Functions of Modern Operating Systems

    Modern operating systems provide seven basic functions: interfaces, device management, file systems, memory management, process control, AAA-Authentication - Authorization - Accounting, and Networking.

    Please be able to name and briefly describe each of the Seven Operating System Functions.
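    As a rough illustration (a sketch, not course material), the short POSIX C program below touches most of the seven functions in a few dozen lines; every call is a request to the operating system, which does the real work behind the scenes. It should compile on Linux or Mac OS X with any C compiler.

        /* os_functions.c - exercising several of the seven OS functions via POSIX calls. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <fcntl.h>
        #include <sys/wait.h>
        #include <sys/socket.h>

        int main(void)
        {
            /* File system + device management: the OS resolves the path,
               allocates blocks, and drives the disk through its driver. */
            int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd >= 0) {
                write(fd, "hello\n", 6);
                close(fd);
            }

            /* Memory management: malloc ultimately gets its pages from the OS. */
            char *buf = malloc(4096);
            if (buf != NULL) {
                strcpy(buf, "scratch space");
                free(buf);
            }

            /* AAA: the OS knows who is running this process. */
            printf("Running as uid %d\n", (int)getuid());

            /* Networking: ask the OS for a TCP socket (not connected to anything here). */
            int s = socket(AF_INET, SOCK_STREAM, 0);
            if (s >= 0)
                close(s);

            /* Process control + interface: fork a child, which writes to the terminal. */
            pid_t pid = fork();
            if (pid == 0) {
                puts("child process says hello");
                _exit(0);
            } else if (pid > 0) {
                wait(NULL);
            }
            return 0;
        }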

    Common Computing Platforms

    There were literally hundreds of computer platforms during the decades from the 1950s through the '80s. Most were built specifically to be incompatible with other manufacturers' platforms. This had the effect of 'locking in' their customers. Without head-on competition and industry standards, computers and their maintenance agreements were very expensive, and customers couldn't afford to move their data and applications to another platform because leaving was even more expensive. The legislative and competitive environment changed in the 1980s and many manufacturers and software houses were not able to compete. Lots and lots of hardware platforms disappeared during the great shakeout of computer companies in the '80s and '90s...

    There are about a dozen computing platforms in common use today and they are competitively priced. Not only are operating systems highly standardized today, some of them are 'free'. In 2000, the cost of a CAL - Client Access License for a unix or Windows server was about $179 per user. This meant that the operating system used as a domain controller for 100 users cost more than $15,000 (100 x $179 is nearly $18,000)! Today, Windows Server is about $29 per CAL. Linux distributors have no per-CAL pricing; instead they charge a modest per-CPU fee depending on the level of support purchased.

    Here are the platforms mostly likely to be encountered today:

    • Embedded Systems have been in all kinds of appliances and machines for decades. Most of these are based on RISC chips with a unix-like OS, or no OS at all, depending on the application. Arduino and Raspberry Pi recently emerged as very inexpensive hardware that can be used with several operating systems to learn this technology and develop prototypes for manufacturing. Costing from about $9 to $99, these 'single board systems' show that some of the CPUs available to developers today cost less than $1 to manufacture. The very low end of this line might be the MSP430 Launchpad, with a whole 512 Bytes of RAM, for less than $5. The higher end of embeddable systems is represented by PC-104, which has been available for decades. Boards like the PC-104 may be combined with a Microsoft or Linux OS to make a sturdy platform to be embedded in anything from a web-camera to a helicopter.
    • Automobiles' ECU - Engine Control Units and other computers that control body components are mostly RISC/QNX platforms. QNX is a unix-like operating system that runs on dozens of CPUs, including Intel.
    • RISC/Android is a popular platform for smartphones, tablets, and notebooks. Google provides Android for free to several manufacturers of these platforms.
    • CISC/Android entered the market in the mid-'10s as Intel and AMD entered the mobile and embedded marketplace with ever-cheaper CPUs. Intel's low-cost, low-power Atom makes it possible for an entire tablet computer to sell for less than Intel's cheapest CPU cost a few years earlier.
    • RISC/iOS is Apple's platform for their smartphones, tablets, and pods.
    • CISC/Windows, commonly called WinTel for 'Windows on Intel', rules on desktop PCs and notebook computers with something like 80% of this dwindling market.
    • WinTel with Windows Server OS and Intel CPU has grown to take about half the market share for server-class machines.
    • Intel/OS X is Apple's desktop and notebook platform. Macs used Power and OS 9 before Apple transitioned to Intel in the mid '00s.
    • Intel/Linux, LinTel, is today's most common small server platform. 'Generic x86' and Linux forms the basic building blocks of Web-Scale IT and other approaches to highly scalable systems built with generic processors.
    • HP's mid-range server line has a few platforms. Their PA-RISC and Intel Itanium CPUs will run either HP/UX or Linux. The Itanium model allows for virtualization to run Windows servers and databases as virtual servers.
    • Power/Linux is also common among IBM's customers' servers. It provides excellent cost/performance, is scalable to overlap with mainframes, and has massive virtualization capabilities.
    • Sun's Sparc/Solaris platforms range from desktop through mainframe. The Internet was pretty much built on Sun's servers and Sun maintains considerable share in the large server market.
    • IBM's proprietary Power/i5 platform covers a range of servers from small server-class machines with a few cores through large mid-range machines with dozens of multi-core CPUs able to give hundreds of thousands of users sub-second response time.
    • IBM's Power/AIX, their proprietary unix platform, runs some of the largest mid-range platforms. Linux is becoming more popular for these and for some years IBM has been shipping more machines running Red Hat or Suse Linux than AIX.
    • IBM's zApp/zOS mainframe platforms have a huge share of the large server market and are involved in 90%+ of Fortune 500 companies. zApp also supports Red Hat, Suse, or Ubuntu linux to provide an excellent platform for massive virtualization or cloud computing. IBM's zApp/zOS sales have been stable for several years with a recent uptick in zApp/Linux.
    • Supercomputer platforms deploy a mixed lot of CPUs including generic RISC, IBM Power, Intel Xeon, and later technologies. Linux derivatives are the most used operating systems for these huge platforms. A supercomputer may run several different operating systems, each modified for their specific tasks in the super environment.

    Businesses, enterprises, and governments usually run on platforms that few get to see or touch except network techs and managers who have access to locked server rooms and network closets. In many cases, the machines that run business are too big to pick up and carry around. Midrange and Mainframe computers by IBM, Sun/Oracle, HP, and other manufacturers are physically large, the size of filing cabinets or refrigerators. These larger machines are involved in thousands and thousands of businesses, enterprises, and governments all over the world.

    The owners of midrange and mainframe systems mitigate the risk of having an un-scalable system by buying into the lower end of the product line and scaling up their platforms to handle incremental or exponential growth of business without changing applications.

    Smaller servers, and small server farms and clusters, are the ordinary platform in SMBs - Small to Medium-sized Businesses, with Microsoft Windows Server recently gaining about half this market traditionally dominated by UNIX and Linux. Owners of these smaller systems can mitigate the risk of un-scalable platforms by buying or building application environments that can easily scale out to handle growth.

    It is important to choose a software house or system architect who can demonstrate success with scalability and can meet the standards of your industry group. Many application softwares for Windows or Mac are excellent and inexpensive for a small business but cannot scale up to handle medium or large businesses.

    Very large, warehouse-sized server farms run Google, Facebook, Amazon Web Services, and other huge application environments where they rival midrange and mainframe computers in capacity and speed.

    Enterprises can compare their IT legacy of midrange computers and mainframes with a host of emerging and recently emerged technologies as they consider the future: Web-Scale IT, Hadoop, and hyperconverged systems like Nutanix approach scalability using small, generic x86/x64 servers to scale out and promise 'five or six nines' of availability, 99.999% or 99.9999%. In recent years these approaches to enterprise computing have all demonstrated the ability to start small and scale out to handle huge loads.
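    To put 'five or six nines' in perspective, here is a quick back-of-the-envelope sketch (just the arithmetic, assuming a 365-day year, not any vendor's claim): each additional nine cuts the allowed downtime by a factor of ten.

        /* nines.c - downtime allowed per year at each level of availability. */
        #include <stdio.h>

        int main(void)
        {
            const double minutes_per_year = 365.0 * 24.0 * 60.0;   /* 525,600 minutes */
            const double availability[] = { 0.999, 0.9999, 0.99999, 0.999999 };
            const char  *label[] = { "three nines", "four nines", "five nines", "six nines" };

            for (int i = 0; i < 4; i++)
                printf("%-12s %.4f%%  ->  about %6.1f minutes of downtime per year\n",
                       label[i], availability[i] * 100.0,
                       (1.0 - availability[i]) * minutes_per_year);
            return 0;
        }

    Five nines works out to a little over five minutes of downtime per year; six nines is about half a minute.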

    These days, mainframes continue to hold the records for 'transaction processing', where lots of transactions must be applied to the same dataset, as in credit-card, stock market, or large enterprise and government applications. Where data can be spread across data centers, like Google or Facebook, web scale technologies obviously work well.

    In many cases, it's easy to demonstrate that midrange and mainframe computers deliver better performance and lower TCO - Total Cost of Ownership than server farms. Most companies that could migrate their applications away from midrange and mainframe computers have done it and these markets have been stable for years. Nearly 100% of Fortune 100 companies run on mainframes, more than 90% of Fortune 500 companies use mainframe and midrange machines, and they wouldn't keep doing it if those machines were more expensive to operate.

    Large Platforms

    The 'platform dependence' issue holds true for every class of computer system, especially larger platforms for business, enterprise, and government. Application software compiled for an IBM Power CPU and i5/OS will not run on an Oracle/Sun Sparc CPU and Solaris OS. Software for either of these will not run on an IBM mainframe's zApp CPU and zOS or Linux operating system.

    For example, many state and federal government systems built for Univac platforms have been running since the 1970s and will not run on IBM. Both IBM's IMS and Univac's DMS databases are almost infinitely scalable and very quick for transaction processing, but they differ in structure and are not compatible. Costs to re-develop these applications and re-deploy the databases would be huge. So, there remains a niche market for Unisys mainframes because of the extreme expense of moving databases and application software to another platform.

    Large application systems for business, enterprise, public utilities, and government cost millions or billions of dollars to purchase or develop. Systems in use for decades have been tweaked and customized to be perfect for the organization. Moving or redeveloping large applications and databases is very costly and risky and we expect these applications will stay on their platforms indefinitely.

    Applications the author developed for IBM, Univac, and HP platforms for government in the 1970s and '80s are still in use today. Most have been 'web-faced' so that they are directly available to users with a browser, which is much more economical than calling on the phone and having a clerk look up the answer. In many cases, large databases and core 'back office' functionality have been stable for decades and these applications are expected to remain on their platforms indefinitely.

    Large organizations like the Department of Defense, the IRS, an Industrial Commission that handles Workmen's Compensation claims, a Department of Motor Vehicles that licenses vehicles and drivers, a state's Medicaid or Health Regulatory Board, Banks, or a Ministry of Finance all use these large mainframe platforms.

    Platform Independence - Software Platforms

    Platform independence is possible by running a software platform, or virtual machine, like Sun/Oracle's Java or Microsoft's .NET Framework.

    But, virtual machines always detract from performance relative to 'native code' compiled for a hardware platform's CPU. Where performance is an issue, and it is on busy servers, it is often best to run native code without any virtual machine involved. We see this in Windows, where many .exe and .dll components of the operating system remain native binary code because '.NET byte code' is not quick enough to satisfy the users. Operating system and application components that need to be quick, like network, disk, memory management, or graphics, are usually distributed as binary code compiled for the specific platform.

    We'll discuss software platforms more below...

    For larger systems on server farms, midrange, or mainframe computers the efficiency of code translates directly to the cost of hardware needed to run the application. So, system administrators are motivated to buy efficient, native code for the platform at hand.

    Different Versions of Software for Different Platforms

    Where a software platform is not an option, a software developer may gain share in the market for a platform by making a different version of their software for each platform.

    In some cases, like business applications, this can be easy to do by authoring the software in a language that runs on each platform. Then the application's source code can be compiled on each target platform without rewriting the programs. In other cases, like video games or high-volume transaction processing, simple recompilation would result in sluggish performance, so the developers must write and maintain separate versions of the source code for each platform so the application will run efficiently.
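    Here is a minimal sketch of that first approach (the program, file, and directory names are made up for illustration): one C source file where the platform differences are isolated behind compile-time switches, so the same program can be compiled on Windows, Mac, or Linux without rewriting it.

        /* portable_config.c - one source file compiled on each target platform. */
        #include <stdio.h>

        /* Only this function knows about platform differences. */
        static const char *default_config_dir(void)
        {
        #if defined(_WIN32)
            return "C:\\ProgramData\\ExampleApp";              /* Windows build */
        #elif defined(__APPLE__)
            return "/Library/Application Support/ExampleApp";  /* Mac OS X build */
        #else
            return "/etc/exampleapp";                          /* Linux and other unices */
        #endif
        }

        int main(void)
        {
            /* The rest of the program is shared, unchanged, across all platforms. */
            printf("Configuration directory: %s\n", default_config_dir());
            return 0;
        }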

    For personal computing examples: Angry Birds, the Firefox and Chrome browsers, and the SFTP client FileZilla are available on multiple platforms. The distribution sites for these and other softwares usually determine which platform is requesting the download and deliver the version appropriate for the platform at hand.

    Google's Android OS spans multiple personal computing platforms and has been optimized for each. Android has been running on several RISC-Reduced Instruction Set Computers since the late '00s and has recently emerged on CISC-Intel. Either platform, Android on Intel or Android on RISC, provides a fine user interface and developers can relatively easily deploy their applications in either environment.

    Android, an open source project by Google, was designed to be easy to port to other CPUs and operating environments to help make it 'future proof'. Android Auto is for cars, Android Wear for watches, and Android TV provides a satisfying interface for the new generation of smart televisions, or dumb TVs with a USB system on a stick. Intel's new focus on low-powered, inexpensive CPUs to compete with traditional RISC chips will be facilitated by this ability of Android to morph easily to another platform.

    Desktop and notebook platforms are mostly WinTel - Windows OS on Intel CPU. Apple's OS X on Intel, or MacTel, adds to the number of Intel CPUs on these larger personal computers.

    Through OS 9 and into OS X Apple used IBM's Power/RISC for the CPU in their Mac platforms and switched to Intel/CISC as OS X matured. This is a rare instance where a computer manufacturer 'jumped platforms'. As a result, Macs now run Windows and Windows applications well when they're virtualized with a product like VMware Fusion or Sun/Oracle's VirtualBox. Prior, Windows software ran terribly on a Mac with a 'PC Emulator' and most Mac software didn't run on Windows at all.

    Enterprise software like Oracle, SAP, or JD Edwards/PeopleSoft are distributed in separate versions appropriate for the platform at hand. Originally written for midrange or mainframe platforms, versions of these softwares are now available for server-class x86 machines running Windows or Linux.

    Range of Computing Platforms

    Here's another way to survey computing platforms: by classifying them across a range from smaller to larger. The boundaries between the classes shift as technology advances, where today's smartphones handle applications previously found on PCs, and server-class machines have capabilities only found on midrange machines a few years ago.

    Here is the range of platforms as at 2016: Embedded Processors, Dumb Terminals, Thin Clients, Mobile Devices, Personal Notebook and Desktop, Workstation/Server/Gamer, Midrange, Mainframe, and Supercomputers.

    Embedded Processors

    Embedded Processors are small CPUs in an appliance or other machine that is not a computer. Processors are embedded in refrigerators, microwave ovens, DVD players, automobile engines and bodies, cell phones, and many other devices.

    We'll see more and more embedded processors as the movement toward an IoT-Internet of Things gains momentum.

    Cheaper CPUs and SoC - System on a Chip technology make it easier and less expensive than ever to embed a system in a device. At the turn of the millennium, embedded processors able to be networked cost something like $75. Today, much more powerful embedded processors able to access a network or internet are a few dollars.

    Embedded processors are mostly RISC and there have always been more of them than any other class of platform. Motorola manufactured a huge share of the CPUs for embedded use until they spun off that product line as Freescale, now part of NXP. Since then, several manufacturers have entered what has become a very competitive marketplace. NXP/Freescale is joined by GlobalFoundries, TSMC, TI, Qualcomm and several other manufacturers who compete for share in this market, where the designs are more 'open' and less proprietary relative to CISC. We all benefit from the fierce competition through lower prices and faster computers.

    Arduino and Raspberry Pi are examples of small systems that run on RISC chips. Developers may make prototypes using these inexpensive systems and send them off to manufacturers who use the prototype to build the embedded system required for the application at hand. A new Intel Galileo Arduino came on the market in 2016, as Intel gains toeholds into what has been mostly a RISC market. In 2016, Microsoft announced Windows 10 for Raspberry Pi, which they'll provide free to developers who are exploring Windows for the IoT.

    For decades PC/104, PC/104+ and similar systems have been available with Intel chips to compete in a mostly-RISC application environment. Intel is eager to gain more share of embedded processors and has recently released a button-sized computer, their Curie, to get the attention of app developers. In 2015 the Compute Stick/WinStick showed up as a way to easily plug Windows into a TV or other appliance with USB.

    The IoT will require lots more, smarter, embedded computers and IPv6 provides an addressing scheme to network them. Entrepreneurs can prototype with Arduino or Raspberry Pi and assembler or C++ and find an open-source manufacturer when they're ready for production. Embedded processors range from $0.79 (4-bit and 8-bit) to $179+.

    Dumb Terminals

    Nearing obsolescence since about Y2K, Dumb Terminals aka Computer Terminals were the ordinary user interface for computer systems from the mid-1960's into the mid-1990s. They handle character data only, no graphics, and function only as simple I/O - Input/Output devices for the computer they're attached to. Their processors are very limited, usually with only enough intelligence to emulate their competitors' terminals.

    As desktop PCs emerged in the 1980s it was common to see a 'dumb tube' sitting next to a PC on a desk, taking up valuable real estate. As network interfaces got less expensive, most of the computer terminals were replaced with 'terminal emulation software' when it became feasible to connect the PC to the host and get rid of the dumb terminal.

    Wyse was the last remaining manufacturer of computer terminals and in their last years their terminals were able to emulate (act just like) most of the terminals made by their defunct competitors. Nearing the end of their life, Wyse merged with Dell and continues to lead in the 'thin client' and 'zero client' market that follows in the wake of dumb terminals.

    There is a huge legacy of application software written for dumb terminals. And, almost all operating systems provide a CLI-Command Line Interface that grew up on dumb terminals.

    So, although this class of computing _hardware_ is obsolete, the functionality of computer terminals lives on in terminal emulation software we run on our PCs and other devices. There are lots of software options for XTerm devices that were standard for UNIX X Window System and other more proprietary standards. Where the old computer terminals usually depended on a direct, serial connection to their 'host computer' today we use internet and ethernet.

    Here are some examples of terminal emulation software:

    • The open source PuTTY is the most-used terminal emulator in the world today on Windows computers. At this time, Windows does not ship with a terminal emulator or SSH client for connecting to unix hosts, and PuTTY is free.
    • TinyTERM is not free, but emulates more terminals and has more features. It has been used since DOS and early Windows.
    • Mac OSX includes Terminal as a component of unix, as do all the Linux flavors. Carnation Software specializes in terminal emulation software for Mac.
    • For Androids, Google's Play Store includes software like the free ConnectBot and other SSH clients that let a network tech manage a CLI from their Droid. Non-adware versions are available for a small fee.
    • The Apple store provides ServerAuditor for iPhones and iPads, and ServerAuditor also has versions for Android and Windows.
    • The Chrome Store offers Secure Shell as an add-on to the Chrome Browser that works with Windows, Mac, or Linux.
    • MochaSoft promises to connect any device to any host. They provide emulators for proprietary terminals like IBM's TN5250 for midrange or TN3270 for mainframes.

    Thin Clients

    Where it's not desirable to have a more powerful PC, Thin Clients have replaced computer terminals. These are low-powered computers that provide an adequate GUI and can handle a mouse and standard PC keyboard and monitor. Thin clients may not have any local storage, depending on the host computer to load the operating system when the thin client boots.

    Benefits include increased security and lowered maintenance costs. There is nowhere for a virus to hide on the thin client and no way for an employee to introduce malware or steal large amounts of data. If Chelsea Manning had been working at a thin client there would have been no way to carry thousands of secrets away on a Lady GaGa labelled CD.

    Some thin clients are built for a purpose, like POS - Point of Sale devices in grocery or other stores, or information kiosks with ruggedized user interfaces that can stand up to weather.

    With more and more Virtualized Desktop Environments and DaaS - Desktop as a Service, it makes sense to store a user's desktop environment on a server so they can use it with a thin client wherever they have bandwidth on a LAN or The Internet.

    Desktop thin clients cost from about $99 to about $399. Or, they may be built into a device like a POS terminal with interfaces to a store network and devices not found on an ordinary PC: heavy-duty touch screen, scale, barcode scanner, credit card swiper, thermal printer, smartphone scanner, and public display of items and pricing.

    Tablets, discussed next, may serve as thin clients in retail sales and service delivery of many types. Lots of shops that used a cash register and a separate credit-card terminal a couple of years ago are using tablet-based thin clients now.

    Smart Phones, Tablets, Mobile Devices

    Where yesterday's cellphones were phones with embedded processors, today's smartphones are small computers that have a phone. Tablets, pads, and pods are small computers without the phone, although some have phone features, too.

    Mobile devices are mostly powered by RISC chips with some flavor of unix OS. CISC and Windows have a very small share of the mobile market in 2016.

    Mobile devices exploded into the market several years ago and already there are more of them in use than desktop and notebook computers that dominated the personal-computing marketplace since the 1970s.

    Prior to about 2014 these were powered mostly by single-core, 32-bit CPUs, had limited RAM, and presented only one application at a time, with quick switching among the apps that were loaded. Since then, multi-core and 64-bit CPUs have reached the market; devices built on them can have more than one 'window' active at a time and can dock with a full-sized keyboard and monitor to provide an experience that rivals a Windows or Mac PC.

    This class of computer has encroached on the share of Notebooks and Ultrabooks we carry around with us.

    BYOD - Bring Your Own Device works in some organizations, where employees access an intranet securely with whatever browser they have at hand or thumb.

    Desktop and Notebook - Personal Computers

    Desktop and Notebook computers are classed together here because they overlap in power and are used for similar applications for work and entertainment. And, they have similar specifications except for size and expandability.

    The Desktop and Notebook market is dominated by Windows OS on Intel CPU, commonly called WinTel. Intel and AMD have about 80% of this market in 2015, with Apple gaining 20% in the few years since they adapted Mac OS X to run on Intel. Prior, Apple had reached only about a 2% share of this market although their users have been extremely brand-loyal.

    Google's Chromebooks are an alternative to Windows or Mac notebooks, packaging an ARM CPU in a larger platform. Some Chromebooks use Intel, but reviewers claim there is no advantage. They have not gained much share of the personal computer market.

    When a PC is used as a 'personal computer' it can handle one user at a time.

    When used as a server, and many are, a PC-class system can satisfy hundreds of users with sub-second response time and can be used in 'farms of generic computers' to satisfy thousands and thousands of users.

    These have one bus to connect components, and may have another that is specialized for graphics processors.

    The 'classic' desktop PC is in a small chassis, sometimes in a 'mini tower' that provides space for an extra disk or CD/DVD drive and one or more 'expansion slots' where Expansion Cards may be added directly to the bus. Depending on the speed and capacity these cost from about $299 through $599, plus keyboard, monitor, and mouse.

    In recent years, the NUC - Next Unit of Computing has become a good choice where no expansion is needed. These range in cost from a couple hundred to several hundred dollars. Although similar in size to low-powered thin clients, these machines can be very powerful, manufactured with the latest technology. With no expansion slots on the bus, any other components are added with USB-3 and USB-C.

    The NUC was Intel's response to the Mac Mini.

    Notebook computers pack a keyboard, trackpad, and monitor all into a portable case. Although some are equipped with telephone features, these are generally not regarded as 'mobile' or 'handheld' devices and are typically used on a desk or lap.

    Small size and light weight demand premium price, with Ultrabooks and MacBook Air starting at about $999 and going up to about $1,999. Ordinary notebooks start at about three hundred dollars for a larger, plastic model without much graphic power and go up to a thousand or more with lots of RAM, multi-core CPUs, and a HD display.

    Workstation/Server & Gamer's Machines

    Workstations are 'beefed up' personal computers traditionally used for 2D and 3D CAD by engineers, and by statisticians, animators, or artists who need more CPUs and RAM than the ordinary desktop PC provides. Until the '90s, most workstations were 64-bit RISC/unix platforms manufactured by IBM, HP, Silicon Graphics, and Sun as the lower end of their midrange computer lines. Since the '90s and the advent of 64-bit Intel chips, modern workstations are Intel-based and run unix, Windows, or Mac OS.

    Workstations are physically larger than personal computers. Their chassis are usually easy to open and have enough room for dozens of disk drives and several expansion slots. Where PC power supplies top out at something like 300 watts, workstations start at about 450 watts and go up to 1200+ watts. Where personal computers usually have a single CPU socket on their mainboard, workstations start with two and may have four or more. 64-bit personal computers may have two or four RAM slots to accommodate from 4 to 32 Gigabytes of RAM. Workstations may have several RAM slots and can accommodate a Terabyte.

    Gamers are attracted to workstations because of their expandability. Where a personal computer usually has an ordinary graphics processor built into the mainboard, gamers can plug in two or more state-of-the-art graphics cards to power very high definition and provide increasingly more realistic and responsive graphics.

    Network managers are attracted to workstations, too, because of their ability to support large numbers of disk or solid state drives. They can connect many Terabytes of storage to a machine that has many Gigabytes of RAM and support lots of virtual desktops or servers. Used as web servers or domain controllers a couple of server-class machines can support thousands of users. Many SMB - Small to Medium Sized Businesses get along fine with server-class machines.

    Powerful server-class machines may also be packaged as rack-mounted or as blades so they take up less space and are easier to manage. In these configurations they may have minimal internal disk or SSD storage. They have fast network connections to storage on NAS - Network Attached Storage or a SAN - Storage Area Network.

    Rack-mounted servers are made to fasten to a standard rack and will consume one through several units on a standard rack of 42 units. If direct attached storage is important a larger enclosure allows space for dozens of disk drives. Rack-mounted servers have their own power supplies and cooling fans.

    Blades are used in 'blade chassis' or 'blade enclosures' that have large power supplies and lots of fans for cooling -- concentrating so much computing power in such a small space generates a lot of heat. Some blade chassis have embedded processors and run an operating system of their own so that the blades they hold are easy to manage remotely, providing a KVM - Keyboard, Video, and Mouse switch for console access to a GUI or command line interface for each blade. Individual blades may be powered on or off without visiting the rack.

    Workstation/Server class machines have two or more CPU sockets to accommodate two or more multi-core CPUs and may have enough RAM slots to handle a Terabyte or more of RAM. Expansion slots on an industry-standard PCI bus can handle several expansion cards for more graphics processors to satisfy a gamer, or more communications cards to satisfy network interfaces for a web server.

    Workstation/Server class machines are not 'fault tolerant' and must be powered down to upgrade or repair them. When they are deployed in clusters, server farms, or clouds they take on fault-tolerant characteristics that rival the up-time of midrange systems and can satisfy millions of users with sub-second response time.

    These generally have one bus except where used as a gamer's machine where they have a bus dedicated to graphic processors, or to high-speed network components for a server. They cost from $1,200 through maybe $3,500, and may house several thousand dollars worth of disk, SSD, and network interfaces.

    Midrange

    The feature that differentiates midrange from server-class is that midrange computers have multiple busses and server-class machines have only one. Midrange busses can extend between two or three large chassis, where a server-class bus is something less than a foot long and cannot be extended much. Midrange machines' multiple busses are interconnected with multiple channels, so there are lots of 'switchsets' and routes for data to travel, rather than the server-class machine's North and South Bridges plus an Express Bus. A server-class machine's chassis holds one mainboard with one main bus. A midrange machine can hold several or dozens of mainboard-sized components on its multiple busses.

    Midrange machines can extend the busses across two or three chassis, each the size of a file cabinet, so there is lots of room to attach CPUs, mirrored RAMs of multi-Terabytes, high-speed network and datacommunications processors, disks or solid-state drives. These machines are marvels of miniaturization and reliability.

    Tandem Computer, still an ordinary platform in banking, pioneered multiple busses and channels in the 1970's in their fault-tolerant mid-range platforms.

    Where a workstation/server class machine has two 'switch sets', typically in the North and South Bridges, midrange machines have a switch set at each intersection of bus and channel.

    Today, one midrange computer can grow to handle the load of dozens or hundreds of server-class machines. They are getting more attention in 2016 for virtualized environments.

    The midrange market is dominated by RISC or proprietary CPUs and Unix, Linux, or proprietary OS like IBM's AIX or i5OS.

    The best of these midrange computers are entirely 'fault tolerant', with redundancy built-in, so any component can fail and be replaced without affecting the machine's availability. It's not uncommon for a mid-range computer built and supported by a manufacturer like IBM, HP, Tandem, or Stratus to run its entire service-life of 5+ years without ever being out of service.

    Used as servers for back office and web, midrange computers can satisfy tens to hundreds of thousands of users with sub-second response time.

    They cost from maybe $3,000 for an entry-level midrange computer for a small business through $1,000,000+ for a large enterprise. Midrange computers provide an easy way to scale up applications and databases. A company can buy into the low end of the range and 'scale up' the platform as needed to accommodate growth. Midrange machines give better ROI and offer a much simpler solution with a much smaller 'footprint' than equivalent power in a server farm. IBM and other midrange manufacturers cite the simplicity of these well-supported platforms -- one 'box' vs. dozens of server-class machines.

    [[[IBM, Sun, HP Product line showing smaller to 3-chassis]]]

    Whenever a decision about business software is upcoming, it's a good idea to find VARs - Value Added Resellers of IBM, Sun/Oracle, or HP for your industry group. These software houses can demonstrate scalable systems backed up by their engineers. These should be considered along with consultants, employees, and others who may have a solution in mind but none to demonstrate.

    Mainframes

    This market is dominated by IBM, Sun/Oracle (in Fall 2017 we wonder about Sun/Solaris), and UniSys. Hitachi, Fujitsu, Siemens and a few other mainframe manufacturers compete with IBM, mostly using IBM's operating system on their prior-generation hardware.

    Mainframes are similar to midrange with multiple busses, channels, and chassis. One mainframe can handle the load of thousands of server-class machines and provide as close to 100% availability as is possible.

    Mainframes hold the record for speed of 'transaction processing' and can handle a sustained load of thousands of transactions per second applied to a file system or database. Mainframes can satisfy millions of users with sub-second response time when used as web servers.

    Mainframes have multiple busses, similar to midrange, that can span two or three chassis, interconnected with 'channels' so that data can flow (like a firehose) among processors, RAM, networks, and storage on more than one bus. With the equivalent of a high-speed 'switchset' at each intersection of bus and channel, the mainframe OS is able to avoid 'bottlenecks' of single bus systems.

    They may have huge RAMs, like 16 or 32 TeraBytes, operating 'in parallel' so most user workload is served from RAM instead of disk, increasing performance by a factor of thousands. RAMs are 'RAIDed': in the very rare event that a memory module falters or fails, it may be replaced without bringing the system down for repair.
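
    A rough worked example shows where the 'factor of thousands' comes from. The latency figures in the sketch below are ballpark assumptions (roughly 100 nanoseconds for a RAM access, 100 microseconds for an SSD read, and 10 milliseconds for a spinning-disk seek), not measurements of any particular machine.

        // RamVsDisk.java -- ballpark latency arithmetic; all figures are assumed.
        public class RamVsDisk {
            public static void main(String[] args) {
                double ramNs  = 100;            // ~100 ns for a RAM access
                double ssdNs  = 100_000;        // ~100 microseconds for an SSD read
                double diskNs = 10_000_000;     // ~10 ms for a spinning-disk seek

                // Serving a request from RAM instead of storage is faster by these ratios
                System.out.printf("RAM vs SSD:  roughly %,.0f times faster%n", ssdNs / ramNs);
                System.out.printf("RAM vs disk: roughly %,.0f times faster%n", diskNs / ramNs);
            }
        }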

    Mainframes cost from $250,000 through $5,000,000 or more and they are completely fault-tolerant -- operated in a 'parallel sysplex' of two or more machines, there is no system more reliable for database and transaction processing. Since a mainframe can handle the load of hundreds or thousands of server-class machines, a mainframe's operating costs need to be compared to a large server farm's, not a single server's. Mainframe cost of ownership is very reasonable for those who use them.

    When run in a 'parallel sysplex' of two or more machines in different locations they are very close to 100% reliable, some are better than 99.99999%, having never been down in decades of service. Some manufacturers can honestly claim that they 'have never been hacked'!

    Super-Computers

    Where servers support dozens or hundreds of users, each running a process or two, super-computers can support millions and millions of simultaneous processes for one user.

    IBM dominates this market, too. Although they may not hold the record for speed at all times, there are more IBM super-computers in the world than any other brand.

    An IBM super-computer can be as large as 128 chassis, each with 8,192 cores, so a total of 1,048,576 cores can be focused on large calculations. IBM's 'Watson' is a relatively small super-computer with only 8 chassis, and it was able to win the TV game show Jeopardy! in a series of real-time contests where it received the answers as text and spoke its questions the same as the human contestants.

    Smaller supercomputers are more and more available and may be used for business analytics, data-mining, and other applications that weren't feasible until recently. These cost roughly $1,000,000 per chassis, although they can be cobbled together from components like PlayStations and open-source software for far less.

    [[[ Interactive, matching capabilities/characteristics with classes in the range of computers]]]

    Software Platforms & Virtual Machines - Portability vs. Performance

    Where portability is more important than sheer performance it makes sense to write application software for a 'virtual machine' rather than for a particular CPU/OS platform. Then, in some cases, the application software will run anywhere the virtual machine has been installed.

    There are several virtual machines important in today's IT marketplace: Java, the .NET Framework, VMWare, IBM's VM and SLIC, Citrix Xen, Microsoft's Hyper-V, and others...

    Best known Virtual Machine: JVM - Java Virtual Machine

    Sun/Oracle's Java provides a 'software platform' vs. a 'hardware platform'. Applications developed for the JVM - Java Virtual Machine will run on any hardware platform where the JVM or JRE - Java Runtime Environment has been installed.

    When Java application code is compiled it doesn't result in binary, executable code for a specific OS/CPU platform; instead, the result is 'byte code' that is translated to the CPU's binary instruction set by the JVM and presented to the OS for execution.

    The JRE already on the target desktop or server machine includes methods to handle most tasks of a user or network interface, so the byte code downloaded for the application may be relatively brief compared to a fully compiled program. If a developer is familiar with the methods built into the JRE and uses them wisely, the result can be a relatively efficient solution.

    Developing for Java allows programmers to 'develop once and deploy anywhere' rather than 'redevelop for each platform'. Java runs practically everywhere. A Java developer can release software that will run on Intel/Windows, Intel/Mac, Intel/Linux, RISC/ChromeOS, or Power/Linux. For business and enterprise computing, a Java-built application will run best on Sun/Oracle hardware, but it may also give satisfactory performance on any Linux, unix, or Windows server, IBM midrange or mainframe, HP, Stratus, or other manufacturer's platform where Java runs.
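
    As a minimal sketch of the compile-once, run-anywhere idea described above: the class below is compiled with 'javac Hello.java' into Hello.class, a file of JVM byte code rather than native machine instructions, and that same .class file runs with 'java Hello' wherever a JRE is installed. The class and file names here are only illustrative.

        // Hello.java -- an illustrative example, not from the original text.
        // 'javac Hello.java' produces Hello.class (byte code for the JVM);
        // 'java Hello' runs that same byte code on Windows, Mac OS, Linux,
        // or any other platform with a JVM/JRE installed.
        public class Hello {
            public static void main(String[] args) {
                // Ask the JVM which platform it happens to be running on
                System.out.println("Hello from Java on "
                        + System.getProperty("os.name") + " / "
                        + System.getProperty("os.arch"));
            }
        }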

    Originally intended for programming embedded computers, Java has grown to be a general-purpose application development environment for all kinds of software for work and personal use. Java runs DVD and CD players, some DVRs, lots of refrigerators, drink machines, automobiles, and is behind the dials of radios and appliances. Java 'Applets' can be delivered via a web page to run RIA - Rich Internet Applications that can use resources and save data on the desktop or notebook computer running the browser. Where JavaScript is limited in scope to the browser's window, Java Applets run separately from the browser and can be very powerful -- they can also be very destructive if malware is packaged as an Applet.

    The Java programming language is widely taught and used in IT for many fields. The NetBeans IDE and its enterprise editions provide a rich GUI - Graphical User Interface environment for developing business applications, and NetBeans has introduced thousands of programmers to database programming for desktop and web environments.

    Windows .NET Framework

    Microsoft's .NET Framework is also a virtual machine. Several versions of the .NET Framework have been distributed with Windows operating systems since NT4, Server 2000, and XP. Because it is included with the operating system and we don't have to install it, few Windows users are aware the .NET Framework is involved with the Windows application software they run.

    Application development for the .NET Framework is mostly done using Microsoft's powerful Visual Studio IDE - Integrated Development Environment. Microsoft's C# and Visual Basic programming languages are supported in Visual Studio along with several other languages. When a Visual Studio application is compiled, the result is intermediate code for the CLR - Common Language Runtime, the .NET virtual machine, not binary executables for the CPU/OS platform. Since C#, Visual Basic, and the other languages all produce the same intermediate code, application developers can use whichever language makes sense for the project at hand.

    The .NET Framework hasn't become a de facto industry standard like Java, yet. But it has been deployed on several platforms and is gaining a toehold in new markets. The Mono Project is Microsoft's reach-out to and embrace of the open source community. The Mono software platform has been deployed on Linux, unix, OS X, iOS, and Android. It lets server and personal applications developed with Visual Studio run nearly anywhere Mono has been installed.

    The .NET framework installed on a system has built-in methods for handling web forms, desktop forms, database, and networking tasks in its extensive Framework Class Library. A developer who knows the class library can be very productive and write efficient code.

    The Windows operating systems and applications built for Windows are distributed as collections of intermediate code in DLL - Dynamic Link Library files and binary executable code in .EXE - EXEcutable files. Where performance must be optimized, native executable code can be used; DLLs and the class library's methods are used where they are good enough.

    IBM's VM - Virtual Machine, SLIC - System Licensed Internal Code, and Hypervisor

    IBM has been an industry leader in virtual machines since the release of its System/370 mainframe in 1970 and its AS/400 midrange in 1988. Their now-defunct OS/2 for desktop computing also included a virtual machine, but it's not a factor in the market today.

    IBM's VM - Virtual Machine OS for Mainframes

    The IBM 370 came after several years' use of their IBM 360 line of mainframes. The 370 was a very different machine that supported OLTP - On Line Transaction Processing, DBMS - Data Base Management Systems, and interactive terminals using ITF - Interactive Terminal Facility and CICS - Customer Information Control System. The 360s were mostly for batch processing of cards, tapes, and files. IBM 'virtualized' the 360 functionality with their VM - Virtual Machine operating system so that customers buying IBM 370s could continue to run the valuable legacy of applications written for their 360s.

    IBM's mainframe customers could continue to support batch processes on the new machines as they phased into on-line and networked operations through the 1970s. IBM has always been protective of their customers' legacy of data and applications, and has seldom if ever obsoleted any of their software. Every mainframe since the System/360 era has included options for IBM's VM operating system. Today's z/VM operating system is widely used by IBM's direct customers as well as those who run competitors' hardware with IBM's operating systems.

    IBM's SLIC - System Licensed Internal Code for Midrange

    IBM also uses a virtual machine in their proprietary midrange computer line, the SLIC - System Licensed Internal Code. The SLIC was distributed as firmware on ROM - Read Only Memory chips with their venerable AS/400 line of servers. Today the SLIC is included with their IBM iSeries line of servers built on Power technology. The 'i' stands for Integration and IBM intended to make this platform easy to integrate with any other system including Windows and Linux.

    IBM introduced the iSeries server line after their AS/400 had gained huge market share among SMBs, enterprises, and governments from the late 1980s on, and the platform holds a stable share of that market today. Thousands of IBM VARs - Value Added Resellers provided scalable application systems for the AS/400 and now provide them for iSeries, where they have been thoroughly and securely 'web-faced' to The Internet. The AS/400 operating system, OS/400, was an object-oriented 4GL application development and deployment environment that was compatible with the System/3 and System/34-38 and could run their applications. OS/400 ran on a firmware-based virtual machine, the SLIC - System Licensed Internal Code. OS/400 was somewhat difficult to integrate with other systems; it worked best with dumb terminals and proprietary IBM networks and file systems.

    Today, IBM's iSeries is built with Power technology and works fine with the Internet and Ethernet. An iSeries can also accommodate Intel CPUs, packaged as xSeries, on its busses where instances of Windows Server are desirable. iSeries machines can accommodate dozens of LPARs - Logical Partitions, aka virtual servers, which can run Red Hat or SUSE Linux on its Power processors. Many of IBM's customers had a legacy of application software for the AS/400 and wished to extend the value of that legacy by deploying web-based components for their customers and employees.

    IBM's mid-sized and larger iSeries systems ship with a separate chassis for each machine that runs a 'Hypervisor' that manages the LPARs on the local system and any other systems within the IBM customer's enterprise. This provides a well-engineered and understood virtualization scheme that can scale up globally to fit the largest enterprises and provide a fault-tolerant environment with seamless recovery from loss of a data center.

    Virtualization Software: VMWare, Microsoft's Hyper-V, Citrix Xen & others

    IBM isn't the only company that can do virtualization these days. There are lots of options for lots of platforms.

    For example, an enterprise with offices in a dozen timezones can run a virtual server for each branch so each can have a clock and backup schedule set to its own timezone. Security may be enhanced with virtual servers, where each server instance can have its own authentication scheme appropriate for its role. Virtual servers may be secured with virtual firewalls rather than the hardware firewalls of the recent past. For example, Linux instances may handle security and firewalling among the virtual Windows servers operating on virtual networks.

    In 2016 a server-class machine with 48 or 64 cores and a TeraByte of RAM takes up a few units in a rack and compares favorably with a dozen machines with 4 cores spreading into a second rack. Network managers have many options to provide redundant and resilient systems for desktop, web, or database applications.

    VMWare is perhaps the best known provider of virtualization and cloud software and services. The approach is to install VMWare software on a 'host operating system' and then install 'virtual instances' of other operating systems. VMWare is available for x86 and x64 hardware running Windows, OS X, and Linux. So, a Windows server or workstation with VMWare can host other operating systems that run on x86, like other Windows instances, Linux, or OS X.

    Microsoft's Hyper-V is Windows' entry in the virtualization market, introduced in 2008. It is distributed with Windows operating systems and has matured and gained good experience in the market. Hyper-V has encroached on VMWare's dominance of this type of virtual machine.

    Citrix XenServer has a large share of the desktop virtualization market along with virtual servers. Citrix virtualizes server environments for Amazon, RackSpace Cloud, and lots of companies that support virtual desktop environments for their employees and customers.

    Citrix is a venerable software house that has been providing virtual environments and virtual presence for decades. One early product was Citrix WinFrame, which allowed down-sized companies that no longer needed mainframe power to migrate application software and databases to a 'virtual mainframe' running on Windows NT.

    Open source alternatives exist for all these commercial products. Red Hat, Ubuntu's Canonical Corp, Yellow Dog, Novell/Suse and others who provide commercial support for Linux can all demonstrate their expertise with virtualized resources, clouds, or hyperconverged solutions. IBM, Sun, HP, and other midrange manufacturers have embraced open source technology. They give their engineers time to work on open source projects and tweak their platforms to run Linux as well as their proprietary unix and other operating systems.

    Future of Virtual Platforms

    Virtualization is a broad topic and there are lots of examples of it in practical use. Not too many years ago, some basic security advice was 'run each service on a separate server' and 'install a hardware firewall at every network interface'. Today, better advice is to run a virtual server for each service and use virtual firewalls...

    Into the mid-'00s virtualization of server operating systems was practical only on midrange and mainframe platforms with multiple CPUs and enough RAM to handle the job.

    Today's server-class machines can have 24 or more 64-bit cores and can support a Terabyte or more of RAM, encroaching on what used to be midrange machines' capabilities. Except for not being fault-tolerant on their own, these server-class machines are well-suited to running multiple instances of server or desktop operating systems. Ever faster networks with meshed network technologies like Ethernet Fabric allow manufacturers of server-class machines to compete with midrange and mainframe solutions.

    As of 2016, virtual server space in a Tier 2 or 3 IX - Internet eXchange is a commodity. Virtual servers with from one through four or eight cores, 20 GB through many terabytes of RAID-10 SSD, and lots of terabytes of Internet throughput can be spun up in a couple minutes or less at AWS - Amazon Web Services, RackSpace Cloud, Digital Ocean, or dozens of their competitors. These companies make it economical for a business or organization of any size to implement globally dispersed, resilient, scalable systems with 99.999% availability.
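
    As an illustration of how quickly such a server can be spun up programmatically, the sketch below uses the AWS SDK for Java v2 to launch a single small EC2 instance. It assumes the SDK's EC2 module is on the classpath and AWS credentials are configured in the environment; the AMI id and instance size are placeholders, not recommendations.

        // SpinUpServer.java -- a minimal sketch using the AWS SDK for Java v2.
        import software.amazon.awssdk.regions.Region;
        import software.amazon.awssdk.services.ec2.Ec2Client;
        import software.amazon.awssdk.services.ec2.model.InstanceType;
        import software.amazon.awssdk.services.ec2.model.RunInstancesRequest;
        import software.amazon.awssdk.services.ec2.model.RunInstancesResponse;

        public class SpinUpServer {
            public static void main(String[] args) {
                // Credentials and permissions are assumed to be set up in the environment
                Ec2Client ec2 = Ec2Client.builder().region(Region.US_EAST_1).build();

                RunInstancesRequest request = RunInstancesRequest.builder()
                        .imageId("ami-EXAMPLE")              // placeholder AMI id
                        .instanceType(InstanceType.T2_MICRO) // a small, inexpensive size
                        .minCount(1)
                        .maxCount(1)
                        .build();

                // The virtual server is typically booting within a minute or two
                RunInstancesResponse response = ec2.runInstances(request);
                System.out.println("Launched instance: "
                        + response.instances().get(0).instanceId());
            }
        }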

    [[[Need a Pricing table here for RackSpace, AWS, Digital Ocean, + a couple more]]]

    Businesses and Enterprises that don't want to use 'public clouds' are using the technology to power 'private clouds'.

    Google has made commercial email and business applications a commodity at $5 per month per email address for a small to medium-sized business. Microsoft now hawks Office 365 for $5 per month per email address after decades of selling MS Office for hundreds of dollars per user, with a similar price for upgrades every few years. They compete head-on with Google in this market.

    Software houses with decades and decades of experience in vertical markets have been selling their application software for $3,500+ per CAL - Concurrent Access License, plus 12% per year for support. Now, in order to compete with upstart companies on The Internet, they provide SaaS - Software as a Service at prices like $15 to $50 per month per user. This allows small companies to mature quickly using software proven to be effective in their industries, and to easily scale up as their business grows.
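
    A back-of-the-envelope comparison shows why the SaaS model appeals to a small shop. The sketch below uses the $3,500 CAL price and 12% yearly support figure from above; the 20-user head count, $30 monthly SaaS rate, and 5-year horizon are hypothetical assumptions chosen only to illustrate the arithmetic.

        // LicenseCostSketch.java -- hypothetical arithmetic, not vendor pricing.
        public class LicenseCostSketch {
            public static void main(String[] args) {
                int users = 20;     // hypothetical small company
                int years = 5;      // hypothetical planning horizon

                double calUpFront = 3500.0 * users;              // perpetual CALs, $3,500 each
                double calSupport = 0.12 * calUpFront * years;   // 12% of license cost per year
                double calTotal   = calUpFront + calSupport;

                double saasTotal  = 30.0 * users * 12 * years;   // $30/user/month, mid-range of $15-$50

                System.out.printf("CAL licensing, %d users over %d years: $%,.0f%n", users, years, calTotal);
                System.out.printf("SaaS pricing,  %d users over %d years: $%,.0f%n", users, years, saasTotal);
            }
        }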


    G Saunders,
    Dept of Information Systems
    VCU School of Business

    Content © 1999 - Today
    By G Saunders
    Images are Available on the Web