G Saunders' Home Page

Computing Hardware

This discussion is about digital computers: what's inside them, how data is represented and characters are encoded, what's attached to them, and the power and other systems that keep them running.

Computers in business and personal use today are the 4th generation of 'von Neumann machines' derived from the digital computers of the late 1940s. They count and calculate with discrete binary digits and the logic built into their circuitry. This differentiates them from analog computers. Analog computers use sensors that measure temperatures, angles, pressures, chemical concentrations, weights, and other physical properties and use them to record or control processes or environments: as autopilots in airplanes, to aim the big guns on battleships in pitching seas, and in other operations built on measurements.

Whoever wins is usually better with numbers and engineering, and lots of advances in computing hardware have been put to military use. The machines that came along after ENIAC, which was built for the Allied war effort in WWII, calculate with binary digits. People work best with decimal, aka Arabic, numbers: our accountants and mathematicians worked out ledgers, logarithms, geometry, calculus and other mathematics in decimal, Base 10.

As a compromise when humans are working with computers, the binary values from computers are often expressed in hexadecimal, or Base 16, which uses 16 digits.

Binary-based vs. Decimal-based Terms for Magnitude

How big, or small, can the numbers be? It depends on the size of 'the word', or number of bits the CPU can load at one time. And it depends on whether the number is an integer or a floating point number represented with a mantissa and exponent. A 16-bit CPU can reference an integer of 64K, or 0 through 65,535. A 32-bit CPU can reference 4 Giga, or 0 through 4,294,967,295. A 64-bit CPU can reference 16 Exa, which is 17.2 Billion Giga! See the chart below. It's important to note that the largest value doesn't just double when the CPU word size doubles, it gets exponentially larger.

Some values in computing are based on powers of 2, not 10. They're interchanged somewhat capriciously depending on the context or situation at hand, so it's not always obvious exactly how many bytes, or bits, are being counted. For example, a computer with 8 Gigabytes of RAM will count up to 8,589,934,592 during the POST, not the even 8,000,000,000 some might expect.
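Here's a minimal sketch, in Python, of that POST arithmetic; the only assumption is an 8 Gigabyte module counted with binary prefixes:

    binary_gig = 8 * 2**30        # 8 'binary' Gigabytes, as the POST counts them
    decimal_gig = 8 * 10**9       # 8 decimal billion, as the label suggests
    print(binary_gig, decimal_gig, binary_gig - decimal_gig)
    # 8589934592 8000000000 589934592 -- about 590 million 'extra' bytes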

Powers of 10 are used for: network bandwidth, cpu clock speed, throughput, watts, volts.

Powers of 2 are used for: binary data in RAM, disk or other storage. 1 KB (one kilobyte) is 2^10, or 1,024, bytes; 1 MB (one megabyte) is 2^20, or 1,048,576 bytes. As the magnitude increases so does the difference between the decimal and binary powers of the same prefix.

There is also confusion with terms for measuring capacity of RAM or disk vs. bandwidth on a network. Capacities are usually noted in Bytes (8 Bits) and bandwidth or speed is usually stated in Bits on a decimal scale.

A 'kilo' in the metric system, like a kilometer, means exactly a thousand. In computer lingo, based on powers of 2, a Kilo is 2^10 = 1,024 in decimal. A mega in decimal is an even million, 1,000 times 1,000, but a Mega in computerland is 1,024 times 1,024 = 1,048,576. A Giga is 1,024 times 1,048,576 = 1,073,741,824. As the magnitudes increase so does the difference between the decimal and binary-based values.

Decimal and Binary Magnitudes:

 
Common Decimal   Prefix   Decimal   Binary
Trillionth       pico     10^-12    --
Billionth        nano     10^-9     --
Millionth        micro    10^-6     --
Thousandth       milli    10^-3     --
Hundredth        centi    10^-2     --
Tenth            deci     10^-1     --
One              --       10^0      2^0
Ten              deka     10^1      --
Hundred          hecto    10^2      --
Thousand         kilo     10^3      2^10
Million          mega     10^6      2^20
Billion          giga     10^9      2^30
Trillion         tera     10^12     2^40
Quadrillion      peta     10^15     2^50
Quintillion      exa      10^18     2^60

In order, mixing the 2-based values with decimal, it looks like this: 1,000 < Kilo < 1,000,000 < Mega < 1,000,000,000 < Giga < 1,000,000,000,000 < Tera < 1,000,000,000,000,000 < Peta < 1,000,000,000,000,000,000 < Exa...

Here's a link about Kilo, mega, giga, tera, peta, and all that.

Memory Addressing Schemes

The CPU sees or addresses RAM as one row, aka a vector, of storage locations that can hold either one byte or one word of data, and any location may be accessed at the same speed. The addresses represent the offset from the first location, starting with 0 and continuing to 11111111, 1111111111111111, 11111111111111111111111111111111, or 1111111111111111111111111111111111111111111111111111111111111111.
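A minimal sketch of that 'one row of locations' view, assuming a hypothetical 64-bit machine where every word is 8 bytes wide:

    WORD_SIZE = 8                              # bytes per word on a 64-bit CPU (assumed)

    def word_address(index, base=0):
        # the address of the index-th word is just its offset from location 0
        return base + index * WORD_SIZE

    print(word_address(0))                     # 0
    print(word_address(1))                     # 8
    print(format(word_address(1000), '#x'))    # 0x1f40 -- addresses are usually shown in hex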

The largest integer value that can be referenced in a computer's word increases at an exponential rate relative to the word size. Doubling the length of the word from 8 bits to 16 gets 256 times more discrete values, not just twice as many. Getting to 128-bit registers and IPv6 gets us to a number so large it's hard to imagine: 340 Undecillion.
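Letting Python do the arithmetic for each word size makes the growth obvious:

    for bits in (8, 16, 32, 64, 128):
        print(f"{bits:>3} bits -> {2**bits:,} discrete values")

    #   8 bits -> 256
    #  16 bits -> 65,536
    #  32 bits -> 4,294,967,296
    #  64 bits -> 18,446,744,073,709,551,616
    # 128 bits -> 340,282,366,920,938,463,463,374,607,431,768,211,456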

Some diagrams of computer memory, especially those used to introduce arrays and other data structures, show memory arranged in rows and columns for better understanding by people, but the better representation is as a vector.

Back in the '70s when CPUs were 16 bits and direct access RAM was limited to 64 Kilobytes, computers used 'paged memory' where data was constantly moved between 'main memory' accessible by the CPU and pages in expansion memory. Processing was incredibly slow by today's standards but was more than adequate for the character-based record-keeping and accounting tasks of the day.

Today, our 32-bit personal devices can accommodate 4 Gigabytes of RAM. 64-bit CPUs can 'theoretically' reference 16 Exabytes of RAM. A small server's practical limit is more like 32 Gigabytes through a TeraByte. Mid-range and mainframe computers' larger chassis can accommodate several TeraBytes or more.

CPU Word Size and Maximum Integer Value

Word Size / Era / Discrete Values

8 bits
11111111
Early hobby micro & Mini and Embedded
256 discrete values (0 through 255) -- the number of extended-ASCII characters

16 bits
1111111111111111
Early Mini & IBM PC in the '80s
65,536 discrete values -- 64 Kilo

32 bits
11111111111111111111111111111111
Mainframes since the IBM 360 in the '60s, minicomputers from the '80s, desktops got to 32 bits in the '90s
4,294,967,296 discrete values -- 4 Giga
This is/was a limiting factor for 32-bit servers vs 64-bit midrange and mainframes through the '80s. 32 bits limited RAM and database sizes to 4 Gigabytes, for example, and large integer calculations required lots of machine cycles. Larger businesses and enterprises ran mid-range and mainframe systems because their databases were too large to fit on 32-bit server-class machines.

64 bits
11111111111111111111111111111111 11111111111111111111111111111111
Mainframes from the '70s, mid-range from the late '80s, desktops and servers since the mid-'00s
18,446,744,073,709,551,616 discrete values -- 16 Exa; 17.2 billion Giga; 16.8 million Tera

16 Exabytes is more than any computer in business or enterprise uses today. Larger workstation/server class machines are barely stretching to 2 or 4 Terabytes in 2016, while the ordinary server may have 8 or 16 Gigabytes. With their physically large chassis, mid-range and mainframe computers can accommodate dozens of Terabytes RAIDed to provide redundancy for millions of concurrent processes. Super-computer RAMs can be configured to Petabytes, or thousands of Terabytes. We're a long way from CPU word size being the practical limit on RAM...
128 bits
11111111111111111111111111111111 11111111111111111111111111111111
11111111111111111111111111111111 11111111111111111111111111111111
IPv6 uses 128-bit addresses, and IBM's mainframes since the 370 have had 128-bit registers.
2^128 is 340,282,366,920,938,463,463,374,607,431,768,211,456 discrete values, about 3.4028 x 10^38.
Wikipedia's article on 128-bit calls this value about 340.3 Undecillion. Computers have been able to handle 128-bit calculations for decades even with smaller words, but as of 2016 there are no commercially available 128-bit CPUs. DEC's VAX minicomputers called a 128-bit value an OctaWord -- several registers were harnessed to handle one, making it a relatively slow operation.

340.3 Undecillion IPv6 addresses is enough that each square meter of Earth's surface has 665,570,793,348,866,943,898,599 addresses -- that's about 665 Sextillion addresses per square meter.
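The arithmetic behind that figure is straightforward; this sketch assumes an Earth surface area of roughly 511 million square kilometers:

    total_addresses = 2**128
    earth_m2 = 511_000_000 * 1_000_000       # ~511 million km^2, in square meters (assumed)
    print(total_addresses // earth_m2)       # about 6.66e23 -- on the order of the 665 Sextillion quoted above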

For a more detailed table, look at this wiki: Integer (Computer Science).

Converting Values among Number Systems: Binary, Decimal, and Hex

About binary: There are 10 kinds of people in the world - those who understand binary and those who don't. Humanity is vexed by having more than 2 fingers in total.

Digital computers, storage, and networks all use binary numbers, with just two digits, 0 & 1, or 'base 2'. People are best with decimal, Arabic numbers, using 'base 10' with ten digits, 0 through 9. Hexadecimal, or 'base 16', is a compromise for human readability and uses 16 digits from 0 through F. Octal, or base 8, was important when CPUs were 8 and 16 bit but has largely fallen out of use.

Binary and Hexadecimal are closely related because 16 is 2^4. Binary and Decimal are not so closely related, and behave differently when dividing and multiplying because they round at different points and fractions that terminate in one base can repeat endlessly in the other.

Wherever a computer's binary calculations must exactly equal a decimal calculation, as in accounting, the computer must take extra steps to adjust the value to decimal, so these calculations take more time in the CPU.
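Python shows the problem, and the fix, in a couple of lines: plain binary floating point can't represent one-tenth exactly, so accounting code reaches for a decimal type instead.

    from decimal import Decimal

    print(0.1 + 0.2)                            # 0.30000000000000004 -- binary floating point
    print(0.1 + 0.2 == 0.3)                     # False
    print(Decimal('0.10') + Decimal('0.20'))    # 0.30 -- decimal arithmetic: slower, but exact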

Converting values among numbering systems is covered in the text's appendix A.  Discussion on the board in class will cover or has covered: binary, decimal, & hex number systems. There are lots of on-line tutorials for this important skill.

Quiz questions for this topic look like this:

26₁₀ = ______₂        26₁₀ = ______₁₆

AA₁₆ = ________₂     AA₁₆ = ______₁₀

1010₂ = _______₁₆        1010₂ = _______₁₀
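Answers can be checked with Python's built-in conversions; int() reads a value in any base, and bin() and hex() write it back out:

    print(bin(26), hex(26))             # 0b11010 0x1a -> 26 decimal in binary and hex
    print(bin(int('AA', 16)))           # 0b10101010   -> AA hex in binary
    print(int('AA', 16))                # 170          -> AA hex in decimal
    print(hex(int('1010', 2)))          # 0xa          -> 1010 binary in hex
    print(int('1010', 2))               # 10           -> 1010 binary in decimal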

Data Representation

Data Types are important in all kinds of programming and exchanging data among systems and applications. These basic data types are recognized in practically every programming and database environment, sometimes using different names: integers, floating point and decimal numbers, and character or string data.

The manual for any programming language or database has a detailed chapter that describes exactly how the basic data types are implemented in the language or database.
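As a small illustration of how a few of these types look as raw bytes, here's a sketch using Python's struct module; the type codes are struct's own, not any particular language's or database's:

    import struct

    print(struct.pack('<i', 255).hex())    # ff000000         -- a 32-bit integer, little-endian
    print(struct.pack('<d', 0.1).hex())    # 9a9999999999b93f -- a 64-bit floating point number
    print('A'.encode('ascii').hex())       # 41               -- a single ASCII character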

Character Encoding

Numeric data types exhibit a 'pure' relationship between the binary data in RAM, registers, or disk and the integers, real, and decimal values they represent. The data and the values are on a 'continuum' of negative and positive values with zero meaning 'zero'.

Character data doesn't have such a continuum except the 'collating sequence' we learn as 'the ABCs' of our language as kids. The meaning of the binary data that represents characters can only be discovered by doing a 'table lookup'.

In ASCII, it goes like this: 1000001=A, 1000010=B, 1000011=C.

In EBCDIC the binary values mean something different: 11000001=A; 11000010=B; 11000011=C.
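Python can demonstrate the table lookup directly; it ships a codec for one EBCDIC variant under the name 'cp500':

    for ch in 'ABC':
        ascii_bits = format(ch.encode('ascii')[0], '08b')
        ebcdic_bits = format(ch.encode('cp500')[0], '08b')
        print(ch, ascii_bits, ebcdic_bits)

    # A 01000001 11000001
    # B 01000010 11000010
    # C 01000011 11000011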

When datasets are exchanged among systems that run different CPUs, operating systems, and applications there is likely to be some 'translation' from one encoding scheme to another. Moving binary files from Power/ARM/Motorola to Intel involves swapping byte order from big-endian to little-endian. Unix Web Servers have a rich legacy of ASCII data, using a couple of the ISO character sets to handle 'alphabetic languages' with only 128 or 256 different characters.

HTML Special Characters are important for accurately rendering non-alphabetic characters. Web developers need to recognize and translate from the several encodings they'll encounter and get emoticons, umlauts, grave and acute accents, and hundreds of other special characters on the page. Here are the W3C recommendations for web documents. They mention the most-often seen data encodings.
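One quick way to produce HTML character references from a string, sketched in Python with the 'xmlcharrefreplace' error handler (it emits numeric references rather than named entities like &eacute;):

    text = 'café, naïve, über'
    print(text.encode('ascii', errors='xmlcharrefreplace'))
    # b'caf&#233;, na&#239;ve, &#252;ber'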

The text gives a table showing ASCII & EBCDIC values for 'printing characters' 0 thru 9 and A thru Z.   Other printing characters are the decimal point, comma, and the relatively few other symbols we get when we shift a number key.

Here are a couple of websites that show the most often used character encoding schemes: ASCII, EBCDIC, UniCode, &c: asciitable.com; JimPrice.com.  

ISO 8859-1 is the English/Latin version of another ASCII-like character encoding scheme for alphabetic languages. ISO includes a couple dozen other versions to define characters in other languages so that their 'diacritical marks' and inflections can be represented, such as umlauts and accents.

EBCDIC is the coding system used for mainframes and some of IBM's mid-range systems.

ASCII is used almost everywhere else -- both fit in an 8-bit byte and can only represent 128 or 256 different characters. ASCII characters 0 through 127 are always defined the same, but the standard has dozens of more-or-less standard character sets for ASCII characters 128 through 255.

UniCode is an industry-standard character encoding scheme maintained by the Unicode Consortium. It gained acceptance worldwide through the '90s. The original UniCode encoding uses 2 bytes for each character, which makes 65,536 different characters. UniCode was important for Microsoft Server, SQL Server, Office, and other productivity tools like SharePoint and Project to gain market share in countries with languages such as Chinese, traditional Japanese, Burmese, and several others that draw an ideographic character for each of the several thousand words in their rich vocabulary.

UniCode supports the punctuation and inflections of English, European, Russian, Mediterranean, Middle-Eastern, and other alphabetic languages that build their several thousand words phonetically from a set of a few dozen alphabetic characters, with a key for each character. Microsoft's customers use keyboards to stroke the characters of any of dozens and dozens of languages and dialects, left-to-right, right-to-left, alphabetic, or ideographic.

UTF-8: This has been the ordinary character encoding on English- and European-language web servers since the '90s. It allows UniCode characters to be preserved in strings and text objects that are predominantly ASCII characters. Depending on the target system, UniCode characters can be output directly, or they may be replaced with HTML special characters.
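A short sketch of the variable-length scheme: ASCII characters stay one byte in UTF-8, while other UniCode characters take two, three, or four:

    for ch in ('A', 'é', '漢', '😀'):
        encoded = ch.encode('utf-8')
        print(ch, len(encoded), encoded.hex())

    # A 1 41
    # é 2 c3a9
    # 漢 3 e6bca2
    # 😀 4 f09f9880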

Collating Sequences: The EBCDIC code for the number 0 (11110000) is actually _larger_ than the EBCDIC code for the letter A (11000001).  This isn't the case in ASCII.  So, mainframes sort letters before numbers while PCs and minicomputers sort numbers before letters.  Issues like this need to be considered and appropriate accommodations made when data are exchanged among environments with different collating sequences.
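The same cp500 codec shows the collating difference in the raw byte values:

    print('0'.encode('ascii')[0], 'A'.encode('ascii')[0])   # 48 65   -- numbers sort before letters
    print('0'.encode('cp500')[0], 'A'.encode('cp500')[0])   # 240 193 -- letters sort before numbers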

Control Characters in data are used to control devices like printers, POS terminals, gas pumps, &c.  They allow a host computer or server to make a bell ring (more likely a beep today), make cash drawers slide open (maybe DC1), move the printhead to the left side (carriage return), move the paper up one line (line feed), or eject a page (form feed) on the printers and other devices attached to it. Just as important, 'control characters' are sent from a printer to a computer to tell the computer to stop transmitting when the buffer is full, or when the lid is up.
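A few of these control characters and their byte values, sketched in Python; DC1 and DC3 are the XON/XOFF pair used for that 'stop transmitting' flow control:

    BEL, CR, LF, FF = b'\x07', b'\x0d', b'\x0a', b'\x0c'   # bell, carriage return, line feed, form feed
    DC1, DC3 = b'\x11', b'\x13'                            # XON (resume sending), XOFF (stop sending)

    print(b'Total: 42' + CR + LF + FF)                     # print a line, then eject the page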

Memory - Primary Storage

Storage, Primary Storage, and RAM (Random Access Memory) are used interchangeably to refer to the circuitry on the bus, near the CPU, that holds data temporarily as the CPU needs it. In '09 we need to be aware that 'Thumb Drives' or 'Memory Sticks' are _not_ this 'Primary Memory', although they can be used to augment primary memory in some cases. Where access to the RAM on a mainboard is nearly instantaneous, access to data on a USB Thumbdrive is tar-pit slow in comparison.

Bits, Bytes, Words

Data in memory are represented using binary digits, 0 & 1, called bits.  Eight bits make a byte.  

Data are exchanged between memory and CPU, with the unit of exchange a 'word'.  'Word size' for tablets and smartphones these days is commonly 4 bytes (32-bit CPU). Since about '09, CPUs of 8 bytes (64 bit) have become standard on notebook and desktop computers. Mid-range and Mainframe-class machines have been able to support 64-bit words since the '80s. In 2012 we're seeing a blur between 'workstation/server class' machines and mainframe/midrange machines. It _used to be_ word size that differentiated these big machines from the workstation/gamer/server-class; now the midrange and larger machines' multiple busses, chassis, and channels are the difference. In the late '90s, _HUGE RAM_ of many TeraBytes was only available on midrange/mainframe computers; now it's available to workstation/gamer/server-class computers, too.

'Word Size' dictates the maximum number that can be quickly referenced by a CPU. 32-bit CPUs can only calculate an integer of '4 giga', about 4 billion. This limits their RAM to 4 GigaBytes. A 64-bit CPU can calculate an integer of '16 exa', or address 16 ExaBytes. No chassis today can accommodate that much RAM! In 2012 a server/gamer machine with an Intel or AMD compatible mainboard can be fitted with something like 8 or 16 TeraBytes of RAM and a midrange or mainframe can hold huge RAMs like 192 or 1024 TeraBytes. Access to data in RAM is hundreds of thousands of times faster than access to data on disk, so machines that can handle large RAMs reliably can handle millions of users.

CPU Word Size Determines Max RAM

In the Old Days of 16-bit CPUs only 64 KiloBytes of RAM could be 'directly accessed'. If more than 64K was required memory was 'paged' -- data was swapped from 'back pages' or 'expansion memory' into the 'main memory' located on the bus with the CPU. This worked OK, but programs ran hundreds or thousands of times slower if paging was required, although they were fast enough to do the job. In modern times we usually have 32- and 64-bit CPUs working for us and they can directly reference 4 GigaBytes and 16 ExaBytes of RAM, respectively, so our computers seldom, if ever, resort to swapping or paging to run our jobs.

CPU word size, and the width of the bus between memory and the CPU, are usually expressed in bits: early PCs (Apple, Commodore, TRS80, &c) had 8-bit words; the first IBM PCs and their successors used Intel's 16-bit 8088/8086 through 80286 CPUs; the 80386, 80486, and Intel's Pentiums got us firmly into 32-bit CPUs. The Itanium, Intel's Core, and other variants of Intel's multi-core 64-bit CPUs got Intel into mid-range applications as fault-tolerant hardware manufacturers like Stratus provided chassis with hot-swappable components, big chassis, _huge_ RAMs, multiple busses, and dedicated I/O processors that in the past were only associated with mid-range and mainframe product lines.

Mainframes and large mid-range computers have used 64-bit processors since the late 1970s, but until recently they could only be located in industrial neighborhoods since they required 3-phase power. Until about Y2K, mainframes were liquid cooled to keep them from melting down, requiring three chillers in the basement per mainframe to provide redundancy. Since the late '90s mainframes provided by IBM and Sun are air cooled and run on ordinary 110 a/c power. Some of their competitors' products are still water-cooled and are value-priced to extend the value of the legacy for the enterprises that deploy them.

Capacity is Expressed in Bytes, Bandwidth in Bits

Here's an opportunity for confusion: Sometimes capacity of a data processing component is expressed in terms of bits: kilobits, megabits, gigabits, terabits, exabits.  Other times it's expressed in terms of bytes: kilobytes, megabytes, gigabytes, terabytes, exabytes. Network bandwidth is usually referenced in terms of bits, where throughput and file capacity are usually referenced as bytes.
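A back-of-the-envelope conversion shows why the distinction matters; the link speed and file size here are just example numbers:

    link_bits_per_sec = 100 * 10**6        # a 100 Megabit/sec link (decimal prefix)
    file_bytes = 700 * 2**20               # a 700 MegaByte file (binary prefix)

    bytes_per_sec = link_bits_per_sec / 8  # 12,500,000 Bytes/sec
    print(file_bytes / bytes_per_sec)      # about 58.7 seconds to move the file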

Sometimes, capacity is expressed in terms that _sound_ like the metric system used everywhere but the US & UK, where a 'Kilo' is exactly 1,000 Grams and a KiloMeter is exactly 1,000 Meters.

In the binary-centric legacy of computing a 'Kilo' is not exactly 1,000, it is 1,024, and a 'Mega' is not exactly a Million, it is 1,024 X 1,024. If you watch a computer with 4 GigaBytes of RAM boot up the count is in decimal, or 4,294,967,296 Bytes.  

This will soon get us into a discussion of binary, octal, decimal, and hexadecimal number systems and ascii & ebcdic codes for data...

For a desktop perspective: A PC or notebook built for Windows XP in the new millennium had 512 MBytes of RAM as a minimum, and running several applications could make use of the full complement of 2 GBytes that would fit on the mainboard. In '09, Vista was pushing us to need 2 GBytes of RAM at a minimum, and we wanted the maximum 4 GBytes that a 32-bit CPU can use if we were running more challenging applications than a browser or email. Vista needs in excess of 1 GByte just to support a good experience at the GUI. In 2014 an IS major wanting to practice virtualization would like to have at least 8 or 16 GigaBytes of RAM.

For the mid-range & mainframe perspective on RAM: These machines might be running a large enterprise with thousands and thousands of processes active and have boards on the bus able to hold _multi-TeraBytes_ of RAM. Server-class machines in 2014 can be provisioned with up to 48 cores and several terabytes of RAM.

RAM Access is Quick! Access to data in RAM is hundreds of thousands of times faster than disk access -- so machines with 64-bit CPUs can 'keep everything in RAM' and do their work hundreds or thousands of times faster than machines limited to 4 GBytes of RAM which must 'keep everything on disk'.

CPU Registers, Cache Memory

Before a CPU can 'operate on data' on a disk or arriving via the network, it must get the data into RAM, close to the CPU, and then into the CPU's 'internal registers'. Machines with huge RAM can eliminate the very slow disk access and service hundreds of thousands, or millions, of users on-line.

RAM is still slow compared to the 'Cache Memory' where the CPU stores recently accessed data in a smaller, faster memory that is even closer to the CPU. Since much of computing is repetitious, this Cacheing scheme can improve application performance. (See more about Cacheing later...)
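The same idea shows up in software as memoization; this little Python analogy keeps recent results in a small, fast lookup so repeated requests skip the slow work (it illustrates the principle, not the CPU's hardware cache):

    from functools import lru_cache

    @lru_cache(maxsize=128)
    def slow_lookup(key):
        return sum(range(1_000_000)) + key   # stand-in for a slow trip to RAM or disk

    slow_lookup(1)                           # computed the slow way
    slow_lookup(1)                           # answered from the cache
    print(slow_lookup.cache_info())          # CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)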

Volatility of RAM

Most primary memory today is 'volatile'. It is lightning fast, but only provides temporary storage for data.  If the power goes off, or the computer 'crashes' for some reason, all work in progress will be lost. Where a person editing a Word document might be inconvenienced by such a loss, an Enterprise depending on a huge RAM to operate might experience a devastating loss if the RAM is lost.

To mitigate this risk, Mid-range & Mainframe manufacturers engineer solutions to provide practically 100% 'up time' for their systems, making all components, even RAM, redundant. They have heavy, redundant power supplies in the chassis as a last line of defense, and UPSs close by to handle momentary failures, sags, or surges. Generators kick in if the power's off for more than several seconds.

To handle huge numbers of users, a mid-range or mainframe system may have several TeraBytes of RAM, or more. Their large chassis and multi-bus architectures, which can span 3 chassis, have space for as much RAM as is needed.

If a component of the RAM fails in these machines, the operating system is able to work around it and continue operations, allowing the system manager to replace the component while the system is running. These huge RAMs are often 'mirrored' so that the system can survive failure of a RAM, and the failed unit can be replaced without taking the system down.

Server-class systems can't do this mirroring or repair on the fly, and failure of a memory component will crash the system. Where 100% uptime is the goal, smaller servers are operated in 'clusters' or 'farms' with parallel processing so that the processes on a failed server can 'fail over' to another server in the cluster.

Non-Volatile RAM

Some types of RAM are 'non-volatile' and will hold the data when the power goes off, but it is more expensive and sluggish relative to ordinary RAM.

In 2016 3D XPoint, pronounced '3D Cross Point', came to the market and promises to develop into non-volatile RAM. Meanwhile it's speeding up SSD and Flash-like storage systems.

'SD Memory', the Secure Digital flash cards in phones and cameras (not to be confused with SDRAM, which is volatile), is the ordinary type of non-volatile memory in 2017. It can pack an entire server or personal OS and file system into a case the size of a large, or small, fingernail. It's not as blazing fast as the RAM in a gamer's AlienWare, but is fast enough to give our smart devices very pleasing response times.

Since about '09 we have seen SSD-Solid State Disk technology become more and more affordable. These are memory devices that mimic the signals of equivalent Hard Disk devices, and may be attached to an IDE, SATA, or other modern mainboard disk interface. They're still slow compared to the RAM adjacent to the CPU, but they operate at speeds a dozen or more times faster than a disk drive. There are other reasons they are desirable: they have no moving or fragile parts and use lots less power, so SSDs are becoming more common in portable devices like notebook computers. SSDs have been available for decades for wearable & other 'rugged' devices, and recently are becoming more affordable for use in desktop and server applications.

ROM, EPROM, Flash Memory

ROM (Read Only Memory, _NOT_ CD-ROM!) is permanent memory & will hold its data without power, but can't be re-written, and the data or program it holds is 'burned into it' using a 'ROM burner' that quite literally burns the data into the ROM by using high current to burn out circuits representing 0 and leaving the 1s intact. These are very cheap and are used to program drink-machines, radios, & other devices where a program is written once and used for the life of the product. In early days of PCs, BIOS was ordinarily on a ROM -- to change the BIOS required swapping the old ROM with a new one.

EPROMs are 'erasable, programmable' ROMs; their successors, EEPROMs and flash memory, can be 'flashed' with new data or 'firmware' as needed without removing them from their socket on a mainboard or other device. This is the way the BIOS is typically stored for today's PCs and 'server class' machines.

We're used to seeing the BIOS or UEFI whiz by as our PCs and notebooks boot. Since the late '80s it's become common to have a 'flashable BIOS' kept on EPROM so the BIOS can be updated, or downdated, without having to swap out the 'BIOS Chip', a ROM, as was previously required.  An EPROM can only be flashed a limited number of times.

There are several other variations on memory, and their description could make a good term-paper topic...

Protecting What's in Memory - Backup Power Systems

Primary memory's volatility must be considered by system managers so that business isn't interrupted or lost by power failures.  Today, it's relatively economical to get past the 'power has to be on or you'll lose it' limitations of RAM by using UPSs (Uninterruptible Power Supplies) that filter out 'spikes' and 'brownouts' to provide 'clean' power to computers and networking equipment, and also kick in instantaneously to provide power during momentary, or longer, failures of the municipal power system. Backup Generators kick in within several seconds or minutes after a longer power failure to provide power for hours, or days, to keep the data center on-line.

Uninterruptable Power Supplies

Twenty years ago a UPS could cost as much as a computer so they were hard to sell.  So, we spent a lot of time repairing data corrupted by momentary power losses in those days, and the cost for systems programmers (we got $65 an hour for this tedious work, would be $100+ today) was justified because the UPSs were expensive.   One 200+ user system I provided in about 1985 had a $30,000 UPS to support $295,000 of minicomputer -- the owner wanted to leave it out, so we made it a condition of sale, knowing that the alternative would have us spending hours patching data every time the power flickered. In a neighborhood with good electrical power, the machine it powered eventually ran without interruption for seven years.

A UPS for a large machine, and the other equipment in the computer room, is still a costly item but it's not wise to run a mission critical computer without UPSs -- they are very inexpensive today relative to the loss of work, or fried equipment, that is inevitable without one.   I've got a $400, 1.5 KVA, UPS in my office at school that will support the five or six machines there for about forty minutes.  A similar one at home supports one machine for a few hours.  Cheap UPSs, starting at $75, will protect a PC and keep it running for several minutes. Big 20 KVA or larger UPSs can keep an entire office or network rack going for at least several minutes, costing several thousand dollars.
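A rough way to size a UPS, sketched with assumed numbers (the power factor, load, and battery energy below are illustrative; real runtime curves come from the manufacturer):

    ups_va = 1500                  # the 1.5 KVA unit mentioned above
    power_factor = 0.6             # assumed
    load_watts = 500               # assumed combined draw of several machines
    battery_watt_hours = 350       # assumed usable battery energy

    print(ups_va * power_factor)                  # 900.0 watts -- don't load the UPS past this
    print(battery_watt_hours / load_watts * 60)   # 42.0 -- rough runtime in minutes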

Most power failures are momentary, and the UPS is the first line of defense against momentary failures that can destroy the contents of volatile RAM. Many incidents where networking equipment is 'fried' happen when power is restored to the failed circuits, and the UPS acts like a powerful 'surge suppressor' whether the power failure was momentary or longer-term. Neither UPSs nor Surge Suppressors last indefinitely -- depending on circumstances they may last less than a year -- so they need to be tested regularly and the unit, or its batteries, replaced when the test fails. The batteries in a UPS might last a couple years or longer in a place with stable, 'clean' power, or as little as a few months in a place with 'dirty power'.

A 'racking system', 'power management system', or mid-range or larger computer can provide means to connect several UPS systems so that they can be easily swapped into & out of service as their batteries wear out every year, or two, or three.

Power Conditioning

A 'Surge Suppressor' or 'power strip' is the home or SOHO solution for 'power conditioning' and can protect a computer and some networking equipment against 'dirty power' or spikes. They can only absorb so many joules of energy before they fail to suppress surges. Always buy 'power strips' with beefy surge suppression and a light indicating whether it's protecting or not. Or, buy surge suppressors that 'fail safe' when they burn out. Cheap surge suppressors can get fried in a surge and become ineffective; then the next surge (very likely!) seconds or moments later will fry the attached equipment. The MOVs-Metal Oxide Varistors used in inexpensive surge suppressors can become useless quickly on circuits with lots of small 'transient voltage surges', or instantly if they absorb a big 'spike' or surge. Other technologies, like 'Silicon Avalanche Diodes', are more expensive but provide even better protection where it is needed.

UPSs can also act as power conditioning systems, but power conditioners may be deployed as separate equipment in larger data centers. In many areas the municipal power is not very 'clean', can be subject to spikes, brownouts, or other problems that will shorten the life of electronic components. Power conditioning equipment can provide clean, steady power and help protect networking and computing hardware from damage.

The instructor has seen practically _all_ the computers and networking equipment get 'fried' during an episode where a nearby municipal switching station malfunctioned and sent repeated 'spikes' to businesses in several blocks of Broad Street. The spikes were so bad that they blew out incandescent and fluorescent lights and ruined most of the dozens of PCs that were on at the time. My customer, who let their network grow without thought to power conditioning, lost about $50K of equipment. Their well-protected neighbor next door, with a large power-conditioning system handling several 200-amp panels, lost nothing.

Backup Generators

Mission-critical applications require at least 'two levels of backup power'.  The big, battery-powered UPS and other 'power conditioning' components provide continuous power during 'sags and brownouts' in the municipal power grid, or during a power-outage, for a few or several minutes -- they kick in within less than 1/60th of a second so the equipment they power is not affected by the fluctuation in power. Within a few or several seconds after loss of municipal power, a backup generator kicks in, takes over for the UPSs, and can keep the computer equipment running indefinitely as long as there is a fuel supply.

Backup generators in an area with a gas utility are usually fueled by natural gas. A generator for a small server farm or mainframe may consume a couple of gallons of fuel per hour. Gas in the form of propane is a relatively stable fuel that can be kept in pressurized tanks indefinitely without 'going bad'. The backup generators for larger data centers, enterprise and ISP, are likely to be diesel, can be very large engines, and are very smelly, to the consternation of neighbors!

Liquid fuels, diesel and gasoline, cannot be stored indefinitely in a tank so there must be an active program of consumption and 'rotation' of fresh fuel into the tanks. Gasoline quickly turns to varnish when not used, and diesel fuel grows great sponges of black bacteria if it's not used. This makes 'rotation of fuel supply' necessary. Sometimes, the diesel tanks leaving these sites are full. Imagine the expense of tending the tank-farm needed to power a server-farm for an outfit like Google, FaceBook or even Capital One!

An example of a natural gas-fired generator sits at the southwest corner of the old Business Building, now Harris Hall, catty-cornered to the 7-11. This is to support one of our great Commonwealth's backup sites in the old computer room on the 4th floor. At a class meeting in the summer of '12, we experienced the excitement of a lightning strike and power disruption during the INFO300 meeting while _this very topic_ was being discussed! There was a hurricane warning in effect, and in the several seconds after the lights went out we mistook the sound of the generator firing off on the other side of the brick wall for the pounding of a tornado, and almost did the 'duck and cover!' After a few beats, we recognized engine sounds, not storm; the building's power came back within a couple of minutes, and the generator ran for several minutes after the lights came back on, then went quiet.

At Snead Hall, the generators are below the steel grates on the southern side of the building where they suck in their air, with exhaust to the roof through the offices on the north side of the hallways. (Their walls had to be torn out to fix a shoddy installation after the occupants complained of diesel fumes.) Our network room on the 4th floor is protected by these generators and has two 20 KVA UPSs; their batteries failed and were replaced in the summer of about 2010.

Hydrogen Fuel Cells by a company like Ballard are now an option. In the '90s Coleman Outdoor Products prototyped and brought to production a hydrogen-powered generator that used heavy, explosion-resistant canisters that would run the fuel cell for an 8-hour shift -- two were mounted in the chassis so they could be swapped without shutting down the power. A big filing cabinet filled with the cells could keep the thing running for a week! The Hindenburg and other images of hydrogen explosions made this a hard sell, and Coleman sold their tech to Ballard. Hydrogen is becoming a reasonable alternative for an enterprise willing to tank hydrogen on site.

'Fault-tolerant' machines often have several UPSs and very heavy power supplies that can survive a few minutes without power, and with regular monitoring and maintenance these chassis can be operated for years without failure. For server or blade farms, racking power systems can provide this extra line of defense, before the 'big UPS' and generator...

Almost all of the mid-range and mainframe systems that I have worked on over the decades have run for their entire lives without ever having an unplanned shutdown, and scheduled downtime for upgrades and maintenance was almost unheard of. A Sequoia I pounded on regularly for years was only rebooted once in its 11 years of service and that was after 7 1/2 years!  This reliability gives these beasts great inertia in the legacy. Enterprises with mainframes and mid-range machines are likely to keep them and spin their value into today's web.  

It appears that Windows Servers may someday be to the point where they can make this claim, of keeping _a machine_ running for a decade, maybe in a couple or a few years after 2013 as Intel-based mainboards take on characteristics of mid-range and mainframes besides 64-bit CPUs. A Windows Server needs to be rebooted regularly to apply patches for security, performance, and reliability, and because some of them 'leak memory' until contiguous RAM cannot be found to support the next process. Windows, adapting to the reality that their hardware must be rebooted at least several times a year, has deployed a successful strategy for making _clusters_ of machines run reliably, on a global scale, mostly using the GUI most of us find convenient and helpful. Each Windows Server, or virtual instance, is able to prevent 'the blue screen of death' as its applications leak memory, removing the machine gracefully from its cluster before it locks up and dies, rebooting itself, and returning itself to service in the cluster after the reboot.

Backup Power Does Not Replace Backing Up Data!

The text doesn't get into the issues of 'backing up', or keeping copies of data so a system can be restored if there is some 'disaster' that destroys the hardware where it resides. So, I should mention 'disaster avoidance and recovery' strategies here, and introduce the ideas of: encrypted backups, tapes carried off-site daily or multiple generations of backups sent via VPN or The Internet to remote backup sites, transaction logging to a remote device, warm & hot sites, and schemes that involve replicated facilities that are widely separated geographically and connected by a fast network to provide uninterrupted nation-wide or world-wide system access in the wake of a building fire or collapse, or some regional disaster like a tornado or an ice storm.


G Saunders,
Dept of Information Systems
VCU School of Business

Content © 1999 - Today
By G Saunders
Images are Available on the Web