The text packs high-level concepts into chapter 1. It sets the focus of the text, and of much of the course, on Operating Systems.
Figure 1.1 might be a little incomplete. Here's a diagram that shows a System or Network Administrator interfacing directly with the OS, and sometimes directly with some of the hardware. Users of servers are typically isolated from the OS by the application software:
For IT professionals, knowledge and understanding of OSs is important if their organizations are to maintain a competitive edge and keep their computing resources safe, secure, and economical. Choice of 'platform' (hardware & OS combination) influences strategy, facilitates some options, and limits others. Choice of 'operating environments' is one of the most important strategic decisions an enterprise has to make.
Operating Systems are built for, or adapted to, each CPU they operate. The combination of CPU and OS is called a platform. Common platforms are Windows on Intel ('WinTel'), Linux on Intel, iOS on iPhone, iPad, or iPod, HP/UX on PA-RISC or Itanium, Android on ARM, i5 on Power...
Some application software, where performance is an issue, is distributed for a particular platform and won't run on another. We're familiar with this 'platform dependency' in the different apps available for Mac vs. Windows, or for iPhone vs. Android in each of their stores, but it also applies to servers and personal computers.
Some OSs are built for more than one type of CPU: Apple's Mac OS X launched on RISC Freescale/IBM Power CPUs, then switched to CISC Intel. The Linux kernel was developed on Intel and has been optimized for Acorn, ARM, Power, zSeries, Intel, Sparc, MIPS, and others. IBM has embraced the Open Source community, and both Red Hat Enterprise Linux and SUSE run the Linux kernel 'native' on IBM's p & z Series CPUs, which power their mid-range & mainframe product lines. In the Summer of 2015 IBM announced a liaison with Ubuntu to run on their zSeries mainframe/super-server -- one zSeries can handle the workload of a farm of 2,500 server-class machines.
Applications and databases written for one platform may or may not be 'portable' to another. We're familiar with this where some Mac or iOS applications won't run on Windows or Android. This is inconvenient for personal devices, but can be devastating for a business that has picked an application environment that will not 'scale up' when it's needed.
The best advice for any organization is to Expect Exponential Growth and buy into an application and platform with a proven track record for 'scalability'. It can be fatal or extremely uncomfortable for a business to be faced with an expensive change of software and procedures while they are growing by leaps and bounds.
Since the great shake-out of computer manufacturers in the mid-'80s, two main CPU architectures predominate: CISC & RISC. There are other, proprietary, CPU schemes, but CISC and RISC account for close to 100% of personal and business computers.
The acronyms stand for Complex Instruction Set Computers and Reduced Instruction Set Computers. Here are some differences:
Most CISC chips are engineered and manufactured by Intel, AMD, and a few others who are licensed to make 'Intel CPUs' with special characteristics. SGI's legacy CPUs were not x86 compatible and carried a unique instruction set for a 'geometry engine' that can manipulate multi-dimensional matrices and perform other complex operations needed to render exquisitely textured 3D animations or crunch multi-dimensional data structures for factor or Fourier analysis and other statistics. SGI's IRIX platform was the animation engine behind Star Wars, Hobbits, Harry Potter, Finding Nemo, and scores of other FX by ILM and other studios.
RISC means 'Motorola' to most oldsters, since most RISC chips derive from the venerable Motorola 68000 product line. Motorola no longer builds these chips, but passed the baton and tooling to Freescale to build the low-end CPUs for embedded and other low-powered applications. IBM took over the manufacture of the high-end RISC chips for gaming, personal computers, and their own mid-range and super-computer lines. In 2015 IBM passed the line off to the NY-based GlobalFoundries. Freescale, Qualcomm, GlobalFoundries, TI, and several other manufacturers supply the world's growing demand for low-end RISC chips for embedded computers and the more powerful 32 & 64-bit CPUs that run our tablets and smartphones.
Where CISC chips are proprietary and involve 'trade secrets' of companies like Intel or AMD, the RISC design is 'open' for anybody to see, and the design is advanced by a 'consortium' of interested companies who manufacture RISC chips: IBM, TI, ARM, Acorn, Qualcomm, and maybe Google or Foxconn and any other outfit interested in and able to do RISC technology. It's easy to ramp up to the latest technology with RISC; CISC is more difficult. 24 cores is no big deal in either CISC or RISC. For most purposes today, either CISC or RISC can deliver excellent performance for most any application.
There is no quick answer to 'which is best, RISC or CISC?'. CPUs are so fast these days that either architecture can work fine for many purposes. In 2010 it was easy to point and say things like 'RISC chips use less power and generate less heat', but recent developments by Intel and their licensees have produced low-powered CISC chips that run cool.
When CPUs were running at KiloHertz or MegaHertz speeds it was easy to demonstrate which architecture was best for which kinds of tasks. The erratic 'wait state' of CISC could be evidence of wasted CPU time, and RISC won on networking, database, and other tasks important for servers. CISC often appeared better for graphics and the tasks important for desktop and notebook computers. Now that CPUs are running at GigaHertz speeds and are often deployed in multi-cores with 'pipelining', it's become harder to prove which is better.
The market, for the time-being, likes CISC on desktops, notebooks, gamer's machines, and smaller servers. RISC rules embedded, mobile, tablets, and other personal devices, larger servers, mid-range, mainframe, and super computers. In 2015 Intel got a little of the mobile/tablet market with their inexpensive Atom CPU, and we're seeing them run Android and ChromeOS just fine.
So, the 'which is best' question is usually answered in terms like 'the cheapest', 'the longest battery life', or 'the fastest way to get this application to market'.
Since RISC is a fairly 'open' technology it's likely to be less expensive unless Intel/AMD hankers for this market. Since Intel and AMD are competitive, they're likely to match the price in some later product, maybe the Atom or Curie?
IBM's new Synapse technology may be poised to become the next CPU architecture. They have more experience than anybody else with AI-Artificial Intelligence and will be ready to exploit this new kind of CPU all the way to 'the singularity' if the odds are with them. Quantum Computing is another technology altogether. Whether these will be a variation on the 4th Generation, or a whole 'nother lineage of machines, remains to be seen.
There are fewer urgent reasons to 'jump platforms' these days. In the 1980s and '90s, when computer manufacturers were going out of business and stranding customers on defunct platforms, there were lots of companies looking to move their applications and data to another, viable platform. Or, with newer and faster server-class computers that cost a small fraction of the older mid-range computer, it made economic sense to replace an old machine the size of a refrigerator with a smaller server. Sometimes, some kind of unix would come to the rescue and a software house or consultant could 'rehost' application software, especially if the company owned the source code.
Rehosting applications and databases in a unix environment could be facilitated because there are standards-compliant flavors of unix for both RISC and CISC platforms, and they share several programming languages, so many application systems can be 'ported' or 'rehosted' from RISC to CISC or the other way around. In many cases the data and programs could be converted from RISC's big-endian to CISC's little-endian using a *ix process similar to fnuxi. Then the source code could be recompiled to produce the app's binary object code for the CPU on the new platform.
These days, moving data between RISC and CISC can be very easy since the current generation of RISC chips allows the endian-ness to be selected as Little, making them easier to share data with the CISC-based Windows or Mac platform or wireless printer nearby or anywhere else.
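Byte order is easy to see from a script. Here's a minimal Python sketch (the sample value and struct format codes are just illustrations) showing the same 32-bit integer stored big-endian, as a traditional RISC host would store it, and little-endian, as x86 does, and how a byte-swap converts one to the other:

```python
import struct

value = 0x12345678
big    = struct.pack(">I", value)   # big-endian, RISC-style byte order
little = struct.pack("<I", value)   # little-endian, as x86 stores it

print(big.hex())      # 12345678
print(little.hex())   # 78563412

# byte-swapping recovers the same value on the other platform
assert struct.unpack("<I", big[::-1])[0] == value
```

Bulk conversion utilities do essentially this, swapping the bytes of each word as files move between platforms.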
If an application has been written in a very 'generic' way, not using any special features of a particular CPU's instruction set or operating system then it is very likely that the application's code can be moved very easily among any of these CPUs, making it very easy to deploy on another platform.
Microsoft is writing for RISC again! Microsoft NT Server _used_ to run on MIPS, Sparc, and Intel. Through Windows 7 and Server 2008 it was entirely Intel. With Windows 8, Micro$oft is deployable on CISC _and_ RISC for its inexpensive 'RT' line of tablets, so it will be an important player on the RISC hardware used for smartphones, tablets, pads, pods, and other devices that need to drain batteries slowly. 2012 was an exciting year, seeing Google and MS face off on the same platform with similar applications. Windows has been slow to gain marketshare in the 'phablet' market, in part due to their insistence on collecting about $79 from a manufacturer to run Windoze on a phone or tablet -- where Google is delighted to see Android running for free. In 2014, Microsoft stopped charging for the OS on devices that retail for a low price, and their share of this market is creeping up...
The text briefly introduces this topic with a couple of block diagrams that emphasize the fit between OS and hardware, but it is an important practical matter to understand 'what is a platform?' since it's going to affect strategy...
This topic is expanded below, where the platforms you're likely to encounter are shown in some detail...
For several decades new 'layers' have been added to the traditional OS cake that make applications less 'platform dependent'. The layer is sometimes called 'middleware' -- it goes between an application and an OS.
Java is the best-known example of middleware/virtual machine because so many of us use it:
Java has been adapted to almost every CPU known to man, from drink machines, self-checkout, and DVD players, through teachers' gradebooks and business applications that run on Mac, PC, or Linux. Java NetBeans uses the JRE for free; Enterprise NetBeans is not free but can represent good ROI.
In the best cases Java applications can be 'developed once, deployed anywhere'.
Unless the features of a virtual machine are used to best advantage, the VM is slower than a real machine would be. IMHO, some developers don't learn all the rich stuff in the JRE or the .NET Framework and don't take advantage of it, instead reinventing stuff, and their applications run very slowly.
If Java or .NET were the fastest we'd be gaming with Java and .NET.
Java's critics, maybe customers of less-talented Java developers, like to say 'develop once, sucks bandwidth everywhere'. It's best suited for relatively lightweight apps for business or personal use, or embedded in the appliances it was built to operate. Java server applications run better on Sun/Oracle's Solaris running on Sparc than on any other platform.
(IMHO none of the virtual machines allows true 'develop once, deploy everywhere' yet, and developers need to test on every potential target device, virtual and real, so mobile developers need drawers full of phones and tablets...)
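The 'byte code for a virtual machine' idea isn't unique to Java and .NET -- Python works the same way, and its standard dis module will show the byte code its VM actually runs. A quick sketch:

```python
import dis
import io

def add(a, b):
    return a + b

# disassemble the function's compiled byte code into a readable listing
listing = io.StringIO()
dis.dis(add, file=listing)
print(listing.getvalue())   # opcodes like LOAD_FAST, destined for Python's VM
```

The source compiles once to byte code, and any machine with the interpreter (the VM) can run it -- the same portability bargain Java and .NET make.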
Microsoft's virtual machine is the .NET Framework. It was very well thought out and in its 3rd revision is stable and quick. It knows how to do practically anything needed for a Windows desktop environment, browser, or web service (used for B2B exchanges) and does it very quickly. The Visual Studio IDE produces 'byte code' DLLs-Dynamic Link Libraries for .NET. Microsoft distributes most Windows OS components as DLLs and uses EXEcutables for stuff that needs to be fast. Windows applications built with Visual Studio produce DLLs with 'byte code' which may be obfuscated, making it more difficult for a customer or competitor to 'reverse compile' the application to get source code not included with their software distro.
The .NET Framework was added to Microsoft operating systems starting with XP and Server 2003. Compared to Java on Windows, .NET has a definite home court advantage and has run crisply since it was released on the market. The .NET Framework is comprehensive, and includes essentials for web services, web pages, and GUIs. .NET sits between the application and OS to run Microsoft's CLR-Common Language Runtime byte code. Applications developed using the languages of Micro$oft's Visual Studio compile to this 'CLR'. The CLR will run anywhere the .NET Framework runs, which is Windows and, surprisingly, Linux and practically any portable device. Apps coded in a very 'generic' way may run cross-platform with no or very little tweaking.
Lubricated by this CLR, the .NET framework has been tweaked to work with Linux servers, iOS, Android, OSX, and even mainframe-class hardware. (So something like this joke from the late '90s is actually happening...)
The Mono Project has been afoot for a decade or more and has gained some traction. A VCU IS grad, Mr Wickes, has been with it in Seattle for almost that long.
History: Although the .NET Framework and other virtualization schemes like VMware, VirtualBox, and Windows Hyper-V get lots of attention these days, IBM was there first with a virtual machine. As the next-generation IBM370 mainframes arrived in the '70s, most included a VM-Virtual Machine environment that ran everything written for the prior generation, IBM360, OS. This allowed customers to operate their legacy of 'batch processed' card and tape oriented record-keeping and accounting systems as they developed or purchased applications that used the 'interactive terminal facilities' and TSO-Time Sharing Options introduced with the IBM370s, pre-dating PCs by a decade.
IBM has maintained backward compatibility as their mainframes evolved from the '60s through today. IBM 3080, 3090, and the new zOS all include VM, so customers' application environments have not been obsoleted.
IBM's mid-range AS400 and the new i5 have also used a virtual machine, the SLIC-System Licensed Internal Code, for decades. This allows IBM to swap CPUs and make other engineering changes as they maintain the state of the art without their customers' valuable software legacy being obsoleted.
Some of IBM's customers operate legacy mainframe applications on competitors' equipment. IBM provides OS software to customers who run on mainframes built by Hitachi, Siemens, Amdahl, and other manufacturers. IBM's VM architecture makes it very easy for them to adapt their venerable OS to other hardware when their customers demand it.
VCU was an early adopter of the IBM370 and operated the largest mainframe in the great Commonwealth of Virginia on the 4th floor of what is now Harris Hall, serving both the academic and medical campuses. VCU was among the first to deploy Memorex disk drives on an IBM mainframe, and also to jump platforms from IBM to Amdahl/OS3080 when the computer center was moved downtown to the Pocahontas Building and much of the quarter-acre raised-floor computer room was repurposed as classrooms. VCU returned to an IBM hardware platform after several years on the Amdahl, and retired a zSeries mainframe from service at MCV and VCU in about 2010.
IBM never obsoletes its customers' code and its enterprise and government customers own the largest systems on Earth. Some of this legacy code was written to run indefinitely and remains perfect for the application at hand. In many enterprises, legacy applications originally deployed in the '70s have been 'web-faced' and are thoroughly modern.
Operating systems have 'traditionally' provided five main functions, as in the text's Ch 1:
Prior to the '90s, the purchase of a separate 'network operating system' was required if a company wanted to network their computers. Since the '90s, as systems became more and more 'client/server' oriented and were used by more than one person, practically every OS likely to be used in a home or business has provided two more functions:
Here's a discussion of these OS functions. They will be demonstrated during class:
The User Interface, often called a 'shell', is what we see and use. Windows provides a GUI shell that handles most OS functions for ordinary users. It also provides a Windows Command Line (no more DOS since XP) Window where a 'command line interface' is available for less-than-ordinary tasks by owners or network managers.
A Linux or Unix user may have a choice of GUI by using one of the popular XWindows interfaces like Gnome, KDE, or Unity maybe enhanced by Compiz Fusion or other eye-candy for desktop functions. But, many system and network management functions require use of the 'command line' or 'character based' interfaces. Some servers, routers, and other industrial-strength OSs don't provide a GUI at all.
We'll cover a few OS/shell combinations in later weeks. The shell allows a user to control the other four components.
OSs also provide other, non-human, interfaces: the API-Application Program Interface provides a way for programs to interface with the OS, allowing a script to direct the OS to do something or query it for data like file sizes or the current system date/time. In OSs that support web servers there is the CGI-Common Gateway Interface, which makes it easy to get data from browsers' or web services' GET & POST data and Cookies.
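For example, here's a short Python script that uses the OS's API (through Python's os module) to query file sizes and the system clock; what it prints depends on the directory it's run from:

```python
import datetime
import os

# ask the OS, through its API, for the size of each file in this directory
sizes = {name: os.path.getsize(name)
         for name in os.listdir(".") if os.path.isfile(name)}
for name, size in sorted(sizes.items()):
    print(name, size, "bytes")

# and for the current system date/time
print("system time:", datetime.datetime.now().isoformat())
```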
On a PC, Device Management components control access to: peripherals like the keyboard, mouse, monitor, printers, speakers, &c; Secondary Storage devices like disk, CD, or tape; network devices; and anything else that is put into a slot or plugged into a USB port -- usually the same CPU (or CPUs) that run the OS and programs manage devices. On a larger machine, mid-range or mainframe, specialized CPUs might be dedicated to managing devices so the CPUs that run programs aren't interrupted for device management tasks.
Spoolers: Printers and other 'asynchronous' processes like email generally have a 'spooler' so that jobs don't get mangled together when a network full of people are printing and handling email.
When a print job starts, the output is first placed in 'spool files' on disk, then doled out to the printers using rules set by a network manager. When the job has printed, the spool file is deleted. 'Runaway spoolers' are perhaps the biggest cause of system crashes, so careful placement of spool files is critical to prevent spooled print jobs from consuming all the space on a disk...
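A spooler can be sketched in a few lines of Python -- jobs land in a spool directory, then a despooler doles them out one at a time and deletes each spool file once its job has 'printed'. (The directory name and job texts here are made up for the demo.)

```python
import pathlib
import tempfile

spool_dir = pathlib.Path(tempfile.mkdtemp(prefix="spool_"))

# each job's output is first written to its own spool file on disk...
for n, text in enumerate(["Hello from job 0", "Hello from job 1"]):
    (spool_dir / f"job{n}.spl").write_text(text)

# ...then the despooler sends the jobs along one at a time
printed = []
for job in sorted(spool_dir.glob("*.spl")):
    printed.append(job.read_text())   # stand-in for sending to the printer
    job.unlink()                      # spool file deleted after printing

print(printed)   # ['Hello from job 0', 'Hello from job 1']
```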
File Management functions let users, or application software, do the basic 'file functions', 'CRUD', with files: create, read, update, delete, + copy. The OS generally handles entire files, locating them by name in a hierarchical directory path (hierarchy commonly denoted with \ in Windows, / in *ix, or : in early Mac).
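Those file functions look like this from a Python script (the filenames are arbitrary; the OS does the actual work through its file management calls):

```python
import pathlib
import shutil
import tempfile

d = pathlib.Path(tempfile.mkdtemp())
f = d / "notes.txt"

f.write_text("created\n")          # Create
print(f.read_text())               # Read
with f.open("a") as fh:            # Update (append a line)
    fh.write("updated\n")
backup = d / "notes.bak"
shutil.copy(f, backup)             # Copy
f.unlink()                         # Delete
print(backup.read_text())          # the copy survives: created + updated
```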
Windows and Linux both provide features for searching file systems using 'pattern matching' or other techniques, so that we can find most anything if we can recollect bits or snippets of the content.
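Here's a sketch of the same idea in Python: walk a directory hierarchy and match filenames against a shell-style pattern (the little directory tree is created just for the demo):

```python
import fnmatch
import os
import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
(root / "docs").mkdir()
(root / "docs" / "budget_2015.xls").touch()
(root / "readme.txt").touch()

# walk the whole tree, matching names against a pattern
hits = []
for dirpath, dirnames, filenames in os.walk(root):
    for name in fnmatch.filter(filenames, "*2015*"):
        hits.append(os.path.join(dirpath, name))

print(hits)   # finds the budget spreadsheet wherever it sits in the hierarchy
```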
To get at the data _inside_ the files usually takes some kind of application software, sometimes provided by the OS and other times purchased from some developer of software for a platform. For example, OS components called 'editors' (like vi in unix or Notepad in Windows) allow users to modify the contents of ASCII, 'plain text', files. But other, 'non-OS provided' editors provide more functionality for the non-casual user, especially a programmer or administrator, so we find data processing pros choosing editors like EMACS, Midnight Commander, Crimson, Visual Studio, DreamWeaver MX, or FrontPage (aggh!) to provide extra power for the particular tasks at hand.
The contents of files containing other data for images, spreadsheets, or Word documents are maintained by applications like GIMP, Excel, or Word.
Memory Management is concerned with the system's Primary Storage, RAM and Cache, located nearer and farther from the CPUs. Many operating systems provide Virtual Memory to swap the contents of RAM to disk when there is not enough contiguous space in RAM to handle demands from active processes. DMA-Direct Memory Access channels built into a computer's chipsets move most data without burdening the CPU.
Memory management includes 'Garbage Collection' to return memory no longer needed by retiring processes to the pool for new processes to use. Garbage collection routines attempt to regain large, contiguous blocks of space and keep them available for the next process that is launched. 'Memory Leakage' is rampant in some application environments, especially Windows IMHO, and could result in the 'blue screen of death' that was so familiar to NT administrators if they failed to reboot their servers often enough. Later versions of Windows, personal and server, handle memory leakage more gracefully but haven't entirely ridded the environment of the problem. AIX and other UNIXes, especially the non-stop versions, may run their entire service life without leaking memory.
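Garbage collection is visible from inside managed languages, too. This Python sketch builds a 'reference cycle' -- two objects pointing at each other, which simple reference counting can never free -- and shows the garbage collector reclaiming them:

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

gc.disable()                 # pause automatic collection for the demo
a, b = Node(), Node()
a.ref, b.ref = b, a          # a reference cycle: a -> b -> a
del a, b                     # unreachable now, but refcounts never hit zero
collected = gc.collect()     # the collector finds and frees the cycle
gc.enable()
print(collected, "objects collected")
```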
Process Management schemes in personal devices and larger computers are almost all 'multiprocessing' these days. Flip phones and other primitive personal devices only run one process at a time. Everything else runs at least several concurrent processes, with mid-range and mainframe computers handling millions at a time.
Using ps (process status) in *ix or the Task Manager (ctrl-alt-delete) in Windows shows a list of processes that are running on your computer. Android and iOS can also show us the processes that are running, maybe using up the battery? An operating system's process management functions more-or-less equitably divvy up the limited resource of CPU Time.
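On Linux, the process list that ps shows is just the numbered directories under /proc, so a script can read it directly (this sketch assumes a Linux-style /proc filesystem):

```python
import os

def list_pids():
    """Return the PIDs of all running processes, read from /proc (Linux)."""
    return sorted(int(entry) for entry in os.listdir("/proc")
                  if entry.isdigit())

pids = list_pids()
print(len(pids), "processes; this script is PID", os.getpid())
```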
In most GUI desktop systems (Windows, Mac, or Linux) there are usually a couple dozen or a few dozen 'processes' vying for compute cycles. Even with a single 32-bit CPU, space in RAM & Cache and bandwidth on a disk or network controller have been more than adequate to keep a person happy. These systems have provided very 'crisp' response since processors reached 400-800 MegaHz, when an active GUI presentation & mouse-event capturing took about 80% of the CPU's power. Today's 64-bit, multi-core processors use only a few percent of their bandwidth to handle our GUIs and make our applications run faster than ever.
The busy 'host computer' or server that is running an enterprise may have hundreds or thousands of users entering and using data processed by the applications it runs. Each user may be running one or more processes as they do their work. The mid-range or mainframe computer has more resources and more 'channels' & 'dedicated processors' to manage them. For example, a 32-bit desktop PC could manage 4 GigaBytes of RAM, and this was OK for a gamer or engineer running a 'compute intensive' task, or a small server handling a few dozen users, where 'disk swapping' was minimal.
But a computer with 64-bit technology is able to reference a TeraByte or more of RAM and avoid much disk access altogether, working directly from huge RAM-drives. (IBM has been providing dual-core, 64-bit processors in the Power line since the '90s and 8-core Cell technology since the mid-2000s. Intel, AMD, and others got there about 2006 and they're headed to the desktops.)
Larger mid-range and mainframe machines can handle several or lots of TeraBytes of RAM and juggle millions of processes among dozens of CPUs to satisfy hundreds of thousands or millions of users' processes.
Programming for multiple CPUs can be very difficult. Luckily, technology has mostly automated the job so that most application developers don't have to worry about the complexity of multi-programming to take advantage of the multi-processor environment. They write the code and the OS figures out how best to deploy it on however many CPUs are available.
Along with referencing relatively huge amounts of memory and cache with their 64-bit CPU words, modern workstations, gaming machines, mini-computers, and mainframes may have two or more CPUs/Cores working in parallel using 'SMP' (Symmetrical Multi Processing). This allows 'multi-threading' techniques of modern OSs & programming languages to be used so that an application's processes can be programmed to run concurrently, when appropriate, instead of in sequence as is required when only one CPU is available. SMP components on the CPUs, mainboards, and operating systems make all this happen automatically. Languages like C++, Java, and VS.NET let programmers write code to take better advantage of multi-CPUs if needed.
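From an application programmer's point of view, spreading work across CPUs can be as simple as this Python sketch -- the multiprocessing Pool starts one worker process per core by default, and the OS's SMP scheduler places the workers on whatever CPUs are free:

```python
from multiprocessing import Pool

def square(n):
    return n * n

def parallel_squares(numbers):
    # the Pool forks worker processes and divides the list among them;
    # the OS schedules the workers on the available cores
    with Pool() as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    print(parallel_squares(range(8)))   # [0, 1, 4, 9, 16, 25, 36, 49]
```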
An OS that supports SMP automatically divvies up tasks for the multiple CPUs without any of the programmers' concern. These machines & OSs can service thousands of time-shared users' keystrokes & requests for database access so efficiently that they all get sub-second response times. Of course, any computer can be 'overloaded', and users of inadequately-sized systems decorated their cubicles with a picture of a skeleton sitting in front of a computer terminal over the caption 'How is the response time?'.
As an historical note, the Motorola 68000 line included features for communication with a 'supervisory processor' on a board or a backplane that made them more suitable for deployment in a multi-CPU environment years before Intel processors. When Intel's 80486 processors came out with this feature, manufacturers of 'highly-available' or 'fault-tolerant' hardware platforms that had used RISC CPUs for a decade, like Stratus, were able to use whichever CPU gave the best bang/buck performance in the season the machine was delivered. This allowed them to deploy multiple unix or Windows servers in one fault-tolerant chassis.
Where we oldsters used to say that RISC processors were better suited for multi-CPU deployment of enterprise applications, there are lots of systems with CISC CPUs that do it well, too. ARM-64 technology may push the server market more firmly toward RISC again, with both Google and Facebook deploying RISC in their data centers. SoC-Systems on Chips built with ARM-64 technology include Ethernet Fabric processors on the same die with the CPU and are ideal for hyper-converged systems using spine and leaf architecture for their storage-area networks.
We used to see, through the '90s, that the 'point of diminishing returns' was reached at something less than six or eight CPUs in a SMP scheme. But now that number is higher, with server-class machines handling a couple dozen cores per socket and mid-range computers making effective use of hundreds of CPUs for SMP.
IBM provides the ultimate in process control, 'Capacity on Demand', where systems are shipped with additional CPUs to be used, and paid for, only during peak seasons and turned off at other times -- since many of their customers are in retail and 'mail order' they need more CPUs in the lead up to the Holidays in late Fall and the returns season than the other 10 months of the year.
The text discusses the same Five OS Functions the instructor's been discussing since the '80s. Since networks and The Internet came on the scene personal and server operating systems have included two more critical functions:
AAA: Authenticating users, Authorizing their access to system resources, and Accounting for their activity are parts of modern OSs like Unix, Linux, or Windows NT or 2003 Server. Even a 'desktop OS' like XP or an OS for a portable device is likely to include methods for someone with administrator privileges on the machine to set up profiles for individuals who will be using the machine. DOS and early Apple OSs, on the other hand, had no way of identifying individual users, and made everything on a machine more-or-less available to whoever flipped the power on.
A UserId and Password combination is involved in most methods for authenticating users as they 'log on' to a server or host machine. Magnetic stripes, RF chips, or biometric devices are also involved in some systems.
Modern 'multifactor' authentication schemes involve more than one factor to authenticate persons: something you know (a password or PIN), something you have (a card, token, or phone), and something you are (a fingerprint or other biometric).
Other schemes for authenticating users as they connect from one node on a network to another involve keeping a 'private key' in the user's home directory on their 'home machine' and storing a 'public key' in their home directory on machines they visit. This also works to support non-repudiation in eBusiness, where part of the setup for an EDI trading partnership includes exchange of public keys. The PKI-Public Key Infrastructure provides a very secure authentication scheme.
You're welcome to set up rsa keys at info300.net so you can log in from a trusted device without keying the password.
In organizations where there may be a large number of servers it would be extremely inconvenient for a user to be challenged for a password by each of the servers/domain controllers that provides services. In these cases, a 'super domain' scheme like Kerberos, LDAP-Lightweight Directory Access Protocol, or Windows Active Directory is used to authenticate users once and then 'trust' and continue to authorize their use of resources in a secure way. VCU's CAS-Central Authentication Service works this way -- log in once and get access to BlackBoard, VCU's GMail and Google Apps, eServices, and other university on-line resources.
Ethernet and Internet protocols have been included in OSs since the mid-'80s. Windows, after 3.11 for Workgroups, and Unix have provided networking functions built into the operating system. For years, Apple bundled AppleTalk into their Mac OS and started providing Internet and Ethernet in the late '80s.
Earlier desktop and server OSs, like DOS or CP/M, required separate purchase of a 'Network OS' like Novell, LANtastic, Banyan Vines, ThinNet, or another network OS so that a PC could share networked resources. Since Windows 3.11 for Workgroups, Microsoft has included support for SMB and other protocols for a GUI-managed 'Network Neighborhood' with peer-to-peer and client-server relationships.
All today's personal operating systems have support for Ethernet and Internet protocols built in. Linux, Windows, Mac, Android, iOS, even Windows CE can do Ethernet and Internet via Cell or LAN. We expect our computers to come out of the box able to participate in LANs and access The Internet as presented by our ISP. Personal devices from smartphones through notebooks and desktops handle the several protocols of the TCP/IP suite, like SMTP, ICMP, POP3, IMAP, HTTP, SFTP, SSH, SSL, TCP, and IP.
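That built-in TCP/IP support is what lets a few lines of Python run both a client and a server -- here the two ends talk over the loopback interface on an ephemeral port the OS picks:

```python
import socket
import threading

def echo_once(server_sock):
    # accept one connection and echo whatever arrives back to the sender
    conn, _addr = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, TCP/IP")
reply = client.recv(1024)
client.close()
print(reply)   # b'hello, TCP/IP'
```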
Common server OSs include: proprietary unices like AIX, HP/UX, or SCO UnixWare; Open Source unices like Linux or OpenBSD; IBM's proprietary non-unix i5/OS or z/OS; or Microsoft's proprietary Windows Server 2000 and later. These all have protocols built in for Ethernet and Internet, plus they can interface directly to T-Carrier and E-Carrier for telephone and OC-Optical Carrier for high-speed fiber optics. Since the late '90s practically all servers handle security protocols like SSH, SFTP, and public/private key TLS/SSL for browsers, web services, and applications. Linux and unix servers may be used as gateway, firewall, and/or proxy for internet traffic of http, smtp, pop3 on wired or optical circuits.
'Virtual Memory' is a scheme used in multi-tasking OSs where process, memory, and file system management cooperate to run lots of processes thru a number of CPUs that reference the same RAM. Most OSs since the '90s are multi-tasking, whether they operate systems for one or millions of users. PC OSs like Mac, Windoze, or Ubuntu run lots of tasks for a single user -- the W7 I'm using now is running 107 processes for me at the time of this keystroking. Droids run dozens of processes -- my Droid GingerBread is running 47.
Servers, like Windows Server, Linux, or z/OS, run a few tasks at a time for each of the dozens, hundreds, or millions of users attached to the server. Info300.net, aka info202.info, reliably supports a lab or two of 30 users pounding at vi, debugging websites, and running scripts and database queries -- each of these users runs two or three tasks at a time, and would hit a hard limit at about 32,000 tasks. A busy pair of mainframes can handle millions of users' tasks at a time, some of them for employees using the enterprise's application software and others for customers using the enterprise's websites.
As each task is presented to the OS executive, it is assigned a location in 'virtual memory' on the system's disk, and the application's code and data are placed there. The task is scheduled to run on a CPU until completion, usually according to a 'timeslicing' scheme where lots of tasks share a CPU. When a task gets a slice of time, its code and data are moved into RAM, where they are available to the CPU and its registers. When the task's timeslice is used up, the OS moves the data and code back to disk and moves the next task's data and code into RAM. This 'swapping' between virtual memory on disk and RAM goes on and on until the task is complete.
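The timeslicing loop above can be sketched in a few lines. This is a toy round-robin scheduler, not any real OS's algorithm; the task names and times are invented for the example:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate timeslicing: each task gets `quantum` units of CPU,
    then is 'swapped out' and the next task is 'swapped in'.
    `tasks` maps task name -> total CPU time needed.
    Returns the order in which tasks finish."""
    queue = deque(tasks.items())              # (name, remaining_time)
    finished = []
    while queue:
        name, remaining = queue.popleft()     # 'swap in' the next task
        remaining -= min(quantum, remaining)  # run one timeslice
        if remaining == 0:
            finished.append(name)             # task complete
        else:
            queue.append((name, remaining))   # 'swap out' to the back
    return finished

# Three tasks needing 3, 5, and 2 time units, with a quantum of 2:
print(round_robin({"editor": 3, "compile": 5, "shell": 2}, quantum=2))
# → ['shell', 'editor', 'compile']
```

Notice that the shortest task finishes first even though it was queued last -- every task gets regular slices rather than waiting for the big jobs ahead of it.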
The benefit of virtual memory is that a multi-tasking server can handle more users' work simultaneously, or a personal computer can handle more tasks for a user at the same time. The cost is that disk access is much slower than RAM access, so a system that needs to swap runs lots slower than one with enough RAM to accommodate all the users' processes' code and data without swapping.
DMA and a large, contiguous 'swap area' laid out cylinder-by-cylinder on disk make swapping as quick as it can be, but swapping is always slow compared to systems that have so much RAM that they don't need to do much swapping. This is true whether the system is a notebook used by one person or a server used by many. Servers that need to process in 'real-time' are configured with enough RAM so that swapping isn't necessary.
Occasionally, virtual memory is used to do the 'garbage collection' required to get enough contiguous space in RAM to launch a large process. We may experience this as several seconds, or more, of a paused interface where we wonder if the system has 'hung up' or 'frozen'.
'Heavy Iron' mid-range and mainframe computers, with their huge RAMs (several to many TeraBytes!), are able to handle many thousands through millions of tasks without swapping to virtual memory at all -- they do the timeslicing, but don't have to do much swapping.
The sloth of swapping can be demo'd on a personal scale by running Windows Vista or 7 on a machine with only 1 gig of RAM and then loading a few memory-loving apps, or by switching between applications on a cheap tablet with only 256 MBytes of RAM. It can be done, but the OS takes up most of the CPU's bandwidth, leaving little available for the users' tasks. If every application has to swap as it runs its timeslice, the effect is not a pleasant experience for the user.
Thrashing is the official term for this: episodes where the virtual memory system is spending more time swapping users' tasks than processing their work, taking a _lot_ of system resources without doing much real work, and ticking off users and customers, who get multi-second response where sub-second is the best.
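On a Linux box you can watch for thrashing yourself: the kernel exposes cumulative swap-in/swap-out page counts in /proc/vmstat, and counters that climb steadily between two samples are the classic symptom. A minimal sketch of the parsing, assuming the standard pswpin/pswpout field names (the sample text below is invented for the test):

```python
def swap_counters(vmstat_text):
    """Pull the cumulative swap-in (pswpin) and swap-out (pswpout)
    page counts out of Linux's /proc/vmstat text."""
    counters = {}
    for line in vmstat_text.splitlines():
        key, _, value = line.partition(" ")
        if key in ("pswpin", "pswpout"):
            counters[key] = int(value)
    return counters

# On a live Linux system you'd sample the real file twice and compare:
# with open("/proc/vmstat") as f:
#     print(swap_counters(f.read()))
sample = "nr_free_pages 81234\npswpin 1048576\npswpout 2097152\n"
print(swap_counters(sample))  # → {'pswpin': 1048576, 'pswpout': 2097152}
```

If a second sample a few seconds later shows those numbers jumping by thousands of pages, the system is swapping hard and users are feeling it.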
If a non-scalable application environment was chosen, a thrashing system can be the death of an enterprise that can't afford to upgrade or jump to a scalable platform. If scalability was a key objective, an enterprise that anticipated exponential growth can handle it by adding more hardware or computers, quickly 'throwing hardware at it' to keep pace with new acquisitions and markets.
Midrange and mainframe systems with two or three large chassis that can hold lots of components can hold multiple TeraBytes of RAM, minimizing the need for swapping, and run literally thousands of times quicker than smaller machines that need to swap.
Server and blade farms made up of smaller machines can also handle exponential growth and give millions of users good response time using 'load balancing', high-speed storage networks, and a lot of power and A/C. The trade-off for giving millions of users sub-second response time is a couple of farms of a couple thousand 'server class' machines vs. a couple of big mainframes.
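The simplest load-balancing scheme, round-robin, just deals incoming requests to the farm's machines in turn. A toy sketch -- the server names and requests here are hypothetical, and real balancers layer on health checks and session affinity:

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin load balancer: each incoming request is
    handed to the next server in the pool, spreading work evenly
    across a farm of small machines."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        server = next(self._cycle)     # pick the next machine in rotation
        return f"{request} -> {server}"

# Hypothetical three-node web farm:
lb = RoundRobinBalancer(["web1", "web2", "web3"])
for req in ["GET /", "GET /about", "GET /cart", "GET /pay"]:
    print(lb.route(req))
```

The fourth request wraps back around to web1 -- with a few thousand machines in the rotation, no single box ever sees more than its share of the load.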
Business decisions about computer systems aren't usually about the theoretical stuff of CPU architecture and OS features. They're usually about software and databases that will provide a competitive edge and the platforms available to support them. Here's a practical discussion of platforms and their providers in the marketplace these days: