Today's IT infrastructure is often described as three components: computers, storage, and networking. It's hard to say which is most important, since all three are usually needed to do any real work, or to entertain us. Early computers, through the '70s, were seldom networked; today's usually are, and many mission-critical applications require networks to run.
Networks were rare in computer systems of the '50s and '60s and emerged in the '70s as enterprises found it cost-effective, and good for control, to connect branch offices to computers at the home office. Networks became more common in the '80s as PCs appeared in offices and homes, but those networks were expensive and very limited compared to today's internet and ethernet.
Today's networks are at least as important as the computers on them. Our personal computers and phones don't do much interesting without their networks. Commerce, relationships, and entertainment all depend on networking. 'The network is the computer' is one opinion, and in many cases it seems the computer is the network. Networks, computers, and storage are closely related, engineered to fit together like gloves.
Hyperconvergence: The state of the art in 2017 puts networking, storage, and 64-bit computing components together on the same chip, making the fit more comfortable than ever. These networked SoCs - Systems on a Chip - range from the smallest embedded processors through multi-core, 64-bit servers. ARM64 SoCs are enabling a new class of hyper-converged machines with processors for data, storage, and networking all on one chip. They will do it all faster and cheaper than ever before.
So, what is a network?
Some definitions for network are like this one from an old notebook: nodes connected by a common medium, sharing a common addressing scheme and protocols. This is broad enough to describe every kind of network, from tin cans & string to teletypes, space stations, electrical grids, computers, and any other kind of network.
Computer networks are the ordinary type of network these days, so a definition like this also applies to this discussion: Networks link computers together so they may share hardware, software, and data.
'The Cloud' rides on fiber-optic circuits that make up 'The Internet Backbone'. This will be discussed in more detail later, but here are good references to get your eyes on this component of IT Infrastructure: Trans-Oceanic and Coastal Fiber-Optic Cables that carry Internet traffic around the world. Geografia e Geopolitica shows Major Land-based Fiber-Optic Cables.
Fiber-optic cables are mostly immune to the EMI-Electro-Magnetic Interference that plagues copper cables: sunspots, solar storms, thunderstorms, ground currents, &c. Copper oceanic and land-based circuits required 'repeaters', aka filters and amplifiers, every several miles and were very expensive to operate and maintain.
Today's fiber-optic circuits also require 'repeaters', aka optical amplifiers. But, they're at intervals of 100 kilometers or more. Fiber circuits require much less electrical power to operate and provide virtually error-free transmission at truly amazing bandwidth. Fiber circuits require periodic maintenance and repair. They are utilities on a truly global scale, and keep a fleet of ships busy with routine and emergency service of The Internet and The PSTN-Public Switched Telephone Network.
I used to joke about sharks biting undersea cables, now after reading this Slate article I speak seriously!
Here's a Built Visible article about The Halibut and Historic Vulnerabilities of Undersea Cables. Today's circuits are no less vulnerable, nor strategic...
'The Internet Backbone' is provided by big telecommunications providers referred to as Tier 1 Networks. These companies 'peer' with each other inside IX-Internet Exchanges by interconnecting their industrial-strength routers with fiber-optic jumpers inside these huge facilities. Since they own the WAN media, Tier 1 providers don't have to pay anybody else for network bandwidth. But, they are required to connect with competitors' networks and carry their traffic. It's an odd arrangement, compared to the highly-proprietary circuits before The Internet, but it works well.
Tier 2 and 3 ISPs may co-locate some equipment in Tier 1 providers' IXs for rates like $300 per month for a secured network cabinet or $8000 for an 8X8 cage. They may also have facilities in their areas of service and build or lease high-speed circuits to connect their facilities to the next-higher tier.
Of course, this isn't free. Tier 2 and 3 ISPs pay Tier 1 providers for their circuits and bandwidth, and their customers, residential and commercial, pay them for internet service.
(6/13) This accompanies the sketch and discussion from class: A Sketch with Key Features of The Internet and Ethernet.
You can't fix it, or tell what needs fixing, if you can't _see_ it! You don't want your _customers_ to be the ones informing you the system's down! Without the right tools you'll just be guessing...
This is just a quick demo of unix at the command line, hoping it'll prompt students to set up their own servers and/or hacking platforms and get their hands on unix/Linux. LAME-Linux Admin Made Easy is a free, quick read and has specifics about setting up and caring for a Linux server, or a desktop running X. A manual like Red Hat 9 Unleashed can get you headed toward the valuable RHCE-Red Hat Certified Engineer. Or Kali Linux - Ethical Hacker's Cookbook can get you moving toward becoming a Certified Ethical Hacker.
more /var/log/messages

Or, 'Grepping the Log':

grep someuserid /var/log/secure shows the history of login activity and other events, like attempts to sudo, for someuserid. dmesg shows important messages from the recent bootup and after. history shows a user's command-line history. Event Viewer does similar for Windows with a GUI, and Windows' logs are also available without the GUI at the command line, for scripting or analysis with 3rd-party tools.
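Here's a sketch of 'grepping the log' that runs anywhere -- it uses a made-up sample of secure-log text and the hypothetical userid someuserid instead of reading the real /var/log/secure:

```shell
# Made-up sample of /var/log/secure-style lines; on a real server you'd run:
#   grep someuserid /var/log/secure
sample='Jun 13 10:02:01 host sshd[999]: Failed password for baduser from 10.0.0.5
Jun 13 10:05:22 host sudo: someuserid : TTY=pts/0 ; COMMAND=/bin/bash'

# grep keeps only the lines mentioning the userid we care about
hits=$(echo "$sample" | grep someuserid)
echo "$hits"
```

The same pattern works with dmesg and history: pipe the output through grep to pull out just the lines you're after.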
More tools for network management, security, and filtering spam:
This is a side trip before diving into the networking topics. It demos setting up the DNS at GoDaddy, setting up a virtual Linux server, firewalling it, installing web, database, and mail servers on it, and getting the server on the web. The notes show additional steps for securing it with a GoDaddy SSL Certificate and configuring TLS for the web server and mail server. And, the demo in class will use Digital Ocean and not RackSpace...
Watch this spot for more. The following link is still being edited...
If you want to install WordPress, install links, php, php-mysql, and the mariadb client and server. Links is a command-line browser that's helpful for getting stuff. WordPress runs on PHP and MySQL; MariaDB is a recent replacement for MySQL:
yum install links php php-mysql mariadb mariadb-server

Modify /etc/httpd/conf/httpd.conf to serve index.php:
#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
DirectoryIndex index.html index.shtml index.php index.py

Use systemctl to restart httpd:

systemctl restart httpd

Create a directory like /root/WordPress, cd to it, use links to navigate to the downloads page and download wordpress-9-9-9.zip
links wordpress.org

Use unzip to unzip the zipped file:
unzip wordpress-9-9-9.zip

Copy the wordpress files to your web directory:
cd wordpress
cp -r * /var/www/html/
Secure the root user for mysql/mariadb, since it comes out of the box unsecured; read up on other steps to secure mysql/mariadb:
grant all on *.* to root@localhost identified by 'agoodpassword';

Create a wordpress database and a wordpress user + password:
create database wordpress;
grant all on wordpress.* to wordpress@localhost identified by 'agoodpassword';

Continue on with WordPress's famous 5-minute install!
Sketch on the board and discuss traffic management and practical applications of the basic Network Topologies: serial/point-to-point, bus, ring, star, tree/hierarchical, mesh.
Terms introduced: Polling, RTS/CTS, Terminator resistors, Trunk, branches, RS232, SDN-Software Defined Networks, Points of failure, VLAN-Virtual LAN, Etherfabric, CSMA/CD, CSMA/CA, Clustering, Heartbeats...
Chapter 4 in the text, and the early chapters in any network certification guide, discuss network topologies and methods for traffic management on networks. Network topologies are covered in the text and in many on-line references. Wiki: Network Topology is a good intro.
Traffic Management on Ethernets: Ethernets use bus topology where all devices have a unique MAC address and are connected to the same bus, which may be copper wire, optical fiber, or a radio frequency. The main difference is between 'wired' busses where all the connected devices can hear each other all the time and 'wireless' where all devices can hear the access point but may not be able to hear each other.
Wired Ethernets use 'collision detection', CSMA/CD, to manage traffic -- all the devices connected to the copper coaxial cable or Ethernet hub can 'hear' each others' transmissions, so CSMA/CD works. It works well enough that Ethernet is the most used type of local area network. When a device is ready to transmit a frame it 'listens' until the carrier is clear, transmits the frame, and listens to the echo for a 'collision'. If the echo is the same as the frame transmitted, there was no collision and the sending device waits for an acknowledgement of the frame from the destination. If there was a collision, all devices with a frame to transmit calculate a 'random backoff interval' and the device with the shortest interval attempts again to transmit its frame. In a lightly loaded Ethernet collisions are rare and throughput is high, although it is never anywhere near the bandwidth available. In an overloaded Ethernet collisions happen a lot, QoS suffers, throughput is only a very small fraction of the bandwidth, and access to networked resources can be very slow.
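The 'random backoff interval' can be sketched in a few lines of shell. This is a toy model of Ethernet's binary exponential backoff: after the Nth collision in a row, a device picks a random slot between 0 and 2^N - 1 and waits that many slot times. The doubling window comes from the standard; the rest is illustrative:

```shell
# After the 3rd collision in a row for this frame...
attempt=3
window=$(( 1 << attempt ))      # contention window doubles: 2, 4, 8, ...

# ...each device independently picks a random slot in 0 .. window-1
# (uses bash's RANDOM if present, else the shell's PID as a stand-in)
slot=$(( ${RANDOM:-$$} % window ))
echo "collision $attempt: waiting $slot slot times before retransmitting"
```

The device that happens to draw the shortest wait transmits first; the others hear the carrier busy and keep waiting.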
Modern Ethernets usually use Ethernet Switches rather than Hubs. Switches have a bus at their heart that runs 10 or 100 times faster than the ordinary ports, and the 'collision domains' are better managed so more bandwidth is available for network traffic. The bus in an Ethernet switch is connected to the ports through a 'switching matrix' of circuits and buffers that greatly improve performance and enhance security relative to the old Ethernet hubs.
WiFi uses WAPs-Wireless Access Points to connect wireless devices to an Ethernet. Most residential internet service by DSL, Cable, or fiber includes a 'wireless router' and many residences don't use wire at all.
Wireless Ethernet uses 'collision avoidance' to manage traffic: CSMA/CA. Wireless devices at opposite, extreme edges of the WAP's range may not be able to hear each others' transmissions so collision detection will not work like it does on a wired network where all devices hear all traffic all the time.
When a device has a frame to transmit, it listens for quiet on the frequency and transmits an RTS-Request To Send, not the frame as in CD. The WAP actively manages traffic by listening for RTS requests and sends a CTS-Clear To Send signal to the device it chooses for the next transmission. When a device hears its CTS, it transmits its frame and waits for an ACK-Acknowledgement from the destination.
The RTS/CTS of CSMA/CA uses more of the LAN's bandwidth to manage traffic than CSMA/CD. But, in a properly sized WiFi it works fine. QoS suffers if there are too many devices, or too many of them are streaming video or other large files.
Both CSMA/CD & CA involve 'listening' and an element of chance in the random backoff interval following a collision or missed CTS or RTS. They are called 'stochastic' networks, managed by chance. QoS-Quality of Service plummets at peak periods of use if an Ethernet is not sized properly. Maximum sustainable throughput for devices on ethernets is always much less than the bandwidth.
Here are sketches showing behavior of CSMA/CD and CSMA/CA:
Where nothing should be left to chance Ethernet is not the best choice for traffic management. Networks for surgical robots, process control, cars, or aircraft use 'deterministic' networks where QoS can be engineered and not left to chance. But, Ethernet works well enough that it is the most common type of local area network worldwide.
Traffic Management on Ring-shaped networks is often by 'Token Passing': Token Ring networks are a variety of ring topology that uses Token Passing for traffic management. Token Rings are 'deterministic', not stochastic. A node on a token ring transmits only when it gets the token, so QoS can be engineered to handle peak traffic and there is no contention for bandwidth. The 'token' is an empty packet the nodes pass among themselves. When the token arrives, the node transmits packets for its allotted time, then passes the token to the next node.
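A toy sketch of why token passing is deterministic: nodes transmit strictly in turn, so the order and the worst-case wait are fixed no matter how busy the ring gets. The node names here are made up:

```shell
# The token circulates in a fixed order; a node transmits only while it
# holds the token, then passes it on -- no contention, no collisions
nodes="A B C D"
for round in 1 2; do
  for node in $nodes; do
    echo "round $round: node $node holds the token and transmits"
  done
done
```

Compare this with CSMA/CD above, where who transmits next is left to chance.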
ARCNET is a bus network that uses token passing. It was a predecessor of Ethernet and remains in use today. Where Ethernet can be flooded and gives poor QoS at peak demand, ARCNET QoS remains stable, allowing a slower ARCNET (2.5 Mbps) to out-perform a faster Ethernet (10 Mbps). Ethernet prevailed in the market as it got to 100 Mbps and was more flexible than ARCNET, allowing a tree/hierarchical arrangement of hubs or switches where ARCNET is limited to a bus/coaxial cable topology.
Serial Networks use star topology where each remote device has a dedicated, serial, connection to a central unit. Some versions of serial networks 'poll' their devices and avoid contention all together. Or, they may respond to 'interrupts' or RTS signals from devices so bandwidth is not wasted polling idle devices.
Printers, lab equipment, scales, and other serial devices that would have been cabled directly to the serial ports on a mid-range computer in the past may now connect to 'serial gangs' in expansion slots on server-class machines, or to 'termservers' that connect the serial devices via Ethernet or ARCNET. There is usually no address for the peripheral devices in serial networks -- each device takes the address number of the port it is jumpered into.
A DSL or other leased circuit that connects to an ISP or branch office may be referred to as 'the serial link' since there is no choice of route. PLC-Programmable Logic Controllers for operating manufacturing equipment and utilities like dams, barrages, or power stations often use RS-232 serial networks to connect to sensors and controls, as do audio-visual systems for classrooms and business.
Far from obsolete in spite of decades and decades of service, The Internet's suite of protocols includes SLIP-Serial Line Internet Protocol and PPP-Point to Point Protocol for more efficient operation of serial circuits, switched or fixed, vs. the packet-switched circuits of Ethernet and Internet.
Traffic Management on Internets is more complex since there may be multiple routes involved, where only one route is available in the prior network topologies. Routers connect two or more networks. A common arrangement is for a SOHO LAN to connect via a 'DSL Modem' or 'DSL Router' in the home or office via a 'serial link' (copper wire) to a port on a DSLAM-DSL Access Multiplexer located in the neighborhood or in the ILEC's facility. The DSLAM connects to 'industrial strength' routers in the DSL provider's IX-Internet eXchange, which connect to multiple fiber circuits on the Internet Backbone, which connect to other routes across The Internet. We want our ISP to be well-connected to lots of high-speed fiber, and we don't want them to oversell their services and create congested networks.
Routers are 'gateway devices' that dispatch packets of data through an internet via the best available route at the instant each packet arrives for transmission. Routers adapt to 'line conditions' -- busy circuits, circuits that go dead, or circuits recently provisioned -- to ensure the quickest, error-free delivery of packets.
The 'store and forward' method for IP Routers results in error-free transmission of data in somewhat unreliable networks where traffic can become congested, or circuits or routers can go out of service from time to time. 'Error correction by retransmission' ensures that dropped or mangled packets will get to their destination even if another route must be taken.
On the 'inside', LAN side, of the gateway, routers use ARP-Address Resolution Protocol to match up IP addresses with MAC addresses on the LANs they service. On the 'outside', internet side, routers use RIP-Routing Information Protocol or OSPF-Open Shortest Path First protocols to discover neighboring routers in The Internet, or an internet, and other protocols (sometimes proprietary) to gather metrics about routes over the horizon. Routers' operating systems dynamically supply Dijkstra's Algorithm with the best data to make the choice of route for each packet that is transmitted. They use IP-Internet Protocol to move packets from router to router along those routes, while TCP and other handshaking protocols manage the end-to-end connections. An IP router's 'routing tables' are a mix of hand-edited routes, sometimes reflecting commercial agreements, and dynamic routes kept fresh by the router's OS.
TCP handshakes to establish a 'connection' between the end points, and manages the traffic of packets on that connection with error detection & correction and sliding-window flow control to ensure accurate transmission of data. IP delivers each packet, best-effort, hop by hop.
Ethernet Fabric is LAN technology that provides multiple routes for network traffic. Instead of a 'trunk' where all traffic flows, each device may be connected to every other device, so traffic isn't concentrated on the trunk. Ethernet fabric is used in SAN-Storage Area Networks and data centers. New ARM-64 SoC components include Ethernet Fabric processors adjacent to the CPUs and are optimized to participate in Spine and Leaf networks.
Telephone networks are relatively ancient compared to computer networks, but they're an essential component for many computer networks today. Telephone carriers provide 'last-mile' connections for most residential, business, and enterprise networks.
Telephone was invented in or about 1850 by the Italian Meucci, again by the Frenchman Bourseul, and again by Alexander G. Bell, who in 1876 won a US patent for a device with a transmitter and a receiver, a switchboard suitable for a neighborhood, and a scheme for operating them. All these gentlemen, plus at least one Russian, a German, another American, and a Brit, were able to build devices to transmit a voice as an analogous electrical signal in a copper-wired DC current loop to a receiver on the same circuit.
In the US, Bell's patents continued to define the technology and infrastructure for the industry and eventually covered telephones, switchboards, and other components of the copper-wired, battery-powered current loop circuits of the early telephone companies. Bell's wasn't the only telephone system, but it was the most widely used and pushed efforts for long-distance services.
Local telephone service was common at the turn into 1900, and long-distance came along through the 1920s.
Switchboards were tended by telephone operators at the private, local, municipal, and regional exchanges where cables between offices, neighborhoods, cities, and regions were connected. A call within a private or local exchange involved one operator to connect; in the same city, at least two; and across the country, several. Long-distance connections were expensive, several dollars per minute, and quality of service was very poor on the early copper-wired, analog, long-distance circuits that connected most large markets by the 1940's.
Copper, analog circuits were very noisy from EMI - Electro-Magnetic Interference of many types. They required filters and repeater amplifiers every several miles. Longer-distance telephone service that passed through several cities or under the ocean was not always reliable or usable in those years. When noise and distortion made long-distance phone calls difficult, people used telegraph.
Telegraph, in contrast, used much simpler and more robust Morse Code signals that were relatively easy to filter from noise and required fewer repeaters at much longer intervals. The British All Red Line, the first telegraph system to reach all the way around the world, came into service in 1902 after about 50 years' experience with trans-oceanic cables by the British Eastern Telegraph Company. The Italian Marconi made the first trans-Atlantic radio transmission at about the same time, following several years' experimentation. By the 1920's these technologies and companies became Cable & Wireless, whose subsidiaries and spin-offs continue to innovate in telecommunications.
Telegraph circuits were improved to 'multiplex' signals onto the copper-wired circuits and eventually to use teletypes with keyboards and printers. Western Union provided 'store and forward' that made early TWX and Telex nearly 100% reliable. Teletypes dominated the market for long-distance communications into the 1970's, able to transmit a dead-reliable stream of 10 or 30 characters per second anywhere in the world. Telegraph co-existed with telephone through the 1980s, with many companies using telegraph, aka Telex or TWX, as their preferred communication method well into the '90s. Telephones were for local calls with customers. Any hotel or larger motel provided Bell Telephone and Western Union Telegraph services for their guests from the Roaring '20s into the '60s.
Telegraph and telephone intertwined as the 20th century began, with the Bell and Western Union networks coming together under AT&T for a time. Following the splitup of these monopolies, Western Union continued into satellite services, datacommunications, military networks, secure transfer of funds, and other networking tech.
Telephone companies started using digital circuits for long-distance service among cities and exchanges in the mid-1960s, and this made long-distance calling 'so quiet you can hear a pin drop', as advertised by long-distance carriers at the time. Legislation and re-regulation of long-line carriers introduced competitively priced long-distance calls in the decades that followed.
It also made telephone circuits more suitable for computer networks, but they were very expensive and didn't become popular until the '80s, when fiber-optic circuits provided 1,000s of times more bandwidth than digital copper circuits.
Telephone and telegraph networks used copper wire from the 1800s through the 1980s, with some microwave links added in the later years. Fiber optic long-line circuits and cellular service were added in the 1980s. Telephone and Telex use the PSTN-Public Switched Telephone Network's addressing scheme: Country Code - Area Code - Local Exchange - Telephone Number. It is global in scale and fully automated, also applies to data circuits for ISDN, T-1, and T-3. Telephone protocols were tweaked through the 1990s to make most telephone and long-line carriers' OC and SONET networks anywhere in the world compatible with one another.
Computer networks developed from the late 1960s as data processing moved away from punched-cards and printed output toward interactive terminals such as teletypes, 'dumb tubes', point of sale devices, personal computers, and the many other networked devices essential for business today. Since the '60s, there have been hundreds of combinations of media, addressing schemes, and protocols that have been called computer networks, most of them incompatible with each other.
Legislation and market pressures from the mid-1980s through the mid-1990s favored the standardized computers and networks we have today. Lots of computer and network manufacturers and software houses that couldn't sustain business in a competitive marketplace fell by the wayside from about 1985 through 1995.
Today's legacy manufacturers survived the shakeout: IBM, HP, Sun/Oracle, Tandem, and a few other companies not only survived but have more or less prospered in a time where computers are practically a commodity and everybody knows the price & performance numbers.
By the later '90s Ethernet and Internet standards really caught on for business and home use. Lots of people who were using their computers on private networks, or no networks, were ready for The Internet to come along. Most offices and residences were eager to connect. Today, these two computer networking technologies are in use almost everywhere, are built into most operating systems, and bandwidth keeps getting faster and less expensive!
After several decades' experience engineering all grades of networks, three emerged as standard and essential for business and personal use today: The Internet, Ethernet, and the PSTN - Public Switched Telephone Network. They will be encountered almost anywhere around the world in almost any kind of enterprise, business, organization, or residence.
Lots of other types of networks will be encountered by network managers and technicians in specialized applications in some industries or trades. Serial networks using RS-232 standards from the late 1960s are in common use in some niches like gas pumps, lab equipment, audio-visual, and point of sale devices. Token-based networks like IBM Token Ring or ARCNET are common in some legacy installations.
The Internet can be loosely defined as: A public network that links billions of computers on practically any media using IP - Internet Protocol.
The Internet (with capital T and I) is the world's largest and most used computer network, truly global in scale. It can run over practically any network media including satellites and microwaves. The most common Internet media are high-speed Fiber Optic Cables connecting IXs - Internet Exchanges located around the world. ISPs - Internet Service Providers co-locate in the IXs and also in the ILEC's facilities to connect with their customers' neighborhoods, mostly over dedicated telephone circuits.
The old IP address scheme, IPv4, is limited to roughly 4 billion addresses, all of which have been assigned to ISPs. The Internet's new addressing scheme, IPv6, is practically infinite.
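The difference in scale is easy to see at the command line. Shell arithmetic can handle IPv4's 32-bit address space; IPv6's 128-bit space is too big for it, so that figure is left as a comment:

```shell
# IPv4 addresses are 32 bits: 2^32 possible addresses
ipv4=$(( 65536 * 65536 ))           # 65536 = 2^16, so this is 2^32
echo "IPv4 address space: $ipv4"    # about 4.3 billion, all assigned

# IPv6 addresses are 128 bits: 2^128, about 3.4 x 10^38 -- far beyond
# 64-bit shell arithmetic, and practically inexhaustible
```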
Client, Server, and Peer devices on The Internet use a very rich suite of protocols to handle almost any kind of business or personal communication from supply-chain management through streaming video for education and entertainment.
Here's an exhaustive list of Internet Protocols. Here are some of the most commonly used:
IP Routers are the data communications equipment that makes The Internet, or an internet, work. Big, industrial-strength routers handle traffic on The Internet's backbone and at ISPs - Internet Service Providers' and other network rooms.
These big routers may link dozens of high-speed optical networks on The Internet to dozens of even higher-speed optical networks within an IX - Internet eXchange facility or network room. This heavily meshed system of Routers and optical circuits leads us to expect sub-second response time from websites, and we're often satisfied with the result. The same is true for most B2B exchanges, instant and very inexpensive.
Small IP Routers packaged with Wi-Fi and a few wired ethernet ports handle traffic for many residential or small business networks. Some residential and business Internet services do not place a router at the customer's premises, where 'the box' may be a DSL or Cable Modem or other device that connects to the ISP's networks and routers. Where industrial-strength routers can link lots of local area networks with lots of Internet circuits, a small router usually links to only one Internet circuit and one LAN.
Learn to say 'router' without sticking 'wireless' before it. Some of the most important routers are not wireless at all...
Early IP Routers for DARPA were designed to support military networks as a kind of doomsday communications machine that could pick out networks remaining after some or most of the nodes were destroyed and deliver messages. Today's routers are relatively intelligent computers dedicated to datacommunications tasks and they're constantly in communication with one another about conditions so they can make the best choice of route.
Ethernet is the world's most used LAN - Local Area Network technology. Ethernet is seamlessly compatible across three common media: Copper Wire, WLAN - Wireless LAN, and Optical. Ethernet's addressing scheme is standardized around the world so that every node has a unique address assigned as it is manufactured. Several common protocols are involved to move data around the local area network, connect printers, storage, and other hardware, and to connect to The Internet.
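The structure of those unique addresses can be picked apart with shell parameter expansion. The MAC address below is made up; on Linux, `ip link` shows your own:

```shell
# A MAC address is six octets: the first three are the OUI, a prefix
# assigned to the manufacturer; the last three are a per-device serial
# number set at the factory, so every node's address is unique
mac='3c:52:82:1a:2b:3c'        # hypothetical address
oui=${mac%:??:??:??}           # strip the last three octets
serial=${mac#??:??:??:}        # strip the first three octets
echo "OUI (manufacturer): $oui, factory serial: $serial"
```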
Ethernet Switches, aka Switches, are the data communication equipment that makes most LANs work. Large Ethernet switches for offices may have 32 or 64 ports and may be 'uplinked' to other switches to link large numbers of computers in a network. A small office or residence may have a small switch with a few or several ports. Servers, storage devices, desktop and other computers, and other devices are connected to a port on a switch by a jumper wire or network drop. A switch may be jumpered to a port on an IP Router to provide Internet access for the LAN serviced by the Ethernet switch.
Wireless Access Points connect a wired Ethernet or Router to Wi-Fi equipped notebook, mobile, and portable computers and other networked devices like TV remote controls or cameras.
A comprehensive suite of Ethernet protocols is shared by Windows, Mac, Android, iOS, Linux, every other kind of unix, PlayStation and most other computing platforms in use today. Ethernet and Internet were designed together so LAN and The Internet work together seamlessly. Most printers for business and personal use have Ethernet interfaces today, wired, wireless, or both. An Ethernet may have a Gateway, usually an IP Router, that connects the LAN to The Internet or a private internet. The gateway device may be located at the internet subscriber's premises or at the ISP. Gateway devices may have integrated Firewalls to protect the LAN, or to provide access to it from anywhere on The Internet.
Ethernet and Internet protocols work together to provide error-free transmission of data. EDC/ECC - Error Detection Code/Error Correction Code handles retransmission of packets if errors are detected, ensuring that if data is moving, it is error-free.
Internet and Ethernet are Packet Switched networks vs. Circuit Switched for the PSTN. Data is 'packetized' into packets with addressing and checksums for error detection and sent across the network to the destination. Networked devices are usually connected to their networks full-time, listening for packets with their address as the destination. Internets, and some Ethernets, allow the packets to travel over more than one route to the destination, where they're assembled into what we perceive as a 'stream' of data. This 'inverse multiplexing' helps make communications faster when traffic conditions are light. The bane of packet switching is that QoS - Quality of Service may deteriorate as network demands grow.
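The checksum idea can be sketched with the POSIX cksum utility: the sender computes a checksum over the payload and sends it in the packet; the receiver recomputes it and compares. A mismatch means the packet was mangled in transit and should be retransmitted. The payload text is made up:

```shell
payload='some packet payload'

# Sender computes a checksum and sends it along with the payload
sent=$(printf '%s' "$payload" | cksum | awk '{print $1}')

# Receiver recomputes the checksum over the payload it received
rcvd=$(printf '%s' "$payload" | cksum | awk '{print $1}')
[ "$sent" = "$rcvd" ] && echo "checksums match: accept the packet"

# A mangled payload (one character changed) yields a different checksum
bad=$(printf '%s' 'sOme packet payload' | cksum | awk '{print $1}')
[ "$sent" != "$bad" ] && echo "checksum mismatch: request retransmission"
```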
The PSTN - Public Switched Telephone Network runs on a massive infrastructure of copper wire, optical circuits, and radio frequencies. The addressing scheme, with country & area codes, exchanges, and telephone numbers, applies worldwide so that every node can connect with another instantly for voice or data communications. Several protocols have been adopted since the '90s so that telephone and other telecomm services are compatible globally.
The PSTN instantly provisions dedicated analog or digital connections between any two telephones or computers almost anywhere in the world within a fraction of a second. These virtual circuits carry voice, FAX, or data in real time in 64 Kbps increments.
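Those 64 Kbps increments are the PSTN's DS0 channels, and the familiar leased digital circuits are just bundles of them. The payload arithmetic is below; the full T-1 line rate is 1544 Kbps because 8 Kbps of framing rides along:

```shell
# The PSTN's digital hierarchy builds circuits from 64 Kbps DS0 channels
ds0=64                      # one voice or data channel, in Kbps
t1=$(( 24 * ds0 ))          # a T-1 bundles 24 DS0s
t3=$(( 28 * t1 ))           # a T-3 bundles 28 T-1s
echo "T-1 payload: ${t1} Kbps, T-3 payload: ${t3} Kbps"
```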
Where Internet and Ethernet are packet-switched networks, The PSTN is circuit-switched. Old copper-wired telephone systems used mechanical switching to make a continuous metallic circuit. Modern telephone systems use a combination of metallic, cellular, microwave, and optical circuits to provide a continuous connection for real-time communications, voice or data.
The PSTN was built on a copper-wire infrastructure beginning in the late 1800s, battery-powered and similar to the telegraph networks in use since the mid-1800s. Wireless cellular technology emerged for personal and business use in the mid-1980s and works seamlessly with the copper-wired POTS - Plain Old Telephone Service favored by business. High-speed fiber-optic circuits carry telephone and data traffic between exchanges inexpensively, providing private 'virtual connections' where they are needed. Mostly incompatible with one another well into the 1970s, the several standards that make up the PSTN are almost universally adopted today.
PSTN connections are Circuit Switched where nodes are connected via a private circuit for the duration of the telephone call or data transmission.
These circuits used to be 'metallic' from end to end, with copper wires traveling among electronics to direct and amplify the communications between central offices and branches a few, hundreds, or thousands of miles apart. Now, the circuits are 'virtual' with a combination of copper-wire last-mile connections and fiber-optic circuits making the long-line connection. When a voice or data communication is ready, the connection is made between the nodes and data are transmitted as a stream in real time. Errors are not corrected by the equipment in the PSTN. When errors occur they're experienced as noise on the connection. Fiber-optic circuits are practically noise-free, where copper and radio are very noise-prone.
Where Internet and Ethernet QoS suffers if network traffic is heavy, circuit switched networks deliver at a constant rate. There may rarely be a 'busy signal' if the local exchange or VOIP provider is very busy and has no circuits available. But, after the circuit is switched the bandwidth is private and not shared with others. Circuit switched networks may also benefit from inverse multiplexing, where packets or frames are sent over as many circuits as are available. All the circuits involved are dedicated for the duration of the connection.[[[Interactive with terms about PSTN]]] Early telephone networks required an operator in a Telephone Exchange to switch the circuits through manually operated switchboards. As telephone technology developed into the early 1900s most local telephone exchanges were automated and local calls could be dialed directly in most cities by the 1940s. Operators were required for most long-distance calls through the 1950s into the mid-1960s. Overseas calls were hand-switched into the 1990s.
From the mid-1990s there were initiatives by the ITU-International Telecommunications Union to standardize telephone and data networks worldwide. Today the PSTN supports 'direct dialing' voice and data calls from almost anywhere to anywhere else.
Settling on PSTN, Ethernet, and Internet standards worldwide has made telephone and data communications faster, better, and cheaper than ever before. Open standards for networks and the computers they link along with anti-monopoly legislation made costs for computers and networks plummet through the 1990s!
Prior to the Breakup of Ma Bell with anti-monopoly and anti-trust legislation in 1982 and the Telecommunications Act of 1996, computer networks could be prohibitively expensive for SMB - Small and Medium sized Businesses and there were very few recreational uses for computers or networks. Computer networks were proprietary and mostly incompatible with one another. Private network links usually required the same brand and model of datacommunications equipment on each end of a circuit, each costing hundreds or thousands of dollars to handle one or a few users' keyboards and CRTs over leased circuits costing something like $1 per mile per month.
Private networks for supply chain management emerged as JIT-Just In Time inventory management techniques were adopted starting in the late 1970s, before The Internet broke loose. GENIE, Sterling, McDonnell Douglas, BP/TYMNET, AT&T Long Lines, and a host of other private networks carried more and more EDI traffic into the '80s and '90s. These networks cost about a hundred dollars a month to join, plus a fee of about a dollar per document placed in an EDI Mailbox for pickup by a supplier or customer. Connections were made to these private networks from a corporate or personal computer via modem to a local POP-Point Of Presence provider or to a number in another area code at the prevailing long-distance telephone rates.
Through the 1980s more and more industry groups required standard EDI purchasing documents for supply chain management. Enterprise, government, and business of all sizes were required to use EDI and most enjoyed greatly reduced costs of ordering and operations as a result. Although using these networks was very expensive relative to today, EDI made good sense and was widely adopted. Industry, trade groups, utilities, and governments hammered out which EDI documents were required. To enforce compliance, they either levied fines in the form of deductions from payment on invoices (for example, $27 per carton arriving without benefit of EDI, to cover the cost of manually processing paper documents) or refused to do business with suppliers who did not use EDI.
Today, most EDI documents are transmitted via The Internet directly from server to server as B2B-Business to Business exchanges without any private networks involved. PKI - Public Key Infrastructure ensures confidentiality of the data and the authenticity of trading partners.
In many SOHOs and SMBs the only full-time connections are local, last-mile circuits to one or more ISPs - Internet Service Providers at each branch. The Internet carries the data among them. VOIP services like Vonage and Skype only need fast, reliable Internet and can be provided over a fast DSL, Cable, FTTD, or OC circuit. ILECs, CLECs, and a host of others compete to provide commercial bandwidth at any address.
The cost of reliable bandwidth for WANs - Wide Area Networks has dropped by 90% or more compared to the costs into and through the '90s.
Networks for personal communications and entertainment prospered from the late '70s through the '80s. AOL, CompuServe, Prodigy, EarthLink, Erol's On-Line, and some other modern ISPs began as 'bulletin boards' and text-based 'chats' that connected people using modems on their personal computers. (CompuServe was aka CompuSex and AOL as something other than America On Line. It was not an environment where a grandmother kept in touch with her kids...) Almost any telephone line could provide access to a BBS - Bulletin Board System at 1200 - 19200 bits per second, increasing to a max of about 56K in the '90s. These were text-based services with primitive graphics that were intolerably slow to develop at these low speeds.
A local telephone number could be provided, and may still be where privacy is important, via a POP-Point Of Presence provider where access time is metered, with costs like $1 per hour. Any business anywhere can provide a local number to woo anybody with any phone to call them.
The early BBS were largely incompatible so, for example, an AOL user couldn't send an email to a Prodigy user. With connection speeds from 2,400 bps through 48,000 bps in later years there wasn't bandwidth for graphics, only text.
As The Internet came along in the '90s it displaced most of these bulletin boards. 'Newsgroups' on The Internet were easy to set up, but were quickly eclipsed by MySpace and other social media as Google, Yahoo, Ask and other search engines indexed The Internet for us.
The several BBS listed above connected their systems to The Internet more or less directly so that email could be sent from their service to an Internet address. Some became major ISPs that survived into today's broadband era.
Smaller ISPs have mostly disappeared now, unable to compete with Verizon and the cable companies who already had wires to most customers' doorsteps. [[[Exercise with timeline and emergence of the network tech]]]
Today, IP Routers, Ethernet Switches and Bridges, Wi-Fi, and the several adapters that physically connect telephone company circuits to our networks are highly standardized and well-understood. It hasn't always been this way...
Lack of standards in computing and networking from the '50s onward kept costs very high and increased the risk that expensive software and networks might be abandoned if a manufacturer went out of business. Since systems weren't compatible, expensive application software couldn't be moved from system to system. With lots of computer companies failing, there was a high risk that applications might need to be redeveloped or purchased. In the '80s business and enterprise customers began to demand that manufacturers adopt standards for all facets of computing, storage, and networking hardware and software.
Through the '70s and into the '80s there were also dozens of types of networks, mostly incompatible. In those days, a 'network operating system' was purchased from the manufacturer of the computer hardware or from a networking company if the hardware support agreement allowed. There were many alternatives, too many.
Since Windows for Workgroups 3.11 in the early '90s, Microsoft has included Ethernet support 'for free' and made it very easy and economical to deploy an NT Server for a bunch of Windows 3.11 Desktops in the Network Neighborhood. Apple's Mac has included Ethernet since OS 8 in the later '90s, following about a decade of AppleTalk, which didn't talk with anything else but had quite a following among Apple devotees.
With nearly 100% of the desktop computer market getting Ethernet for free, the other network operating system and equipment manufacturers quickly left the marketplace. Gone are ThinNet, LANtastic, Banyan Vines, and several other networks that connected IBM PCs and clones to each other, servers, midrange, and mainframe computers.
Computer networks prior to Ethernet could be a nightmare, since Ethernet they are a breeze...
Recently emerged operating systems like Android and other unix flavors, OS X, iOS, and practically every other personal, server, or appliance operating system, in devices like DVD players and refrigerators, include Ethernet and Internet support. This all adds up to huge demand for network services and bandwidth on all kinds of networks.
Industry standards for computers, magnetic disks, operating systems, and networking were adopted through the '80s and '90s and some of them apply today: ISA, EISA, and PCI bus in personal computers and servers; a few grades of RAMs - Random Access Memory from cheapest to fastest; a few Flash memory variants for storage; IDE, SCSI, SATA, and SAS disk and solid state drives; Serial and Parallel interfaces; PS2 Keyboard and Mouse; USB - Universal Serial Bus; and others for game controllers and MIDI.
Standardization of components has driven down the cost to a small fraction of what it cost in 1990! The first Ethernet card the author saw cost $750! Now, they're $7.50 or less in bulk, or they are a component on a SoC - System on a Chip that costs $5. The first IBM PC in 1981 was $3,000+, nicely equipped. A Raspberry Pi is more powerful today for $25! A gamer's machine today may be a couple or few thousand dollars, where an engineer's or statistician's workstation might cost $15,000 or more in the '80s.
Operating Systems proliferated in the '70s and were mostly incompatible, too. Incompatibility of OS and networks made exchange of data between systems difficult and time-consuming. Computer manufacturers made their own, proprietary operating systems and networks and required their customers to use them or risk violating their terms of service.
As the marketplace became more competitive, customers demanded standardized operating systems so they were not 'locked into' one manufacturer's systems. UNIX and lots of commercial 'unices' adopted standards promulgated by Bell Labs, UC Berkeley, and POSIX. In the '80s and early '90s non-standard, proprietary operating systems and the hardware that depended on them disappeared from the market. With today's Internet, practically every computer network is compatible with every other.
Many consultants and value-added resellers specialized in 'rehosting application environments' for businesses that had excellent software and databases in their legacy but whose computers and networks were becoming obsolete. Most of them moved on to some kind of unix server.
The good news for many of these victims of forced obsolescence was that the new, standards-compliant hardware and operating systems cost a fraction to buy and operate relative to the retired system. The author retired several obsolete systems that cost $100,000+ to purchase in the early or mid '80s with 'generic' Intel servers costing less than $10,000, plus a good fee for the elbow-grease and expertise for rehosting.
Organizations running systems by IBM, Sun/Oracle, HP, SGI, and a few other midrange and mainframe manufacturers survived the shakeout of the '80s with their valuable legacy systems intact. They, too, benefited from the shakeout by greatly reduced prices by these companies as they competed in an industry where computers were becoming a commodity. IBM, for example, priced their product line two or three times as expensive as 'generic unix' into the '90s -- by the '00s several price reductions brought their prices in line with their competitors. IBM's VARs - Value Added Resellers are excellent, price-competitive, options for computing solutions in their vertical markets today.
Settling on industry standards for networks, computers, and telephone systems has had dramatic results, as did legislation aimed at creating a vibrant, competitive marketplace for telecommunications services of all kinds. Today's networks are faster, cheaper, and more accessible than ever. Every person, business, and government benefits from today's standards-based and intensely competitive computing environments.[[[Timeline with emergence of standards, or name of standards and their applications ]]]
The seven-layered OSI - Open Systems Interconnect Model provided a standard vocabulary for many conversations about networking as Internet and Ethernet came to prominence, and it's the most frequently referenced network model today.
Promulgated by the ISO - International Standards Organization in 1984, the OSI Model describes hardware and software components of networks. It was important as Internet and Ethernet were challenging proprietary standards like ArcNET and IBM's SNA at the time. Some decades later, we can see that open systems standards prevailed.
The OSI Model is not a detailed engineering standard; it is a conceptual model that helps explain the functions and protocols involved in networking. The first three layers describe the hardware and the protocols that operate it, and layers 4 through 7 describe the software components of the network operating system.
There is another popular model, The Internet Model aka The TCP Model, that represents networking in a four-layer scheme. It's referenced and illustrated in the wiki on The Internet Protocol Suite.
The OSI Model was helpful as manufacturers began to make network equipment that is compatible with other manufacturers' equipment and system managers added them to their skillsets. Through the '70s and '80s most manuals for networking equipment were proprietary and purchased through the manufacturer. Now, everything's open and on-line.[[[Need graphic similar to links below, common protocols for each layer]]]
Here's a quick look at the OSI Layers with protocols involved at each layer. SearchNetworking.com's OSI Reference Model page does a good job of showing how elements of the TCP/IP application suite fit the model, and the Networking page at HowStuffWorks.com shows how network devices work.
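As a study aid, the seven layers and a few protocols commonly mapped to each can be tabulated. The mapping is approximate, since real protocols often straddle layers:

```python
# The seven OSI layers, top (7) to bottom (1), with example protocols.
OSI_LAYERS = {
    7: ("Application",  ["HTTP", "SMTP", "FTP", "DNS"]),
    6: ("Presentation", ["TLS/SSL", "MIME"]),
    5: ("Session",      ["NetBIOS", "RPC"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP", "ICMP"]),
    2: ("Data Link",    ["Ethernet (MAC)", "PPP", "Frame Relay"]),
    1: ("Physical",     ["copper wire", "fiber optics", "radio (Wi-Fi)"]),
}

# Print the stack the way it is usually drawn, Application on top.
for number in sorted(OSI_LAYERS, reverse=True):
    name, protocols = OSI_LAYERS[number]
    print(f"Layer {number} {name:<12} e.g. {', '.join(protocols)}")
```

Layers 1 through 3 correspond to the hardware side of the model; 4 through 7 to the software side.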
Internet Governance involves several organizations and standards:
Together, all these agencies and the standards they promote make The Internet a very reliable network for all kinds of business, social, and entertainment communications. Connecting computers was relatively expensive in the 1980s, as much as $1 per transaction or $1 per hour of connect time at a bulletin board like AOL or CompuServe. Today's Internet connections can be $79 per month or cheaper, making them a fraction of a penny per minute. Transactions are practically free compared with outdated technology!
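The per-minute figure above is easy to verify with back-of-the-envelope arithmetic, using the $79-per-month example rate from the text:

```python
# A flat-rate broadband connection at $79 per month works out to a tiny
# fraction of a penny per minute of around-the-clock connect time.
monthly_cost = 79.00
minutes_per_month = 30 * 24 * 60              # 43,200 minutes in a 30-day month
cost_per_minute = monthly_cost / minutes_per_month
print(f"${cost_per_minute:.4f} per minute")   # roughly $0.0018, under a fifth of a cent
```

Compare that with $1 per hour of metered bulletin-board time in the 1980s, which is over 1.6 cents per minute, nearly ten times the cost.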
There are some situations where a private network using leased lines or PSTN is required today. But, for most purposes a connection through The Internet secured by PKI is secure enough for eCommerce and private networks are used less and less.[[[Interactive with quiz about agencies names/acronymns and what they regulate]]]
The IEEE - Institute of Electrical and Electronics Engineers is dedicated to advancing technology of all kinds for the benefit of humanity. Among other advancements, The IEEE's 802 standards for Ethernet apply world-wide for LANs - Local Area Networking and MANs - Metropolitan Area Networks. After its invention by Xerox, Ethernet standards came under the care of IEEE and they have continued to grow the family as technology has developed since the 1990s.
Ethernet was originally a copper-wire bus technology and has grown to include WLAN - Wireless LAN aka Wi-Fi, and Fiber Optics. In recent years IEEE released Ethernet Fabric protocols that have dramatically increased network throughput for disk and SSD storage devices, servers, and other applications requiring very high-speed networks.
The common addressing scheme, MAC - Media Access Control addressing, is regulated by IEEE. IEEE maintains a list of OUIs - Organizationally Unique Identifiers for manufacturers of the chips that manage Ethernet interfaces. An OUI forms the first half of the 48-bit MAC address assigned to every Ethernet interface. An electronic serial number assigned by the manufacturer is the second half of the MAC address. Every Ethernet interface has a unique address that identifies it on the LAN where it is attached.[[[Graphic showing MAC Addresses and how they work with IPV4 and IPV6 ]]]
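The two halves of a MAC address are easy to demonstrate; the address used here is made up for illustration:

```python
# Splitting a 48-bit MAC address into its two halves: the 24-bit OUI assigned
# to the manufacturer by IEEE, and the 24-bit serial number assigned by the
# manufacturer to the individual interface.
def split_mac(mac: str):
    octets = mac.split(":")
    assert len(octets) == 6, "a MAC address has six 8-bit octets"
    oui = ":".join(octets[:3])      # first 24 bits: who made the interface
    serial = ":".join(octets[3:])   # last 24 bits: which interface it is
    return oui, serial

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui, serial)   # 00:1A:2B 3C:4D:5E
```

Looking up the OUI in IEEE's public registry tells you which manufacturer made a given interface.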
The ITU - International Telecommunication Union is committed to connecting the world. The ITU-T is the division of the ITU that coordinates standards for telecommunications services of most types, including landlines, radio, and satellites.
The ITU was formed as the International Telegraph Union in the mid-1800s as telegraph systems began to span continents and connect them together. As telephones emerged in the later 1800s the systems were largely incompatible and remained so through the 1970s. It required expensive connections made through 3rd party companies to bridge incompatible systems, making international calls very costly. In most cases, a telegram was less expensive and much more reliable.
Long-distance and international phone calls and network connections required manual switching and cost dollars per minute through the 1980s. Incompatible systems, limited bandwidth on long-line networks, and a lack of competition kept costs very high. For example, telephone companies in the US and its allies standardized on the T-Carrier System for telephone and data in the mid-1960s. A few years later, the E-Carrier System emerged in Europe and was adopted by other countries around the world. The E-Carrier System was a couple of years more advanced and had more bandwidth per circuit than T-Carrier, but the equipment was not compatible. For years, calls or connections between the two systems required manual switching, which made them very expensive.
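The bandwidth difference between the two carrier systems is simple arithmetic: both multiplex 64 Kbps DS0 voice channels, but T-1 carries 24 of them (plus framing) while E-1 carries 32:

```python
# Why E-1 carries more than T-1: both aggregate 64 Kbps DS0 channels.
DS0 = 64_000                     # bits per second per voice channel

t1 = 24 * DS0 + 8_000            # 24 channels + 8 Kbps framing = 1.544 Mbps
e1 = 32 * DS0                    # 32 channels = 2.048 Mbps

print(f"T-1: {t1 / 1e6:.3f} Mbps, E-1: {e1 / 1e6:.3f} Mbps")
```

The same 64 Kbps increment is the one the PSTN provisions circuits in, as noted earlier in this chapter.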
In the 1980s and '90s the ITU was tasked by the UN - United Nations to make telephone and optical networks compatible world-wide. Today, we see their efforts were successful! As new fiber-optic circuits replaced old copper long-lines, surplus fiber capacity was laid underground and under the oceans.
Today, we have lots of bandwidth, lots of companies providing it, and real competition for telecommunications services which are faster and less expensive than ever.
After decades of ITU's coordination, long-distance and international calls cost pennies per minute today. International calls may be dialed directly without an operator switching circuits, and the quality of service is very high on optical circuits. Everybody understands that networks are more valuable when they connect with other networks without a human's touch.[[[Interactive with PSTN facts]]]
One way of surveying networks is by their size, or how much area they cover. From larger to smaller, these terms and acronyms are commonly used to classify networks:
Another way of categorizing networks is by the technologies they use for moving data around networks of differing sizes. These are often referred to as 'network media'. Some are involved in more than one of the sizes discussed above.
The #1 requirement for copper-wire connected networking equipment is that it all must be connected to the same electrical service and ground source. This is usually not a problem in a small office or home. But many larger buildings have more than one electrical service and ground source, and it can be expensive to discover this _after_ a network has been installed and computers and networking equipment become intermittent or fried.
The little D-shaped conductor on a 110-volt electrical connection carries the ground and the other two wires carry the electrical current. For many appliances the ground is a safety precaution. For networks the ground also provides the 'zero' reference level so it's very important that copper-wired networks operate on the same electrical supply and ground.[[[Need images similar to those in the links for this important discussion]]]
Here are pictures of a typical residential ground stake and commercial grounding rods. These are mentioned because most problems with network and computer equipment getting fried can be traced to problems with the ground. The grounding system for many addresses, residential and commercial, is inadequate either because of poor installation, damage, or because some of them deteriorate over time. While most appliances can tolerate poor grounding, Ethernet and other copper-wired networks are very sensitive to it.
If an Ethernet switch and a PC are connected to different grounding rods some distance apart, a ground loop results where large differences in ground potential will be carried over network wiring and circuits designed for low voltage, 'frying' the equipment at either end of the connection. Ground potential constantly fluctuates, and weather or other atmospheric conditions can cause huge differences in ground potential between grounding rods installed some distance apart.[[[Google 'ground loop illustration' for an assortment of diagrams showing ground loops...]]]
The LAN media usually carry Ethernet these days, but there remain some offices with IBM Mainframes that still run an 'IBM Token Ring network' instead of Ethernet. Token Ring equipment is so quick, stable, and reliable that even fifteen years later the quality of service is excellent and there's often no compelling reason to replace it.
Today's most common type of LAN equipment is an Ethernet Switch that runs 10baseT Ethernet, 100baseT 'fast Ethernet', and 1000baseT 'gigabit Ethernet'. Most inexpensive LAN equipment today is '10/100 Mbps Ethernet' and is cabled with 'CAT-5e wiring' that uses the familiar 8-conductor RJ45 connector. For longer cable runs, CAT-6 is often specified to carry 1000baseT reliably.
10 Gigabit Ethernet is an option for larger Ethernets and can be run over copper wire, CAT6a. Faster speeds for Ethernet require optical circuits.
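As a rough guide (summarized from common practice, not the full cabling standards), the twisted-pair Ethernet grades and their usual cable categories line up like this:

```python
# Common twisted-pair Ethernet grades and the cable category usually
# specified for a full-length run. A rough guide only; consult the
# IEEE 802.3 standards and cabling specs for the fine print.
ETHERNET_OVER_COPPER = {
    "10baseT":   ("10 Mbps",  "CAT-3 or better"),
    "100baseT":  ("100 Mbps", "CAT-5"),
    "1000baseT": ("1 Gbps",   "CAT-5e"),
    "10GbaseT":  ("10 Gbps",  "CAT-6a"),
}

for grade, (speed, cable) in ETHERNET_OVER_COPPER.items():
    print(f"{grade:<10} {speed:>8}  over {cable}")
```

Beyond 10 Gbps, as the text notes, optical circuits take over from copper.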
Copper wire and radio frequency are the most common physical layer today. Fiber is an expensive option for LANs where it's required, but it can carry data at extremely fast rates, 1000s of times faster than copper wire. Ordinary Ethernet users seldom need these higher bandwidths so most connections to offices are likely to use copper wire or radio frequency for some time to come.[[[Collection of Ethernet packet and equipment]]]
In recent years, WLAN - Wireless LAN or Wi-Fi has become reliable and affordable, is much less expensive than running copper wires, and is becoming more and more popular for home use. Wi-Fi users at home or managing services for business must be proactive and vigilant about securing wireless networks.
Perhaps the most important caveat of Wi-Fi is its extreme vulnerability to hackers, who may use a directional antenna to sniff, or invade, a network even from some distance away. Next-door neighbors of a residence, business, or dorm room don't even need the directional antenna, only an inclination to sniff and exploit your Wi-Fi and some free software. Or, a WAP - Wireless Access Point in a backpack or other 'hotspot' in a cafe can be used in a 'man in the middle' attack to capture your userid, password, and where they're used.
WLANs provide additional points of vulnerability and require proactive strategy by LAN administrators striving to provide security for sensitive customer and business data.
It only takes a few minutes to crack the SSID - Service Set Identifier and password for a WLAN secured by the faster, older WEP - Wired Equivalent Privacy. Even the newer WPA2 - Wi-Fi Protected Access with TKIP or AES can be cracked in a few hours when a weak password is used. It only takes a few seconds to set up a 'man in the middle' attack on an unsuspecting and careless Wi-Fi user and steal their userid, passwords, and account numbers.
One of the most powerful demos of cracking Wi-Fi the author has seen was from a student who brought a small desktop tower computer, with a screen-door handle on the top for portability and two Wi-Fi adapters. It had no hard disk; it booted from a floppy disk and loaded the application he had crafted from another floppy. One of the adapters constantly searched for Wi-Fi SSIDs, hidden or not, figured out the passwords, and kept a little database of available Wi-Fi networks. The other adapter robbed Wi-Fi bandwidth for the student and his roommates, who spent $0 for their Internet bandwidth.
Where very fast access is required among computers in a cluster or server farm, a SAN - Storage Area Network is often deployed. These are networks specialized for connecting servers to storage devices, and optical Ethernet is the favored technology.
In other LANs fiber optics are the exception and their use is limited. The physics of optical networks provide the highest bandwidth where needed. With ordinary Ethernet working at Gigabit speeds, optical networks aren't needed in many applications.
Optical networking equipment and cables are very expensive relative to copper wire and Wi-Fi. Optical equipment and cables can cost 10 times more than their copper counterparts.
One problem that can be solved easily with optical networks is the 'ground loop' problem mentioned above. It can be much less expensive to invest in an optical 'backbone' or trunk to connect buildings with different electrical services and ground sources than to remedy the problems with the ground. Since the 1970s there have been Optical Isolators to solve the problem in networks. Today, optical ethernet switches or bridges do the job.
Telecommunications services and providers get their own chapter later in this text and are introduced here because they are extremely important for a WAN.
In tariffs for telecommunication services, the local telephone company is called a LEC - Local Exchange Carrier, which provides the Last Mile circuits that connect businesses and residences to the PSTN. They also provide leased circuits for data, alarms, and controls that are full-time connections and not switched. In most areas the LEC shares right-of-way with other municipal or rural utilities, running their wires on utility poles or in underground trunks along with cables for electrical and cable services.
Local exchanges had operated as branches of the Bell Telephone Company, a monopoly, since they were built in the early 1900s. Since the Telecommunications Act of 1996 the LECs have been obligated to provide space and services at their local exchanges for competing local, long-line, and Internet service providers at fair rates according to the tariff. This competition fostered the growth of a diverse lot of telecommunications providers, facilitated The Internet's growth, and provided a dramatic decrease in the cost of telecommunications services.
Now, tariffs reference two kinds of LECs: ILEC - Incumbent LEC and CLEC - Competing LEC. The ILEC owns the central offices, aka switches or exchanges, where local and long-distance cables are connected to their telephone and network equipment. The ILEC is involved in the CLEC's service delivery, and the relationship is strained in some cases, but the result has been better than the monopoly that existed before.
In most US cities, the ILEC is now Verizon, who bought up all the failing Bell companies following the split-up of Bell Telephone in the 1980s. In Richmond, we have two CLECs providing telephone services who have survived and prospered: Cavalier and USLEC. There were a couple others who started after '96 but failed to continue past Y2K. There are also other competitors who provide networking services but no telephone services: COVAD/Megapath is one, ESR - Electronic Services in Richmond is another.
ILECs own a lot of copper wire for the local connections among their telephone customers. Since the late 1800s the infrastructure for telephones has been managed as any other utility and the telephone wires share/rent space on the same utility poles and rights of way as the power, water, cable, and gas utilities. Now, they're obligated to provide local connections to their competitors' customers.
Most neighborhoods have had copper wires in place for telephone services, many since the 1940s, and these same wires also carry DSL - Digital Subscriber Line and ISDN - Integrated Services Digital Network services. Other copper circuits carry T-1 and T-3.
Business parks, high-rise buildings, hospitals, universities, and data centers form neighborhoods that have had traditional Telco fiber-optic circuits to their premises for decades, since the 1980s.
Last mile media are generally named by the service provided by the LEC or other carrier in the exchange. Examples that are easy to look up are analog POTS and leased lines, leased digital circuits, ISDN BRI & PRI, T-1, T-3, OC-3, and OC-12. These range in price from about $29 per month for ISDN BRI with 128 kilobits per second of bandwidth through a couple thousand or more for OC-3 with 155 megabits per second.
More recently Verizon FIOS, Google Fiber, and other providers started bringing optical circuits directly to residential doorsteps, too. These media support telephone, Internet, and television. They compete very favorably for business services and should be considered along with traditional telephone services.
Cable companies have offered broadband Internet since the 1990s using the same coaxial copper infrastructure that provides cable TV. More recently, 'hybrid' fiber-optic services like Comcast xFinity bring The Internet to the neighborhood on high-speed fiber circuits and distribute it to the doorstep on the legacy copper circuits that have been in the neighborhood for decades.
Managers who need telecommunications services are advised to seek bids from at least a few providers, including the ILEC. With new services being deployed by their competitors some telephone companies have become very aggressive in pricing for their traditional services.[[[Telco cabling and other infrastructure, stuff hanging on utility poles]]]
Frame Relay is another important WAN medium provided by telephone companies. Frame Relay service is provided over any of the above last mile services. It provides the security of a real private network when used to communicate with other nodes on the Frame Relay network, and it can connect to The Internet.
Frame Relay service is similar to The Internet in that it is a packet-switching technology that can relay a 'frame' of data instantly from one Frame Relay interface to another anywhere in the world.[[[Show a diagram of the telco frame-relay network]]]
Frame Relay allowed many businesses to retire the leased circuits that interconnected their branches and offices and replace them with local, last mile connections to the telephone company near each facility.
Frame Relay doesn't travel on Internet circuits so data remains on private networks.[[[Show a Frame Relay Packet w/ an IP]]]
Frame Relay is also an option for connecting to an ISP and should be considered when contracting for bandwidth in a business neighborhood. Telephone companies are pricing their traditional services very competitively these days and are often in a position to bid the best price.
MPLS-Multi Protocol Label Switching circuits are emerging as a better alternative to Frame Relay today. These are compatible with telecommunications services and can be used to extend LANs over long distances.
SONET - Synchronous Optical Network standards emerged in the early '90s and have made optical circuits of the several competing network providers compatible worldwide. Prior to the '90s these optical networks were mostly incompatible but telecommunications carriers quickly learned the value of making networks compatible. Today, these networks provide inexpensive, reliable bandwidth for The Internet and the PSTN.[[[Table showing Bandwidths for OC-3 through OC-192, 256?]]]
A Storage Area Network links storage devices with servers in a cluster or a farm. SAN technology places storage devices and servers on a very fast network of their own, apart from the LANs that connect users to the network. Most SANs use Fibre Channel or high-speed optical Ethernet and Ethernet Fabric technology to ensure the quickest access to data by servers. SANs transfer data between servers and storage units in 'block mode', similar to the way directly attached disks behave.
NAS-Network Attached Storage should be mentioned along with SANs. NAS units are relatively inexpensive and easy to attach to a LAN so clients and servers can share arrays of disk or solid state drives. They do not scale up to handle larger loads very well since the heavy traffic among storage and servers competes for bandwidth on the same LAN as the clients. QoS is poor at times of peak use. Many SANs replace NAS in a maturing business, often following a period of 'poor network performance'.
Like other large organizations with offices all over town, VCU maintains a private 'Metropolitan Area Network' (MAN) made up of fiber they've laid between buildings on campus since the 1970s, leased circuits to connect facilities scattered across town, and several OC - Optical Carrier connections to The Internet in redundant locations. Some of the manhole covers around campus say VCUNet.
VCU's MAN is well-protected from the bad guys on The Internet, and provides excellent ISP services for some 40,000+ of us in the University community who enjoy the quick response we get on the University's Internet2 Superhighway. VCUNet is constantly defending us against the threat of crackers the world over who are drawn in by the bandwidth of a University and the possibility of loosely tended servers that can abuse that bandwidth.
It is not easy or cheap to provide enough bandwidth so an atrium, or classroom, full of students is mostly happy with the response of social media, Pokemon Go, and maybe even courseware on their mobile devices and notebook computers.
On 'The Internet side' of a university or enterprise network, multiple fiber optic connections provide fast and reliable internet access to and from the rest of the world. There is enough bandwidth to support multiple 'distance learning classrooms' and other real-time video links to other classrooms, surgical suites, and other facilities around the world while taking care of the many thousands of students and faculty who are getting email or browsing the web at any time.
PANs, aka PicoNets, came along with the advent of personal computers in the early '80s and exploded in popularity with the emergence of the smartphones, pods, and tablets that have been added to our personal networks in recent years.
Early personal computers had separate ports with different connectors for parallel, serial, keyboard, mouse, SCSI, game, and MIDI controllers. These connected printers, keyboards, mice, modems, mid-range or mainframe computers, scanners, games, musical instruments, and lab equipment to desktop and notebook PCs.
Since 1996 USB - Universal Serial Bus technology has eclipsed all these ports and connectors. Recent USB-3 and USB-C technology is so fast and flexible that it can replace the other connectors. If you've got a legacy device in your PAN that has no USB connector there is likely a converter for it.
Cameras and audio need lots of bandwidth and that's been provided with Firewire, and later Thunderbolt, which are ordinary ports on an Ultrabook or computer built for graphics processing.
Today's PANs use USB along with Bluetooth, HDMI/4K, Firewire, or Thunderbolt to connect personal and mobile computers to personal devices of all types. Wired Ethernet, WiFi, or PSTN connect the PAN to a LAN or The Internet.
Bluetooth was intended to be a direct replacement for the cables connecting keyboard, mouse, and other i/o devices to a computer, and the first version only worked within three feet. Current Bluetooth works for 30 or 40 feet and is subject to interference from other Bluetooth and Wi-Fi devices. Protocols for Bluetooth have progressed so that 'pairing' of personal devices with each other is simple or entirely automatic. Once paired, all we need to do to get a connection is get the devices close enough and power them up, and they connect.
Security is a concern, especially with wireless on a PAN. Every week there is news of some vulnerability or exploit affecting some brand or version of some personal device.
USB emerged as an industry standard in the mid '90s and quickly eclipsed other connectors in the PAN. Windows and Mac suppliers quickly made drivers so that serial and parallel printers, modems, game controllers, speakers and headsets, MIDI - Musical Instrument Digital Interface, Ethernet, WiFi, and other legacy devices could continue to be used with the new USB technology. As folks bought new computers they found inexpensive adaptors to connect their personal devices to the new machines. If there were not enough USB ports, an inexpensive USB Hub with several ports made it easy to plug in several devices. USB quickly became Universal.
USB has seen three versions since the '90s, each an order of magnitude or more faster than the prior, so that USB-3, aka SuperSpeed USB, gives us 5 Gigabits per second of bandwidth to operate our devices. USB connects printers, Hard & Solid State Disks, CD, DVD, Blu-ray, and other peripheral devices to our PAN. USB-C, with a small connector that can be plugged in either way, handles all earlier flavors of USB, VGA, HDMI, Ethernet, Thunderbolt, and the ancient Serial networks. USB-C links computers to home theatre and audio equipment to drive Dolby 5.1 or 7.1 Ch and whatever comes next.
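The jump from version to version is easier to feel with a little arithmetic. Here is a sketch of best-case transfer times at each generation's nominal signaling rate; real throughput is lower because of protocol overhead, and the 4 GiB file size is just an example.

```python
# Nominal signaling rates for USB generations, in bits per second.
# Real-world throughput is lower because of protocol overhead.
RATES_BPS = {
    "USB 1.1 (Full Speed)": 12e6,
    "USB 2.0 (Hi-Speed)":   480e6,
    "USB 3.0 (SuperSpeed)": 5e9,
}

def transfer_seconds(size_bytes, rate_bps):
    """Best-case time to move size_bytes at the given line rate."""
    return size_bytes * 8 / rate_bps

movie = 4 * 1024**3  # a 4 GiB file, e.g. a video project
for name, rate in RATES_BPS.items():
    print(f"{name:22s} {transfer_seconds(movie, rate):9.1f} s")
```

The same file that ties up USB 1.1 for the better part of an hour moves over SuperSpeed in seconds, which is why USB-3 can plausibly replace the dedicated disk and video connectors it eclipsed.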
As portable computers have gotten smaller and smaller there is less and less room for connectors. Some UltraBooks, small and very high-powered notebook computers, have only USB-C and USB-2 ports on them. It's likely the USB-2 ports will soon vanish since they take up about 4 times as much real-estate on the edges of our portable devices as USB-C, which is also much more flexible than the Mini and Micro USB ports now used on smaller devices.
Today's PANs are easier to use, faster, and cheaper than ever before. Small USB ports replaced bulky serial and parallel network connections and helped to simplify and reduce the cost of PAN connections. Most personal computers today don't include serial or parallel ports, and many notebook and ultrabook computers don't include a wired Ethernet port.
HDMI - High Definition Multimedia Interface is a proprietary industry standard that is widely licensed. In 2016, HDMI ports on personal computers and devices have mostly replaced the VGA - Video Graphics Array interfaces introduced by IBM in 1987. HDMI and the newer 4K - Ultra High Definition make it easy to mingle computers and televisions as our viewing habits are influenced by technology. Where VGA was ordinary from the late 1980s, it's been rare on personal computers since the 2010s.
Bluetooth technology is built into almost all of our mobile devices and many desktop systems. It is an industry standard managed by the Bluetooth Special Interest Group. Bluetooth provides a wireless way to connect with earbuds and headsets, keyboards and mice, game controllers, or OBD - On Board Diagnostics and audio systems in our automobiles. One Bluetooth controller can accommodate up to seven devices and the protocol allows for an automatic connection when a device comes within range or is powered up.
Bluetooth puts a low-powered radio transceiver on a computer chip. Bluetooth Version 1 had an effective range of only three feet, but the most-used version today has a range of 30 feet. It shares bandwidth in the 2.4 GigaHertz range with WiFi and portable phones. If there are lots of Bluetooth devices near each other there may be problems with interference.
Because there are so many Bluetooth devices in the field, Crowdsourced Location Devices are becoming popular. These use a small battery-powered 'tile' that is put into or stuck onto a bike, dog's collar, car, phone, wallet, suitcase, or other easily lost property. If lost or stolen, the Bluetooth tile will connect with your mobile device, or one of another subscriber's, and the tile will report the location to the owner.
There are security concerns with Bluetooth, too, with many documented hacks against it.
Apple gave us Firewire, IEEE 1394, with bandwidth of 400 Megabits per second. Development began in the late '80s, it was standardized in 1995, and it was only recently eclipsed by newer USB. Firewire ports became standard equipment on most high-performance notebook and desktop computers, even Intel and Windows machines, as manufacturers of video cameras and other entertainment media adopted it.
Intel and Apple worked together to add Thunderbolt to the PAN. Around 2015, Thunderbolt 3 adopted the USB-C connector and got us to an amazing 40 Gigabits per second! Originally expected to use fiber-optic connectors, Thunderbolt uses copper wires to provide high-speed connections.
The most personal of PAN technology uses sensors attached to our bodies to make a BAN - Body Area Network. Some measure tiny electrical potentials, PicoAmps, and provide feedback about heart rate through a smartphone, pod, pad, or watch we wear. Others use motion sensors. Blood sugar, pressure, pH, or oxygen level are easy to measure. Sensors for EEG - Electro Encephalogram and ECG - Electro Cardiogram can continuously monitor brain and heart activity. The applications range from athletics through healthcare, helping tune our fitness or provide 'round the clock monitoring for oldsters or patients. They can also improve the function of life-saving technologies like pacemakers, artificial pancreas, or neural stimulators for Parkinson's or other brain disorders.
BAN devices are easy to connect to PAN devices, or they can connect to healthcare providers via The Internet to get real-time monitoring that was not possible in the not-too-distant past.
CAN - Controller Area Networks, aka Car Area Networks, have become more common in our automobiles in recent years. A CAN lets one or more computers monitor and control components that were formerly wired directly to their own sensors and switches.
The bus of the CAN model replaces hundreds or thousands of feet of wires in our cars. Without the CAN, separate wires link sensors, actuators, lights, seats, mirrors, shifters, and engine components; the CAN provides a bus-like structure where one cable links lots of components. Popular Mechanics provides a good introduction to this technology: The Computer Inside your Car.
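Because everything shares one bus, the CAN protocol needs a way to decide who talks when two components transmit at once: frames carry an identifier, and the lowest identifier wins arbitration, so safety-critical messages get priority. Here is a minimal sketch of that rule; the IDs and payloads are invented for illustration.

```python
# Sketch of CAN bus arbitration: every node shares one cable, frames
# carry an 11-bit identifier, and the LOWEST identifier wins when two
# nodes transmit at once. The IDs and payloads below are made up.
from dataclasses import dataclass

@dataclass
class CanFrame:
    can_id: int   # 11-bit identifier, doubles as the frame's priority
    data: bytes   # 0-8 bytes of payload in classic CAN

def arbitrate(frames):
    """Return the frame that wins the bus: the one with the lowest ID."""
    return min(frames, key=lambda f: f.can_id)

brake = CanFrame(can_id=0x010, data=b"\x01")       # safety-critical: low ID
seat  = CanFrame(can_id=0x3F0, data=b"\x42\x07")   # comfort feature: high ID
print(hex(arbitrate([seat, brake]).can_id))        # the brake frame wins
```

Assigning low IDs to brakes and engine messages and high IDs to seat and mirror messages is how designers guarantee the important traffic always gets through on a shared cable.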
Since the '80s cars have had computers embedded in their engine controls, and the years following were frustrating for customers of early adopters as the bugs were worked out. Today, the bugs have been gone for years and embedded computers run our cars better than ever. Gone are the days of 'tuning' the engine every 6,000 miles or so -- solid state distributors don't require replacement the way old metallic 'points' did and the computer dynamically adjusts the timing. The embedded computer constantly tunes our engines and greatly extends the mileage between tune ups so that many cars run for 40,000 or even 100,000 miles without tuning or replacing electrical or fuel system components.
CANs have already been abused: In the summer of 2016 Volkswagen reached a $15 Billion settlement following proof that they had programmed some models of their diesel engines' computers to behave differently when on the road vs. when plugged into a diagnostic computer. When emissions are being measured, the computer tweaks the engine for low performance and low emissions. On the road, the computer tweaks the engine for high power and emissions go way past the limits. Now VW will buy back or repair a half-million diesel-powered cars, plus give the owners $10,000 for their troubles.
Automobile components that used to be entirely mechanical or hydraulic, like the transmission, are now computer controlled. The gear-shift lever is no longer attached mechanically to the gears, it's connected to the computer. Jeeps had an issue with this recently, where the gear-shift lever doesn't clearly indicate Neutral vs. Park and Jeeps were driving away when the driver stepped out with Neutral selected instead of Park. Anton Yelchin, famous for his role in Star Trek, was killed by his Jeep in the Spring of 2016. The fix wasn't to replace the lever, it was to reprogram the computer so that the Jeep engages Park whenever the driver's door opens and the vehicle is not moving.
Many aircraft are entirely 'fly by wire' and automobiles are headed that way. Airbus cockpits have been entirely electrical, with the huge airplane controlled by a digital joystick at the pilot's hand, for more than a decade. Boeing's earlier models use analog controls, but the 777 is fly by wire -- the traditional 'control yoke' is connected to a computer network, not a hydraulic system or electrical servos as before. The current exploration of self-driving cars uses 'drive by wire' controls similar to aircraft.
With a computer network in place, it becomes less expensive to use motors to adjust mirrors, seats, a/c, heater, or windows rather than the cranks, gears, and cable actuators used before computers took over our cars. From the '50s through the '70s electric windows, seats, and mirrors were expensive and only found in luxury cars. Now, these are the least expensive way.
The unix-like QNX, maintained by RIM/Blackberry on a RISC CPU by Freescale/NXP, is the ordinary platform for a CAN. OBD - On Board Diagnostic standards make it easy for a mechanic, or an owner, to interface with the QNX/NXP platform via a cable or Bluetooth unit plugged in under the dashboard.
Today's CANs also include USB, Bluetooth, and WiFi components that reach out and embrace our PAN devices as we strap on our seatbelts.
Although it doesn't fit with the XAN 'area network' scheme, SCADA - Supervisory Control and Data Acquisition is an important acronym for modern networks. SCADA networks largely rely on PLCs - Programmable Logic Controllers that interface with serial networks, Ethernet, and The Internet for ICS - Industrial Control Systems.
Factories use PLCs to link process control computers to sensors and the equipment they control. The instructor worked to support equipment in yards that manufacture concrete and asphalt, where PLCs were used to direct the materials to and control the mixing towers that loaded the trucks. Power distribution systems, all kinds of utilities like water treatment and distribution systems, dams, barrages, and spillways are all operated with SCADA technology. HVAC - Heating Ventilation and Air-Conditioning and other environmental systems for buildings, large and small, benefit from SCADA and PLC technologies.
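At the heart of every PLC is a 'scan cycle': read the inputs, evaluate the control logic, write the outputs, and repeat, many times a second. Here is a minimal sketch of one scan for an imagined mixing-tower hopper; the tank levels, setpoints, and output names are invented for illustration, not taken from any real plant.

```python
# Sketch of one PLC scan cycle: read inputs, evaluate logic, write
# outputs. The levels, setpoint, and overflow limit are invented.
def scan(inputs):
    """Decide the outputs for one scan from the current inputs."""
    outputs = {"fill_valve": False, "mixer_motor": False, "alarm": False}
    if inputs["level"] > inputs["overflow"]:
        outputs["alarm"] = True            # overfull: stop everything, alarm
    elif inputs["level"] < inputs["setpoint"]:
        outputs["fill_valve"] = True       # below setpoint: keep filling
    else:
        outputs["mixer_motor"] = True      # enough material: run the mixer
    return outputs

print(scan({"level": 40,  "setpoint": 100, "overflow": 120}))  # filling
print(scan({"level": 130, "setpoint": 100, "overflow": 120}))  # alarm!
```

A real PLC expresses the same logic in ladder diagrams or function blocks rather than Python, but the read-evaluate-write loop is the same, which is why the devices are so predictable and reliable.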
LabVIEW is software that provides a GUI used to design and test control systems with 'virtual' components, and also to control operations for real. Google labview and check images for an eyeful of LabVIEW applications.
Several companies make the PLCs: Siemens, Motorola, Panasonic, Qualcomm, and others. The PLCs are highly standardized and well understood, making them economical and reliable components for critical manufacturing and social infrastructure.
The standardization also makes them vulnerable to crackers or cyber warriors. Stuxnet is perhaps the best known PLC exploit, where some unknown entity planted the Stuxnet virus in Windows machines in Iran; it sought out PLCs controlling nuclear enrichment centrifuges and damaged them.
Today's 'smart grid' power systems rely on SCADA networks and PLCs to make our electrical supply more reliable. These networks can be of strategic importance and it is more critical than ever to see that they are operated securely, to deny vandals and enemies this vector of attack.
Yet another way to survey networks is by the 'protocols' used for communication.
Two protocols predominate EBusiness and home networking and internetworking. For LANs there is 'Ethernet', and for connecting LANs to other networks there is 'IP', Internet Protocol. These technologies were engineered for compatibility and have eclipsed most other proprietary LAN and WAN protocols. IP handles the traffic among trading partners in EBusiness and their customers, and it has become more and more important for inter-personal and social communications, too.
LAN protocols are made to run on very reliable, wired or wireless, networks. They devote relatively little bandwidth and intelligence to traffic management.
Windows computers typically use a set of 'LAN Protocols' to make what is shown as a 'Network Neighborhood', using SMB - Server Message Block. Macs & Linux machines are comfortable in the Network Neighborhood running SMB protocols via an open source version called 'Samba'. Windows also supports what used to be Novell's IPX/SPX, and arcane protocols like NetBEUI & NetBIOS are important for some legacy equipment. IP (Internet Protocol) will also be supported when a LAN is attached to The Internet, but it is not as quick and efficient for LAN traffic as the LAN protocols that handle traffic among servers and PCs on a LAN.
Microsoft included a suite of Ethernet protocols in Windows 3.11 for Workgroups in 1992. Prior to this, LAN technologies were proprietary; there were a dozen or more of them in common use, and they were incompatible with one another. Soon after, Apple supplemented their AppleTalk networks with Ethernet. With these companies providing Ethernet practically 'for free' it soon became the ordinary LAN technology and the others have mostly fallen out of use.
Computers' and Routers' Ethernet Network Interface Cards (NICs) each have a unique 48-bit MAC-Media Access Control address, also called a 'hardware address' or 'physical address'. It is 'burned' into them when they are manufactured, like an electronic serial number.
The IEEE - Institute of Electrical and Electronics Engineers regulates MAC addresses to ensure that each NIC's MAC address is unique. The first 24 bits are the OUI - Organizationally Unique Identifier assigned by IEEE, the last 24 bits are assigned sequentially by the manufacturer. (Click OUI List to see a recent OUI list.) The NICs are labelled and the MAC address verified as part of the quality control process as they are packaged.
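The 24/24 split described above is easy to see in code. Here is a minimal sketch that divides a MAC address into its IEEE-assigned OUI and the manufacturer-assigned serial; the example address is made up, and real OUIs should be looked up in the IEEE registry.

```python
# Split a 48-bit MAC address into the IEEE-assigned OUI (first 24
# bits, three octets) and the manufacturer-assigned serial (last 24
# bits). The example address below is invented.
def split_mac(mac: str):
    octets = mac.lower().replace("-", ":").split(":")
    assert len(octets) == 6, "a MAC address has six octets"
    oui = ":".join(octets[:3])       # identifies the manufacturer
    serial = ":".join(octets[3:])    # assigned sequentially at the factory
    return oui, serial

print(split_mac("00-1A-2B-3C-4D-5E"))   # ('00:1a:2b', '3c:4d:5e')
```

Network tools use exactly this split to label unknown devices on a LAN with their manufacturer's name, which is handy when hunting down a mystery device.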
The networking components of computers' and routers' operating systems use ARP - Address Resolution Protocol to keep track of the MAC and IP addresses assigned to each machine on a LAN, refreshing their cached entries every few minutes or as traffic requires. An ARP request broadcast on a LAN literally asks 'who has this IP address?' and the device that has it replies with its IP and MAC address.
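The result of those broadcasts is a small table, the ARP cache, mapping IP addresses to MAC addresses. Here is a minimal sketch of that cache; the addresses are invented private-range examples.

```python
# Sketch of the ARP cache a host or router maintains. An ARP request
# broadcasts "who has 192.168.1.20?" and the owner's reply fills in
# the table. All addresses below are invented examples.
arp_cache = {}   # ip address -> mac address

def arp_reply(ip, mac):
    """Record the answer to an ARP request in the cache."""
    arp_cache[ip] = mac

def lookup(ip):
    """Return the cached MAC, or None - meaning we must broadcast."""
    return arp_cache.get(ip)

arp_reply("192.168.1.20", "00:1a:2b:3c:4d:5e")
print(lookup("192.168.1.20"))   # 00:1a:2b:3c:4d:5e
print(lookup("192.168.1.99"))   # None - time to send an ARP request
```

On Windows or unix machines, `arp -a` prints this same table, so you can watch entries appear and expire as you reach neighbors on your LAN.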
At the center of the LAN is an 'ethernet hub' or 'ethernet switch' to which all the computers on the LAN are attached and which provides the 'common medium' for the network.
An Ethernet Hub is a relatively simple device, OSI Layer 1, that 'repeats' any network traffic it receives on one port to all devices attached to the hub. All the NICs attached 'hear' the traffic, including their own, but mostly they are programmed to only accept Ethernet traffic with their MAC address. This can cause a security problem in a LAN where a curious or unscrupulous employee can use 'sniffer software' to see data intended for any of the other users of the LAN.
Ethernet Hubs are not entirely obsolete yet because they are durable devices and some have been in service for more than a decade. Hubs are seldom or never deployed as new devices these days because Ethernet Switches are better in practically every regard and have become very affordable.
An Ethernet Switch operates at OSI Layer 2. The protocol allows the switch to 'learn' the MAC addresses of each networked device as it is attached to one of the switch's ports. It uses the MAC address to direct network traffic only where it is destined. This provides additional security and helps improve network performance, reducing the 'collision domain'. Even if someone runs a network sniffer, they'll only see the traffic for their machine. 'Broadcast packets' are sent to every node, but other traffic is inherently more secure on a switch because other packets cannot be 'sniffed'.
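The 'learning' behavior just described can be sketched in a few lines: note which port each source MAC arrives on, forward to the learned port when the destination is known, and flood like a hub only when it isn't. The MAC addresses below are shortened, invented examples.

```python
# Sketch of an OSI Layer 2 'learning switch': it learns which port
# each source MAC arrives on, forwards frames only to the learned
# port, and floods (hub-style) only while a destination is unknown.
class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.table = {}   # mac address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports this frame is sent out of."""
        self.table[src_mac] = in_port        # learn the sender's port
        if dst_mac in self.table:
            return [self.table[dst_mac]]     # deliver to one port only
        return [p for p in self.ports if p != in_port]   # flood

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))   # unknown dst: flood ports 1, 2, 3
print(sw.receive(1, "bb:bb", "aa:aa"))   # learned: deliver to port 0 only
```

After the first couple of frames the switch delivers traffic point-to-point, which is why a sniffer on one port sees only its own conversations plus broadcasts.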
Here's a diagram showing the difference between 'switches' and 'hubs':[[[Need hub vs Switch illustration]]]
Switches are generally more desirable than hubs, and have become more affordable in recent years. Around 1999 an ethernet switch might cost $150 per port, and they were used judiciously to solve problems in LANs, where ethernet hubs cost about $20 per port. Today, 'low end' switches might cost several dollars a port and are generally used instead of hubs.
'Managed Switches' are very desirable in a larger LAN since they provide valuable information about usage and error messages which can save the LAN's administrator valuable hours when LAN problems arise. A modern managed switch is likely to provide access via https for the network administrator, who plugs the switch's IP address into their browser's Address bar to see reports about performance and configure the switch. 24 or 32-port managed switches cost in the range of several hundred to a couple thousand dollars, but the performance benefits and information they provide are invaluable for the network administrator.[[[Graphic showing manager's interface for an HP or Cisco managed switch]]]
Modern switches may also work on OSI Layer 3 to provide routing functions, and on higher OSI Layers to provide QoS-Quality of Service depending on which applications are in use. A LAN has traditionally been defined by the switch at its center. In recent years, more intelligence in switches allows for very flexible VLANs- Virtual LANs and SDN - Software Defined Networks that allow a network administrator to operate a network across several switches and to dynamically define and reconfigure networks according to demands for bandwidth for access to servers and storage devices in a network room or data center.
IBM's Token Ring Network came onto the market before Ethernet became popular. Although they operate with less bandwidth than an Ethernet, they provide excellent quality of service. Token ring networks are still found today where a business or enterprise uses IBM midrange or mainframe computers. Although they're not likely to be used for new installations, many of them have been in use for decades and there is no good reason to replace them. A MAU - Multistation Access Unit that runs a token ring network looks similar to an Ethernet Switch and is likely to use the same CAT-5+ premises wiring.
ARCNET is another early LAN technology, dating from the 1980s, that may be encountered in some niches of the infrastructure. It was used by Radio Shack, DataPoint, Data General, McDonnell Douglas, and several other computer manufacturers prior to the widespread acceptance of Ethernet. It continues in use today.
Both IBM Token Ring and ARCNET are managed by 'token passing' and a QoS - Quality of Service may be engineered. Ethernet is managed by collisions in the network traffic and QoS declines during periods of peak demand -- QoS is achieved by not overloading an Ethernet, but the things tend to grow...
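Ethernet's collision management works by backing off: after each collision a station waits a random number of 'slot times' drawn from a window that doubles with every retry, which is exactly why QoS degrades as an Ethernet gets overloaded. Here is a sketch of that truncated binary exponential backoff; the fixed random seed is arbitrary and just makes the sketch repeatable.

```python
# Sketch of Ethernet's truncated binary exponential backoff: after the
# n-th collision a station waits a random count of slot times chosen
# from 0 .. 2**min(n, 10) - 1, so the wait window doubles each retry.
import random

rng = random.Random(42)   # fixed seed, just for a repeatable sketch

def backoff_slots(collisions):
    """Slot times to wait after the given number of collisions."""
    limit = 2 ** min(collisions, 10)   # window is capped at 1024 slots
    return rng.randrange(limit)

for n in (1, 3, 16):
    print(f"after collision {n:2d}: wait {backoff_slots(n)}"
          f" of up to {2 ** min(n, 10) - 1} slots")
```

Token passing avoids this gamble entirely, each station waits its deterministic turn, which is why Token Ring and ARCNET can promise a QoS that a busy shared Ethernet cannot.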
Serial Networks, especially RS-232 and its somewhat compatible cousin RS-422, were the ordinary LAN technology through the '80s. They were widely used to connect dumb terminals and printers to midrange computers of the day and as personal computers emerged they were the ordinary interface with printers and larger computers. They also played in WANs, where a modem was used to modulate the digital, serial signal so it could be carried over analog telephone circuits and demodulate it back to digital on the other end. Serial networks continue to be used in some niches. Examples are controls or sensors for A/V or industrial equipment, and gas pumps. Cash machines in remote areas are likely to connect to the bank network via a modem and serial link.
To connect two or more LANs, perhaps in a large building, or even scattered around the world, requires internetworking equipment: Routers or Bridges.
If the networks are some distance from one another and all owned by the same organization, it may be described as a Private WAN.
An internetwork is a network of networks, created when privately owned LANs or WANs are interconnected using leased lines and other 'private' media such as Frame Relay.
Many of these internetworks use IP and may be called Intranets. They may be well-connected to The Internet or not at all. Most long line carriers can provide private circuits for private internets that do not share bandwidth with the circuits they provide for The Internet.
Routers are the most common internetworking equipment. They may be built-for-purpose devices made by companies like Cisco or Brocade. Or, they can be implemented in software on server-class or mid-range Unix or Linux platforms with multiple NICs able to handle multiple WAN and LAN media on the same bus as the CPUs and storage devices. Companies like Barracuda, Juniper, and Netgear make networking appliances that behave like IP routers but also provide firewalls and other security like 'net nannies' or secure email where the LAN borders with The Internet.
On the LAN side, routers use ARP - Address Resolution Protocol to regularly poll the devices on the LANs attached to them and update a table of which MAC address is associated with which IP address on the LANs for which it routes. Since all requests for Internet resources go thru the router, it can keep tabs on which MAC Address requested which service from The Internet and reliably get packets back to the requesting MAC address.
Routers also do NAT-Network Address Translation, SNAT-Source NAT, port-forwarding, mangling, and masquerading to help secure resources and balance the load on networks they service.
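Source NAT, or masquerading, is a table-driven trick: the router rewrites each outgoing packet's private source address to its own public address and a fresh port, and remembers the mapping so replies can be translated back. Here is a minimal sketch; the addresses come from the documentation/private ranges and are invented examples.

```python
# Sketch of source NAT (masquerading): outgoing packets get the
# router's public address and a fresh port; a table lets replies be
# translated back. All addresses below are invented examples.
PUBLIC_IP = "203.0.113.7"   # stands in for the router's real public IP
nat_table = {}              # public port -> (private ip, private port)
next_port = 40000

def outbound(priv_ip, priv_port):
    """Translate an outgoing packet's source; return the public pair."""
    global next_port
    pub_port = next_port
    next_port += 1
    nat_table[pub_port] = (priv_ip, priv_port)
    return PUBLIC_IP, pub_port

def inbound(pub_port):
    """Translate a reply back to the private host that asked for it."""
    return nat_table.get(pub_port)

pub = outbound("192.168.1.20", 51515)
print(pub)               # ('203.0.113.7', 40000)
print(inbound(pub[1]))   # ('192.168.1.20', 51515)
```

A side effect of this bookkeeping is security: an unsolicited packet arriving at a public port with no table entry simply has nowhere to go, which is much of what a home router's 'firewall' amounts to.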
And, most routers and network operating systems can act as Firewalls.
On the internet side, routers use RIP - Routing Information Protocol, OSPF, IS-IS, and other routing protocols to chat amongst themselves and report the 'metrics' that let routers know traffic conditions over the horizon. When they are making a choice of route for a packet about to be dispatched they use these metrics to make the best decision. Routers are able to adapt very quickly to changes in routes as routers and circuits go down and come back up.[[[Illustration showing LAN/WAN protocols and ports]]]
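The decision those metrics feed can be sketched as a routing-table lookup: prefer the most specific matching prefix, then break ties with the lowest metric. The prefixes, next hops, and metrics below are invented for illustration.

```python
# Sketch of a routing-table lookup: longest prefix match first, then
# the lowest metric breaks ties. Prefixes and metrics are invented.
import ipaddress

routes = [  # (prefix, next hop, metric reported by routing protocol)
    ("0.0.0.0/0",   "198.51.100.1", 10),   # default route to the ISP
    ("10.0.0.0/8",  "10.255.0.1",    5),   # the whole private internet
    ("10.1.0.0/16", "10.1.255.1",    8),   # a more specific branch
]

def best_route(dest):
    """Pick the next hop for a destination address."""
    ip = ipaddress.ip_address(dest)
    candidates = [(ipaddress.ip_network(p), hop, m)
                  for p, hop, m in routes if ip in ipaddress.ip_network(p)]
    # most specific prefix wins; among equals, the lowest metric wins
    net, hop, metric = max(candidates, key=lambda c: (c[0].prefixlen, -c[2]))
    return hop

print(best_route("10.1.2.3"))    # 10.1.255.1 - the /16 beats the /8
print(best_route("192.0.2.9"))   # 198.51.100.1 - only the default matches
```

When a circuit fails, the routing protocol withdraws or re-prices its entries and the very next lookup picks a surviving path, which is how routers adapt so quickly.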
In larger organizations and Internet Exchanges larger routers may have connections for several networks. The router may have up to a couple dozen 'slots' that accept processor cards for each of the WAN and LAN media they handle. This makes it easy to configure them for as many different kinds of networks as are needed. For example, a router might connect to wired and optical ethernets, ATMs, Frame Relay, cable and dsl modems, and to digital telco services like T-1, T-3, and OC-3 or higher SONET specifications.
The routers used by ISPs and very large organizations that connect directly to internet backbones may be addressed by ASN - Autonomous System Numbers and use BGP - Border Gateway Protocol and IGP - Interior Gateway Protocols.[[[Industrial-strength vs consumer-grade routers]]]
When a computer on a LAN needs data or services from a computer on another LAN, or on an Internet, it's a Router that 'figures out' a route between the computers' LANs, keeps track of the requests, and gets the results back to the requesting NIC. It's often a Router that is designated as the 'gateway' device in the internet setup for PCs in a LAN.
Usually, the routing is 'transparent' to the users of the LANs, who are provided with convenient links, icons, and 'network places' so they point & click to get to resources on a remote LAN or The Internet just like they point & click to get to programs and other things on their desktop.
PCs and other personal devices do their share of routing, too. Commands in Windows like route print, arp, netstat, ping, and tracert can start your investigation into routing.
In public networks, like in cafes and classrooms, people can walk in, turn on Wi-Fi or plug into a wired Ethernet jack, perhaps enter a password, and are automatically connected to The Internet. Their IP address, gateway, and domain name servers are set up indirectly by the network manager, who configures a DHCP - Dynamic Host Configuration Protocol server, often running on the same device as the gateway. Security on these networks may be enhanced by using VPN - Virtual Private Networking software to do better authentication of users who pop up for Wi-Fi services.
In business networks handling personal data for customers and other sensitive information, DHCP is viewed as more vulnerable to hackers, and the IP configuration may be done manually by the network manager, working as administrator on the desktop or other computer, assigning the IP address, gateway, domain servers, and other network properties. Some require that the MAC addresses of devices expected to use the LAN be registered in the router. In these networks it's not a good idea to let somebody walk in, plug in, and gain access to the LAN and The Internet via some rogue machine.[[[Exercise about ipconfig and network manager.]]]
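The MAC-registration policy described above boils down to a set-membership check in the router. Here is a minimal sketch; the registered addresses are invented examples, and note that since MAC addresses can be spoofed this is a hurdle for casual intruders, not a complete defense.

```python
# Sketch of the MAC-registration policy some business LANs enforce:
# the router serves only devices whose hardware addresses were
# registered ahead of time. Addresses below are invented examples.
registered = {
    "00:1a:2b:3c:4d:5e",   # the manager's desktop
    "00:1a:2b:3c:4d:5f",   # the front-office printer
}

def admit(mac: str) -> bool:
    """Admit a device only if its MAC was registered by the admin."""
    return mac.lower() in registered

print(admit("00:1A:2B:3C:4D:5E"))   # True - a registered device
print(admit("de:ad:be:ef:00:01"))   # False - the rogue machine stays off
```

Real routers pair this check with static IP assignments and port security, layering several hurdles in front of that walk-in rogue machine.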
Online Service Providers sprang up in the early 1980s to support individuals who 'dialed into' their networks from home or a small office through a PC's serial port, using a modem to dial a telephone number serviced by a bank of modems. An ISP, who may be located anywhere, can lease a 'Point of Presence' on The Internet from a POP provider, who may be the local ILEC - Incumbent Local Exchange Carrier, a CLEC - Competing Local Exchange Carrier, or a privately owned 'phone bank' or VAN.
These online services were frequented by users of early Apples, RadioShack, Atari, and other personal computers in service prior to the IBM PC and Mac. They were not very important for business at first, but some like Prodigy derived some revenue by advertising.
Some of these on-line services included email, chat rooms, instant messaging and some graphics by the mid and late '80s but they were largely incompatible with each other and email and other features only worked on the subscribed service.
As the World Wide Web gained traction in the mid 1990s some of these online service providers connected their networks to The Internet and became ISPs - Internet Service Providers.
Very few people still connect to The Internet through an ISP like AOL, Earthlink, or Erol's via dialup. Broadband internet services are available in most areas, urban or rural, and many subscribers kept their accounts with these ISPs when they stopped using dialup. The number of hands raised in class when asked 'does anybody _know_ anybody who still uses dialup?' has dwindled from a dozen to one or two over the past few years. Usually, it's an elderly relative or somebody in a very rural area that they know. And, they know not to put big attachments in the email they send since that can tie up a 'phone line for a long time and risk the wrath of the receiver.
Current trends toward VOIP for local & long-distance for individuals and business, and for real-time delivery of High-Definition Video via IP are using The Internet in ways that were not entirely foreseen by engineers of IP & WWW way back in the '70s and '80s. The Internet proved to be a robust and scalable technology, able to connect to any kind of network and handle whatever we throw at it today. De-regulation and anti-monopoly tactics in the USA and around the world have resulted in real competition and make The Internet a very affordable medium for real-time, point-to-point communications of all types.
Industrial Strength Routers are used by Backbone providers, ISPs, and larger organizations with multiple connections to The Internet and their internets; they have a few or several routes available to them. The routers that connect a small or mid-sized LAN to an ISP typically work more like 'bridges': there is only one path on each side -- ethernet to the LAN on one, and a single DSL 'serial line' or coaxial broadband connection on the other.
The common addressing scheme for Internet Backbone Providers and other larger networks is the AS - Autonomous System Number. This scheme and the protocols associated with it provide the mechanisms for operating a network on a truly global scale. They allow a manager, or an automated system, to instantly reroute networks to provide seamless recovery from computer or network failures or provide better throughput for their communications.
Where consumer-grade routers typically have only one route, the big IP Routers have a _choice_ of routes among several 'long line carrier' networks, and they are close to The Internet Backbone or on it. The switchrooms of traditional local telephone providers (CLEC & ILEC) have banks of relatively huge, DC-Direct Current battery powered 'Telco Switches' providing services like POTS, ISDN, T-1, and T-3 for their telephone customers on copper-wire circuits. On the other side of the switch room are the heavy-duty IP routers and switches, usually Cisco, Brocade, or Juniper, that physically connect the 'Telco Infrastructure' to The Internet.
The ISP is (or should be) connected to a higher tier of 'The Internet Backbone' by fast digital media, preferably to multiple carriers, giving those it serves redundant connections for faster and more reliable internet service. This skitter diagram captures a moment of traffic on The Internet, showing who is connected to whom, by the AS, Autonomous System Number -- see who's got the most...
This Internet Service is not to be taken lightly if customers are to be kept satisfied, and the economies of scale do not favor small ISPs. ISPs have needed to beef up their infrastructure to accommodate ever-increasing volumes of traffic as Internet users download more & more stuff. First, the challenge was just to get a few pictures to customers' browsers quickly using network media designed to carry plain text. Now the challenge is to get 'movies on demand' delivered to some customers without slowing down the browsers of others nearby. ISPs who can't keep up with the demands for bandwidth lose customers.
Telephone Company and Long-Distance circuits are circuit-switched and engineered to carry 'real time' voice and data, so the customer has constant access to the full bandwidth provided. Other components, like an ISP's switches or routers, might make a 'bottleneck' and choke Internet traffic, but the Telco-provided circuits always deliver the full bandwidth. They don't work by 'packetizing' data, they work by 'time division multiplexing' so the data (or voice) flow is very predictable. They are very reliable and may run without interruption for the service life of the circuit.
DSL, Cable, FIOS, WiMax and other 'non-traditional' data circuits are attractively priced for residential and commercial service. But, the tariffs in many/most areas allow them to be 'oversold' so the customer doesn't get the full bandwidth advertised.
A DSL advertised as 'up to X megabits per second' may only deliver throughput of 80 or 100 kilobits per second when lots of customers are using the circuits. Xfinity, FIOS, Google Fiber, and other broadband connections can reliably supply the high speed internet connections needed to satisfy residential neighborhoods full of Netflix and Hulu subscribers -- my Hulu regularly gags, Netflix seldom does.
Business circuits are likely to be Traditional Telco circuits:
A T-1 connection provides 1.544 Mbps of bandwidth for perhaps $300 - $800 per month ($275 here in River City!) -- enough to keep dozens & dozens of web-browsers happy if they're not streaming video. A T-3 provides about 45 Mbps for a thousand+ dollars per month. T-1 and T-3 lines can come into a business on copper wires or fiber cables.
For even more bandwidth, 'Optical Carrier' solutions are used. An 'OC-3' connection, via fiber, provides 155.52 Mbps, about 100 times a T-1, for something like $8,000 - $20,000+ (depending on the provider) per month. OC ratings go up through OC-192 at 9.953 Gbps, feasible only for the largest backbone providers and other global organizations.
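The OC levels scale linearly from the 51.84 Mbps OC-1 base rate, so the figures above fall out of a one-line formula; this little sketch just checks the arithmetic:

```python
OC_BASE_MBPS = 51.84  # the SONET OC-1 base line rate

def oc_rate_mbps(n):
    """Line rate of an OC-n circuit in megabits per second."""
    return n * OC_BASE_MBPS

print(round(oc_rate_mbps(3), 2))    # 155.52 -- the OC-3 figure above
print(round(oc_rate_mbps(192), 2))  # 9953.28 -- roughly 9.953 Gbps for OC-192
```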
Verio provides a pricing summary for common digital circuits. Prices are negotiable and may be substantially higher or lower depending on the situation and market.
Unlike the Private WAN, where an organization leases exclusive use of the WAN media, The Internet runs over publicly shared media regulated by different tariffs than telephone traffic. Everybody's traffic is split up into packets and dispatched through routers, perhaps over multiple routes, to the destination, often through equipment that is not subject to the real privacy we expect in telephone exchanges. This brings us to the technologies of Internet Security: VPN - Virtual Private Networking, SSL - Secure Socket Layer, and TLS - Transport Layer Security are trusted to keep data like Credit Card #s or user ids and passwords safe when we buy on-line. More and more organizations are trusting these authentication and encryption technologies to keep their data secure so they can enjoy lower costs for their WANs. Without measures like these, data travelling on The Internet are in an 'open cart' that can be observed by anyone who can 'sniff the subnets' where the data travel.
No one organization or person governs or owns The Internet. The networks are provided by the large telecommunications carriers. The protocols, addressing, and standards are developed and maintained by several organizations.
There are a few kinds of addresses involved on the Internet: IPv4 and IPv6 Addresses, MAC Addresses, and Internet Domains.
Most networks are addressed by IPv4 in 2016. IPv6 is an emerging standard and has been usable for about a decade, but there is a lot of inertia in IPv4, which still carries something like 70% of Internet traffic.
A computer used on The Internet gets an IP address assigned to it either directly by a network manager or indirectly by DHCP - Dynamic Host Configuration Protocol. The IP address is used on The Internet much as a telephone number is used in the PSTN. (The computer's Ethernet address is assigned by the manufacturer as the Ethernet interface is manufactured.)
A 32-bit IPv4 address is usually shown as a 'quartet' of four decimal numbers, each in the range 0 through 255, like '184.108.40.206'. This can provide over 4 Billion unique Internet Addresses, distributed more or less fairly across the globe. IPv6 has exponentially more addresses and will handle us for the foreseeable future.
'128.172' is a significant network around here since it 'belongs to' VCU and the 3rd and 4th quartets are managed by VCUNet.
The quartet is a 'human readable' form of the 32-bit binary number that a Router or computer actually uses. For example: the IP address 168.212.226.204 in binary form is 10101000110101001110001011001100. It's difficult for people to recognize patterns in long strings of binary digits, but relatively easy when they're presented as four decimal values.
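The quad-to-binary conversion can be checked with a few lines of Python's standard library; this sketch converts the binary string above in both directions, and confirms the size of the 32-bit address space:

```python
import socket
import struct

def quad_to_bits(quad):
    """Dotted-quad IPv4 address -> 32-character binary string."""
    packed = socket.inet_aton(quad)         # 4 bytes in network order
    (value,) = struct.unpack("!I", packed)  # one unsigned 32-bit integer
    return format(value, "032b")

def bits_to_quad(bits):
    """32-character binary string -> dotted-quad IPv4 address."""
    return socket.inet_ntoa(struct.pack("!I", int(bits, 2)))

print(quad_to_bits("168.212.226.204"))  # 10101000110101001110001011001100
print(bits_to_quad("10101000110101001110001011001100"))  # 168.212.226.204
print(2 ** 32)  # 4294967296 -- the 'over 4 Billion' IPv4 ceiling
```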
Usually, computers on a LAN are assigned a 'LAN IP Address' vs. a 'Routable IP Address'. Routable IP addresses should only be assigned to web, mail, or application servers intended to be accessed from The Internet.
There are several special ranges of IP addresses carved out of the public address space to assign to private networks. 192.168. is a 'Class C' range available to LAN administrators for 'internal' LAN addresses, which leaves the 3rd and 4th quartets to be managed by the LAN administrator. For larger, more complex LANs, Class A LAN Addresses are in the range of 10.0.0.0 thru 10.255.255.255, allowing for mapping IP to physical networks on a global scale. Class B LAN Addresses are 172.16.0.0 thru 172.31.255.255, and a Class C LAN is 192.168.0.0 thru 192.168.255.255.
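Python's standard-library ipaddress module knows these ranges; here is a minimal sketch (the sample addresses are illustrative) that tests whether an address falls in one of the private blocks:

```python
import ipaddress

# The private ranges described above, in CIDR prefix notation.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),      # 'Class A' 10.0.0.0 - 10.255.255.255
    ipaddress.ip_network("172.16.0.0/12"),   # 'Class B' 172.16.0.0 - 172.31.255.255
    ipaddress.ip_network("192.168.0.0/16"),  # 'Class C' 192.168.0.0 - 192.168.255.255
]

def is_lan_address(quad):
    """True if the address belongs in one of the private LAN ranges."""
    addr = ipaddress.ip_address(quad)
    return any(addr in block for block in PRIVATE_BLOCKS)

print(is_lan_address("192.168.1.20"))  # True  -- a typical home-LAN address
print(is_lan_address("128.172.1.1"))   # False -- a routable address
```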
As a basic security measure in IPv4, LAN IP addresses are not referenced on The Internet side of the router or other gateway device. Machines on a LAN operating behind a gateway appear on The Internet as the IP address of the gateway. When a machine on the LAN requests a resource on The Internet, the gateway device substitutes its web-facing IP address and a port # in the request, keeps track of which port # is associated with which LAN IP address, and delivers the response to the right machine.
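A toy version of the gateway's translation table may make the idea concrete. Real NAT lives in the router's firmware or kernel; the LAN address and port numbers here are invented for illustration:

```python
# Toy translation table: public port -> (LAN IP, LAN port).
nat_table = {}
next_public_port = 40000  # an arbitrary starting port for illustration

def outbound(lan_ip, lan_port):
    """Record an outgoing request; return the public port the gateway
    substitutes for the LAN machine's address."""
    global next_public_port
    public_port = next_public_port
    next_public_port += 1
    nat_table[public_port] = (lan_ip, lan_port)
    return public_port

def inbound(public_port):
    """Deliver a response: look up which LAN machine it belongs to."""
    return nat_table.get(public_port)

p = outbound("192.168.1.20", 51515)
print(p, inbound(p))  # 40000 ('192.168.1.20', 51515)
```

A response arriving for a public port with no table entry maps to nothing, which is why unsolicited traffic from The Internet never reaches a LAN machine.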
This leaves email and social media as the main points of vulnerability for many machines on LANs, since unsolicited inbound TCP/IP connections are blocked by this scheme.
You can see your Windows machine's LAN IP address by getting to the Windows command prompt and typing 'ipconfig /all'. On a Mac, open a terminal and type 'ifconfig'. This shows both the IP address assigned on the LAN and the 'Physical' or MAC address of the machine.
You can discover your 'public IP address' by googling on 'what is my ip'.
For more about IP Addressing, start by looking at Webopedia's page about it or find Cisco's refs. IPv6 offers several advantages regarding security and quality of services for The Internet and is worthy of more detailed study...
The legacy version of IP (Internet Protocol) is IPv4, using an address scheme laid out in the 1970s. The future is IPv6 and it has been phased in for about a decade. The address space for IPv4, based on 32-bit binary numbers, is limited to about 4 Billion addresses and is nearly used up.
IPv6 provides an astronomical increase in the number of IP addresses! A 128-bit binary number can hold 340 Undecillion values, which is 340 trillion trillion trillion. Not all the values may be used as IPv6 addresses, but the current IPv6 scheme provides thousands and thousands of addresses per person on Earth, where IPv4 only provides about one.
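The big numbers are easy to verify with Python's arbitrary-precision integers; the world-population figure below is an assumed round number:

```python
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
world_population = 8_000_000_000  # assumed round figure

print(ipv4_space)                      # 4294967296 -- about 4.3 billion
print(ipv6_space)                      # 340282366920938463463374607431768211456
print(ipv4_space // world_population)  # 0 -- under one IPv4 address per person
print(len(str(ipv6_space // world_population)))  # 29 -- a 29-digit count each
```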
Beyond the huge number of addresses, IPv6 provides features to handle the security and quality of service demanded by today's higher-speed networks. IPv4 was developed in a more trusting environment, when there were only several thousands of us on the command line of servers attached to The Internet, and security was lax or missing in many systems. IPv6 was developed following some years with millions of people on the command lines of servers, many of them highly skilled in networking. A couple decades of experience with Crackers & Spammers has influenced IPv6, and it promises to help tighten up security and track down the bad guys.
ARIN - American Registry for Internet Numbers is one of five RIRs-Regional Internet Registries regulated by IANA-Internet Assigned Numbers Authority, which dole out the IP Addresses that are used to address computers. ARIN regulates Internet addresses in the US, Canada, Caribbean, and North Atlantic. As The Internet emerged in other regions, RIRs were organized to regulate them. ISPs, universities, enterprises, and other large networks can get a whole block of addresses from an RIR, like 128.172, that provides more than 65,000 (256 × 256 = 65,536) individual IP addresses they can administer.
Large organizations, or smaller ones that operate global-scaled networks, and ISPs can apply to IANA for the AS - Autonomous System Numbers used to route traffic among the routers and servers that make up The Internet Backbone and the lower-tier providers.
The RIR assigns '/24 blocks' of 256 addresses to the ISPs, who use them internally and provide them to organizations and individuals who rent fixed IP addresses bundled with the bandwidth required to have a presence on The Internet and WWW.
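The sizes of these blocks are easy to confirm with the stdlib ipaddress module: a /16 like VCU's 128.172 holds 65,536 addresses, while a /24 holds 256 (203.0.113.0/24 is a documentation prefix used here as a stand-in):

```python
import ipaddress

slash16 = ipaddress.ip_network("128.172.0.0/16").num_addresses
slash24 = ipaddress.ip_network("203.0.113.0/24").num_addresses

print(slash16)  # 65536 -- a whole /16 block like VCU's
print(slash24)  # 256 -- one '/24 block' an RIR hands to an ISP
```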
The 'last' of the IPV4 addresses were assigned to ISPs in about 2011. IPv6 is used more and more for new internet services and traffic.
ISOC - The Internet Society is a global society that champions public policy, facilitates open standards, and organizes events for discussion of Internet-related issues. They sponsor the IETF - Internet Engineering Task Force, which is an important force for making The Internet work better. IETF members work with the other organizations that regulate The Internet to influence standards and document them.
ICANN - Internet Corporation for Assigned Names and Numbers is responsible for the registry of Internet Domain Names. The 'Top Level Domain' is the last part of a domain name, like .com, .org, .info, or .edu. Some of the TLDs are restricted: .edu is only assigned to educational institutions, and .us requires that you attest you are in the US. Most other TLDs are less restrictive and only require making a choice as to what suits yourself or your organization the best. The registration fee is paid to the domain registrar annually or for longer terms.
Domain Names within the TLDs are rented in terms of calendar years, and remain the 'property' of an organization as long as they pay the fee; afterwards the domain name becomes available for others. Some organizations feel compelled to buy the domain in all the TLDs, so along with a distinctive domain like WeBeWebbin.com, the owner is likely to get the domain in the TLDs of .info, .net, .us, .org, &c...
Internet domain names are obtained through 'domain registrars' licensed by ICANN, like godaddy.com, domaindiscover.com, or Network Solutions.
ICANN accredits a group of Domain Registrars who 'sell' or 'lease' the registered domain names. Most provide network administrators with tools for maintaining the IP address that's associated with a URL, setting up 'subdomains', setting up 'mail exchange' addresses, and making the changes needed to reflect the domain's uses. I'll do a demo with one of the web-based domain registrars in class.
The expanded system of Domain Registrars sprang up fairly recently following decades of monopoly by Internic, trading as Network Solutions. This monopoly was broken up in the late '90s, and since then lots of Domain Registrars have gone out of business trying to make money at registering domains. This caused grief for lots of network administrators unable to control their domains' IP Addresses until some benevolent, successful & surviving registrar took them over.
Take care in choosing a Domain Registrar. Most of those surviving are probably a good bet. GoDaddy.com is the best known but Network Solutions and other surviving domain registrars manage a large share of domain registrations.
Although IP addresses are numeric, Internet users rarely type in strings of numbers to get to resources on the web. We're used to our browser using a DNS - Domain Name Server to translate the domain name portion of the URL (Uniform Resource Locator) typed into a browser's Location window to the website's IP address. It all happens quite transparently for us humans, who remember the catchy domain names much more easily than quartets of numbers.
Domain name servers are provided by domain registrars and ISPs, but some network managers provide them within their LAN or MAN for their networks' users. The domain name system can usually provide the IP address for even the most obscure of domains within a fraction of a second.
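The same name-to-address translation a browser requests can be done in a couple of lines with Python's standard library. 'localhost' resolves locally, without touching the network; swap in a real domain name to exercise a full DNS lookup:

```python
import socket

def name_to_ip(domain):
    """Translate a domain name to an IPv4 address, as a browser would."""
    return socket.gethostbyname(domain)

print(name_to_ip("localhost"))  # 127.0.0.1
```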
If our Domain Name Server is 'busy' we see our browser displaying 'Opening Page' for a long time before we get the 'Requesting Data From' message. To check to see if the DNS is 'broken', you can type an IP address directly into the Address window of the browser. If that takes you directly where you want to go it's a good time to enter another DNS's address into your 'internet properties.'
SSL Certificate Authorities provide the essential service of verifying that the person or organization offering services, taking personal data like Credit Card #s over the web, is who they purport to be and that data transmission is encrypted. When we go to a site that has a valid 'SSL Certificate' issued by a trusted CA, the little lock shows up on our browser and we can feel safe that our secure socket layer has engaged, our ebusiness is secure, and it's being conducted with the legitimate owner of the website.
The most widely recognized CAs are Verisign, now Symantec, & Thawte. They get the highest prices for SSL Certificates, $399 and $149 respectively. GoDaddy and a host of other less well-known CAs provide them for cheaper. GoDaddy is on sale for $79. Others provide them, but they may not be 'well recognized' and might scare customers off who get security warnings when approaching the site. For example, GoDaddy didn't become well recognized until sometime in 2015 and somebody approaching a site secured with a GoDaddy certificate with a Droid got a security warning.
Anybody can generate their own SSL Certificate, to be used for whatever purpose by somebody who knows and trusts the issuer of the 'self-signed certificate'. SSL Certificates issued by 'trusted' certificate authorities are the norm for world wide web servers like banks or merchants use. Privately issued certificates are common in B2B exchanges by EDI, where it may be required to revoke or revise a trading relationship.
Here is a good reference about SSL and CAs.
B2B exchanges don't usually involve browsers. Computer-to-computer exchanges among merchants, banks, and credit card authorization networks use the same http and https protocols as browsers, but there is no browser involved in a server-to-server exchange. Sensitive data, like credit card numbers & health records, will not travel between servers unless a current, valid certificate from a trusted CA or trading partner is in place. SSL technology will not carry traffic if the protocols involved don't see everything perfectly aligned.
TLS-Transport Layer Security is more robust than SSL and is replacing SSL on many servers. Browsers still work the same way, showing https: and the little lock. Following some vulnerabilities in old SSL, the industry has turned to PKI - Public Key Infrastructure and TLS as security mechanisms. PKI and TLS technologies are collectively called SSL by most people today.
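In Python's standard library, for example, a client can insist on modern TLS in a few lines -- shown here without opening any connection. The defaults from create_default_context() already refuse the broken SSL protocol versions and require certificate verification:

```python
import ssl

# Client-side context: certificate checking on, old protocols refused.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # insist on modern TLS

print(context.verify_mode == ssl.CERT_REQUIRED)  # True -- peer cert must verify
print(context.check_hostname)                    # True -- and match the hostname
```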
Early in The Internet, doomsayers were saying that it was getting too big to be useful. They pointed to studies on the inefficiency of large networks, and believed The Internet might need to be carved up into smaller Internets for this or that purpose. They hadn't realized that 'search engines' were about to be deployed to 'crawl the web' and index it.
Now, we don't even have to remember the URLs to find the resources we want on The Internet. All we need to know is what we want. We 'google on' a few words and instantly get a selection of web resources that match.
There are several powerful 'Search Engines' that provide the service of indexing websites so that we can find them by typing the name of the organization we're looking for, or even words that appear on the webpage or in its meta tags. Google.com, yahoo.com, ask.com, and a host of other more specialized search engines help web users find what they need on the web.
The best strategy today for 'SEO-Search Engine Optimization' is honest 'semantic markup' with important references at the top of the html document, h1 & h2 tags, and as many 'honest' external links from authoritative web pages as can be made. Google and other search engines disregard tricks like sticking keywords behind graphics or with text the same color as the background, perhaps as a ploy to steal clicks from a competitor's website.
The search engines aren't running just to be helpful, they're part of a business model where organizations can 'bid on search terms' for placement in a list and pay advertising fees to the search company for 'hits' they generate. Also, we get targeted advertising based on the searches from several vectors including shopping, references to news, and social media.
The benefit of The Internet is that everybody is connected to everybody else quicker and cheaper than ever before. The risk of The Internet is that everybody is connected to everybody else. Managers must be proactive about securing IT assets, especially on The Internet.
Security for this environment, which started with little concern for security, is no accident and must be built into every OS & application, and these must be run by people who are vigilant about security.
Organizations are using The Internet in more and more ways. They run 'web servers' that handle http, https, smtp, pop3, imap and other 'web services' of the TCP/IP suite of protocols.
They are also using The Internet to augment and replace portions of Private WANs that they may have been operating for decades. The media of the Private WANs (dedicated, leased lines) are in most cases very secure, requiring a physical wiretap to intercept data on the networks.
Since the 70s, predating commercial use of The Internet, most 'business-to-business' contact was via EDI - Electronic Data Interchange conducted exclusively on private VANs - Value Added Networks. Companies paid fees, about $100 per month minimum, to subscribe to EDI Mailbox services so they could exchange supply-chain and other business documents with their trading partners. There was a charge of about 50 cents per document exchanged. Sterling, GE Information Services, BT Tymnet, McDonnell Douglas and a score of other VANs thrived on revenue from securely handling EDI documents.
Since the 90s, most EDI documents are now delivered pretty much for free directly among the web servers of the suppliers, customers, insurance companies, and shippers who engage in this ecommerce. Many don't involve the VANs at all. 'Web Services' allow servers to communicate securely with one another.
Today, servers 'on The Internet' are only secure if there is someone watching them, pretty much all the time, and paying attention to what's in 'the logs' and on the network. Vigilance is one of the costs of being on The Internet, but the savings, connectivity, opportunity, and profits The Internet provides more than offset the costs of being secure. Private WANs and VANs are rarely warranted these days, where virtual privacy provided by standards like https, ssl, & ssh is sufficient for most purposes.
The Internet provides none of the security inherent in the older private WANs & VANs. If a LAN is attached to The Internet the systems administrator's diligence involves application of standards for security, physically securing sensitive servers, watching the servers and routers for any sign of tampering via the LAN or The Internet, and keeping constantly aware of any 'vulnerabilities' found in the operating system and application software involved in web or network services.
If an organization doesn't make an employee, or employees, responsible for network security it should contract with one of a growing number of network security specialists to help keep their networks the safest. Since the '90s, responsibility for information security has been kicked up the organization to board and C-level executives. No longer something that can be delegated to a low-level network technician, responsibility for security clearly runs from the top down.
Caution: The instructor demonstrates network security on a nearly idle server on a quiet network where it's easy to see probes and capture packets. In contrast, the network for any going concern is very, very busy and visual inspection of the logs is not enough vigilance when there are thousands of 'hits' per second whizzing by! Real-time response to network incursions is provided by stateful firewalls and other border appliances that are configured to watch network traffic and automatically block inappropriate activity while raising the alarm for the network manager. Barracuda, PortSentry, Cisco, IBM, and others provide proven solutions for real-time network security. (If somebody's attacking the network, the manager should know about it first! NOT after customers call to complain!)
The Old Internet was rigged to operate in a 'trusting atmosphere' where a couple thousand systems administrators knew each other, attended the same conferences, and trusted one another to keep their servers secure enough. This left a fabric of 'network vulnerabilities' as more and more computers were attached to The Internet through the 90s. One-by-one, these vulnerabilities have been found, and hopefully 'patched' _before_ they have been exploited. We'll never be 100% sure that _all_ the vulnerabilities have been found, so vigilance is required.
Now, instead of a couple thousand systems administrators providing internet services for schools and laboratories, there are hundreds of thousands of them online at any moment. They're not all trustworthy. Many of them are constantly surveilling and probing The Internet for systems with known vulnerabilities and 'rooting' or otherwise exploiting vulnerable systems. These are the 'script kiddies' who can't code but can find the scripts they use to wreak havoc, and more talented bad guys who write them. Some of them are expert programmers and network specialists who are also probing the code for commonly used web services and looking for ways to exploit some error in programming that allows them to access and compromise remote computers. These are the 'Crackers' who are using Hacker skills to gain access to and otherwise exploit systems.
Although a lot of hacking and cracking is by strangers via The Internet, some of the most severe hacking and theft of data or vandalism of systems is done by employees who have access to the LANs and routers. Customer lists, internal pricelists & contracts, and other confidential data are very valuable for some competitors.
Systems' owners and administrators need to be proactive about the security on their LANs, databases, and applications. Crackers who can employ 'social engineering' techniques to get the user id and password of a privileged user on a network can masquerade as that user and perhaps setup a 'false identity' that will ensure their continued access to an organization's system.
The Internet Storm Center analyzes security logs provided by thousands of internet systems administrators and reports crackers' probes and attacks, helping to ward off the havoc wreaked by internet viruses and worms. Click on the Data link to see their current feeds about port scanning and other nefarious activity on the web.
Private WAN administrators looked at the indicator lights and reports from their networking equipment and software to see that part of a network 'is down', perhaps because of a cable cut or a telephone pole knocked over by a truck. Calls to the VAN or leased-line provider would provide estimates of when the link would be back up.
Now, The Internet operates on circuits of the 'backbone providers' and sometimes when 'the internet seems slow' a visit to one of these sites will show where the problem exists: http://www.internetpulse.net/ or http://sd.ihr.daze.net/.
So far, this discussion has been about web server security. Windows desktop users should be no less vigilant if their computer is 'on The Internet.' Recent versions of Windows provide Windows Defender, but many network managers prefer software like Norton Anti-Virus, McAfee, Kaspersky, or others. These bring profit to organizations that are constantly vigilant for viruses and worms that attack the Windows operating system's vulnerabilities and either injure or disable the computer or spy on its user.
Microsoft and Apple provide their own security suites and many consider these adequate for personal internet security.
BlackIce, ZoneAlarm, and other 'port monitors' provide Windows users with active security while their computers are online, monitoring the computer's 'internet ports' for, and reporting, probes by script kiddies and crackers who have gained access to your ISP's networks and are looking for vulnerable machines they can attack.
One form of 'attack' on a Windows PC involves an emailed virus that makes one of the 'internet ports' active when the email's recipient opens an attachment, then stands by to provide a script kiddie or cracker access to your computer. Once in your machine they might be looking thru a webcam that's attached, logging all the keystrokes typed, using it to launch 'spam', or using it in a 'distributed denial of service' (DDoS) attack.
Running anti-virus and port security software is a good idea for anybody who goes on The Internet and will greatly reduce the odds of an 'internet attack' on their computer. It's essential these days, whether the connection is 'full time' via DSL or cable or part time via dialup. Some dialup networks have become infested with script kiddies running software that tells them when a vulnerable machine dials in to the network so they can pillage it, wreaking havoc and swamping the ISP's tech support services. These episodes can affect network performance and leave users' machines vulnerable for weeks or months. The best ISPs are proactive, have NOC techs who 'tail the logs' (watch them using the Linux/Unix command 'tail') constantly, run software that reports inappropriate activity, and then take the time to track down the source of probes and attacks on their equipment and their customers.
'Firewalls' are hardware/software on a LAN's router, or server acting as a router, that helps keep the bad guys out of the computers on the networks they protect. Cisco, Netgear, and other manufacturers of internetworking equipment provide 'hardware firewalls' as separate units or components of routers.
A Linux server can serve as a firewall by running 'iptables' (very quick since it's implemented at the kernel level) and other software to limit access to the 'internet ports' used on the server and LAN and to log probes and attempts to breach the firewall. Some software even 'black holes' the IP addresses of those who probe the machine, so that the cracker or script kiddie sees no results at all from the 'port scans' used to seek vulnerable machines on networks where they can gain access.
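The 'black hole' idea can be sketched in a few lines: scan log lines for refused connections and collect the repeat offenders for the firewall to drop. The log format below is invented for illustration; real tools parse iptables or sshd logs:

```python
from collections import Counter

def find_probers(log_lines, threshold=3):
    """IPs appearing in `threshold` or more refused-connection lines."""
    hits = Counter(
        line.split()[-1]  # source IP is the last field (assumed format)
        for line in log_lines
        if "REFUSED" in line
    )
    return {ip for ip, count in hits.items() if count >= threshold}

log = [
    "May 01 REFUSED port 23 from 203.0.113.9",
    "May 01 REFUSED port 22 from 203.0.113.9",
    "May 01 ACCEPT  port 80 from 198.51.100.4",
    "May 01 REFUSED port 3389 from 203.0.113.9",
]
print(find_probers(log))  # {'203.0.113.9'}
```

A real deployment would feed each offender to an iptables DROP rule so the scanner's probes simply vanish.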