Tuesday, November 14, 2006

network and system administration

5 articles related to network and system administration

1.) Network Management and MPLS
• By Stephen Morris.
• Article is provided courtesy of Prentice Hall PTR.
• Date: Nov 13, 2003.
Article Description
Stephen Morris shows you some basics of MPLS network management, including the major functional areas of FCAPS: fault, configuration, accounting, performance, and security.
Related Book

Network Management, MIBs and MPLS: Principles, Design and Implementation
This article is adapted from Stephen B. Morris' book Network Management, MIBs and MPLS: Principles, Design and Implementation (Prentice Hall PTR, 2003, ISBN 0131011138).
Introduction
Multiprotocol label switching (MPLS) continues to grow in popularity with global service providers (SPs), particularly in Europe [1]. At the same time, the deployment of MPLS-based services—for example, RFC 2547 Internet Protocol (IP) VPN—is proving to be something of a challenge [2] to those SPs for the following reasons:
• The cost and difficulty of deploying and operating MPLS network management
• Competition between MPLS and the existing technologies it may eventually replace
• The deployment of MPLS technology itself
• Continued support for legacy technologies
This article focuses on the first of these problems, that of managing MPLS technologies.
In many ways, MPLS provides the best of both IP and asynchronous transfer mode (ATM): from the ATM world it takes traffic engineering, subnetwork connections, and different quality of service (QoS) models, while IP on its own provides just a best-effort datagram service. Bringing together these two domains (IP and the ATM-connection-oriented telecoms world) requires an integrated approach to network management. In this article, we'll look at some of the reasons why this process may be more difficult than expected. The benefits of effective MPLS network management can be realized if the technology is integrated into the SP workflows and business processes.
MPLS provides the possibility of a unified core network for both SPs and enterprises. In this scheme, legacy technologies such as ATM, frame relay, and Ethernet can be pushed out of the core network to the edges. The resulting core network is then packet-based using MPLS and some specified QoS mechanism such as DiffServ, IntServ, and so on. Having a single connection-oriented, QoS-based core technology provides a foundation for standard signaling protocols such as Resource Reservation Protocol with traffic engineering extensions (RSVP-TE) and Label Distribution Protocol (LDP). This can then facilitate rapid service deployment, improving the SP's return on investment (ROI). The deployed network management system (NMS) is a critical element in realizing ROI and is used to support the five major network management functional areas. In many cases, more than one system is needed to realize the overall NMS capability.
The five functional areas of network management for MPLS are known by the acronym FCAPS:
• Fault. Network devices generate data indicating problems or matters of interest to a network manager.
• Configuration. Modifies the network in some fashion, such as creating a label-switched path (LSP). Often called provisioning in the telecoms world.
• Accounting (or billing). Enables an operator to determine usage of network resources. End users may be billed or the data may be used for accounting analysis, such as ROI calculation.
• Performance. Determines whether the network is operating within required limits. This factor is increasingly critical as service-level agreements (SLAs) are used by SPs to differentiate their services. SLAs are being used within enterprise networks in the form of contracts between IT and the various departments. Performance analysis may also be used by network planners to decide whether infrastructure upgrades are required.
• Security. This area is increasingly critical with the growing number and level of sophistication of network attacks. The focus here is ensuring that network resources are protected from unauthorized access.
FCAPS can be seen as baseline capability, and deployed NMS products may well exceed this level. Many NMS offerings don't offer configuration capabilities, however; the network operator must use the individual device's element management system (EMS)—often a Telnet-based menu program that runs on devices such as routers, switches, hubs, etc. Where NMS products offer less than the full FCAPS, the end user may need to provide proprietary software to fill out the NMS capability. Many SPs employ large teams of technicians to carry out base-level device configuration; these tasks may arguably be better handled by software in the NMS/OSS layer.
NMS products may provide application programming interfaces (APIs)—often based on CORBA—for use by OSS components. OSS applications can then call into the NMS via the API to handle situations such as these (a minimal interface sketch follows the list):
• Retrieving all alarms on a given device
• Creating a virtual circuit (for example, an LSP) between two nodes
• Modifying the reserved bandwidth on a selected LSP because the associated enterprise customer has increased its subscription
• Increasing the bandwidth allocated to an LSP
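To make the shape of such a northbound API concrete, here is a minimal sketch in Python. The interface name, method signatures, and data types are illustrative inventions rather than any vendor's actual CORBA IDL; the point is only that an OSS workflow component programs against a contract like this instead of against individual devices.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Alarm:
    device: str
    severity: str      # e.g., "critical", "major", "minor"
    description: str


class NmsApi(Protocol):
    """Hypothetical NMS northbound API; names are illustrative only."""

    def get_alarms(self, device: str) -> list[Alarm]:
        """Retrieve all alarms currently raised on a given device."""
        ...

    def create_lsp(self, ingress: str, egress: str,
                   bandwidth_mbps: int) -> str:
        """Create a virtual circuit (e.g., an LSP) between two nodes;
        returns an identifier for the new circuit."""
        ...

    def set_lsp_bandwidth(self, lsp_id: str, bandwidth_mbps: int) -> None:
        """Modify the reserved bandwidth on a selected LSP, e.g., when the
        associated customer increases its subscription."""
        ...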
The OSS exists to support the SP workflows and business processes. The required OSS capabilities may be fulfilled by the NMS, or the OSS can even directly use the underlying devices via SNMP, XML mechanisms, and so forth. The OSS has a higher-level view than the NMS of the managed objects deployed in the network.


2.) Managing Large Networks: Problems and Solutions
• By Stephen Morris.
• Sample Chapter is provided courtesy of Prentice Hall PTR.
• Date: Oct 17, 2003.
Article Description
This chapter focuses on major issues of managing large networks, including bringing the managed data to the code, scalability, the shortage of development skills for creating management systems, and the shortage of operational skills for running networks.
From the Book

Network Management, MIBs and MPLS: Principles, Design and Implementation
Having looked at some of the nuts and bolts of network management technology, we now consider some of the problems of managing large networks. In many respects the large enterprise networks of today are reminiscent of the islands of automation that were common in manufacturing during the 1980s and 1990s. The challenge facing manufacturers was in linking together the islands of microprocessor-based controllers, PCs, minicomputers, and other components to allow end-to-end actions such as aggregated order entries leading to automated production runs. The hope was that the islands of automation could be joined so that the previously isolated intelligence could be leveraged to manufacture better products. Similar problems beset network operators at the beginning of the 21st century as traffic types and volumes continue to grow. In parallel with this, the range of deployed NMS is also growing, and deploying multiple NMS adds to operational expense.
There is a strong need to reduce the cost of ownership and improve the return on investment (ROI) for network equipment. This is true not just during periods of economic downturn, but has become the norm as SLAs are applied to both enterprise and SP networks. NMS technology provides the network operator with some increasingly useful capabilities. One of these is a move away from tedious, error-prone, manually intensive operations to software-assisted, automated end-to-end operations.
Network operators must be able to execute automated end-to-end management operations on their networks [Telcordia]. An example of this is VLAN management in which an NMS GUI provides a visual picture—such as a cloud—of VLAN members (ports, MAC addresses, VLAN IDs). The NMS can also provide the ability to easily add, delete, and modify VLAN members as well as indicate any faults (e.g., link failures, warm starts) as and when they occur. Another example is enterprise WAN management in which ATM or FR virtual circuits are used to carry the traffic from branch offices into central sites. In this case, the enterprise network manager wants to be able to easily create, delete, modify, and view any faults on the virtual circuits (and the underlying nodes, links, and interfaces) to the remote sites. Other examples include storage (including SANs) management and video/audio conferencing equipment management. As we saw in Chapter 1, “Large Enterprise Networks,” the range of enterprise network services is growing all the time and so also is the associated management overhead.
The benefit of this type of end-to-end capability is a large reduction in the cost of managing enterprise networks: SLAs are fulfilled, less arcane NE know-how is needed, enterprise business processes run smoothly, and end users are happy. Open, vendor-independent NMS are needed for this, and later we look at ways in which software layering helps in designing and building such systems. Simple ideas such as always using default MIB values (seen in Chapter 1), pragmatic database design (matching default database and MIB values), and technology-sensitive menus also play an important part in providing NMS vendor-independence. The issue of presenting menu options appropriate to a given selected NE provides abstraction; for example, if the user wants to add a given NE interface to an IEEE 802.1Q VLAN, then (in order for the operation to be meaningful) that device must support this frame-tagging technology. The NMS should be able to figure this out and present the option only if the underlying hardware supports it. By presenting only appropriate options (rather than all possible options), the NMS reduces the amount of data the user must sift through to actually execute network management actions.
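The 802.1Q example suggests a simple implementation pattern: filter the menu by the selected NE's advertised capabilities. A minimal sketch in Python follows; the capability names and menu labels are invented for illustration.

# Present only the menu options a selected NE can actually support.
# Capability names ("dot1q", "mpls", ...) are invented for illustration.
MENU_OPTIONS = {
    "Add interface to 802.1Q VLAN": "dot1q",
    "Create MPLS tunnel": "mpls",
    "Configure frame relay PVC": "frame_relay",
}

def menu_for(ne_capabilities: set[str]) -> list[str]:
    """Return only those options the device's hardware supports."""
    return [label for label, cap in MENU_OPTIONS.items()
            if cap in ne_capabilities]

print(menu_for({"dot1q", "mpls"}))
# ['Add interface to 802.1Q VLAN', 'Create MPLS tunnel']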
Automated, flow-through actions are required for as many network management operations as possible, including the following FCAPS areas:
• Provisioning
• Detecting faults
• Checking (and verifying) performance
• Billing/accounting
• Initiating repairs or network upgrades
• Maintaining the network inventory
Provisioning is a general term that relates to configuring network-resident objects, such as VLANs, VPNs, and virtual connections. It resolves down to the act of modifying agent MIB object instances, that is, SNMP setRequests. Provisioning usually involves both sets and gets. Later in this chapter we see this when we want to add a new entry to the MPLS tunnel table. We must read the instance value of the object mplsTunnelIndexNext before sending a setRequest to actually create the tunnel (a code sketch of this sequence appears after the list below). Many NMS do not permit provisioning for a variety of reasons:
• Provisioning code is hard to implement because of the issue of timeouts (i.e., when many set messages are sent, one or more may time out).
• NE security settings are required to prevent unauthorized actions.
• There is a lack of support for transactions that span multiple SNMP sets (i.e., SNMP does not provide rollback, a mechanism for use when failure occurs in one of a related sequence of SNMP sets. The burden of providing lengthy transactions and/or rollback is on the NMS).
• Provisioning actions can alter network dynamics (i.e., pushing a lot of sets into the network adds traffic and may also affect the performance of the local agents).
If the NMS does not allow provisioning, then some other means must be found; usually, this is the EMS/CLI. SNMPv3 provides adequate security for NMS provisioning operations.
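As a concrete illustration of the read-then-set sequence mentioned above, here is a minimal sketch using pysnmp's classic synchronous high-level API. It assumes an agent that implements MPLS-TE-STD-MIB (RFC 3812) and that the MIB module is available to the manager; the agent address, community string, and zeroed index components are placeholders, and a production NMS would use SNMPv3 and handle timeouts and rollback itself.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity,
                          Integer32, getCmd, setCmd)

TARGET = UdpTransportTarget(('192.0.2.1', 161))   # example agent address
AUTH = CommunityData('private')                   # illustrative community string

def run(cmd_iterator):
    """Execute one SNMP exchange; raise if the agent reports an error."""
    err_ind, err_stat, _, var_binds = next(cmd_iterator)
    if err_ind or err_stat:
        raise RuntimeError(str(err_ind or err_stat.prettyPrint()))
    return var_binds

# Step 1: read the next free tunnel index before trying to create a row.
var_binds = run(getCmd(
    SnmpEngine(), AUTH, TARGET, ContextData(),
    ObjectType(ObjectIdentity('MPLS-TE-STD-MIB', 'mplsTunnelIndexNext', 0))))
index = int(var_binds[0][1])

# Step 2: create the conceptual row via RowStatus createAndGo (value 4,
# RFC 2579). The tunnel table index has four components (tunnel index,
# instance, ingress LSR id, egress LSR id); the zeros are placeholders.
run(setCmd(
    SnmpEngine(), AUTH, TARGET, ContextData(),
    ObjectType(ObjectIdentity('MPLS-TE-STD-MIB', 'mplsTunnelRowStatus',
                              index, 0, 0, 0),
               Integer32(4))))  # 4 = createAndGo
print(f"created tunnel row with index {index}")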
Fault detection is a crucial element of network management. NMS fault detection is most effective when it provides an end-to-end view; for example, if a VLAN link to the backbone network is broken (as in VLAN 2 in Chapter 1, Figure 1-4), then that VLAN GUI element (e.g., a network cloud) should change color instantly. The NMS user should then be able to drill down via the GUI to determine the exact nature of the problem. The NMS should give an indication of the problem as well as a possible resolution (as we've seen, this is often called root-cause analysis). The NMS should also cater to the case where the user is not looking at the NMS topology and should provide some other means of announcing the problem, for instance, by email, mobile phone short text message, or pager.
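A small sketch of the "announce the problem elsewhere" idea: route alarms to out-of-band channels when no operator is watching the topology view. The sender functions here are stubs that a real system would replace with calls to an SMTP server or SMS gateway.

# Dispatch a fault to out-of-band channels; the senders are stubs.
def send_email(msg: str) -> None:
    print(f"[email] {msg}")        # stub: real code would use SMTP

def send_sms(msg: str) -> None:
    print(f"[sms] {msg}")          # stub: real code would hit an SMS gateway

CHANNELS = {"critical": [send_email, send_sms], "minor": [send_email]}

def announce(severity: str, msg: str, operator_watching: bool) -> None:
    """Fall back to out-of-band channels when no one is at the console."""
    if operator_watching:
        return                     # the GUI cloud has already changed color
    for send in CHANNELS.get(severity, []):
        send(f"{severity.upper()}: {msg}")

announce("critical", "VLAN 2 backbone link down", operator_watching=False)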
Performance management is increasingly important to enterprises that use service level agreements (SLAs). These are contractual specifications between IT and the enterprise users for service uptime, downtime, bandwidth/system/network availability, and so on.
Billing is important for those services that directly cost the enterprise money, such as the PSTN. It is important for appropriate billing to be generated for such services. Billing may even be applied to incoming calls because they consume enterprise network resources. Other elements of billing include departmental charges for remote logins to the network (external SP connections may be needed, for example, for remote-access VPN service) and other uses of the network, such as conference bridges. An important element of billing is verifying that network resources, such as congested PSTN/WAN trunks, are dimensioned correctly. In Chapter 1, we mentioned that branch offices are sometimes charged a flat rate for centralized corporate services (e.g., voice, LAN/WAN support). This is accounting rather than billing. In billing, money tends to be paid to some external organization, whereas in accounting, money may be merely transferred from one part of an organization to another. Many service providers offer services that are billed using a flat-rate model—for example, x dollars per month for an ATM link with bandwidth of y Mbps. Usage-based billing is increasingly attractive to customers because it allows for a pay-for-use or pay-as-you-grow model. It is likely that usage-based billing/accounting will increasingly be needed in enterprise NMS applications. This is particularly true as SLAs are adopted in enterprises.
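The flat-rate versus usage-based distinction reduces to simple arithmetic, as the following sketch shows; the rates are invented for illustration.

# Compare a flat-rate charge with a usage-based charge for one month.
# All rates are invented for illustration.
FLAT_RATE_PER_MONTH = 800.0        # e.g., x dollars/month for an ATM link
RATE_PER_GB = 0.25                 # usage-based: dollars per gigabyte carried

def monthly_charge(gb_carried: float, flat: bool) -> float:
    return FLAT_RATE_PER_MONTH if flat else gb_carried * RATE_PER_GB

# A lightly used branch office pays far less under usage-based billing:
print(monthly_charge(500.0, flat=True))    # 800.0
print(monthly_charge(500.0, flat=False))   # 125.0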
Networks are dynamic entities, and repairs and upgrades are a constant concern for most enterprises. Any NE can become faulty, and switch/router interfaces can become congested. Repairs and upgrades need to be carried out and recorded, and the NMS is an effective means of achieving this.
All of the FCAPS applications combine to preserve and maintain the network inventory. An important aspect of any NMS is that the FCAPS applications are often inextricably interwoven; for example, a fault may be due to a specific link becoming congested, and this in turn may affect the performance of part of the network. We look at the important area of mediation in Chapter 6, “Network Management Software Components.”
It is usually difficult to efficiently create NMS FCAPS applications without a base of high-quality EMS facilities. This base takes the form of a well-implemented SNMP agent software with the standard MIB and (if necessary) well-designed private MIB extensions. Private MIB extensions are needed for cases where vendors have added additional features that differentiate their NEs from the competition.
All these sophisticated NMS features come at a price: NMS software is expensive and is often priced on a per-node basis, increasing the network cost base. Clearly, the bigger the network, the bigger the NMS price tag (however, the ratio of cost/bit may go down).
This chapter focuses on the following major issues and their proposed solutions:
• Bringing the managed data to the code
• Scalability
• The shortage of development skills for creating management systems
• The shortage of operational skills for running networks
Bringing the Managed Data to the Code
Bringing data and code together is a fundamental computing concept. It is central to the area of network management, and current trends in NE development bring it to center stage. Loading a locally hosted text file into an editor like Microsoft Notepad is a simple example: The editor is the code and the text file is the data. In this case, the code and data reside on the same machine, and bringing them together is a trivial task. Getting SNMP agent data to the manager code is not a trivial task in the distributed data model of network management because:
• Managed objects reside on many SNMP agent hosts.
• Copies of managed objects reside on SNMP management systems.
• Changes in agent data may have to be regularly reconciled with the management system copy.
Agent-hosted managed objects change in tandem with the dynamics of the host machine and the underlying network—for example, the ipInReceives object from Chapter 1, which changes value every time an IP packet is received. This and many other managed objects change value constantly, providing a means for modeling the underlying system and its place in the network. The same is true of all managed NEs. MIBs provide a foundation for the management data model. The management system must keep track of relevant object value changes and apply new changes as and when they are required. As mentioned in Chapter 1, the management system keeps track of the NEs by a combination of polling, issuing set messages, and listening for notifications. This is a classic problem of storing the same data in two different places and is illustrated in Figure 3-1, where a management system tracks the objects in a managed network using the SNMP messages we saw in Chapter 2, “SNMPv3 and Network Management.”
Figure 3-1. Components of an NMS.
Figure 3-1 illustrates a managed network, a central NMS server, a relational database, and several client users. The clients access the FCAPS services exported by the NMS, for example, viewing faults, provisioning, and security configuration. The NMS strives to keep up with changes in the NEs and to reflect these in the clients.
Even though SNMP agents form a major part of the management system infrastructure, they are physically remote from the management system. Agent data is created and maintained in a computational execution space removed from that of the management system. For example, the ipInReceives object is mapped into the tables maintained by the host TCP/IP protocol suite, and from there it gets its value. Therefore, get or set messages sent from a manager to an agent result in computation on the agent host. The manager merely collects the results of the agent response. The manager-agent interaction can be seen as a loose type of message-based remote procedure call (RPC). The merit of not using a true RPC mechanism is the lack of associated overhead.
This is at once the strength and the weakness of SNMP. The important point is that the problem of getting the agent data to the manager is always present, particularly as networks grow in size and complexity. (This problem is not restricted to SNMP. Web site authors have a similar problem when they want to embed Java or JavaScript in their pages. The Java code must be downloaded along with the HTML in an effort to marry the browser with the Web site code and data. Interestingly, in network management the process is reversed: The data is brought to the code.) So, should the management system simply request all of the agent data? This is possibly acceptable on small networks but not on heavily loaded, mission-critical enterprise and SP networks. For this reason, the management system struggles to maintain an accurate picture of the ever-changing network. This is a key network management concept.
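A minimal sketch of this "two copies of the same data" problem: the NMS keeps a cache and periodically reconciles it against what the agents report. The poll_agent stub stands in for a real SNMP get of a counter such as ipInReceives; it simply fabricates a growing value for illustration.

import random
import time

# NMS-side cache of managed-object values; keys are (agent, object) pairs.
cache: dict[tuple[str, str], int] = {}

def poll_agent(agent: str, obj: str) -> int:
    """Stub standing in for an SNMP get of a counter such as ipInReceives.0."""
    return cache.get((agent, obj), 0) + random.randint(0, 1000)

def reconcile(agents: list[str], obj: str) -> None:
    """One polling pass: refresh the NMS copy and note what changed.
    The gap between passes is exactly the window in which the NMS
    picture can drift from the real network."""
    for agent in agents:
        fresh = poll_agent(agent, obj)
        stale = cache.get((agent, obj))
        if fresh != stale:
            cache[(agent, obj)] = fresh
            print(f"{agent}/{obj}: {stale} -> {fresh}")

for _ in range(3):                       # three polling passes
    reconcile(["router-a", "router-b"], "ipInReceives.0")
    time.sleep(1)                        # polling interval: the staleness window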
If an ATM network operator prefers not to use signaled virtual circuits, then an extra monitoring burden is placed on the NMS. This is so because unsignaled connections do not recover from intermediate link or node failures. Such failures give rise to a race between the operator fixing the problem and the user noticing a service loss. These considerations lead us to an important principle concerning NMS technology: The quality of an NMS is inversely proportional to the gap between its picture of the network and the actual state of the underlying network—the smaller the gap, the better the NMS. An ideal NMS responds to network changes instantaneously. Real systems will always experience delays in updating themselves, and it is the goal of the designers and developers to minimize them.
As managed NEs become more complex, an extra burden is placed on the management system. The scale of this burden is explored in the next section.


3.) Relocation Challenges of the IT Department, Part 7: Installing an Audiovisual System
• By Greg Kirkland.
• Date: Oct 31, 2003.
Article Description
Presentations for clients and staff should have a professional look and sound. In Part 7 of his series on IT relocation strategies, Greg Kirkland describes his company's experience when selecting and installing a new audiovisual system to gain those professional results.
Related Book

Network Management: Concepts and Practice, A Hands-On Approach
Setting Up for Professional Presentations
In the accounting business, we give a lot of presentations. Whether it's continuing professional education for staff or one-on-one consultation with clients, having and knowing how to use good audiovisual (AV) equipment is the key to presentations that look and sound professional.
As you know if you've been reading along in this series, my company, a large accounting firm, recently moved our Indianapolis headquarters to a new office building. Before our move, putting together an AV presentation involved multiple steps: find a screen, find a projector, check out a laptop, grab a power strip and extension cord, and get it all hooked up correctly in the right room before the meeting began. It was a logistical challenge, to say the least. Often we looked silly, setting up equipment in front of the client or vendor at the last minute. The most frustrating part was fumbling over the tripod screen—you know, the kind that grandpa used to use with his old slide projector? Fortunately, we no longer need that old screen.
Moving to a new building gave us the opportunity to upgrade many aspects of our accounting practice. One of the most dramatic improvements we made was to purchase quality AV equipment and install it in the rooms where we give most of our presentations. We elected to install built-in screens in the training room and one of the larger conference rooms. In the training room, we added an AV desk to control the switching of the gear. For the conference room (and other rooms), we bought and customized an AV cart with all of the gear on board. On top of the cart are the projector and speakers. Inside the cart is the switching device to control the built-in PC, guest PC, VCR, DVD, and auxiliary jack. For the built-in PC, we use our wireless network (see Part 4 of this series for more information about the network). Now we simply roll the cart to the room where we're planning a presentation, and plug it in.
NOTE
The cart has a built-in power strip, so just one power cord has to be plugged in. Even cooler, the power cord is on a retractable reel.
TIP
In case of fire, accident, theft, etc., it's important to make sure that your new audiovisual equipment is fully covered by your company's insurance policy. Some insurers offer special policy riders at special (possibly high) rates for very expensive or custom equipment.



4.) Is it Geek City Yet? Philadelphia, City-Wide Wi-Fi, and the Digital Inclusion Project
• By Sheryl P. Simons.
• Date: Jul 8, 2005.
Article Description
When "no child left behind" becomes "no household without Internet access," how will cities be affected? Who foots the bill? And how does the mechanism of government keep from hopeless entanglement with the objectives of the telecom industry? Sheryl Simons presents a fascinating tale of one big city's quest for the future: Wireless Philadelphia.
Related Book

Inescapable Data: Harnessing the Power of Convergence
With great fanfare on August 25, 2004, Philadelphia's Mayor John Street announced that the city would make wireless access available throughout the city's entire 135 square miles. Summer 2006 was the projected completion of the network that would eventually create 6,000 new jobs. Charged with this daunting task was Dianah Neff, newly appointed information technology czar, who, along with Temple, Drexel, and LaSalle Universities, devised the initial blueprint for "Wireless Philadelphia."
Although this auspicious launch stirred the imaginations of many, inevitably the march toward completion slowed; by April 2005, headlines updating its progress appeared well inside the Philadelphia Inquirer, rather than on the front page. What happened next is a cautionary tale of big-city politics, statewide maneuverings, telecom industry reaction, and the emergence of an unintended diamond in the rough—The Digital Inclusion Project.
The Goal Line Is in the Air
Among its other attributes, Philadelphia is noted for sports, history, food, music, medicine, education, and tourism—but not cutting-edge technology. In fact, the last great claim to technology fame was the invention of the UNIVAC, a room-sized computer created by engineers at the University of Pennsylvania, and UNIVAC celebrated its 50th anniversary milestone several years ago.
Yet technological prowess is a must, not only to attract corporations and jobs, but to compete effectively in a global economy. The Mayor knows this. Governor Edward Rendell—former mayor of Philadelphia—knows this. And companies such as Comcast and Verizon, major players in the communications industry, know this. Selling connectivity through broadband or DSL services is a major revenue stream for such companies. So when the Mayor stated his goal of low-cost citywide government-sponsored access with numerous, strategically placed free "hotspots," those in the connectivity business reacted badly—very badly.
According to the Philadelphia Weekly, 60% of Philadelphia's population is currently without Internet service, [1] placing the city 33rd on the list of wired (or wireless) cities. [2] On the list of innovative economies, it ranks 18th. [3] The cost of commercial services such as Comcast, which range upward to $55 per month, makes access unaffordable for most. A survey by the Pew Internet & American Life Project revealed that 75% of families with incomes of $50,000 or above contracted for home-based broadband access. [4] But Wireless Philadelphia, with proposed antennas on nearby street and traffic light poles, would charge $16–$22 per month, with a sliding scale for low-income families.


5.) The Scope of Network Distributed Computing
• By Max K. Goff.
• Sample Chapter is provided courtesy of Prentice Hall PTR.
• Date: May 28, 2004.
Article Description
The scope of Network Distributed Computing (NDC) is quite impressive. This chapter presents an overview of some of the many relevant areas of NDC research and development today. If you're looking for a solid overview of all things NDC — from the Semantic Web, P2P, and Pervasive Computing to Distributed Databases, Filesystems, Media, and Storage — you've come to the right place.
From the Book

Network Distributed Computing: Fitscapes and Fallacies
Since December 1969, when the ARPANET project created the first modern packet-switched network—the genesis of today's Internet—the challenge and promise of NDC has resulted in an explosion of investment, research, and software development. Ensuing efforts encompass nearly all aspects of computer science today.
The scope of NDC is quite impressive. No other single aspect of computer science research and development quite compares with the myriad problem spaces enjoined when computers communicate, swap data, and share processing responsibilities. This chapter presents an overview of some of the many relevant areas of NDC research and development today.
Each of these areas is a moving target, in that while progress is being made in each area and rapid improvement may sometimes be achieved, a complete examination or solution in any of these areas is not likely in the short term. In fact, to the extent that each represents a community of autonomous agents, general fitscape attributes apply. Other categories of NDC will emerge over time, as new technologies converge and evolve and as innovative technology adoption patterns continue to manifest themselves in consumer-driven economic fitscapes worldwide.
The categories here, which I call fitscapes, reflect many of the topical areas of the IEEE Computer Society's Distributed Systems Online journal, which itself is a constantly changing resource that tracks the branching processes so evident in the exploration of NDC today.[1] The categories are derived as well from other fitscapes: ongoing activities of the W3C, for example, and traditional areas of development that may be closely related but are nevertheless subtly different (for example, grid computing versus massively parallel computing).
Figure 3.1. NDC R&D fitscapes
The purpose of this map is not to present a canonical listing of relations and dependencies; indeed, only the most obvious ones are noted here. Every aspect of NDC is directly related in some way to almost every other aspect of NDC, thus making the map moot as an exercise, so don't bother trying to follow the relationships or memorize the dependencies. The point of the map is to illustrate the level of complexity inherent in our efforts to simply articulate the relationships among the areas of research in NDC, never mind the more profound complexities inherent within each.
Imagine the complexities you encounter in keeping track of all the influences and discoveries if, for example, your role is one of conceptualizing distributed agents. How can your work, dependent on NDC-related work in—at least—middleware, security, distributed databases, and possibly operating systems, proceed concurrently with work in those areas? How can you, with many other potential advances also dependent on you, confidently progress? Clearly, not everything can proceed in lock-step. By the same token, it may not be obvious, nor is it reasonable, to stage an ordered process whereby advances in subdisciplines of NDC research can proceed. A complex fitscape governs each subdiscipline from which a broader, much more complex, fitscape of overall NDC R&D emerges.
It may be reasonable, however, to estimate possible dependencies that one category of exploration might have on another over time. Work in some areas will certainly mature more quickly than others, driven by levels of investment, which in turn are driven by technology adoption patterns, and governed by the complexity of the computer sciences issues that must be solved.
The chart in Figure 3.2 offers conjecture with respect to a maturity order of the 24 categories, evolving so as to provide basic solutions upon which NDC developers can build. The y axis, degree of decoupling, captures component decomposition as well as the movement of intelligence closer to the "edge" of networks; the impact of Moore's law over time, upon which various categories of NDC development will build, is also implied here and will accommodate an even greater decoupling.
Figure 3.2. Evolution of NDC over time (pro forma)
Given an accelerating rate of innovation in the major technology trends cited earlier (some of which are themselves fitscapes of NDC), it is a given that any estimates of future developments, relationships, or dependencies among these should be viewed as speculative. The odds are perhaps not as high as those against walking into the Atlantis Hotel and Casino in Reno, Nevada, placing a $3 bet in the MegaBucks machine, pulling the handle, and winning the jackpot in one try. But futures in NDC are nevertheless speculative.
I will say more about the Atlantis Casino later in the context of real-world implementation, which, as you may know, is not always by design. Nor is implementation research, strictly speaking, by design; implementations are less erudite and much dirtier than speculation, research, or theory would have us believe. But for now, an overview of many of the current NDC R&D fitscapes is in order.
Ubiquitous Computing
In a well-connected network, we can begin with any node and theoretically find our way to each of the others by following links along the way. Begin with an end in mind, and you'll find lofty ideals touted by an inspired fitscape. The teleological vector of technology,[2] at least that which is heir to Turing's mind child, is ubiquitous computing. Pervasive computing would make information available everywhere; ubiquitous computing would require information everywhere. There is a subtle but certain difference, one that will provide NDC challenges for years beyond the near-future, pervasive-computing world that we might soon imagine.
Buildings need to be smart, down to the rivet. Electrical systems need to be smart, down to the light bulb. Monetary systems need to be smart, down to the penny. And all those systems and more need to be connected and available down to the network if the ultimate in ephemeralization is ever to be approached. Are continuing productivity increases necessary? Goff's axiom may apply here as well. What do we call economies that do not grow? The essence of economic growth is increasing productivity. Unless we are prepared both to forgo economic organizational assumptions altogether and begin anew, as it were, with other approaches (which may be even more painful to consider than economic stagnation) and to decline given current assumptions, we cannot turn away from the path of ephemeralization. There is no other direction, therefore, than eagerly toward ubiquitous computing.
Many authors do not distinguish between "pervasive" and "ubiquitous" when it comes to computing visions; even Mark Weiser used the terms synonymously. But I think it's important to be cognizant of the differences and argue that we will enjoy the fruits of one even as we continue to pursue the other. Indeed, we are beginning to see early signs of pervasive computing today. Any city in which I can easily find an "information field," in which dynamic network connections can be enjoined via a mobile computing device, is one in which pervasive computing potential has emerged. Arguably, any place where an I-mode phone can function is a place of pervasive computing. But until all possible computing applications are explored and every niche for network intelligence fully exploited, ubiquitous computing will remain the unseen terminal of a teleological vector.
Once computers disappear and dynamic, ad hoc ensembles of software swarm about like beneficent organisms to serve our every whim, utilizing resources with previously unimaginable efficiencies, a miraculous invisible network may then emerge which is as unfathomable to our early 21st-century minds as a wireless Internet-connected device would have been to a pre-Copernican vision. The Network Age is the age of magic. NDC developers, by virtue of the myriad fitscapes in which we all play, are the magicians of this new age. Ubiquitous computing is our shared Nirvana—whether we realize it or not.
