Federal Communications Commission
Office of Plans and Policy
1919 M Street NW
Washington, DC 20554
OPP Working Paper Series No. 29

Digital Tornado:
The Internet and Telecommunications Policy

March 1997
Note: The graphics associated with this document are not included in this text version. An electronic copy
of this document that includes all of the associated graphics is available via the Internet at:
http://www.fcc.gov/Bureaus/OPP/working_papers/oppwp29.pdf
Kevin Werbach*
*The analysis and conclusions of this Working Paper are those of the author, and do not necessarily represent the
views of other Commission staff, individual FCC Commissioners, or the Commission.
The FCC Office of Plans and Policy's Working Paper Series presents staff analysis and
research at various stages of completion. These papers are intended to stimulate discussion and critical comment
within the FCC, as well as outside the agency, on issues in telecommunications policy. Titles
may include preliminary work and progress reports, as well as completed research. The analyses
and conclusions in the Working Paper Series are those of the authors and do not necessarily
reflect the view of other members of the Office of Plans and Policy, other Commission Staff, or
the Commission itself. Given the preliminary character of some titles, it is advisable to check
with authors before quoting or referencing these working papers in other publications.
This document is available on the FCC's World Wide Web site.
Copies may also be purchased from International Transcription Services, Inc., 1919 M Street,
NW, Room 246, Washington, DC 20554, (202) 857-3800. Copies are also available from the
National Technical Information Service, 5285 Port Royal Road, Springfield, VA 22161, (703)
487-4650.
Digital Tornado: The Internet and
Telecommunications Policy
Kevin Werbach*
Counsel for New Technology Policy
Office of Plans and Policy
Federal Communications Commission
Washington, DC 20554
March 1997
OPP Working Paper No. 29
* Many people at the FCC provided advice, comments, and other assistance in the development of
this working paper. In particular, I would like to thank Robert Pepper, Elliot Maxwell, Greg
Rosston, Richard Metzger, David Sieradzki, and Karen Rose for reviewing earlier drafts, and
Chairman Reed Hundt for his leadership on Internet issues. The analysis and conclusions of this
paper do not necessarily represent the view of other FCC staff or the Commission.

CONTENTS
Executive Summary
     A. Background
     B. Summary of Contents
     C. The Government Role
I. Introduction -- The Endless Spiral of Connectivity
     A. How the Internet is Unique
     B. The Feedback Loop
     C. Threats to the Continued Spiral
     D. How Government Should Act
II. What is the Internet?
     A. General Description
     B. An Extremely Brief History of the Net
     C. How the Internet Works
          1. Basic Characteristics
          2. Addressing
          3. Services Provided Over the Internet
          4. Governance and Management
     D. Development of the Internet Market
          1. The Internet Today
          2. Internet Trends
III. Category Difficulties
     A. FCC Authority Generally
     B. Telephony
          1. Legal Framework
               a. Carrier Obligations
               b. Basic vs. Enhanced Services
          2. Implications
               a. Section 251 Interconnection Obligations
               b. Section 254 Universal Service Obligations
               c. Internet Telephony
     C. Broadcasting and Cable
     D. Relationship to Content
     E. Administrative Issues
     F. Toward a Rational Approach
IV. Pricing and Usage
     A. Current Pricing Structure
     B. Network Economics
     C. Implications for Local Exchange Carriers
          1. Pricing Issues
          2. Switch Congestion
          3. Responses to Switch Congestion
               a. Pricing Changes
               b. Technical Solutions
          4. State Tariffing Issues
          5. Competitive Dynamics
V. Availability of Bandwidth
     A. Deployment and Pricing of High-Speed Access Technologies
     B. The ISDN Case Study
     C. Universal Service and Advanced Access Technologies
VI. Conclusion
     A. The Internet and Competition in Telecommunications
     B. The Right Side of History
Appendix A: Internet Architecture Diagram
DIAGRAMS
Figure 1   The Internet Spiral
Figure 2   Conceptual Overview of the Internet
Figure 3   NSFNET Architecture
Figure 4   Internet Growth Projections
Figure 5   What is the Correct Analogy?
Figure 6   Internet vs. Conventional Telephony
Figure 7   Current Dial-Up Internet Access Pricing
Figure 8   Typical Dial-Up Internet Architecture
Figure 9   LEC Internet Usage Studies
Figure 10  Some Solutions to Switch Congestion
Figure 11  Major End-User Internet Access Technologies
Appendix A Internet Architecture
Executive Summary
A. Background
The Internet, from its roots a quarter-century ago as a military and academic research
tool, has become a global resource for millions of people. As it continues to grow, the
Internet will generate tremendous benefits for the economy and society. At the same time,
the Internet poses significant and difficult questions for policy makers. This working paper
examines some of these emerging issues at the intersection of technology, law, economics,
and public policy.
The United States federal government has long been involved in the development of the
Internet. Through research grants, and by virtue of its status as the largest institutional user
of computer services in the country, the federal government played a central role in bringing
what we now call the Internet into being. Just as important, the federal government has
consistently acted to keep the Internet free of unnecessary regulation and government
influence. As the Internet has matured and has grown to support a wide variety of
commercial activity, the federal government has transitioned important technical and
management functions to the private sector. In the area of telecommunications policy, the
Federal Communications Commission (FCC) has explicitly refused to regulate most online
information services under the rules that apply to telephone companies.
Limited government intervention is a major reason why the Internet has grown so
rapidly in the United States. The federal government's efforts to avoid burdening the Internet
with regulation should be looked upon as a major success, and should be continued. The
Telecommunications Act of 1996 (1996 Act) adopts such a position. The 1996 Act states that
it is the policy of the United States "to preserve the vibrant and competitive free market that
presently exists for the Internet and other interactive computer services, unfettered by Federal
or State regulation," and the FCC has a responsibility to implement that statute. The draft
"Framework for Global Electronic Commerce," developed by the White House with the
involvement of more than a dozen federal agencies, similarly emphasizes the need to avoid
unnecessary government interference with the Internet.
This working paper addresses three overlapping telecommunications policy areas that
relate to the Internet: law, economics, and public policy. Legal questions arise from the
difficulty in applying existing regulatory classifications to Internet-based services. Economic
questions arise from the effects of Internet usage on the telecommunications infrastructure,
and the effects of the telecommunications infrastructure on the Internet. Public policy
questions arise from the need to maximize the public benefits that the Internet brings to
society.
The Internet is a fluid, complex entity. It was designed to route around obstacles, such
as failures at central points of the network, and it may respond in unexpected ways to
pressures placed on it. It has developed largely without any central plan, especially in the
past several years as the U.S. government has reduced its management role. It overcomes any
boundaries that can be drawn, whether rooted in size, geography, or law. Because the
Internet represents an ever-growing interconnected network, no one entity can control or
speak for the entire system. The technology of the Internet allows new types of services to
be layered on top of existing protocols, often without the involvement or even the knowledge
of network providers that transmit those services. Numerous users can share physical
facilities, and the mix of traffic through any point changes constantly through the actions of a
distributed network of thousands of routers.
The chaotic nature of the Internet may be troubling for governments, which tend to
value stability and certainty. However, the uncertainty of the Internet is a strength, not a
weakness. With decentralization comes flexibility, and with flexibility comes dynamism.
Order may emerge from the complex interactions of many uncoordinated entities, without the
need for cumbersome and rigid centralized hierarchies. Because it is not tied to traditional
models or regulatory environments, the Internet holds the potential to dramatically change the
communications landscape. The Internet creates new forms of competition, valuable services
for end users, and benefits to the economy. Government policy approaches toward the
Internet should therefore start from two basic principles: avoid unnecessary regulation, and
question the applicability of traditional rules.
Beyond these overarching themes, some more specific policy goals can be identified.
For the FCC in particular, these include the following.
Promote competition in voice, video, and interactive services.
In passing the 1996 Act, Congress expressed its intent to implement a "pro-competitive
deregulatory national communications policy." The Internet provides both a space for
innovative new services, as well as potential competition for existing communications
technologies. The FCC's role will be to ensure that the playing field is level, and that
efficiency and market forces drive competition.
Facilitate network investment and technological innovation.
The Internet encourages the deployment of new technologies that will benefit consumers
and produce jobs. The Commission should not attempt to pick winners, but should
allow the marketplace to decide whether specific technologies become successful. By
eliminating regulatory roadblocks and other disincentives to investment, the FCC should
encourage both incumbents and new entrants to develop innovative solutions that
transcend the capabilities of the existing network.
Allow all citizens to benefit from advanced technologies.
The communications revolution should benefit all Americans. In an age of new and
exciting forms of interactive communications, the FCC should ensure that entities such
as schools and libraries are not left behind. However, the mechanisms used to achieve
this goal should be consistent with the FCC's broader policies of competition and
deregulation.
B. Summary of Contents
This working paper reviews some of the major Internet-related issues that have already
come before the Commission, as well as those that may come before the FCC in the near
future.
This paper is not intended to be a comprehensive overview of every Internet topic that
has implications for the FCC. I have focused on issues where I believe the Internet raises the
most immediate questions for telecommunications policy, and especially those that have
already been raised in FCC proceedings. Beyond those discussed in this paper, there are
several other topics of great importance to the development of the Internet that may have
implications for the FCC. These include: Internet governance (such as the allocation of
domain names), intellectual property, network reliability, privacy, spectrum policy, standards,
and security. By omitting these issues, I do not suggest that they are of less importance to
the government or the private sector. The underlying policy recommendations of this paper
are applicable to all Internet issues that come before a government agency such as the FCC,
although specific subjects may require individualized consideration.
Because this paper is about the role of the FCC, it focuses almost entirely on the United
States. The FCC's decisions depend on the specific legal and economic structures that govern
the communications industry in this country. Likewise, the United States experiences more
acutely many of the challenges the Internet generates, because this country has by far the
largest percentage of the Internet's infrastructure and traffic. The Internet, however, is a
global network. The essential characteristics that make the Internet so valuable, and also so
difficult to understand in the context of traditional telecommunications policy, are relevant
worldwide. Some Internet issues may best be addressed in international fora, and this paper
does not suggest that all the issues described should be resolved by the United States
government alone.
With these caveats in mind, the paper seeks to develop a consistent public policy
approach for issues involving the Internet and telecommunications policy.
Section I provides a framework for understanding the dynamism of the Internet, and the
fundamental forces that propel it. This section propounds the notion of the Internet as
feedback loop, a constantly expanding spiral that creates the conditions for its further growth.
The Internet spiral is driven by four factors. First, "deep convergence," which represents the
impact of digital technology in breaking down barriers between different services and
networks. Second, the interaction of Moore's Law (progressively higher computing power at
a given cost) with Metcalfe's Law (progressively more value to being connected to a
network), combined with increasing network bandwidth, leads to plummeting costs and
soaring performance for the Internet's underlying facilities. Third, through "the magnetism of
money and minds," the market rewards innovation by attracting both the people and the
financing necessary for further innovation. Fourth, unfettered competition pressures
companies to take advantage of market opportunities and to utilize more efficient
technologies.
Envisioning the Internet as a feedback loop leads to three recommendations for
government policy. First, government should seek scalability, not just stability. Government
policy should be forward-looking, recognizing that the Internet will continue to grow and
evolve, and should not attempt to impose on the Internet the familiar limitations of traditional
communications technologies. Second, government should swim with the current. In other
words, government should harness the tremendous potential of the Internet to help achieve
public policy goals. The challenge is to meet the exploding demand for bandwidth, not to
restrain it. Third, government should promote the Network, not networks. Rather than
focusing on individual companies or industries, government should create a climate that
maximizes social welfare.
Section II identifies the salient characteristics of the Internet. To understand how the
Internet affects and is affected by regulatory decisions, it is important to understand how
services are provided over the Internet, and to distinguish the Internet from other
communications technologies. This section also provides a brief history of the Internet, to
place the analysis of the current Internet in a proper context.
Section III examines whether existing FCC regulatory and statutory requirements should
apply to services provided over the Internet. The Commission has not yet confronted most of
these legal questions directly, although it has expressed reservations about applying traditional
rules to the Internet. However, the continued growth of the Internet and the development of
new, hybrid services make it likely that the FCC will need to resolve some of these issues.
The FCC's current division between "basic" and "enhanced" services, and the statutory
definitions of entities such as "telecommunications carriers" and "broadcasters," provide only
limited guidance. The paper recommends that government exercise caution in imposing pre-
existing statutory and regulatory classifications on Internet-based services. The FCC should
begin by identifying Internet services that clearly lie outside the scope of traditional
regulatory requirements, so as to minimize market uncertainty while it confronts the more
difficult categorization issues.
Section IV looks at the economics of Internet usage. The growth of the Internet
pressures not only the current regulatory regime, but also the physical networks that carry
Internet traffic. The FCC oversees most of the underlying communications facilities upon
which the Internet depends, including the public switched telephone network. FCC decisions
on the pricing of traditional telecommunications services significantly impact the Internet,
even as the growth in Internet usage itself affects the voice network. The debate in this
context should focus on the future of the network. The FCC should strive to give companies
market-efficient incentives to build high-capacity, high-performance networks that are
optimized for data transport. This approach will allow the operation of the market and
technological development to resolve difficulties such as congestion and limited bandwidth.
Section V considers the extent to which users can take advantage of the Internet. The
FCC has for decades promoted "universal service" in telecommunications, and the emergence
of the Internet requires a reassessment of how that responsibility should be interpreted today.
The value of the Internet largely depends on the level of bandwidth that can be delivered to
end users. Many different technologies are being developed to permit higher-speed
connections than are currently affordable for most consumers. In addition, certain institutions,
such as schools and libraries, as well as users who would otherwise be unable to access the
Internet, should be able to benefit from the Global Information Infrastructure.
Section VI concludes by linking the Internet-specific issues with the FCC's overarching
efforts to facilitate competition in all communications markets. Competition is a theme that
runs throughout this paper. The technological shifts associated with the Internet dovetail with
the communications industry's transition from regulated monopolies to a world of overlapping
competitive firms. In the end, successfully opening the communications sector to competition
will likely be the greatest contribution that government can make to the development of the
Internet.
C. The Government Role
This working paper is intended to explore issues and to facilitate discussion, not to
propose specific government actions. Many proponents of the Internet's development are
wary of any government actions directed toward the Internet. Government, however, has
been intimately involved with the Internet since the network's beginnings. Government
decisions -- such as the FCC's directive that Internet service providers not be subject to
interstate access charges, and the widespread requirement by state regulators that local calls
be available at flat monthly rates -- continue to shape Internet development. Moreover, policy
decisions are best made with knowledge and comprehension of their potential implications.
The goal of this paper, therefore, is to promote greater understanding, on the part of
both government and the private sector, of the unique policy issues the Internet raises for the
FCC and similar agencies. The discussion of a topic is not a suggestion that government
regulation in that area is necessary or desirable. On the contrary, a fundamental position of
this paper is that government should work to avoid unnecessary interference with the
Internet's development.
Government may influence the evolution of the Internet in many ways, including
directly regulating, participating in technical standards development, providing funding,
restricting anti-competitive behavior by dominant firms, facilitating industry cooperation
otherwise prohibited by antitrust laws, promoting new technologies, encouraging cooperation
between private parties, representing the United States in international intergovernmental
bodies, and large-scale purchasing of services. The FCC and other government entities may
also play a useful role simply by raising the profile of issues and stimulating debate. A better
understanding of the relationship between the Internet and telecommunications policy will
facilitate intelligent decision-making about when and to what extent any of these government
actions are appropriate.

I. Introduction: The Endless Spiral of Connectivity
Government officials, pundits, and market researchers often compare the Internet to
established communications technologies such as telephony and broadcasting. These efforts
are understandable. "Traditional" technologies have well-defined usage characteristics, growth
patterns, and market behavior. Moreover, the Internet physically "piggybacks" on other
networks, in particular the wireline telephone infrastructure.
Drawing analogies between the Internet and traditional media makes it easier to decide
whether existing bodies of law or regulation apply to new Internet-based services. Thus, for
example, the debate over the constitutionality of the Communications Decency Act (CDA),
which seeks to restrict the transmission of indecent material over the Internet, has often boiled
down to a conflict of analogies. Opponents of the CDA have compared the Internet to a
telephone network, while supporters often describe the Internet as similar to broadcasting.
Because telephone carriers are generally not legally responsible for the content routed over
their networks, but broadcasters may be subject to fines for transmitting inappropriate
material, the choice of analogy can predetermine the legal outcome.
Although such analogies are appealing, most break down upon closer analysis of the
unique characteristics of the Internet. The Internet is substitutable for all existing media. In
other words, the Internet potentially poses a competitive threat for every provider of
telephony, broadcasting, and data communications services. At the same time, Internet-related
businesses are substantial customers of existing telephony, broadcasting, and data companies.
The Internet creates alternate distribution channels for pre-existing content, but more
importantly, it permits delivery of new and hybrid forms of content. The Internet is one of
many applications that utilize the existing telephone network. However, from another
perspective, the telephone, broadcasting, and cable networks are simply nodes of the larger
network that is the Internet.
Thus, the Internet is fundamentally different from other communications technologies.
In most cases, simply mapping the rules that apply to other services onto the Internet will
produce outcomes that are confusing, perverse, or worse. Any attempt to understand the
relationship between the Internet and telecommunications policy must therefore begin with the
distinguishing aspects of the Internet.
A. How the Internet is Unique
The distinctiveness of the Internet derives in large part from its technical architecture,
which is described in greater detail in Section II. The Internet functions as a series of layers,
as increasingly complex and specific components are superimposed on but independent from
other components. The technical protocols that form the foundation of the Internet are open
and flexible, so that virtually any form of network can connect to and share data with other
networks through the Internet. As a result, the services provided through the Internet (such as
the World Wide Web) are decoupled from the underlying infrastructure to a much greater
extent than with other media. Moreover, new services (such as Internet telephony) can be
introduced without necessitating changes in transmission protocols, or in the thousands of
routers spread throughout the network.
The architecture of the Internet also breaks down traditional geographic notions, such as
the discrete locations of senders and receivers. The Internet uses a connectionless, "adaptive"
routing system, which means that a dedicated end-to-end channel need not be established for
each communication. Instead, traffic is split into "packets" that are routed dynamically
between multiple points based on the most efficient route at any given moment. Many
different communications can share the same physical facilities simultaneously. In addition,
any "host" computer connected directly to the Internet can communicate with any other host.
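The adaptive routing described above can be sketched in a short program. The four-router topology, link costs, and the `shortest_path` function below are purely illustrative assumptions, not a depiction of any actual Internet route; the sketch only shows the core idea that when a link disappears, traffic finds another path.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: find the cheapest route from src to dst.

    graph maps each node to a dict of {neighbor: link_cost}.
    Returns the route as a list of nodes, or None if dst is unreachable.
    """
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

# A toy four-router network: A-B-D is the cheap route, A-C-D the backup.
net = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "D": 1},
    "C": {"A": 5, "D": 1},
    "D": {"B": 1, "C": 1},
}
print(shortest_path(net, "A", "D"))  # ['A', 'B', 'D']

# Simulate a failure of the A-B link: traffic routes around the obstacle.
del net["A"]["B"]
print(shortest_path(net, "A", "D"))  # ['A', 'C', 'D']
```

Real Internet routing protocols are far more elaborate, but the property the paper emphasizes -- no dedicated end-to-end channel, and automatic adaptation to failures -- is visible even in this toy.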
A further distinguishing characteristic of the Internet is its fractal nature. Fractals are
derived from the branch of mathematics known as chaos or complexity theory. Fractals
exhibit "self-similarity"; in other words, a roughly similar pattern emerges at any chosen level
of detail. Internet traffic patterns most clearly demonstrate the Internet's fractal tendencies.
For traditional communications networks (including the telephone network), engineers have
over many years developed sophisticated statistical models to predict aggregate usage
patterns. Researchers have shown that usage of the Internet follows not the traditional
"Poisson" pattern, but rather a fractal distribution. In other words, the frequency of Internet
connections, the distribution between short and long calls, and the pattern of data transmitted
through a point in the network tend to look similarly chaotic regardless of the time scale.
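The contrast between Poisson-style and fractal traffic can be illustrated with a small simulation. The distributions and parameters below are hypothetical choices for illustration, not the researchers' data: light-tailed (exponential) transfer sizes smooth out quickly as traffic is aggregated over coarser time scales, while heavy-tailed (Pareto) sizes stay bursty at every scale -- the self-similarity the paper describes.

```python
import random
import statistics

def cv(samples):
    """Coefficient of variation: a scale-free measure of burstiness."""
    return statistics.pstdev(samples) / statistics.fmean(samples)

def aggregate(samples, scale):
    """Sum adjacent bins, i.e., view the same traffic on a coarser time scale."""
    return [sum(samples[i:i + scale]) for i in range(0, len(samples) - scale + 1, scale)]

random.seed(1997)
BINS, TRANSFERS = 4096, 20

# Light-tailed traffic: each time bin carries exponentially distributed transfers.
light = [sum(random.expovariate(1.0) for _ in range(TRANSFERS)) for _ in range(BINS)]
# Heavy-tailed traffic: Pareto sizes with alpha < 2, i.e., infinite variance.
heavy = [sum(random.paretovariate(1.2) for _ in range(TRANSFERS)) for _ in range(BINS)]

for scale in (1, 16, 64):
    print(f"scale {scale:3d}: light CV {cv(aggregate(light, scale)):.2f}  "
          f"heavy CV {cv(aggregate(heavy, scale)):.2f}")
```

With the light-tailed traffic, the burstiness measure shrinks roughly as the square root of the aggregation scale, which is why traditional telephone engineering models work; the heavy-tailed traffic retains its burstiness much longer, confounding those models.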
The fractal nature of the Internet confounds regulatory and economic models established
for other technologies. However, as chaos theorists have shown, fractals have valuable
attributes. In a fractal entity, order emerges from below rather than being dictated from
above. The fact that the Internet does not have an easily-identifiable hierarchy or any clear
organizational structure does not mean that all behavior is random. Many small,
uncoordinated interactions may produce an aggregate whole that is remarkably persistent and
adaptable.
Finally, the Internet has thus far not been regulated to the same extent as other media.
The Communications Act of 1934 (Communications Act), which created the Federal
Communications Commission to oversee telephony and radio broadcasting, is more than sixty
years old. By contrast, Internet service providers, and other companies in the Internet
industry, have never been required to gain regulatory approval for their actions.
B. The Feedback Loop
If the Internet is not like any other established communications technology, what then is
it? On one level, the Internet is whatever anyone wants it to be. It is a plastic, decentralized,
and constantly evolving network. Any simple concept to describe the Internet will necessarily
be incomplete and misleading. Such templates are useful, however, to promote greater
understanding of aspects of the Internet that may not otherwise be obvious.
For purposes of this paper, I believe it is valuable to understand the Internet as a
feedback loop. A feedback loop occurs when the output of a system is directed back into the
system as an input. Because the system constantly produces fuel for its own further
expansion, a feedback loop can generate explosive growth. As the system expands, it
produces more of the conditions that allow it to expand further. All networks are feedback
loops, because they increase in value as more people are connected. The Internet, however,
is driven by a particularly powerful set of self-reinforcing conditions.
FIGURE 1 -- THE INTERNET SPIRAL

Figure 1 describes some of the interrelated factors that build upon each other to foster
the growth of the Internet. Some "supply" factors (such as the availability of higher-capacity
networks) permit an expansion of demand (for example, by allowing bandwidth-intensive
services such as high-resolution video transmission). Like a digital tornado, the vortex
continues, as the new level of demand creates the need for additional capacity, and so forth.
The Internet feedback loop is a fundamentally positive force, because it means that more and
more services will be available at lower and lower prices. So long as effective self-correcting
mechanisms exist, the Internet will overcome obstacles to its future growth.
Understanding the underpinnings of the Internet feedback loop is necessary to craft
policies that facilitate, and do not hinder, its continuation. There are four primary factors that
support the growth of the Internet:
Digitalization and "Deep Convergence"
As described above, the Internet exhibits characteristics of several media that had
previously been distinct. Networks carry three types of information -- voice, video, and data
-- and those categories are further subdivided into areas such as pre-recorded vs. live or real-
time presentation, and still vs. moving images. Historically, these different forms of
information have used different delivery vehicles. The telephone network delivered voice,
private corporate networks delivered data, and broadcast networks delivered video. Each
service was tightly coupled to a specific form of infrastructure -- the telephone network used
copper wires to reach subscribers, broadcast television used the airwaves, cable television
used coaxial cable, and so forth.
"Convergence" means that those lines are blurring. However, convergence is often
understood in a shallow manner, as simply the opportunity for owners of one type of delivery
system to compete with another type of delivery system, or as the opportunity for content
owners to deliver their content using different technologies. In reality, convergence is
something far more fundamental. "Deep convergence" is driven by a powerful technological
trend -- digitalization. Digitalization means that all of the formerly distinct content types are
reduced to a stream of binary ones and zeroes, which can be carried by any delivery
platform. In practical terms, this means not only that specific boundaries -- between a
telephone network and a cable system, for example -- are blurred, but also that the very
exercise of drawing any such boundaries must be fundamentally reconsidered or abandoned.
Digitalization has been occurring for decades. The long-distance telephone network in
the United States is now almost entirely comprised of digital switches and fiber optic
transmission links. These digital facilities, however, have been optimized to transport a single
service -- voice. The Internet, by contrast, can transmit any form of data. Internet protocols
are sufficiently flexible to overcome the boundaries between voice and other services.
Innovators can develop new services and immediately load them onto the existing Internet
infrastructure. Convergence creates new markets, and new efficiencies, because particular
services are no longer locked into specific forms of infrastructure.
Moore's Law and Metcalfe's Law
As George Gilder has most clearly articulated, the two technological "laws" that most
impact the growth of the Internet are Moore's Law and Metcalfe's Law. Moore's Law
holds that the maximum processing power of a microchip, at a given price, doubles roughly
every eighteen months. In other words, computers become faster at an explosive rate, or
conversely, the price of a given level of computing power decreases at that same dramatic
rate. Metcalfe's Law says that the value of a network is proportional to the square of the
number of nodes. In other words, as a network grows, the utility of being connected to it
not only grows, but grows far faster than the number of users.
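The two "laws" can be made concrete with a little arithmetic. The sketch below is purely illustrative: the starting cost, the eighteen-month doubling period, and the unit of network "value" are assumptions for the example, not figures from this paper.

```python
# Illustrative sketch of Moore's Law and Metcalfe's Law.
# Parameters (initial cost, doubling period) are assumed for the example.

def moore_cost(initial_cost, years, doubling_period=1.5):
    """Cost of a fixed level of computing power, halving every
    doubling_period years as performance per dollar doubles."""
    return initial_cost / (2 ** (years / doubling_period))

def metcalfe_value(nodes):
    """Metcalfe's Law: network value grows as the square of the
    number of connected nodes (arbitrary units)."""
    return nodes ** 2

# After six years, four doubling periods have elapsed, so the cost
# of a fixed level of computing power falls to 1/16 of its start:
print(moore_cost(1000.0, 6))                     # 62.5

# Doubling the number of nodes quadruples the network's value:
print(metcalfe_value(100) / metcalfe_value(50))  # 4.0
```

The second result is the key point: when connections double, the value of being connected more than doubles, which is what drives the feedback loop described above.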
Moore's Law and Metcalfe's Law intersect on the Internet. Both the computers through
which users access the Internet, and the routers that transmit data within the Internet, are
subject to the price/performance curve described by Moore's Law. At the same time,
advances in data transmission technology have expanded the capacity of the Internet's
backbone networks. As the bandwidth available through the network continues to grow,
Moore's Law states that the price of obtaining a given level of bandwidth continues to drop,
while Metcalfe's Law dictates that the value of a connection rises with the square of the
network's size. The
ratio of the cost of Internet access to the value it provides plummets over time. And as it
plummets, connectivity and higher-bandwidth connections become that much more important,
generating more usage and more capital to upgrade the network.
The Magnetism of Money and Minds
Moore's Law and Metcalfe's Law describe the technological forces that push the growth
of the Internet, but there are also business forces that exert a powerful influence. In a
capitalist economy, the "invisible hand" of the market dynamically redirects capital where it is
most highly valued, without any direct outside intervention. Companies that demonstrate
superior potential for generating future revenues more easily attract investment, and for public
companies, see their stock prices rise. Other companies in the same industry sector often see
increases in their stock prices as well, as investors seek to repeat the pattern of the first
company and to capitalize on economic trends.
As money flows into a "hot" sector, so do talented people seeking to obtain some of that
money by founding or working at a company in that sector. The presence of so many top
minds further attracts capital, reflecting a synergistic process I call "the magnetism of money
and minds." This trend promotes the availability of financing to spur the future growth of the
Internet.
Competition
Competition enables both the dynamic allocation of capital and talent and the
constant innovation in technology that leads to deep convergence and falling prices. In a
competitive market, companies must constantly invest and innovate, or risk losing out to
competitors. Intel CEO Andy Grove has observed that in the computer industry there are
only two kinds of companies: the quick and the dead. Even those companies with strong
positions must always look over their shoulder, because customer loyalty vanishes in the face
of superior alternatives.
The benefits of competition are evident in the computer industry, where companies must
constantly improve their products to remain successful. Competition in the Internet context
means that many different providers of hardware, software, and services vie for customers. In
a competitive market, providers that can offer superior service or prices are more likely to
succeed. Technological innovations that lower costs or allow new service options will be
valuable to providers and consumers alike.
C. Threats to the Continued Spiral
If the Internet truly operates like a feedback loop, why is government intervention
necessary?
There are many ways the Internet spiral could be derailed. Any of the underlying
drivers of Internet growth could be undermined. Moving toward proprietary standards or
closed networks would reduce the degree to which new services could leverage the existing
infrastructure. The absence of competition in the Internet service provider market, or the
telecommunications infrastructure market, could reduce incentives for innovation. Excessive
or misguided government intervention could distort the operation of the marketplace, and lead
companies to expend valuable resources manipulating the regulatory process.
Insufficient government involvement may also, however, have negative consequences.
Some issues may require a degree of central coordination, even if only to establish the initial
terms of a distributed, locally-controlled system. A "tragedy of the commons" situation may
arise when all players find it in their own self-interest to consume limited common resources.
The end result, in the absence of collective action, may be an outcome that no one favors. In
addition, the failure of the federal government to identify Internet-related areas that should not
be subject to regulation leaves open opportunities for state, local, or international bodies to
regulate excessively and/or inconsistently.
D. How Government Should Act
The novel aspects of the Internet require government policies that are sensitive to both
the challenges and the opportunities of cyberspace. Three principles should guide such
government decision-making:
Scalability, not just Stability
Rather than seeking to restrain the growth of the Internet, government should encourage
it. As long as the underpinnings of the network support further expansion, and self-correcting
mechanisms can operate freely, the Internet should be able to overcome obstacles to further
development. Additional capital and innovation will be drawn to any challenge due to the
prospect of high returns. In addition, a focus on scalability directs the attention of policy
makers to the future of the network, rather than its current configuration. Given the rapid rate
at which the Internet is changing, such a forward-looking perspective is essential. The
"growth" of the Internet means more than an increase in the number of users. It also means
that the network will evolve and change, becoming an ever more ubiquitous part of society.
Nevertheless, stability remains important. The Internet must achieve a sufficient level of
reliability to gain the trust of consumers and businesses. However, even such stability
requires an architecture that is built to scale upward. Otherwise, periods of calm will
inevitably be followed by crashes as the Internet continues to grow.
Swim with the Current
The economic and technological pressures that drive the growth of the Internet should
not be obstacles for government. Rather, government should identify ways to use those
pressures to support the goals that government hopes to achieve. In telecommunications, this
means using the pricing signals of the market to create incentives for efficiency. In a
competitive market, prices are based on costs, and the firm that can provide a service for the
lowest cost is likely to succeed. Such competitive pressures operate far more effectively, with
lower administrative costs, than direct government mandates.
Similarly, government should look for mechanisms that use the Internet itself to rectify
problems and create opportunities for future growth. For example, new access technologies
may reduce network congestion, as long as companies have proper incentives to deploy those
technologies. Filtering systems may address concerns about inappropriate content.
Competition from Internet services may pressure monopolies or outdated regulatory structures.
Government agencies should also use the Internet themselves to receive and disseminate
information to the public.
The Network, not networks
The Internet is a network, but so are AT&T, TCI, and NBC. The FCC's goal should
not be to foster the development of any one of those networks individually, but to maximize
the public benefits that flow from the Network that encompasses all of those networks and
many more. With the growth of competition and the elimination of traditional regulatory,
technological, and economic boundaries, networks are more likely than ever to be
interdependent, and a policy that benefits one network may have a detrimental effect on
others. For example, a mandate that Internet service providers be entitled to connect to the
telephone network for free might stimulate Internet use, but telephone companies might be
forced to increase their rates or offer lower quality service to recover the increased cost of
supporting such connections.
Although government should support the growth of the Internet, this support need not
involve explicit subsidies that are not independently justified as a matter of public policy and
economics. Instead, government should create a truly level playing field, where competition
is maximized and regulation minimized.
II. WHAT IS THE INTERNET?
Although the Internet has been the subject of tremendous media, corporate, and public
interest in recent years, most people have only a vague notion of how the Internet actually
works. It is often easier to identify what the Internet is not than to explain in non-technical
terms what the Internet is. This uncertainty presents a significant challenge for policy-
makers, and especially for governmental entities such as the FCC that must clearly define the
scope of their actions.
A. General Description
The Internet is an interconnected global computer network of tens of thousands of
packet-switched networks using the Internet protocol (IP).
The Internet is a network of networks. For purposes of understanding how the Internet
works, three basic types of entities can be identified: end users, Internet service providers, and
backbone providers. Figure 2 shows the general relationships between these entities; a more
detailed Internet architecture diagram is provided as Appendix A. End users access and send
information either through individual connections or through organizations such as universities
and businesses. End users in this context include both those who use the Internet primarily to
receive information, and content creators who use the Internet to distribute information to
other end users.
FIGURE 2 -- CONCEPTUAL OVERVIEW OF THE INTERNET
Internet service providers (ISPs), such
as Netcom, PSI, and America Online, connect those end users to Internet backbone
networks. Backbone providers, such as MCI, UUNet, and Sprint, route traffic between
ISPs, and interconnect with other backbone providers.
This tripartite division highlights the different functionalities involved in providing
Internet connectivity. The actual architecture of the Internet is far more complex. Backbone
providers typically also serve as ISPs; for example, MCI offers dial-up and dedicated Internet
access to end users, but also connects other ISPs to its nationwide backbone. End users such
as large businesses may connect directly to backbone networks, or to access points where
backbone networks exchange traffic. ISPs and backbone providers typically have multiple
points of interconnection, and the inter-relationships between these providers are changing
over time. It is important to remember that the Internet has no "center" and that individual
transmissions may be routed through multiple different providers due to a number of factors.
End users may access the Internet through several different types of connections, and
unlike the voice network, divisions between "local service" providers and "long-distance"
providers are not always clear. Most residential and small business users have dial-up
connections, which use analog modems to send data over the plain old telephone service
(POTS) lines of local exchange carriers (LECs) to ISPs. Larger users often have dedicated
connections using high-speed ISDN, frame relay or T-1 lines, between a local area network at
the customer's premises and the Internet. Although the vast majority of Internet access today
originates over telephone lines, other types of communications companies, such as cable
companies, terrestrial wireless, and satellite providers, are also beginning to enter the Internet
access market.
At present, there is no generally-applicable federal statutory definition of the Internet.
The 1996 Act, in the limited context of offensive material transmitted over interactive computer
networks, defined the Internet as "the international computer network of both Federal and
non-Federal interoperable packet switched data networks."
B. An Extremely Brief History of the Net
The roots of the current Internet can be traced to ARPANET, a network developed in
the late 1960s with funding from the Advanced Research Projects Agency (ARPA) of
the United States Department of Defense. ARPANET linked together computers at major
universities and defense contractors, allowing researchers at those institutions to exchange
data. As ARPANET grew during the 1970s and early 1980s, several similar networks were
established, primarily between universities. The TCP/IP protocol was adopted as a standard
to allow these networks, comprised of many different types of computers, to interconnect.
In the mid-1980s, the National Science Foundation (NSF) funded the establishment of
NSFNET, a TCP/IP network that initially connected six NSF-funded national supercomputing
centers at a data rate of 56 kilobits per second (kbps). NSF subsequently awarded a contract
to a partnership of Merit (one of the existing research networks), IBM, MCI, and the State of
Michigan to upgrade NSFNET to T-1 speed (1.544 megabits per second (Mbps)), and to
interconnect several additional research networks. The new NSFNET "backbone," completed
in 1988, initially connected thirteen regional networks. As shown in Figure 3, individual
sites such as universities could connect to one of these regional networks, which then
connected to NSFNET, so that the entire network was linked together in a hierarchical
structure. Connections to the federally-subsidized NSFNET were generally free for the
regional networks, but the regional networks generally charged smaller networks a flat
monthly fee for their connections.
FIGURE 3 -- NSFNET ARCHITECTURE
The military portion of ARPANET was integrated into the Defense Data Network in the
early 1980s, and the civilian ARPANET was taken out of service in 1990, but by that time
NSFNET had supplanted ARPANET as a national backbone for an "Internet" of worldwide
interconnected networks. In the late 1980s and early 1990s, NSFNET usage grew
dramatically, jumping from 85 million packets in January 1988 to 37 billion packets in
September 1993. The capacity of the NSFNET backbone was upgraded to handle this
additional demand, eventually reaching T-3 (45 Mbps) speed.
In 1992, the NSF announced its intention to phase out federal support for the Internet
backbone, and encouraged commercial entities to set up private backbones. Alternative
backbones had already begun to develop because NSFNET's "acceptable use" policy, rooted
in its academic and military background, ostensibly did not allow for the transport of
commercial data. In the 1990s, the Internet has expanded decisively beyond universities and
scientific sites to include businesses and individual users connecting through commercial ISPs
and consumer online services.
Federal support for the NSFNET backbone ended on April 30, 1995. The NSF has,
however, continued to provide funding to facilitate the transition of the Internet to a privately-
operated network. The NSF supported the development of three priority Network Access
Points (NAPs), in Northern California, Chicago, and New York, at which backbone providers
could exchange traffic with each other, as well as a "routing arbiter" to facilitate traffic
routing at these NAPs. The NSF funded the vBNS (Very High-Speed Backbone Network
Service), a non-commercial research-oriented backbone operating at 155 megabits per second.
The NSF provides transitional funding to the regional research and educational networks, as
these networks are now required to pay commercial backbone providers rather than receiving
free interconnection to NSFNET. Finally, the NSF also remains involved in certain Internet
management functions, through activities such as its cooperative agreement with Network
Solutions Inc. (a subsidiary of SAIC) to manage aspects of Internet domain name registration.
Since the termination of federal funding for the NSFNET backbone, the Internet has
continued to evolve. Many of the largest private backbone providers have negotiated bilateral
"peering" arrangements to exchange traffic with each other, in addition to multilateral
exchange points such as the NAPs. Several new companies have built nationwide backbones.
Despite this increase in capacity, usage has increased even faster, leading to concerns about
congestion. The research and education community, with the support of the White House and
several federal agencies, recently announced the "Internet II" or "next-generation Internet"
initiative to establish a new high-speed Internet backbone dedicated to non-commercial uses.
Another important trend in recent years has been the growth of "intranets" and other
corporate applications. Intranets are internal corporate networks that use the TCP/IP protocol
of the Internet. These networks are either completely separate from the public Internet, or are
connected through "firewalls" that allow corporate users to access the Internet but prevent
outside users from accessing information on the corporate network. Corporate users are often
ignored in discussions about the number of households with Internet access. However, these
users represent a substantial portion of Internet traffic. In addition, intranets generate a
tremendous amount of revenue, because companies tend to be willing to pay more than
individual users in order to receive a level of service that they value.
Perhaps surprisingly, the Internet's growth rate has actually been quite stable for some
time, with the number of hosts roughly doubling every year. The rate appears to have
accelerated in recent years only because the numbers have gotten so large, and the Internet
has entered into popular consciousness.
C. How the Internet Works
1. Basic Characteristics
Just as hundreds of millions of people who make telephone calls every day have little
conception of how their voice travels almost instantaneously to a distant location, most
Internet users have only a vague understanding of how the Internet operates. The fundamental
operational characteristics of the Internet are that it is a distributed, interoperable, packet-
switched network.
A distributed network has no one central repository of information or control, but is
comprised of an interconnected web of "host" computers, each of which can be accessed from
virtually any point on the network. Thus, an Internet user can obtain information from a host
computer in another state or another country just as easily as obtaining information from
across the street, and there is no hierarchy through which the information must flow or be
monitored. Instead, routers throughout the network regulate the flow of data at each
connection point. By contrast, in a centralized network, all users connect to a single location.
The distributed nature of the Internet gives it robust survivability characteristics, because there
is no one point of failure for the network, but it makes measurement and governance difficult.
An interoperable network uses open protocols so that many different types of networks
and facilities can be transparently linked together, and allows multiple services to be provided
to different users over the same network. The Internet can run over virtually any type of
facility that can transmit data, including copper and fiber optic circuits of telephone
companies, coaxial cable of cable companies, and various types of wireless connections. The
Internet also interconnects users of thousands of different local and regional networks, using
many different types of computers. The interoperability of the Internet is made possible by
the TCP/IP protocol, which defines a common structure for Internet data and for the routing
of that data through the network.
A packet-switched network means that data transmitted over the network is split up into
small chunks, or "packets." Unlike "circuit-switched" networks such as the public switched
telephone network (PSTN), a packet-switched network is "connectionless." In other words,
a dedicated end-to-end transmission path (or circuit) does not need to be opened for each
transmission. Rather, each router calculates the best routing for a packet at a particular
moment in time, given current traffic patterns, and sends the packet to the next router. Thus,
even two packets from the same message may not travel the same physical path through the
network. This mechanism is referred to as "dynamic routing." When packets arrive at the
destination point, they must be reassembled, and packets that do not arrive for whatever
reason must generally be re-sent. This system allows network resources to be used more
efficiently, as many different communications can be routed simultaneously over the same
transmission facilities. On the other hand, the inability of the sending computer under such a
"best effort" routing system to ensure that sufficient bandwidth will be available between the
two points creates difficulties for services that require constant transmission rates, such as
streaming video and voice applications.
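The packetization and reassembly process described above can be sketched in a few lines of code. This is a toy model, not any real protocol: the packet size, the header fields, and the addresses are illustrative assumptions chosen to show how a message is split, delivered out of order, and put back together.

```python
# Toy sketch of packet-switched transmission: split a message into
# packets with headers, deliver them in arbitrary order, reassemble.
# Packet size and header fields are illustrative, not a real protocol.
import random

PACKET_SIZE = 8  # bytes of payload per packet (toy value)

def packetize(message, src, dst):
    """Split a message into packets, each carrying a small header
    identifying its source, destination, and sequence number."""
    chunks = [message[i:i + PACKET_SIZE]
              for i in range(0, len(message), PACKET_SIZE)]
    return [{"src": src, "dst": dst, "seq": n, "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Order packets by sequence number and rebuild the message,
    as the destination computer must do."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("a message sent across the network",
                    "165.135.0.254", "192.0.2.1")
random.shuffle(packets)  # packets may take different paths and arrive out of order
print(reassemble(packets))  # "a message sent across the network"
```

Because each packet carries its own addressing header, the network can route each one independently; the cost, as the text notes, is that the receiver must reorder packets and request retransmission of any that never arrive.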
2. Addressing
When an end user sends information over the Internet, the data is first broken up into
packets. Each of these packets includes a header which indicates the point from which the
data originates and the point to which it is being sent, as well as other information. TCP/IP
defines locations on the Internet through the use of "IP numbers."  IP numbers consist of four
fields, each a number between 0 and 255, separated by periods (e.g.
165.135.0.254). Internet users generally do not need to specify the IP number of the
destination site, because IP numbers can be represented by alphanumeric "domain names"
such as "fcc.gov" or "ibm.com." "Domain name servers" throughout the network contain
tables that cross reference these domain names with their underlying IP numbers. Thus, for
example, when an Internet user sends email to someone at "microsoft.com," the network will
convert the destination into its corresponding IP number and use that for routing purposes.
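The cross-referencing that domain name servers perform can be sketched as a simple lookup table. The table entries below are illustrative placeholders; the actual IP numbers for these hosts would be different, and a real domain name server distributes this table across many machines.

```python
# Minimal sketch of the name-to-number cross-reference a domain name
# server maintains. IP numbers here are illustrative placeholders.

NAME_TABLE = {
    "fcc.gov": "192.0.2.10",
    "ibm.com": "192.0.2.20",
}

def resolve(domain):
    """Return the IP number for a domain name, as a lookup would."""
    return NAME_TABLE[domain]

def valid_ip(address):
    """Check that an IP number has four fields, each 0 through 255."""
    fields = address.split(".")
    return len(fields) == 4 and all(
        f.isdigit() and 0 <= int(f) <= 255 for f in fields)

print(resolve("fcc.gov"))          # "192.0.2.10"
print(valid_ip("165.135.0.254"))   # True
print(valid_ip("300.1.1.1"))       # False
```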
Some top-level domains (such as ".uk" for Britain) are country-specific; others (such as
".com") are "generic" and have no geographical designation. The domain name system was
originally run by the United States Department of Defense, through private contractors. In
1993, responsibility for non-governmental registration of generic domains was transferred to
the NSF. The NSF established a cooperative agreement with Network Solutions Inc. (NSI),
under which NSI handles registration under these domains. NSI currently charges $50 per
year to register a domain name; a portion of this money goes to NSI to recover its
administrative costs, and a portion goes into an "Internet intellectual infrastructure fund." The
cooperative agreement is scheduled to end in mid-1998. Country-specific domains outside the
United States are generally handled by registration entities within those countries.
The existing registration process for generic top-level domains has generated substantial
controversy. Some parties have objected to what they consider to be NSI's monopoly control
over a valuable resource, especially since an entity in the United States is responsible for
assigning addresses with international ramifications. There have been several lawsuits raising
intellectual property questions, as domain names may overlap with existing trademark rights
throughout the world. Several proposals have been made to expand the space of generic top-
level domains. The International Ad Hoc Committee (IAHC), comprised of representatives
from the Internet Society, the International Telecommunication Union (ITU), the World
Intellectual Property Organization (WIPO), and other groups, has issued a wide-ranging
proposal to restructure the generic top-level domain name system. However, the authority and
ability of the IAHC to implement such changes remains unclear.
3. Services Provided Over the Internet
The actual services provided to end users through the Internet are defined not by
the routing mechanisms of TCP/IP, but by higher-level application protocols,
such as hypertext transfer protocol (HTTP), file transfer protocol (FTP), network news
transfer protocol (NNTP), and simple mail transfer protocol (SMTP). Because these
protocols are not embedded in the Internet itself, a new application-layer protocol can be
operated over the Internet with as few as one server computer that transmits the data in
the proper format, and one client computer that can receive and interpret the data. The utility
of a service to users, however, increases as the number of servers that provide that service
increases.
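The point that an application-layer protocol is just an agreed-upon data format can be illustrated with HTTP itself. The sketch below formats a minimal HTTP/1.0-style GET request as a text string; the particular host and path are illustrative, and a real client would additionally open a TCP connection to send the string.

```python
# Sketch of an application-layer protocol message: a minimal
# HTTP/1.0-style GET request. Host and path are illustrative; this
# only builds the request text, it does not open a connection.

def http_get_request(host, path="/"):
    """Format a minimal HTTP GET request for the given host and path."""
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            "\r\n")

request = http_get_request("fcc.gov", "/Bureaus/OPP/")
print(request.splitlines()[0])  # "GET /Bureaus/OPP/ HTTP/1.0"
```

Any server that recognizes this format, and any client that can interpret the reply, can offer the service; this is why a new protocol needs only one server and one client to begin operating over the Internet.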
By the late 1980s, the primary Internet services included electronic mail or "email,"
Telnet, FTP, and Usenet news. Email, which is probably the most widely-used Internet
service, allows users to send text-based messages to each other using a common addressing
system. Telnet allows Internet users to "log into" other proprietary networks, such as library
card catalogs, through the Internet, and to retrieve data as though they were directly accessing
those networks. FTP allows users to "download" files from a remote host computer onto their
own system. Usenet "newsgroups" enable users to post and review messages on specific
topics.
Despite the continued popularity of some of these services, in particular news and email,
the service that has catalyzed the recent explosion in Internet usage is the World Wide Web.
The Web has two primary features that make it a powerful, "full service" method of accessing
information through the Internet. First, Web clients, or "browsers," can combine text and
graphical material, and can incorporate all of the other major Internet services such as FTP,
email, and news into one standard interface. Second, the Web incorporates a "hypertext"
system that allows individual Web "pages" to provide direct "links" to other Web pages, files,
and other types of information. Thus, full-scale user interfaces and complex services such as
online shopping, continuously-updated news information, and interactive games can be
provided through the Internet over a non-proprietary system. The Web thus forms the
foundation for virtually all of the new Internet-based services that are now being developed.
4. Governance and Management
There is no one entity or organization that governs the Internet. Each facilities-based
network provider that is interconnected with the global Internet controls operational aspects of
its own network. With the demise of the NSFNET backbone, no one can even be sure
about the exact amount of traffic that passes across the Internet, because each backbone
provider can account only for its own traffic and there is no central mechanism for these
providers to aggregate their data. Nonetheless, the Internet could not function as a pure
anarchy. Certain functions, such as domain name routing and the definition of the TCP/IP
protocol, must be coordinated, or traffic would never be able to pass seamlessly between
different networks. With tens of thousands of different networks involved, it would be
impossible to ensure technical compatibility if each network had to coordinate such issues
with all others.
These coordinating functions have traditionally been performed not by government
agencies, but by an array of quasi-governmental, intergovernmental, and non-governmental
bodies. The United States government, in many cases, has handed over responsibilities to
these bodies through contractual or other arrangements. In other cases, entities have simply
emerged to address areas of need.
The broadest of these organizations is the Internet Society (ISOC), a non-profit
professional society founded in 1992. ISOC organizes working groups and conferences, and
coordinates some of the efforts of other Internet administrative bodies. Internet standards and
protocols are developed primarily by the Internet Engineering Task Force (IETF), an open
international body mostly comprised of volunteers. The work of the IETF is coordinated by
the Internet Engineering Steering Group (IESG), and the Internet Architecture Board (IAB),
which are affiliated with ISOC. The Internet Assigned Numbers Authority (IANA) handles
Internet addressing matters under a contract between the Department of Defense and the
Information Sciences Institute at the University of Southern California.
The legal authority of any of these bodies is unclear. Most of the underlying
architecture of the Internet was developed under the auspices, directly or indirectly, of the
United States government. The government has not, however, defined whether it retains
authority over Internet management functions, or whether these responsibilities have been
delegated to the private sector. The degree to which any existing body can lay claim to
representing "the Internet community" is also unclear. Membership in the existing Internet
governance entities is drawn primarily from the research and technical communities, although
commercial activity is far more important to the Internet today than it was when most of
these groups were established.
D. Development of the Internet Market
1. The Internet Today
As of January 1997 there were over sixteen million host computers on the Internet, more
than ten times the number of hosts in January 1992. Several studies have produced different
estimates of the number of people with Internet access, but the numbers are clearly substantial
and growing. A recent Intelliquest study pegged the number of subscribers in the United
States at 47 million, and Nielsen Media Research concluded that 50.6 million adults in the
United States and Canada accessed the Internet at least once during December 1996 --
compared to 18.7 million in spring 1996. Although the United States is still home to the
largest proportion of Internet users and traffic, more than 175 countries are now connected to
the Internet.
According to a study by Hambrecht & Quist, the Internet market exceeded one billion
dollars in 1995, and is expected to grow to some 23 billion dollars in the year 2000. This
market is comprised of several segments, including network services (such as ISPs); hardware
(such as routers, modems, and computers); software (such as server software and other
applications); enabling services (such as directory and tracking services); expertise (such as
system integrators and business consultants); and content providers (including online
entertainment, information, and shopping). The Internet access or "network services" portion
of the market is of particular interest to the FCC, because it is this aspect of the Internet that
impacts most directly on telecommunications facilities regulated by the Commission. There
are now some 3,000 Internet access providers in the United States, ranging from small start-
ups to established players such as Netcom and AT&T to consumer online services such as
America Online.
2. Internet Trends
Perhaps the most confident prediction that can be made about the Internet is that it will
continue to grow. The Internet roughly doubled in users during 1995, and this trend appears
to be continuing. Figure 4 shows one projection of the growth in residential and business
users over the remainder of the decade. Estimates suggest as many as half a billion people
will use the Internet by the year 2000.
As the Internet grows, methods of accessing the Internet will also expand and fuel
further growth. Today, most users access the Internet through either universities, corporate
sites, dedicated ISPs, or consumer online services. Telephone companies, whose financial
resources and network facilities dwarf those of most existing ISPs, have only just begun to
provide Internet access to businesses and residential customers. Cable companies are also
testing Internet access services over their coaxial cable networks, and satellite providers have
begun to roll out Internet access services. Several different forms of wireless Internet access
are also being deployed.
FIGURE 4 -- INTERNET GROWTH PROJECTIONS
At the same time as these new access technologies are being developed, new Internet
clients are also entering the marketplace. Low-cost Internet devices such as WebTV and its
competitors allow users to access Internet services through an ordinary television for a unit
cost of approximately $300, far less than most personal computers. Various other devices,
including "network computers" (NCs) for business users, and Internet-capable video game
stations, promise to reduce the up-front costs of Internet access far below what it is now.
These clients promise to expand greatly the range of potential Internet users. Moreover, as
Internet connectivity becomes embedded into ordinary devices (much as computer chips now
form the brains of everything from automobiles to microwave ovens), the Internet "market"
will expand even more.
Bandwidth will continue to increase to meet this new demand, both within the Internet
backbones and out to individual users. There is a tremendous level of pent-up demand for
bandwidth in the user community today. Most users today are limited to the maximum speed
of analog phone lines, which appears to be close to the 28.8 or 33.6 kbps supported by
current analog modems, but new technologies promise tremendous gains in the bandwidth
available to the home. In addition, the backbone circuits of the Internet are now being
upgraded to OC-12 (622 Mbps) speeds, with far greater speeds on the horizon. With more
bandwidth will come more services, such as full-motion video applications. Virtually every
one of the challenges identified in this paper will become more acute as bandwidth and usage
increase, and as the current limitations of the Internet are overcome. Thus, even though some
of the questions that the Internet poses are of limited practical significance today, policy-
makers should not wait to consider the implications of the Internet.
Throughout the history of the Internet, seemingly insurmountable obstacles have been
overcome. Few people would have expected a network designed for several dozen
educational and research institutions to scale to a commercial, educational, and entertainment
conduit for tens of millions of users, especially with no means of central coordination and
administration. Governments should recognize that the Internet is different from traditional
media such as telephony and broadcasting, although lessons can be learned from experience in
dealing with those technologies. At the same time, the Internet has always been, and will
continue to be, influenced by the decisions of large institutions and governments. The
challenge will be to ensure that those decisions reinforce the traditional strengths of the
Internet, and tap into the Internet's own capability for reinvention and problem-solving.
III. CATEGORY DIFFICULTIES
The FCC has never directly exercised regulatory jurisdiction over Internet-based
services. However, the rapid development of the Internet raises the question of whether the
language of the Communications Act of 1934 (as amended by the Telecommunications Act of
1996), or existing FCC regulations, cover particular services offered over the Internet.
Governments act by drawing lines, such as the jurisdictional lines that identify which
governmental entity has authority over some activity, or the service classifications that
differentiate which body of law should be applied in a particular case. Governments
traditionally determine the treatment of new services by drawing analogies to existing
services. For example, the FCC regulates long-distance telephony, but does not regulate dial-
up remote access to corporate data networks. ISPs almost exclusively receive calls from their
subscribers, but so do retailers taking catalog orders or radio stations holding call-in
promotions. Figure 5 shows how dial-up access to the Internet resembles, but differs
from, other types of connections.
There are reasons to believe that a simple process of drawing analogies to familiar
services will not be appropriate for the Internet. The Internet is simultaneously local,
national, and global, and is almost infinitely plastic in terms of the services it can support.
As a result, it confounds any attempt at classification. Failure to consider such category
difficulties is, however, itself a form of line drawing. As long as some communications
services are subject to regulatory constraints, legal boundaries will be necessary. New
approaches may therefore be necessary to avoid inefficient or burdensome results from
existing legal and regulatory categories.
A. FCC Authority Generally
The Communications Act provides little direct guidance as to whether the Commission
has authority to regulate Internet-based services. Section 223 concerns access by minors to
obscene, harassing, and indecent material over the Internet and other interactive computer
networks, and sections 254, 706, and 714 address mechanisms to promote the availability of
advanced telecommunications services, possibly including Internet access. Section 230 states
a policy goal "to preserve the vibrant and competitive free market that presently exists for the
Internet and other interactive computer services, unfettered by Federal or State regulation."
None of these sections, however, specifically addresses the FCC's jurisdiction.
FIGURE 5 -- WHAT IS THE CORRECT ANALOGY?
In fact, nothing in the Act expressly limits the FCC's authority to regulate services and
facilities connected with the Internet, to the extent that they are covered by more general
language in any section of the Act. Although some early versions of the bill that became the
1996 Act contained language prohibiting "economic regulation" or "content or other
regulation" of the Internet by the FCC, such language does not appear in the final version of
the Act. Moreover, it is not clear what such a prohibition would mean even if it were
adopted. The Communications Act directs the FCC to regulate "interstate and foreign
commerce in communication by wire and radio," and the FCC and state public utility
commissions indisputably regulate the rates and conditions under which ISPs purchase
services and facilities from telephone companies. Would a prohibition on FCC "regulation" of
the Internet invalidate limits on the rates LECs can charge to ISPs? Would such language
prevent the FCC from mandating discounted Internet access for schools and libraries? Such
language would likely result in confusion at best.
Given the absence of clear statutory guidance, the Commission must determine whether
or not it has the authority or the obligation to exercise regulatory jurisdiction over specific
Internet-based activities. The Commission may also decide whether to forbear from
regulating certain Internet-based services. Forbearance allows the Commission to decline to
adopt rules that would otherwise be required by statute. Under section 401 of the 1996 Act,
the Commission must forbear if regulation would not be necessary to prevent anticompetitive
practices and to protect consumers, and forbearance would be consistent with the public
interest. Finally, the Commission could consider whether to preempt state regulation of
Internet services that would be inconsistent with achievement of federal goals.
The Commission has struggled with such questions before as new technologies emerged.
For example, prior to the passage of federal legislation in the 1980s, the Communications Act
had no provisions that would directly cover cable television. The Commission concluded
that, because of the competitive implications of cable for the regulated broadcasting industry,
jurisdiction over cable television was "reasonably ancillary" to the Commission's established
authority. Section 303 of the Communications Act of 1934 states broadly that:
the Commission from time to time, as public convenience, interest, or necessity
requires shall ... [m]ake such rules and regulations and prescribe such restrictions
and conditions, not inconsistent with law, as may be necessary to carry out the
provisions of this Act....
This language gives the Commission broad authority to use its expertise to address novel
situations. The Internet, however, is not cable television, and the FCC today is moving
rapidly to deregulate existing services rather than to expand the scope of its regulatory ambit.
Nonetheless, it would be difficult to claim that the Internet does not, at some level, involve
interstate communications, or that the Internet will not at some point (if it does not already)
have a significant competitive impact on existing providers of regulated communications
services. Moreover, the only way to wholly exclude the Internet from regulation would be to
develop a precise definition of what is and is not an "Internet" service, now and in the future,
which is precisely what the nature of the Internet makes difficult.
The FCC's theoretical jurisdiction over the Internet is quite expansive, because the
Internet relies on communications facilities and services over which the FCC has longstanding
and broad authority. Such a conclusion, however, provides little or no guidance in answering
the question about how the Commission should act towards Internet-based services and
companies. For example, the Commission's existing framework for "enhanced services"
provided through the telephone network, developed in the Computer II proceeding, states that
the FCC has authority to regulate these services, but that regulation would not serve the
public interest.
Those who oppose "regulation of the Internet" generally do not wish to make the
Internet a zone in which all government authority, such as prohibitions on theft and fraud, or
guarantees of property rights, cease to exist. Rather, the debate is about whether new legal
constructs are needed to address Internet-based transactions, and whether existing constructs
meant for different situations should be applied to the Internet. In other words, would a
particular type of service, offered by a particular type of company, be subject to particular
requirements and prohibitions?
The Commission can and should greatly limit the extent to which its actions interfere
with the functioning of the Internet services market. Communications regulation has
traditionally been justified by the presence of dominant firms, by overwhelming public
interest imperatives, or by the inherent invasiveness of broadcast media. Most of these
justifications simply do not exist in the Internet realm.
B. Telephony
1. Legal Framework
a. Carrier Obligations
Title II of the Act generally regulates the activities of two overlapping classes of
entities: communications common carriers and telecommunications service providers. Under
the 1934 Act, common carriers (such as telephone companies) must be certificated and file
tariffs setting forth a schedule of their charges in order to provide service to the public.
Common carriers are prohibited from unreasonably denying requested services, or from
unreasonably discriminating in their terms and conditions of service, and are subject to
various other requirements and fees.
The 1996 Act adds a related category, "telecommunications" service, defined as follows:
The term "telecommunications" means the transmission, between or among points
specified by the user, of information of the user's choosing, without change in the form
or content of the information as sent and received.
The term "telecommunications carrier" means any provider of telecommunications
services.... A telecommunications carrier shall be treated as a common carrier under this
Act only to the extent that it is engaged in providing telecommunications services....
The term "telecommunications service" means the offering of telecommunications for a
fee directly to the public, or to such classes of users as to be effectively available to the
public, regardless of the facilities used.
To what degree do Internet-based services meet the three-pronged definition of
"telecommunications?" For example, the sender of an email message selects the person to
receive the information and chooses the information to be transmitted, with no alteration
(other than protocol conversion and other administrative overheads of the network) of the
information sent and received. Real-time "Internet relay chat" and "Internet telephony" are
even easier to fit within the statutory definition. If some Internet services fall within the
definition of "telecommunications," however, who are the "carriers" that should be subject to
regulation? Would it be possible to regulate some services and not others, such as Usenet
newsgroups, which do not seem to satisfy the three-pronged test?
Ultimately, such micro-level exercises in statutory interpretation can lead to results that
appear strange or worse. Common sense suggests that Congress did not intend to treat any
company that facilitates the transmission of email as a local telephone company, subject to
the full panoply of public-utility-derived regulation that applies to such companies.
Nonetheless, the language of the statute cannot be ignored.
b. Basic vs. Enhanced Services
Beginning with the Computer II proceeding in the 1970s, the Commission has
distinguished between "basic" and "enhanced" communications services. Basic services are
standard voice transmission offerings, while enhanced services are defined as:
...services, offered over common carrier transmission facilities used in interstate
communications, which employ computer processing applications that act on the format,
content, code, protocol or similar aspects of the subscriber's transmitted information;
provide the subscriber additional, different, or restructured information; or involve
subscriber interaction with stored information.
Specific enhanced services include protocol processing, alarm monitoring, voice messaging,
and electronic publishing, as well as the provision of access to data networks such as
commercial online services and the Internet.
The basic/enhanced framework has two primary purposes. First, it defines a class of
enhanced service providers (ESPs) that use the telephone network but are not subject to
regulation under Title II of the Communications Act. Although the FCC may have
jurisdiction to regulate ESPs, such regulation would be unnecessary and harmful to the
development of the competitive enhanced services industry. Second, it provides a framework
to ensure that when incumbent LECs (in particular the regional Bell Operating Companies
(BOCs)) offer enhanced services, they do not use their control over bottleneck basic services
to disadvantage competing ESPs. The 1996 Act incorporates something similar to the
basic/enhanced dichotomy in its distinction between telecommunications and "information"
services.
The Internet in its current form did not exist at the time the FCC created the
basic/enhanced distinction. However, in Computer II and in subsequent orders, the
Commission has addressed the implications of packet-switching technologies for this
framework. In Computer II, the Commission described basic communications services as
providing "pure transmission capability over a communications path that is virtually
transparent in terms of its interaction with customer-supplied information." The use of
packet switching and error control techniques "that facilitate the economical, reliable
movement of [such] information [do] not alter the nature of the basic service." Thus, for
example, in subsequent decisions the Commission has determined that packet-switched
networks following X.25 protocols, and frame relay service offerings, provide a basic
transport service.
Although some underlying packet-switched transport functions are considered to be basic
services, Internet access has always been treated as an enhanced service. ISPs have never
been subject to regulation by the FCC under Title II of the Communications Act. In addition,
BOCs have been required to file comparably efficient interconnection (CEI) plans when they
themselves offer Internet access, to ensure that they do not disadvantage competing ISPs.
ISPs engage in various information processing functions, such as authentication, email storage
and retrieval, Web page hosting, and domain name server lookups. Many ISPs, especially
online services such as America Online, offer access to local content through databases,
message boards, and chat areas. These functions involve substantial computer processing and
interaction with customer-supplied information, and therefore fall squarely within the
definition of enhanced services.
2. Implications
The legal and regulatory categories described above have significant consequences.
Because of the unique characteristics of the Internet, as described in this paper, such general
frameworks may produce unintended results when applied to Internet-based services.
Discussions of the status of ISPs or specific Internet services should not be based solely on
abstract legal analysis, but rather should take into account the real-world implications of such
decisions.
a. Section 251 Interconnection Obligations
Sections 251 and 252 of the 1996 Act mandate that incumbent LECs take various steps
to open their local networks to competition. Under these sections, incumbent LECs must
make interconnection, unbundled network elements, and wholesale services available to such
new entrants at reasonable rates. However, under the terms of section 251, these services
are available only to "requesting telecommunications carriers." In the Local Competition
Order, which implemented section 251, the Commission concluded that providers fell within
this definition only to the extent that they provided telecommunications directly to the public.
Thus, companies that provide both information and telecommunications services are able to
request interconnection, unbundled network elements, and resale under section 251, but
companies that provide information services only are not. The Commission did not state
more specifically how it would define the two categories for this purpose, although it did
conclude that companies that provided both telecommunications and information services
should be considered telecommunications service providers in this context.
Because, under Section 251(c)(3), LECs must permit purchasers of their unbundled
network elements to combine such elements in order to provide a telecommunications
service, Internet access providers may be able to design their networks more efficiently and
economically by using unbundled elements in this manner. In order to do so, however, such
companies must overcome the "telecommunications carrier" restriction in the Act. One means
of doing so would be to classify themselves as providers of telecommunications service, and
thus be subject to the requirement that they interconnect with all other carriers and
potentially other regulatory provisions governing telecommunications carriers. Some ISPs are
already considering this course.
Alternatively, Internet access providers could enter into an arrangement with a
telecommunications carrier, such as an IXC, which could purchase the unbundled elements
and in effect resell them to the ISP. MFS Worldcom, which provides telecommunications
service but owns a major ISP, UUNet, is already exploring this latter course, purchasing
unbundled loops and using them to offer high-speed ISDN and xDSL Internet access to
corporate customers through UUNet. The FCC's Local Competition Order expressly stated
that incumbent LECs could not restrict the services that competitors could provide over
unbundled network elements.
Other possible mechanisms under which Internet access providers could make use of the
unbundling provisions of the 1996 Act would likely require additional action by the FCC to
clarify the legal framework. For example, ISPs could negotiate directly with LECs to lease
network elements they needed to offer high-speed data services, outside of the framework of
section 251. Such arrangements could be embodied in experimental or contract tariffs,
subject to Commission approval. Because section 251 would always be available as a
fallback that the ISPs could use to gain access to similar facilities, as described in the
previous paragraph, the FCC would not need to scrutinize closely the rates LECs charged
under such arrangements. At this time, however, there is no legal basis for LECs and ISPs to
negotiate such agreements outside of section 251.
Another theoretically possible route would be through the Commission's open network
architecture (ONA) process, which was designed prior to the passage of the 1996 Act to give
enhanced service providers access to elements of local networks. However, ONA has been
criticized by many ESPs as being cumbersome and ineffective for achieving true network
unbundling. ONA was also designed to facilitate unbundling of software functionality within
LEC switches, rather than physical network elements.
The interconnection provisions of the 1996 Act also require that pricing for "transport
and termination of traffic" between telecommunications carriers be based on reciprocal
compensation. In other words, when a user on one carrier's network makes a local call to a
user on a second carrier's network, the first carrier must pay the second carrier for
terminating that call. Reciprocal compensation arrangements operate on the assumption that
traffic between two networks will be relatively balanced, because on average users receive
about as many calls as they make. In the case of an Internet service provider, this assumption
breaks down. ISPs exclusively receive calls from their subscribers over LEC networks.
Therefore, if an ISP were considered a telecommunications carrier under section 251, LECs
would presumably be required to pay that ISP for terminating traffic on the ISP's network.
This result would represent the opposite of the current flow of funds, in which ISPs pay LECs
for connecting to the LEC network to receive calls.
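The traffic imbalance described above can be illustrated with a toy calculation. In the sketch below, the per-minute compensation rate and the minute totals are assumed figures for illustration only, not actual tariffed values:

```python
# Toy model of reciprocal compensation between two local carriers.
# Compensation nets out when traffic is roughly balanced; for ISP-bound
# traffic it flows one way, so the originating LEC pays the full charge.

RATE_PER_MIN = 0.005  # assumed reciprocal compensation rate, $/minute


def net_payment(minutes_a_to_b, minutes_b_to_a, rate=RATE_PER_MIN):
    """Net amount carrier A owes carrier B for terminating A's traffic."""
    return (minutes_a_to_b - minutes_b_to_a) * rate


# Balanced residential traffic: payments between the carriers net to zero.
balanced = net_payment(1_000_000, 1_000_000)   # 0.0

# A carrier serving only ISPs receives calls but originates almost none,
# so the LEC whose subscribers dial in would owe the entire amount.
one_way = net_payment(1_000_000, 0)            # 5000.0
```

The asymmetry, not the rate level, is what drives the result: whatever the per-minute figure, a carrier whose customers only receive calls becomes a pure recipient of compensation.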
b. Section 254 Universal Service Obligations
Under section 254, all "telecommunications carriers" that provide "interstate
telecommunications services" must contribute to mechanisms established to preserve and
advance universal service. The Commission may require "any other provider of interstate
telecommunications" to also contribute to such mechanisms, "if the public interest so
requires." Thus, to the extent that, as discussed above, Internet access providers or others are
considered to be both "telecommunications carriers" and providers of "interstate
telecommunications services," the Act requires these entities to participate in whatever federal
universal service funding mechanism the Commission ultimately adopts.
Pursuant to section 254, the Commission convened a federal-state joint board to
recommend an explicit and nondiscriminatory funding mechanism for universal service. In its
recommendations, the joint board concluded that information and enhanced service providers
should not be required to contribute to the universal service funding mechanism. The joint board
also concluded that Internet access services provided to schools and libraries should be
entitled to universal service subsidies under section 254(h).
The joint board recommendations, however, leave open several questions. As with the
interconnection rules, the precise definition of "telecommunications" and "information"
services as applied to various types of Internet-based service providers remains unclear. The
decision that information service providers are not required to contribute to universal service
funding, but can receive universal service subsidies under section 254(h), raises issues of
competitive equity when such companies are competing with traditional telecommunications
carriers to provide connectivity to schools and libraries. Finally, although the joint board
concluded that it would be unreasonable to require ISPs to segregate their revenues between
"content" and "conduit" services, the universal service framework is designed only to
subsidize connections, not proprietary content.
c. Internet Telephony
Several companies now offer software that allows for real-time voice conversations over
the Internet (Internet telephony or "voice on the Net" (VON)). These services work by
converting voices into data which can be compressed and split into packets, which are sent
over the Internet like any other packets and reassembled as audio output at the
receiving end. Most Internet telephony software today requires both users to use computers
that are connected to the Internet at the time of the call, but some recently announced services
will allow the receiving party, or even both parties, to use an ordinary POTS telephone.
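The convert-packetize-reassemble cycle described above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's actual product: real Internet telephony software uses audio codecs and real-time transport protocols, but the essential steps are the same — split the compressed voice stream into small numbered packets, send each independently, and restore the original order at the receiving end.

```python
# Illustrative sketch of Internet telephony packetization: compressed
# voice data is split into sequence-numbered packets that may arrive out
# of order, then reordered and rejoined into an audio stream.

def packetize(audio_bytes, packet_size=160):
    """Split an audio stream into (sequence number, payload) packets."""
    return [
        (seq, audio_bytes[i:i + packet_size])
        for seq, i in enumerate(range(0, len(audio_bytes), packet_size))
    ]


def reassemble(packets):
    """Reorder packets by sequence number and rebuild the audio stream."""
    return b"".join(payload for _, payload in sorted(packets))


audio = bytes(range(256)) * 3        # stand-in for compressed voice data
packets = packetize(audio)
packets.reverse()                     # simulate out-of-order arrival
assert reassemble(packets) == audio   # receiver restores the original
```

The sequence numbers are what make packet-switched voice workable: because each packet is routed independently, the receiver cannot rely on arrival order and must reconstruct it.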
FIGURE 6 -- INTERNET VS. CONVENTIONAL TELEPHONY
Internet telephony consultant Jeff Pulver estimates that approximately 55,000 - 60,000
people now use Internet telephony products on a weekly basis, although usage has been
increasing rapidly and a much larger number of people have access to Internet telephony
software. Netscape and Microsoft, the manufacturers of the leading Web browser software,
have released versions of their software that incorporate Internet telephony.
The FCC has not attempted to regulate the companies that provide the software and
hardware for Internet telephony, or the access providers that transmit their data, as common
carriers or telecommunications service providers. In March 1996, America's Carriers
Telecommunication Association (ACTA), a trade association primarily comprised of small and
medium-size interexchange carriers, filed a petition with the FCC asking the Commission to
regulate Internet telephony. ACTA argues that providers of software that enables real-time
voice communications over the Internet should be treated as common carriers and subject to
the regulatory requirements of Title II. The Commission has sought comment on ACTA's
request. Other countries are considering similar issues.
The ACTA petition raises the fundamental question of whether a service provided over
the Internet that appears functionally similar to a traditionally-regulated service should be
subject to existing regulatory requirements. The petition argues that VON providers should
be considered as fundamentally analogous to switchless long-distance resellers, and thus
should pay the same rates to LECs for use of local networks to originate and terminate
interstate calls. Under this analysis, shown in Figure 6, the current pricing structure allows
VON providers to charge an effective usage charge of zero, while long-distance carriers must
pass on roughly six cents per minute in access charges for every interstate call.
ACTA's view, however, oversimplifies the comparison between VON and long-distance
voice telephony. There are many differences, beginning with quality of service. Most
existing systems require both parties to be connected to the Internet through a personal
computer at the time of the call, and the sound quality of Internet telephony products tends to
be appreciably worse than that of circuit-switched voice telephony. At this time, Internet
telephony is in most cases not a comparable substitute for long-distance voice service.
However, distinctions in quality and ease of use should not be the sole basis for
regulatory decisions. Cellular telephony typically provides poorer sound quality than wireline
service, but this fact does not affect the classification of cellular as a telecommunications
service. Moreover, service providers are working to improve sound quality and ease of use,
and several providers have begun to deploy "gateways" that allow Internet telephony
conversations to be terminated or even originated on an ordinary telephone. When such
gateways are used, however, the pricing structure changes. Gateway providers must pay for
hardware at points of presence to route voice traffic between the Internet and the voice
network, and must also pay local exchange carriers to terminate or originate calls over voice
lines. Thus, gateway providers plan to charge per-minute rates for their Internet telephony
services, rather than the "free" calling available through current computer-computer Internet
telephony products.
Even these current products, however, do not really provide for "free" calling. Service
providers and users still must pay for their connections to the local phone network, and for
their connections to the Internet. If these services are priced in an inefficient manner, the
issue is not one related to Internet telephony, but is a broader question about the pricing for
Internet access and enhanced services that use local exchange networks. The issue of pricing
for Internet access is discussed in detail in the following section. The fact that some Internet
packets now encode voice rather than data does not alter the fundamental economics and
technical characteristics of network traffic. If anything, a shift toward usage of the Internet
for voice telephony might result in usage patterns that looked more like those of circuit-
switched voice calling. The issue of how exactly Internet telephony affects network usage,
and how pricing affects usage of Internet telephony, is not at all settled. Local calling
throughout virtually all of the United States is priced on a flat-rated basis, yet people do not
tend to stay on the phone all day.
Internet telephony is also technically different from long-distance voice calling. A
circuit-switched voice call uses an entire 56 kbps channel for every call. By contrast, Internet
telephony uses digital compression techniques that can encode voice transmissions in as little
as 4 kbps. Internet telephony is also packet switched, which means that it does not tie up a
call path for the portion of the call carried over the packet-switched Internet. Of course,
when a packet-switched Internet telephony call is originated through a modem over a dial-up
circuit-switched connection to an ISP, the potential efficiency benefits of packet-switched
voice transmission may not be realized. In some cases, the long-distance and international
voice transmission networks, which are in most cases digital today, may actually do a better
job of compression than Internet telephony products. All of these possibilities, however,
reinforce the notion that the cost comparison between Internet and circuit-switched voice
telephony is not obvious, and is highly contingent on network arrangements that are evolving
rapidly.
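The bandwidth arithmetic above can be sketched in a few lines. The figures used (a 56 kbps circuit per call, voice compressed to as little as 4 kbps) are those cited in the text; actual codecs and network arrangements vary.

```python
# Illustrative comparison of circuit-switched and compressed packet voice.
# Figures are those cited above; real codecs and networks vary.

CIRCUIT_KBPS = 56      # bandwidth reserved for one circuit-switched call
COMPRESSED_KBPS = 4    # bandwidth of a highly compressed voice stream

calls_per_circuit = CIRCUIT_KBPS // COMPRESSED_KBPS
print(f"One 56 kbps circuit could carry up to {calls_per_circuit} compressed calls")

# Packet switching adds a further saving: silent periods generate no packets,
# whereas a circuit remains reserved for the entire duration of the call.
```

As the text notes, this theoretical saving may not be realized when the call originates over a dial-up circuit-switched connection to an ISP.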
Finally, as a practical and policy matter, regulation of Internet telephony would be
problematic. It would be virtually impossible, for example, for the FCC to regulate as
carriers those companies that merely sell software to end users, or to require that ISPs
segregate voice and data packets passing through their networks for regulatory purposes.
Rather, VON software could more appropriately be compared to unregulated customer
premises equipment (CPE), like telephone handsets, which facilitate calling but do not
themselves carry calls from one party to another. Moreover, although ACTA claims that
Internet telephony unfairly deprives interexchange carriers of revenues, others argue that these
services provide valuable competition to incumbent carriers. The existing systems of access
charges and international accounting rates, to which long-distance carriers are subject, are
both inefficient artifacts of monopoly regulatory regimes. If circuit-switched long-distance
carriers are paying excessive and inefficient rates as a result, the best answer is to reform
those rates rather than attempting to impose them on other parties.
The FCC should consider whether to exercise its preemption authority in connection
with Internet telephony. ACTA has submitted a petition, similar to its FCC filing, to the
Florida Public Service Commission. In addition, the Nebraska Public Service Commission
staff recently concluded that an Internet telephony gateway service operated by a Nebraska
ISP was required to obtain a license as a telecommunications carrier. If federal rules
governing Internet telephony are problematic, state regulations seem even harder to justify.
As discussed below in section D, there is a good argument that Internet services should be
treated as inherently interstate. The possibility that fifty separate state Commissions could
choose to regulate providers of Internet telephony services within their state (however that
would be defined) already may be exerting a chilling influence on the Internet telephony
market. Netscape, in its comments on the ACTA petition, argued that the Commission
should assert exclusive federal jurisdiction and preempt states from regulating Internet
telephony.
C. Broadcasting and Cable
The provision of real-time, or "streaming" audio and video services over the Internet
raises the question of whether some Internet-based services qualify as "broadcasting" subject
to Title III of the Communications Act. "Broadcasting" is defined in the Act as:
(153)(6) Broadcasting. -- The term "broadcasting" means the dissemination of radio
communications intended to be received by the public, directly or by the intermediary of
relay stations.
"Internet radio" services exist today that transmit continuous, real-time audio over the
Internet. Many other sites now offer a selection of real-time audio clips that users can choose
to listen to, such as news, weather forecasts, and music. Users must access these sites,
generally through a World Wide Web browser, and must have the proper software and
hardware to receive and play streaming audio. Although analog modem bandwidth is largely
insufficient to support real-time video transmissions over the Internet, such services are
already available for users with higher-bandwidth connections. For example, software known
as CU-SeeMe has been available for some time that allows real-time video conferencing over
the Internet, and other products such as VDOLive will allow real-time simultaneous video
and audio conferencing. Live video of several events has been broadcast over the MBONE, a
service that allows certain users with high-speed connections to receive real-time video feeds
through the Internet.
The Commission has never considered whether any of the rules that relate to radio and
television broadcasters should also apply to analogous Internet-based services. The vast
majority of Internet traffic today travels over wire facilities, rather than the radio spectrum.
As a policy matter, however, a continuous, live, generally-available music broadcast over the
Internet may appear similar to a traditional radio broadcast, and the same arguments may be
made about streaming video applications. The Commission will need to consider the
underlying policy principles that, in the language of the Act and in FCC decisions, have
formed the basis for regulation of the television and radio broadcast industries. One
significant difference may be the fact that radio and television broadcasts are subject to the
inherent scarcity of the usable electromagnetic spectrum, whereas such transmissions over the
Internet are simply a different type of data packet, indistinguishable at any moment from
other types of traffic passing through the network.
Similar issues arise in the context of cable television regulations under Title VI of the
Communications Act. The Act defines "video programming" as "programming provided by,
or generally considered comparable to programming provided by, a television broadcast
station." A "cable service" means "the one-way transmission to subscriber of ... video
programming." A "cable system" is "a facility, consisting of a set of closed transmission
paths ... that is designed to provide cable service," but not "a facility that serves only to
retransmit the signals of one or more television broadcast stations; ... a facility that serves
subscribers without using any public right-of-way; ... [or] a facility of a common carrier" that
does not provide video programming directly to subscribers (except solely to provide
interactive on-demand services) or serve as an open video system under section 653 of the
Act.
To what extent is real-time video transmitted over the Internet "comparable" to
broadcast television? The technology of the current Internet limits video transmission, even
for users with relatively high-speed access, to relatively low-quality images. Most Internet
users today are able to connect to the network at only 14.4 kbps or 28.8 kbps, which supports
only rudimentary video images that can easily be distinguished from broadcast television
images. These limitations are not permanent, however. As compression technology develops
and end-user access speeds increase, Internet video applications will provide service that
increasingly resembles the quality of television broadcast stations. In addition, the number of
entities providing real-time video over the Internet is today relatively small, but is certain to
increase rapidly over time as bandwidth increases. It seems inevitable that, at some point,
consumers will be able to view images that are virtually indistinguishable in quality from,
and equally varied in selection as, those provided by television broadcasters. At what point will
the threshold of "comparability" be crossed?
A determination about whether Internet-based video applications constitute "video
programming" under the Act would not necessarily mean that these services would legally be
treated as cable systems. Section 602(7)(B) of the Act states that facilities that do not use
any public right-of-way are not considered cable systems. The Internet uses public right-of-
way to the extent that it runs over the existing telephone network, and in the future over
existing cable company facilities. The provision of video services over the Internet, however,
generally requires no additional use of public right-of-way beyond that necessary to provide
basic Internet connectivity, or to provide existing telephony or cable services. An additional
definitional issue is the extent to which Internet video services provided by common carriers
such as telephone companies are considered "interactive on-demand services," and therefore
not treated as cable systems, since many Internet-based video concepts require the user to
select a specific "program" to view. An Internet-based video service might be considered an
"open video system," since the Internet itself is an inherently open platform that allows
capacity to be shared among all entities with broadcast capabilities. Finally, certain providers
of Internet-based video services could be classified as "multichannel video programming
distributors" (MVPDs) under 602(11) of the Act. MVPDs are entities that "make[] available
for purchase, by subscribers or customers, multiple channels of video programming."
Policy-makers must consider the policy rationales behind Title VI of the Act, and
whether they apply to Internet-based video delivery systems with the same force. It does
appear, however, that at some point the Internet may have significant competitive effects. A
recent survey suggested that 61% of Internet users watch less television in order to spend
more time online. The FCC's 1995 Cable Competition Report notes the possibility that the
Internet will affect the video marketplace, "perhaps significantly," but concludes that it is too
early to assess the impact of the Internet on this market. At some point, if the Internet
continues to grow and greater bandwidth is widely available to end users, the Internet may
have even more significant competitive effects on the video marketplace. Moreover, with the
deployment of Internet access over cable facilities and digital cable set-top boxes,
the Internet may exert an influence over the cable market not only as a competitor, but as a
component of cable service as well.
The fact that the Internet may affect competition in the video marketplace is not itself a
justification for additional regulation. If the Internet enhances competition, it may in fact
justify reducing regulation on all video service providers. Moreover, existing regulations for
broadcasters and cable operators were never designed with Internet services in mind, and
could produce strange results if applied blindly to companies that enable streaming audio or
video transmissions over the Internet.
D. Relationship to Content
The FCC has made no effort to regulate the content of services transmitted over the
Internet. Nonetheless, the Commission does address content-related issues in broadcasting
(such as indecency and educational programming) and to a limited extent in telephony
(principally relating to dial-a-porn services).
The 1996 Act also more directly addresses Internet content with its so-called
"Communications Decency Act" provisions. These provisions criminalize the knowing
transmission using the Internet or other interactive computer services of indecent material to
children under the age of 18. The statute further states that "[i]t is a defense to a
prosecution" to show that a person has "requir[ed] use of a verified credit card, debit account,
adult access code, or adult personal identification number" or otherwise "has taken, in good
faith, reasonable, effective, and appropriate actions under the circumstances to restrict or
prevent access by minors" to indecent material. Although the primary focus of this section
of the Act is on criminal liability, the Act provides that the Commission may describe
additional measures "which are reasonable, effective, and appropriate to restrict access to
prohibited communications." At the same time, however, the Act places substantial limits
on the Commission's authority in this area:
Nothing in this section authorizes the Commission to enforce, or is intended to
provide the Commission with the authority to approve, sanction, or permit, the use
of such measures. The Commission shall have no enforcement authority over the
failure to utilize such measures. The Commission shall not endorse specific
products relating to such measures.
The Commission has not taken any action in response to this section of the Act, and
enforcement of these provisions is currently enjoined by a federal court, pending appeal to the
Supreme Court.
In most cases, the Commission's existing content rules would apply to Internet services
only to the extent that the Commission treated these services as broadcasting. Some activities
now conducted over the Internet would likely be prohibited if transmitted over television or
radio networks. For example, existing rules proscribe broadcasting of advertisements for
cigarettes and gambling services, but such companies have created sites on the Web.
The decentralized nature of the Internet may doom any attempt to regulate content in
order to prevent access to undesirable material. Many different kinds of entities and
individuals provide services through the Internet, and limited assumptions about providers or
recipients of information may prove unworkable. Creators of online content may have
differing levels of control over how the material they send or make available over interactive
computer networks such as the Internet can be accessed. Finally, the Internet is international
in scope, while the jurisdiction of governments that may seek to regulate Internet content is
limited to a single nation, creating both legal and practical difficulties. If content is hosted on
a server outside the United States, where the information provided is perfectly legal, can U.S.
law be extended to the provider of that content?
In general, the FCC should seek to avoid regulation of Internet content. The legal
rationales for FCC regulation of content in other media -- such as scarcity of transmission
capacity and invasiveness -- do not necessarily apply to the Internet. Moreover, the Internet
provides new mechanisms to solve the very problems it creates. Several companies now
provide filtering software that allows users -- such as parents -- to block access to
inappropriate Internet sites. Government regulation of content raises important constitutional
issues involving freedom of speech, and thus should not be undertaken lightly.
E. Administrative Issues
Unlike the voice network, which has evolved under the federal-state framework of the
Communications Act of 1934, the Internet has no built-in jurisdictional divisions. More
important, because the Internet is a dynamically routed, packet-switched network, only the
origination point of an Internet connection can be identified with clarity. Users generally do
not open Internet connections to "call" a discrete recipient, but access various Internet sites
during the course of a single connection. A voice call originates and terminates at two
discrete points, and therefore calls can readily be assigned into jurisdictional categories such
as local, intraLATA toll, interLATA intrastate, interLATA interstate, intraLATA interstate,
and international. The requirement that users dial ten digits instead of seven for calls outside
their area code provides some indication of the categorization of a particular call. Similarly, a
cable system has a defined boundary, and a broadcast signal, although propagating
indefinitely, must have a defined origination point.
For an Internet connection, by contrast, the user may have no idea where the sites he is
viewing are located. One Internet "call" may connect the user to information both across the
street and on the other side of the world. Furthermore, dynamic routing means that packets
may take different routes across the Internet to reach the same site, so even the location of
the site the user is contacting does not provide sufficient information to identify the routing of
the call for jurisdictional purposes. Internet routers have also not been designed to record
sufficient data about packets to support jurisdictional segregation of traffic.
Any regulatory system that applies different rules to different types of Internet services
would require, however, some method of identifying and/or segregating Internet traffic. For
example, if Internet telephony is subject to Title II of the Communications Act, but basic
Internet data connectivity is not, some system would be required to determine whether or not
Internet access providers are carrying telephony traffic. Internet protocols currently do not
differentiate between different types of packets in a manner that would allow this type of
monitoring, and the overhead of such a system could be considerable. Moreover, the
definition of what constitutes an "Internet phone call" is not obvious, and changing
technology may render any "bright lines" obsolete very rapidly.
Internet connections may also be used for many different purposes. Some uses of the
Internet -- such as voice telephony -- may fall more clearly within a plausible reading of the
Communications Act. However, service providers that carry such services may not even
know what type of data packets are passing through their networks at any given moment.
These characteristics pose difficulties for virtually every type of regulation. For
example, jurisdictional divisions are the basis not only of the regulatory status of companies
themselves, but also the decisions as to which rates regulated telephone companies can charge
to unregulated entities. Federal, state, and local governments use such distinctions as the
basis for deciding whether they have franchising or taxation authority over companies. The
problem is magnified because the Internet is international. Different countries may have
completely different laws governing issues such as acceptable content, intellectual property,
and privacy, and virtually any company that touches the global Internet could arguably be
subject to all of them. Moreover, any domestic regulatory regime must consider the treatment
of traffic that originates outside the United States, and therefore outside the jurisdiction of the FCC.
F. Toward a Rational Approach
The primary goal of this paper is to identify issues, not to offer specific policy
recommendations. It is important to remember that, despite the tremendous attention given to
the Internet in the past few years, it remains orders of magnitude smaller in terms of usage
and revenues than the voice telephone network in the United States. Many of the questions
raised here will answer themselves as service providers fine-tune their business models and as
the communications industry evolves. Once competition is sufficiently well-developed,
regulation may become largely unnecessary. At some point, companies will be disciplined
more strongly by market forces than by the dictates of regulators. Nonetheless, some
thoughts about how to address the categorization challenges raised in this section are
appropriate.
So long as some services are regulated, a line-drawing process must take place. When
Internet services are involved, this line drawing will be inherently messy and imprecise.
However, even the premise that Internet services should not be regulated requires a precise
assessment of what constitutes an "Internet" service. With the increasing prevalence of hybrid
services, joint ventures, and alternative technologies, such distinctions will always be difficult.
No matter how sophisticated the regulator, companies in the marketplace will devise clever
means of avoiding regulatory restrictions. No matter how well-intentioned the regulator,
government intervention in the private sector can have unexpected and unfortunate
consequences.
Thus, government should apply blunt instruments that achieve underlying goals, rather
than struggling for an elegant or precise solution that will cover every case. Wherever
possible, market forces should be harnessed to take the place of direct regulatory intervention.
Although new services like Internet telephony and streaming video may create legal
headaches, these developments are positive ones that government should encourage. Such
new technologies are valuable both because of the new options they represent for consumers
and because of the potential competitive pressure they may exert on incumbent providers.
The first task of government policy towards these new Internet-based services should
therefore be to identify those areas where regulation is clearly not appropriate. By
distinguishing these "easy cases," government can provide greater certainty to the private
sector that regulation will not be extended to the theoretical boundaries of statutory authority.
For example, when a company such as Vocaltec sells retail software that allows end users to
make voice phone calls through the Internet, and nothing more, it makes little sense to
classify that company as a telecommunications carrier subject to federal and state regulation.
Such software providers merely enable end users to utilize a functionality through the
network, much like companies that sell fax machines. They do not themselves transport
telecommunications traffic. Similarly, an ISP should not be classified as a
telecommunications carrier simply because some of its users choose to use Internet telephony
software to engage in voice calls. By stating that such companies are not subject to the
Communications Act, the FCC could eliminate fear and uncertainty, while still leaving room
to analyze the harder questions.
The next step should be to identify relatively simple and flexible structures that achieve
underlying policy goals. The initial assumption ought to be that new Internet-based services
should not be subject to the regulatory constraints of traditional services. Government policy
should be sensitive to the fact that technology is changing rapidly, and that the Internet
landscape a few years in the future may look very different than it does today. Market forces
may lead to the creation of differentiated classes of service, with users paying higher rates for
higher quality, thus de facto distinguishing between different types of service offerings,
without any intervention by the government.
The analytical process must work in both directions. Government should think not only
about the regulatory treatment of new services, but about the implications of those new
services for the regulatory treatment of existing services. If a competitive imbalance exists
because a new technology is not subject to the same regulatory constraints as a competing
older technology, the answer should be reduced regulation of the older technology. Of
course, such deregulation should be dependent on the existence of sufficient competition to
police the actions of incumbents. The ultimate objective, however, should be less regulation
for all, rather than more regulation for some.
IV. PRICING AND USAGE
The FCC does not regulate the prices charged by ISPs or Internet backbone providers.
However, the vast majority of users connect to the Internet over facilities of existing
telecommunications carriers. Those telecommunications carriers are subject to varying levels
of regulation at both the federal and the state level. Thus, regulatory decisions exercise a
profound influence over the economics of the Internet market. Economics will drive the
development of both the Internet and of other communications technologies. Consequently,
the pricing structure for Internet access, and its interrelationship to the public switched
telephone network, are of central importance.
A. Current Internet Access Pricing
To access the Internet, a user must pay an ISP, and any applicable charges to connect to
that ISP. Most ISPs charge a flat, monthly fee, although some assess a per-hour charge above
a certain monthly threshold. The vast majority of users reach their ISPs today through the
telephone network. The phone call to reach an ISP is usually considered a local call,
because the ISP has established a point of presence (POP) in that local calling area. Local
telephone service for residential users is typically a flat, monthly fee (in contrast to long-
distance service which is typically billed by the minute).
Thus, in the typical scenario for dial-up Internet access, as shown in Figure 7, an
Internet user "sees" a monthly telephone connection charge, a monthly charge from the ISP,
and a usage charge of zero. By contrast, a subscriber making a long-distance telephone call
today sees a monthly local connection charge from a LEC, plus a usage charge from an
interexchange carrier (IXC) for each minute of long-distance calling.
FIGURE 7 -- CURRENT INTERNET ACCESS PRICING
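The two bill structures described above can be compared with a simple sketch. All dollar figures below are illustrative assumptions, not actual 1997 tariffs; the point is the structural difference between a zero usage charge and per-minute billing.

```python
# Hypothetical monthly bills under the two pricing structures described above.
# All dollar amounts are illustrative assumptions, not actual tariffs.

def internet_user_bill(phone_line=20.00, isp_flat=19.95, hours_online=40):
    # Usage charge is zero: the call to the ISP's local POP is flat-rated,
    # so the bill is the same whether the user is online 4 hours or 400.
    return phone_line + isp_flat + 0.0 * hours_online

def long_distance_bill(phone_line=20.00, minutes=200, per_minute=0.10):
    # Long-distance usage is billed per minute by the IXC.
    return phone_line + minutes * per_minute

print(f"Dial-up Internet user: ${internet_user_bill():.2f}")
print(f"Long-distance caller:  ${long_distance_bill():.2f}")
```

Under these assumed figures the totals happen to be similar, but only the long-distance caller's bill grows with usage.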
There are three fundamental reasons why most Internet users do not pay usage charges:
(1) residential local service tends to be flat-rated, and ISPs have located their POPs to
maximize the number of subscribers who can reach them with a local call; (2) Internet
backbone providers tend to charge non-time-sensitive rates to each other and to ISPs; and (3)
ISPs typically connect to LECs through business lines that have no usage charges for
receiving calls.
Because Internet access is understood to be an enhanced service under FCC rules, ISPs
are treated as end users, rather than carriers, for purposes of the FCC's interstate access
charge rules. This distinction, created when the FCC established the access charge system in
1983, is often referred to as the "ESP exemption." Thus, when ISPs purchase lines from
LECs, the ISPs buy those lines under the same tariffs that any business customer would use --
typically voice grade measured business lines (1MBs) or 23 channel ISDN primary rate
interface (PRI). Although these services generally involve a per-minute usage charge in
addition to a monthly fee, the usage charge is assessed only for outbound calls. ISPs,
however, exclusively use these lines to receive calls from their customers, and thus effectively
pay flat monthly rates.
By contrast, IXCs that interconnect with LECs are considered carriers, and thus are
required to pay interstate access charges for the services they purchase. Most of the access
charges that carriers pay are usage-sensitive in both directions. Thus, IXCs are assessed per-
minute charges for both originating and terminating calls. As the Commission concluded in
the Local Competition Order, the rate levels of access charges appear to significantly exceed
the incremental cost of providing these services. The Commission in December 1996
launched a comprehensive proceeding to reform access charges in a manner consistent with
economic efficiency and the development of local competition.
The FCC originally explained its decision to treat ESPs as users rather than carriers as
a temporary response to concerns about "rate shocks" if ESPs were immediately forced to pay
access charges. In 1987, the FCC proposed to require ESPs to pay interstate access
charges, on the theory that ESPs used LEC networks in the same manner as IXCs, but this
proposal was withdrawn after intense opposition. In closing the 1987 docket, however, the
FCC explained that "this is not an appropriate time to assess interstate access charges on the
enhanced services industry," implying that it still viewed the treatment of ESPs as a
temporary accommodation. In the FCC rules, however, there is no "exemption" or "waiver;"
only carriers are subject to access charges, and ESPs are defined separately from carriers.
The Access Reform NPRM took up the question of whether enhanced service providers
should be subject to access charges as currently constituted, and tentatively concluded that
they should not. The Commission argued that, given the inefficiencies of the existing
access charge system, "[w]e see no reason to extend this regime to an additional class of
users, especially given the potentially detrimental effects on the growth of the still-evolving
information services industry." At the same time, the Commission issued a Notice of
Inquiry (NOI) seeking comment more broadly on actions relating to Internet and interstate
information service providers.
B. Network Economics
In recent years, there has been extensive academic literature on the economics of the
Internet. Much of the economic debate concerns the implications of various pricing models
for Internet usage. Pricing generates incentives that affect usage patterns, and that also affect
the manner in which service providers construct their networks. The FCC and state
commissions, through their regulatory authority over the rates charged by local phone
companies and other mechanisms, exercise great influence over the pricing of Internet access.
Therefore, the underlying economics of the Internet, and of networks generally, are of great
importance for any discussion of the relationship of the FCC to the Internet.
The value of networks to each user increases as additional users are connected. For
example, electronic mail is a much more useful service when it can reach fifty million people
worldwide than when it can only be used to send messages to a few hundred people on a
single company's network. The same logic applies to the voice telephone network, and is an
important underpinning of the FCC's public policy goal of universal service.
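The network effect described above is often quantified by counting the pairwise connections a network makes possible, which grows roughly with the square of the number of users. The sketch below is the standard combinatorial count, offered purely as an illustration of why each additional user adds value for all existing users.

```python
# Pairwise connections possible among n network users: n * (n - 1) / 2.
# Offered as an illustration of network effects, not a valuation model.

def possible_connections(n):
    return n * (n - 1) // 2

for users in (100, 10_000, 50_000_000):
    print(f"{users:>11,} users -> {possible_connections(users):,} possible pairs")
```

Doubling the user base roughly quadruples the number of possible connections, which is one way to state why universal connectivity is valuable.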
However, this increasing value also can lead to congestion. Network congestion is an
example of the "tragedy of the commons:" each user may find it beneficial to increase his or
her usage, but the sum total of all usage may overwhelm the capacity of the network. With
the number of users and host computers connected to the Internet roughly doubling each year,
and traffic on the Internet increasing at an even greater rate, the potential for congestion is
increasing rapidly. The growth of the Internet, and evidence of performance degradation, has
led some observers to predict that the network will soon collapse, although so far the
Internet has defied all predictions of its impending doom.
Two types of Internet-related congestion should be distinguished: congestion of the
Internet backbones, and congestion of the public switched telephone network when used to
access the Internet. These categories are often conflated, and from an end user standpoint the
point of congestion matters less than the delays created by the congestion. However, there
are two fundamental differences. First, prices that carriers charge for use of local exchange
facilities are regulated, while those that Internet backbone providers charge are not. This
regulatory distinction is based on the reality that today there is generally only one LEC that
an ISP can use in a given area, while there are many competing Internet backbone providers.
Second, the PSTN generally uses circuit switching, while the Internet is packet switched. The
congestion patterns and pricing issues for the PSTN, which carries both voice and Internet
traffic, are therefore different than those in the Internet backbone world.
Congestion of the Internet backbones results largely from the shared, decentralized
nature of the Internet. Because the Internet interconnects thousands of different networks,
each of which only controls the traffic passing over its own portion of the network, there is
no centralized mechanism to ensure that usage at one point on the network does not create
congestion at another point. Because the Internet is a packet-switched network, additional
usage, up to a certain point, only adds additional delay for packets to reach their destination,
rather than preventing a transmission circuit from being opened. This delay may not cause
difficulties for some services such as email, but could be fatal for real-time services such as
video conferencing and Internet telephony. At a certain point, moreover, routers may be
overwhelmed by congestion, causing localized temporary disruptions known as
"brownouts."
Backbone providers have responded to this congestion by increasing capacity. Most of
the largest backbones now operate at 155 Mbps (OC-3) speeds, and MCI has upgraded its
backbone to OC-12 (622 Mbps) speed. Backbone providers are also developing pricing
structures, technical solutions, and business arrangements to provide more robust and reliable
service for applications that require it, and for users willing to pay higher fees. Some
network providers, such as the @Home cable Internet service, are relying on "caching"
multiple copies of frequently-accessed documents to ease the congestion burden. In
addition, hardware vendors are working to improve the speed and interoperability of their
Internet routers and switches.
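The caching approach mentioned above can be illustrated with a minimal sketch: a local cache answers repeat requests for popular documents itself, so only the first request travels over the congested backbone. The fetch function and URL below are hypothetical stand-ins, not a description of @Home's actual system.

```python
# Minimal sketch of document caching: repeat requests for popular
# documents are served locally instead of crossing the backbone.
# The origin_fetch function and the URL are hypothetical examples.

cache = {}
backbone_fetches = 0

def origin_fetch(url):
    """Stand-in for retrieving a document over the backbone."""
    global backbone_fetches
    backbone_fetches += 1
    return f"<contents of {url}>"

def get(url):
    if url not in cache:          # cache miss: one backbone round trip
        cache[url] = origin_fetch(url)
    return cache[url]             # cache hit: served locally

# Ten requests for the same popular page cost only one backbone fetch.
for _ in range(10):
    get("http://example.com/popular-page")
print(f"requests: 10, backbone fetches: {backbone_fetches}")
```

The design choice is the one the text describes: trading inexpensive local storage for scarce backbone transmission capacity on frequently-accessed documents.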
Congestion on Internet facilities may also be alleviated by the development and
implementation of technical protocols, such as HTTP version 1.1, IP version 6, IP
multicasting, and RSVP, that facilitate more coordinated and efficient use of bandwidth.
These technologies may allow for more differentiated levels of service quality, with
associated differentiation in pricing. The pricing of backbone services may affect end user
charges for Internet access, even if ISPs continue to pay flat rates to LECs.
Internet backbone congestion raises many serious technical, economic, and coordination
issues. Higher-bandwidth access to the Internet will be meaningless if backbone networks
cannot provide sufficient end-to-end transmission speeds. Moreover, the expansion of
bandwidth available to end users will only increase the congestion pressure on the rest of the
Internet. However, Internet backbone providers are not regulated by the FCC in the same
manner as LECs. This paper concentrates primarily on the congestion and pricing issues that
affect the public switched telephone network, because it is in that area that decisions by the
FCC and other regulatory entities will have the greatest significance.
C. Implications for Local Exchange Carriers
Most residential subscribers reach their ISPs through dial-up connections to LEC
networks. Figure 8 shows the typical scenario for a dial-up user. A modem at the customer
premises is connected to a local loop, which is connected to a switch at a LEC central office.
ISPs also purchase connections to the LEC network. In most cases, ISPs either buy analog
lines under business user tariffs (referred to as "1MBs") or 23-channel primary rate ISDN
(PRI) service. When a call comes into an ISP, it is received through a modem bank or a
remote access server, and the data is sent out through routers over the packet-switched
Internet. Both subscribers and ISPs share usage of LEC switches with other customers.
FIGURE 8 -- TYPICAL DIAL-UP INTERNET ACCESS ARCHITECTURE
1. Pricing Issues
Ever since 1983, when the FCC first decided that ESPs would not be subject to
interstate access charges, parties have challenged the "ESP exemption" as an inefficient
temporary subsidy that unfairly deprives LECs of revenues. The FCC has itself come close
to endorsing this view in the past, most notably in the infamous "modem tax" proposal in
1987. Nonetheless, the current pricing structure for enhanced services has stayed in place
for fourteen years. The telecommunications landscape has changed tremendously in that time,
with the emergence of the Internet being among the most significant developments. The
Access Reform NPRM proposes to leave the current pricing structure for ISPs in place for
now. In the companion NOI, the Commission seeks comment on, among other issues, how
these changes should affect the pricing structure applicable to ISPs.
Access charges are designed to recover the LECs' interstate revenue requirements for
the underlying facilities. These revenue requirements were derived from rate-of-return,
accounting cost mechanisms designed to recover the embedded costs of monopoly LECs.
Since 1990, large LECs have been subject to price cap regulation of their access services,
which has allowed rate levels to diverge to some degree from embedded costs, but LEC
access charges are still not based on any calculation of forward-looking cost. Another aspect
of the revenue requirement that distorts rate levels is the fact that the jurisdictional
separations system apportions costs between the interstate and intrastate jurisdictions in a
manner that does not accurately reflect cost-causation. Finally, the interstate access charge
regime includes various forms of cost-shifting and averaging. For example, the carrier
common line charge (CCLC) is a per-minute charge that is assessed on all LEC access
customers, but it recovers costs associated with end user subscriber lines.
IXCs, ISPs, and others have long argued that access charges are substantially higher
than they would be in a competitive market, and the Commission essentially adopted this
view in the Access Reform and Local Competition proceedings. The argument for requiring
ESPs to pay access charges has generally been premised on the notion that ESPs impose
similar costs on the network to providers of interstate voice telephony, and that ESPs should
therefore pay the same rates for these services. For example, the FCC's 1987 proposal stated
that:
Enhanced service providers, like facilities-based interexchange carriers and resellers, use
the local network to provide interstate services. To the extent that they are exempt from
access charges, the other users of exchange access pay a disproportionate share of the
costs of the local exchange that access charges are designed to cover.
ESPs have rejected this analysis, and have claimed that they do not need or use many of the
features and functions of the network that IXCs require to set up voice calls. In addition,
ESPs have argued that imposition of interstate access charges would cause tremendous
damage to the enhanced services industry with the corresponding benefit of only a tiny
reduction in charges to other users. ESPs, and particularly the Internet access industry, have
also emphasized the public interest benefits of spurring growth in Internet access and other
enhanced services. According to a March 1995 white paper by the Commercial Internet
Exchange (CIX), a trade association of ISPs, "ESPs have enjoyed this status because of the
public policy need to foster an on-line nation."
The development of Internet telephony provides an additional argument that at least
some enhanced services use LEC networks in a manner similar to IXCs. Voice telephony
over the Internet may operate as a direct substitute for telephony service provided by IXCs
over their voice networks. Today, however, Internet telephony does not provide the same
quality and convenience as traditional voice telephony. Commercial Internet telephony
products are also a relatively new phenomenon, and as a result of these factors the number of
users of Internet telephony is minuscule compared to users of the voice network. These
characteristics may change in the future, especially if Internet telephony continues to be
available at significantly cheaper rates than conventional telephony. As discussed in the
previous section, the architecture of Internet telephony services differs from circuit-switched
voice telephony in ways other than quality and ease of use. The real questions concern the
economic implications of Internet services that use the public switched network.
The current pricing structure of wireline service in the United States operates on the
principle of "sender pays" for transactions between users and carriers. The fact that a
subscriber only pays for making phone calls, not for receiving them, does not mean that the
LEC does not incur costs for the subscriber to receive a call; it only means that those costs
are recovered indirectly through the rates users pay for outbound calls and monthly service
charges. This rule generally holds true even if the subscriber is a member of a distinct user category
with a different cost causation pattern. For example, a customer service center operated by a
computer company receives many times more calls from customers than it originates, but the
call usage is charged -- if at all -- to the customers. The 1996 Act essentially adopts a
"sender pays" rule for interconnection between carriers in the form of "transport and
termination" charges; originating telecommunications carriers must pay whenever they hand
off local traffic to another carrier.
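The "transport and termination" arrangement described above can be sketched as simple arithmetic: under sender-pays, the originating carrier compensates the terminating carrier for each minute of local traffic handed off. The per-minute rate below is an illustrative assumption, not a figure from the 1996 Act or any tariff.

```python
# Sketch of the "sender pays" rule for carrier interconnection:
# the originating carrier compensates the terminating carrier per
# minute of local traffic handed off. The rate is an assumption.

TERMINATION_RATE = 0.005   # dollars per minute (assumed)

def reciprocal_compensation(minutes_a_to_b, minutes_b_to_a):
    """Net payment from carrier A to carrier B under sender-pays.
    A negative result means B owes A."""
    return (minutes_a_to_b - minutes_b_to_a) * TERMINATION_RATE

# A carrier whose customers mostly receive calls (for example, one
# serving many ISPs) is a net recipient of termination payments.
print(f"${reciprocal_compensation(1_000_000, 200_000):.2f}")
```

This asymmetry is why the direction of traffic, not just its volume, matters in interconnection disputes.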
2. Switch Congestion
Several LECs and others now argue that the current pricing structure for Internet access
contributes to the congestion of LEC networks. Switch congestion can arise at three points in
LEC networks -- the switch at which the ISP connects to the LEC (the terminating switch),
the interoffice switching and transport network, and the originating end user switch. The
point of greatest congestion is the switch serving the ISP, because many different users call
into the ISP simultaneously.
LECs have engineered and sized their networks based on assumptions about voice
traffic. In particular, several decades of data collection and research by AT&T, Bellcore, and
others have shown that an average voice call lasts 3-5 minutes, and that the distribution
between long and short calls follows a well-established curve. Because very few people stay
on the line for very long periods of time, there is no need for LEC switches to support all
users of the switch being connected simultaneously. Instead, LEC switches are generally
divided into "line units" or "line concentrators" with concentration ratios of typically between
4:1 and 8:1 (see Figure 8). In other words, there are between four and eight users for every
call path going through the switch. Call blockage on the voice network tends to be negligible
because a significant percentage of users are unlikely to be connected simultaneously.
The distribution of Internet calls differs significantly from voice calls. In particular,
Internet users tend to stay on the line substantially longer than voice users. As shown in
Figure 9, several LECs and Bellcore have submitted studies to the Commission documenting
the difference between Internet and voice usage patterns. ISPs, although challenging the
methodologies and conclusions of the studies, generally acknowledge that Internet calls tend
to be longer than voice calls.
FIGURE 9 -- LEC INTERNET USAGE STUDIES
Because LEC networks have not been designed for these longer usage patterns, heavy
Internet usage can result in switches being unable to handle the load ("switch congestion").
Internet connections tie up an end-to-end call path through the PSTN for the duration of the
call. When the average hold time of calls through a switch increases significantly, the
likelihood of all available call paths through the switch being in simultaneous use also goes
up. If a particular line unit has an 8:1 concentration ratio, only one eighth of the subscriber
lines into that line unit need to be connected at one time in order to block all further calls.
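The effect of longer hold times on a concentrated line unit can be sketched with the standard Erlang-B blocking formula used in teletraffic engineering. The line count, concentration ratio, and call-attempt rate below are illustrative assumptions, not figures from the LEC or Bellcore studies.

```python
# Illustrative sketch: how longer hold times raise blocking on a
# concentrated line unit. Traffic figures are assumptions, not data
# from the studies cited in the text.

def erlang_b(offered_load, circuits):
    """Blocking probability for offered_load erlangs on `circuits` paths,
    computed with the standard Erlang-B recursion."""
    b = 1.0
    for k in range(1, circuits + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

lines = 240               # subscriber lines on one line unit (assumed)
ratio = 8                 # 8:1 concentration -> 30 call paths
paths = lines // ratio
attempts_per_line = 1.25  # busy-hour call attempts per line (assumed)

for hold_minutes in (4, 30):  # voice-like call vs. long Internet session
    # Offered load in erlangs = attempts * average hold time in hours
    load = lines * attempts_per_line * (hold_minutes / 60.0)
    print(f"hold={hold_minutes:2d} min  load={load:.0f} erlangs  "
          f"blocking={erlang_b(load, paths):.1%}")
```

Under these assumed figures, blocking is negligible at voice-like hold times but severe when average hold times stretch toward Internet-session lengths, which is the mechanism the LEC argument rests on.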
Because of the relatively short average duration of voice calls, the primary limiting
factor on the capacity of current digital switches for voice calls is the computer processing
power required to set up additional calls. Computer processing power can be expanded
relatively easily and cheaply, because modern switch central processing units are designed as
modular systems that can be upgraded with additional memory and processing capacity. On
the other hand, LECs argue, Internet usage puts pressure not on the call setup capacity of the
switch, but on the number of transmission paths that are concurrently open through the
switch.
ISPs dispute the extent to which switch congestion currently represents a serious
problem. A study by Economics and Technology, Inc. (ETI), commissioned by the Internet
Access Coalition, argues that the growth of Internet traffic poses no threat to the integrity of
the voice network. According to the study, incidents of congestion have been localized, are
easily corrected, and are primarily attributable to inadequate planning and inefficient
engineering by the LECs. The ETI study also concludes that LECs received approximately
$1.4 billion of revenue from additional residential subscriber lines used for online access in
1995, and that this number far exceeds even the LECs' own estimates of the costs of network
upgrades to ameliorate congestion. Other opponents of imposing usage charges on ISPs
point to the fact that LEC state business line tariffs are designed to recover the costs LECs
incur for usage of their network, and that if flat-rated charges are compensatory for local
service it is illogical to argue that they are non-compensatory for Internet access. Long voice
calls, these advocates claim, impose the same costs on the network as long Internet
connections, but LECs are still able to provide local service at flat monthly rates.
The Network Reliability and Interoperability Council (NRIC), an industry group that tracks
reliability of the public switched telephone network and makes recommendations to the FCC,
has stated that Internet usage has not yet resulted in any outages above the NRIC's outage
reporting threshold. Internet usage, however, continues to grow rapidly. Low-cost Internet
access devices such as Web TVs and network computers (NCs) that have recently come on
the market are likely to fuel substantial Internet traffic growth in the next several months.
A distinction should also be made between the larger class of ESPs -- which include
companies such as voice mail providers, alarm monitoring companies, credit card validation
services, and internal corporate data networks -- and Internet or online service providers.
Current FCC rules refer only to ESPs, but the arguments LECs are now making about switch
congestion are directed specifically at the small subset of ESPs that provide Internet access.
The fact that Internet usage may be placing new demands on LEC networks is not necessarily
a reason to impose usage charges on enhanced service companies other than ISPs. There may
be arguments that other ESPs should pay usage charges, because they generate costs for LECs
that otherwise cannot be recovered. In fact, the previous debates about the ESP exemption
occurred before there was any significant amount of commercial or residential Internet usage.
If the Commission wishes to consider the LEC arguments about switch congestion, however,
the discussion should only apply to pricing of services for ISPs, not all ESPs.
3. Responses to Switch Congestion
Addressing switch congestion is ultimately a matter of money. No one argues that
LECs cannot upgrade their networks to remove and prevent blockages. There is, however,
disagreement about the costs of such upgrades, and whether changes in pricing structures
would send efficient economic signals. The ultimate question is whether LECs have
appropriate incentives to upgrade their networks in the most efficient manner, and ISPs have
appropriate incentives to use the most efficient available method of access. Most parties
agree that, as a technical matter, packet-switched Internet traffic could be transported more
efficiently through a packet-switched network, rather than tying up the circuit-switched PSTN.
However, different technical solutions will likely be most appropriate in different regions,
depending on factors such as the infrastructure and business plans of the incumbent LEC, the
competitive landscape, and the level of Internet traffic in a specific area. The goal of policy
makers should be to create incentives that encourage these efficient results, rather than
choosing any one solution.
Several technical, economic, and regulatory responses to switch congestion have been
proposed.
a. Pricing Changes
Some LECs argue that, because of the possibility of switch congestion, the Commission
should allow them to assess per-minute usage charges for ISPs to receive calls from their
subscribers. From the LEC perspective, usage pricing would have two salutary effects. First,
because ISPs (and presumably their customers, as ISPs themselves shifted to measured rates
to recover their costs) would be paying more for longer calls, they would have incentives not
to over-use the network. In other words, users would stay connected only as long as they
found the additional connect time worth the cost, and ISPs might have stronger incentives to
migrate away from their current practice of purchasing tremendous numbers of analog
business lines. Second, and more importantly, with usage pricing, LECs would receive more
revenue from longer Internet calls than from shorter ones. Thus, the LECs argue, their
revenues would more closely match their costs, which also increase with longer connection
times due to the necessary network upgrades to prevent switch congestion.
The notion of "usage charges" should be distinguished from current interstate access
charges. The discussion of this topic is often framed in terms of whether to "lift the ESP
exemption," or "impose access charges on ISPs." However, even if one agrees with the LEC
arguments, this does not lead to the conclusion that ISPs should pay today's access charges.
Current access charges far exceed the economic cost of providing access services, and in
many ways are structured in an economically inefficient manner. It would make no sense,
under the guise of creating a more efficient rate structure for ISPs, to impose the existing
access charge system that all parties agree is inefficient. In addition, access charges have
been structured based on the features and service bundles used by IXCs to handle voice calls,
which may be different than those ISPs would choose. The Commission based its tentative
conclusion in the Access Reform NPRM that ISPs should not be subject to current access
charges on these sorts of arguments.
The real question is whether ISPs should pay some new cost-based usage charges. ISPs
should pay the same charges as IXCs only if those charges are appropriate for a competitive
market and for the manner that ISPs use LEC networks. The Commission could also
establish a separate set of interstate charges for ISPs distinct from those assessed on IXCs, or
even distinct from those for other ESPs with different cost causing characteristics. Finally, if
the FCC does not change the current classification of ISPs as end users, LECs could alter
their state tariffs to impose usage charges on ISPs. Such a result could involve tariffs based
on the characteristics of ISP traffic (many long hold-time incoming calls) that in effect
required ISPs to pay usage-sensitive rates, or alternate tariffs that ISPs could select
voluntarily. The Commission might be required to respond to changes in state-level pricing
for ISPs to the extent that such changes affected BOC open network architecture (ONA)
plans.
LECs could also fashion and seek FCC approval of experimental or contract tariffs for
services they offer to ISPs. Current FCC rules limit LECs' ability to offer individualized
prices, because of concerns about discrimination and predatory pricing. However, a
specialized federal tariff offering geared to ISPs, especially if developed in conjunction with
some large ISP customers, might prove to be a useful experiment.
The term "usage charges" itself requires some further qualifications. The cost of shared
telecommunications facilities is generally driven not by total usage, but by usage at peak
periods. The marginal cost of an off-peak call is often close to zero. For this reason, long-
distance pricing in the United States has traditionally operated on a multi-level model, with
calls during daytime hours (when usage is heaviest) priced highest, and calls during night and
weekend hours priced lowest. Internet usage can cause switch congestion because usage at
peak periods may exceed the capacity of a switch, and because a higher percentage of Internet
users engage in extremely long calls (i.e. more than two hours). A relatively small number of
long calls may make a significant contribution to the degree of congestion on a switch.
Therefore, a pricing structure that incorporated usage charges only for a small percentage of
users (for example, those that were connected more than 200 hours per month), might reduce
switch congestion without affecting the vast majority of Internet users.
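A pricing structure of this kind is straightforward to sketch: a flat monthly rate covers usage up to a threshold, with per-minute charges applying only beyond it. The dollar figures below are illustrative assumptions; only the 200-hour threshold comes from the example in the text.

```python
# Sketch of a hybrid rate: flat fee up to a monthly threshold,
# per-minute charges only on usage beyond it. Dollar rates are
# assumptions; the 200-hour threshold is from the text's example.

FLAT_RATE = 19.95          # dollars per month (assumed)
THRESHOLD_HOURS = 200      # free-usage threshold from the example above
OVERAGE_PER_MINUTE = 0.01  # dollars per minute past the threshold (assumed)

def monthly_bill(hours_used):
    overage_minutes = max(0, hours_used - THRESHOLD_HOURS) * 60
    return FLAT_RATE + overage_minutes * OVERAGE_PER_MINUTE

# Typical users see no change; only very heavy users pay usage charges.
for hours in (10, 80, 200, 300):
    print(f"{hours:3d} hours -> ${monthly_bill(hours):.2f}")
```

The point of the structure is that the vast majority of subscribers keep a simple flat rate, while the small fraction of extremely long-duration users who contribute disproportionately to congestion face a usage signal.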
There are many difficulties with peak-load pricing schemes. Users may respond to the
pricing structure by shifting their calling patterns to avoid the peak charges, thereby shifting
the peak. Customers have also shown a strong preference for simple pricing systems, and
especially for those that offer a flat rate for unlimited usage, even if the flat rate would
actually result in a higher bill for their particular level of usage. Nonetheless, more thought
should be given to pricing structures other than straight per-minute charges. Such alternatives
may eventually prove unworkable or undesirable, but they should stimulate creative thinking
and a more precise understanding of the causative relationship between specific usage patterns
and congestion.
At first blush, the economic argument that ISPs should pay usage charges to LECs
seems compelling. Switching and transport capacity in the PSTN are scarce resources, and
heavier use of those resources results in higher costs to the service provider. To the extent
that these costs cannot be recovered from the discrete groups responsible for the heavier
usage, rates for all users will increase. This result seems undesirable as a matter of equity
(why should all users pay more so that a portion of users can access the Internet) and as a
matter of efficiency (why should Internet users be given the misleading signal that unlimited
Internet usage is "free" for the network). At a more general level, if a minute of Internet
usage looks the same to the phone network as a minute of voice usage, economic cost-
causation principles suggest that they be subject to the same pricing structure.
The reductive argument for usage charges is problematic on several levels. Most
fundamentally, it assumes that other aspects of telecommunications pricing follow similar
principles of economic efficiency. The same argument about the perils of flat-rated pricing
could be applied to residential local phone service in the United States. There are many
classes of users that can be identified as making or receiving unusually large volumes of
calls, such as teenagers and customer support centers for businesses, yet separate rate
structures have not been established for these groups. All tariffed prices for telephone
services involve some level of averaging, so the mere fact that some users vary from the
average does not by itself suggest that they should pay different rates.
The argument for usage pricing also assumes that ISPs use the existing, circuit-switched
network, rather than the alternative access technologies described in the next section. An
unanswered question, therefore, is whether LECs would have incentives to use the additional
revenues generated by usage pricing to further expand the circuit-switched network, or to
invest in more efficient, "data-friendly" alternatives. To address this concern, if an alternate
rate structure for ISPs were developed, it could be conditioned in some way on LEC
commitments to build out data networks in a specific time frame.
LECs also receive substantial additional revenues as a result of Internet usage. In
particular, second line deployment, which was stable for some time, has increased
dramatically over the last few years. A major reason why many subscribers are ordering
second lines is to support modem connections without tying up a primary voice phone line.
LECs recognize that second lines are often used for data connections, and many LECs are
specifically marketing second lines to consumers for this reason. Recent LEC earnings
statements emphasize second line growth as a major contributor to LEC bottom lines. Many
homes are wired to support at least two lines without any additional infrastructure, so second
lines often cost LECs little to install and generate very high profit margins.
The ETI study estimated that in 1995, BOCs generated revenue from additional
residential lines for online access that was six times the amount Bellcore claimed would be
required to upgrade their networks to handle additional Internet traffic. On the other hand,
residential lines in some states are deliberately priced low by state commissions, on the
assumption that LECs will "make up" the revenue through toll usage and "vertical features"
such as call waiting. In cases where LECs must actually install additional copper pairs,
therefore, the profitability of second lines will depend heavily on decisions of state regulators.
Any discussion of costs imposed on LECs by Internet traffic should attempt to take into
account second line growth and other countervailing revenues.
The metric of usage charges may also be significant. Access charges are metered by
minute, and the billing and accounting systems of the PSTN are generally designed to
measure traffic on a time-sensitive ("per-minute") basis. An appreciable component of carrier
costs goes to support this billing and accounting infrastructure. By contrast, the Internet has
developed under flat-rated pricing and interconnection between backbone providers on the
basis of "bill-and-keep" (no settlements), and thus has none of these accounting mechanisms
or costs.
Once traffic is converted into packet form and routed through the Internet, the notion of
a "minute of use" evaporates. The Internet is a connectionless network. Packet data does not
monopolize a set transmission path for a given period of time; it filters through the network
through multiple routes at varying rates. Thus, packet traffic is more appropriately described
in usage levels based on bandwidth (such as bits per second) rather than time. Concerns
about usage pricing for Internet access tend to involve objections to time-sensitive pricing, not
to bandwidth-sensitive pricing (e.g. a 1.544 megabits per second T-1 circuit being more
expensive than a voice grade connection). Internet backbone providers and ISPs are now
discussing technical and logistical aspects of "bandwidth reservation" systems, under which a
user of a service such as streaming video could pay a higher price for guaranteed bandwidth.
Such differential pricing would likely be voluntary for users, as users that did not wish to pay
additional charges could always switch to a different ISP.
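The contrast between the two metering metrics can be made concrete: time-sensitive billing charges for minutes a circuit is held regardless of bits sent, while bandwidth-sensitive billing scales with reserved capacity. All rates below are hypothetical; only the unit of measurement differs in kind.

```python
# Contrast of the two metering metrics discussed above.
# All rates are hypothetical; only the billing units matter.

def time_based_charge(minutes, rate_per_minute=0.02):
    """Circuit-style billing: minutes held, regardless of bits sent."""
    return minutes * rate_per_minute

def bandwidth_based_charge(kbps_reserved, rate_per_kbps_month=0.50):
    """Packet-style billing: monthly price scales with capacity."""
    return kbps_reserved * rate_per_kbps_month

# Under time-based billing an idle hour costs the same as a busy one;
# under bandwidth pricing a 1,544 kbps T-1 simply costs more per
# month than a 64 kbps voice-grade channel.
print(f"60 idle minutes: ${time_based_charge(60):.2f}")
print(f"64 kbps channel: ${bandwidth_based_charge(64):.2f}/month")
print(f"T-1 (1544 kbps): ${bandwidth_based_charge(1544):.2f}/month")
```

This is why objections to "usage pricing" for Internet access are really objections to the per-minute metric, not to paying more for a fatter pipe.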
The LEC networks have been designed to support metering and accounting of traffic for
billing purposes, and LECs today charge usage-sensitive access charges to interexchange
carriers that interconnect with LECs for the provision of long-distance telephony. It would
therefore not be administratively difficult for LECs to measure the total amount of traffic, in
minutes of usage, passing between the LEC and an ISP, and to charge the ISP according to
its level of usage. Because ISP costs would vary with the level of usage their customers
generated, such a pricing system would create incentives for ISPs to move to some form of
usage-based end-user pricing. Such a system might impose additional costs on ISPs to
measure and bill usage on a per-minute basis, since ISPs have not generally developed the
same type of billing infrastructure as the LECs.
Changing the pricing structure applicable to ISPs could have other, more subtle effects.
Under the existing system, ISP usage is considered jurisdictionally intrastate, while IXC usage
is jurisdictionally interstate. The imposition of access charges or other federally-mandated
usage charges on ISPs could result in ISP usage being reclassified as interstate. This shift
would affect the operation of the separations system, which allocates revenues between the
federal and state jurisdiction. Such large revenue shifts would also affect the price cap
system that governs interstate rates charged by incumbent LECs, which begins with revenues
derived from separations.
The purpose of this section is not to suggest that some form of usage charges for ISPs
will necessarily always be the wrong answer as a matter of public policy. Rather, the point is
that the question is complex, and must be viewed in the context of several different factors.
More comprehensive data on Internet usage, congestion levels, and network costs will be
crucial to an effective discussion of switch congestion and pricing structures for ISPs.
Although Internet access is usually priced at a flat monthly fee for "unlimited usage,"
most large ISPs automatically disconnect users after long periods of inactivity in order to
avoid tying up the ISPs' equipment, such as modem banks. Thus, few users may actually
"nail up" lines for 24 hours a day. Software does exist to fool these ISP systems, and as
services such as America Online experience congestion problems due to insufficient numbers
of modems, users may be more likely to keep a connection open once they have actually
gotten through. As this example shows, ISP usage patterns are affected by many factors. The
FCC's Notice of Inquiry is designed to gather data to form a better foundation for policy
development. The Commission also held a public forum on Access to Bandwidth on January
23, 1997, which addressed many of the questions raised in this paper.
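The idle-disconnect practice described above amounts to a periodic timer check on each connection. The 20-minute threshold below is an assumed value for illustration, not any particular ISP's actual setting.

```python
# Sketch of an ISP idle-timeout check: connections with no activity
# for longer than the threshold are dropped to free modem-bank ports.
# The 20-minute threshold is an assumed value.

IDLE_TIMEOUT_MINUTES = 20

def connections_to_drop(last_activity_minutes_ago):
    """Given minutes since each connection's last activity, return the
    indices of connections that should be disconnected."""
    return [i for i, idle in enumerate(last_activity_minutes_ago)
            if idle > IDLE_TIMEOUT_MINUTES]

# Four dial-up sessions; two have been idle past the threshold.
idle_times = [2, 45, 19, 120]
print(connections_to_drop(idle_times))   # connections 1 and 3
```

The "keep-alive" software mentioned in the text defeats exactly this check, by generating artificial activity so the measured idle time never exceeds the threshold.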
b. Technical Solutions
The imposition of usage charges on ISPs would not, by itself, solve the problem of
switch congestion. At best, this action would give LECs additional revenue to pay for
network upgrades. Congestion will continue to occur, however, so long as users continue to
use the circuit-switched voice network to connect to the packet-based Internet. Usage charges
might also depress Internet usage, which would reduce congestion but could also stifle the
growth of innovative new Internet-based services. The real challenge is to find ways to take
that data traffic off the PSTN, preferably before it reaches the first LEC switch.
The best answer to the current switch congestion problem will be to remove Internet
traffic, or at least heavy Internet users, from the existing circuit switched network. LECs and
ISPs agree that such a network upgrade would best address their concerns. The two sides
differ, however, on the question of how the Commission's decision about usage charges for
ISPs will affect the deployment of these new technologies. LECs argue that they have no
incentive to invest in upgrading their networks when they recover no additional revenues from
ISPs for supporting heavy Internet use, especially given the uncertainties about cost recovery
in a world of unbundled network elements as required under the 1996 Act. ISPs argue that
LECs will have no incentives to invest in long-term network upgrades if they recover metered
charges that allow, and even encourage, them to keep investing (and profiting) in the existing
circuit-switched network.
There are several methods to address switch congestion. These can loosely be grouped
into four categories:
Network Expansion and Aggregation
The most straightforward response to switch congestion is to expand the capacity of the
existing network wherever it is being stressed. These steps include load balancing (shifting
circuits among sub-units of a switch to better distribute heavy traffic directed at a single source
such as an ISP), transferring ISP traffic to a larger central office with greater switching
capacity, adding additional capacity to switches, reducing the concentration ratio of switches
with heavy Internet usage, adding additional interoffice trunking, and ultimately purchasing
additional switches. A slightly more fundamental alternative involves "modem pooling" --
persuading ISPs to lease large banks of modems operated by LECs in a central location, so
that Internet traffic can be more efficiently aggregated at high-capacity points of the network.
A similar approach involves setting up a single number that ISP customers in multiple areas
could call into. LECs are experimenting with or implementing all these responses today, but
ultimately they still involve routing data traffic through at least one circuit switch at the
originating end.
FIGURE 10 -- SOME SOLUTIONS TO SWITCH CONGESTION
Workarounds
Most LECs have existing tariffed service offerings that route data through data networks
using frame relay or switched multimegabit data service (SMDS) rather than analog modem
connections to a local switch. However, ISPs have rarely purchased these services, because they
believe such services would cost more than their current practice of purchasing large numbers
of business lines. ISPs have also expressed concerns about ceding control over user access to
LECs. Such reluctance may be due to inefficiencies in state tariffing of such data services,
which may not have been designed with current Internet usage patterns in mind. An alternate
form of workaround, which would not necessarily require ISPs to change their current access
arrangements, involves upgrades to LEC switching or signaling networks. Virtually every
major equipment vendor, including Lucent, Nortel, and DSC, has announced or is developing
a solution to screen data traffic and pull it off the voice network onto a packet-based data
network, either before the first LEC switch or at some point in the interoffice network.
Alternate Access Technologies
A third set of answers involves alternate access technologies to replace the analog
modems that most users now employ for Internet access. ISDN, which is available today in
virtually all LEC central offices but is only used by a handful of residential customers, uses
the network in a more efficient manner than analog modems, and also provides up to 128
kilobits per second of bandwidth. ISDN line units are generally non-blocking; in other
words, ISDN is provisioned so that every line into a switch module has a corresponding path
through the switch. However, ISDN is a circuit-based technology, and thus usage will
continue to strain the PSTN. Other new technologies, such as digital subscriber line (xDSL),
which provides up to 6 megabits per second of downstream throughput over ordinary copper
lines, promise to avoid this constraint. xDSL modems can be connected directly to a
packet network, thus avoiding switch congestion at the same time as they increase bandwidth
available to end users. However, although prices are dropping rapidly, xDSL modems are
currently very expensive relative to analog modems, and a substantial (but not clearly
defined) percentage of LEC loops may not be able to support xDSL without additional
conditioning.
In the long term, the LEC industry has already begun planning to migrate its networks
from their existing circuit switched architecture to an architecture based on asynchronous
transfer mode (ATM) switching. ATM is designed to achieve some of the reliability and
quality of service benefits of circuit-switched technologies, along with some of the bandwidth
efficiency and speed of packet-switching. ATM is now widely used in Internet backbones
and corporate networks, but no ATM switches yet have the necessary features and functions
to replace existing LEC end office switches. In addition, a technical debate is now underway
in the Internet community about the effectiveness of ATM as a data switching platform.
LECs do not expect to even begin this transition for several years, and the transition itself is
likely to take years to complete. Replacing existing end office switches will involve enormous
costs. Although this network upgrade may provide a long-term solution, some more near-
term action will be necessary as Internet usage continues to increase.
Alternate Network Providers
Many cable companies are in the process of deploying cable modems, which typically
provide a maximum theoretical bandwidth of 10 megabits per second, although some newer
cable modems offer only 1.2 megabits per second maximum bandwidth in order to reduce
costs. Cable modems are an always-connected, packet-based system, so they do not result
in switch congestion when used over a two-way cable system. However, cable companies
have experienced technical difficulties deploying cable modems, as well as upgrading their
networks and operations support systems to handle Internet traffic and the associated customer
support. These difficulties are aggravated by the highly leveraged position of most cable
companies, which constrains their access to capital.
In order to deploy cable modems more cheaply and quickly, cable operators are now
considering use of "one way" devices over unimproved cable plant. These one-way cable
modems use the high-speed cable network for receiving data from the Internet, and a
telephone line for upstream transmissions. Although this architecture reduces costs for the
cable operator, it potentially increases the congestion of LEC networks, due to the long
holding times. In addition, due to the reciprocal compensation requirements of the 1996 Act,
cable networks that operate as competitive local exchange carriers may be entitled to
compensation for "terminating" LEC traffic over these connections.
Wireless systems are another promising means to break the bandwidth gridlock. Some
companies, such as Metricom, already offer wireless Internet access at speeds comparable to
analog POTS lines, typically through municipal 900 MHz spread spectrum systems. Other
wireless technologies, such as local multipoint distribution service (LMDS) and multipoint
microwave distribution service (MMDS) are being tested specifically for Internet access
applications. Wireless access provides not only a competitive alternative to LECs, but
potentially a means for LECs to offload some of their Internet traffic while keeping their
existing customers. Pacific Bell recently signed a wireless resale agreement with the wireless
provider Winstar, in part to offload Internet traffic from Pacific's switches. Finally, satellites
may provide an alternative for some Internet access. Hughes recently began offering its 400
kilobits per second DirectPC service, although customers are required to purchase a satellite
dish and the system requires use of an analog telephone line for the upstream channel. Thus,
like one-way cable modems, the DirectPC service will not necessarily alleviate congestion of
LEC networks, but may, in fact, increase it.
4. State Tariffing Issues
The revenue effects of Internet usage today depend to a significant extent on the
structure of state tariffs. Internet usage generates less revenue for LECs in states where flat
local service rates have been set low, with compensating revenues in the form of per-minute
intrastate toll charges. Because ISPs only receive local calls, they do not incur these usage
charges. By contrast, in states where flat charges make up a higher percentage of LEC
revenues, ISPs will have a less significant revenue effect. ISP usage is also affected by the
relative pricing of services such as ISDN Primary Rate Interface (PRI), frame relay, and
fractional T-1 connections, which are alternatives to analog business lines. The prices for
these services, and the price difference on a per-voice-channel basis between the options
available to ISPs, vary widely across different states. In many cases, tariffs for these and
other data services are based on assumptions that do not reflect the realities of the Internet
access market today. The scope of local calling areas also affects the architecture of Internet
access services. In states with larger unmeasured local calling areas, ISPs need fewer POPs
in order to serve the same customers through a local call.
5. Competitive Dynamics
To the extent that competitors, such as IXCs, cable, or wireless providers, are able to
offer voice or data services to customers in competition with the LECs, there will be pressure
on the LECs to lower their rates or otherwise take action to retain their customers. To the
extent that such competition is driven by the underlying efficiencies and business strategies of
companies using different technologies, such competition will benefit consumers. On the
other hand, to the extent that competitors are able to gain market share primarily as a
byproduct of regulatory restrictions on the LECs, such competitive entry may have
detrimental consequences. For example, some high-speed data architectures proposed by the
cable and satellite industry only provide for downstream transmission. Unimproved cable
systems, which were designed solely for the delivery of video programming into consumers'
homes and not for interactive services, have this characteristic. Cable companies may choose
to use their infrastructure to deliver high-bandwidth downstream services to users, and use
LEC telephone lines for upstream transmission to a local headend. LECs argue that such
systems represent a regulatory anomaly that gives cable companies an unreasonable
competitive advantage in delivering broadband services to residential users at rates that are in
effect subsidized by the LECs.
Competitive alternatives to LEC facilities may also reduce the burdens on LECs. If
cable companies and others enhance their networks to provide two-way service and attract
Internet access customers on the basis of their ability to provide higher bandwidth at lower
cost, they may reduce or reverse the recent increase in Internet access through LEC networks.
Such competition could reduce LEC revenues, because LECs would not receive any payments
from Internet users that switch to cable or other providers, but the burden on LEC networks
would also be reduced. An additional competitive dimension of Internet access pricing
concerns the effects of imposition of access charges on ESPs. By raising the cost for most
users of connecting to the Internet through LEC facilities, such a decision would likely
increase the number of users who find alternative providers, such as cable, to be more cost-
effective than the LECs. Although these alternatives today represent only a limited threat to
incumbent LECs, the possibility of such shifts should at least increase the pressure on LECs
to price services to ISPs efficiently.

V. Availability of Bandwidth
The Internet is only useful to people if they are able to access it, and the value of the
Internet is, to an increasing extent, dependent on the level of bandwidth available to end
users. Thus, issues of service availability and affordability, especially with regard to services
that provide higher bandwidth than analog POTS lines, will be central to the development of
the Internet as a mass-market phenomenon that benefits all Americans.
The Commission has historically played a major role in promoting "universal service,"
which has been understood as the availability of some basic level of telephone service to all
Americans. Some universal service mechanisms, such as the Universal Service Fund (which
provides assistance to high-cost LECs) and the Telecommunications Relay Service Fund
(which underwrites services that allow people with hearing impairments to use
telecommunications facilities), are explicit. Other support for universal service has
traditionally been provided through implicit subsidy flows, in which regulators have allowed
certain rates to be set at levels far in excess of cost so that rates in high-cost or underserved
areas can be set at levels deemed affordable.
The 1996 Act directs the Commission to preserve and extend universal service, but to
do so in a manner consistent with the development of competition. In addition to the general
language regarding universal service funding, the 1996 Act contains several provisions
dealing specifically with availability of advanced communications services. In particular,
Section 254 (which promotes universal service) and Section 706 (which discusses incentives
for deployment of advanced telecommunications services) state:
(254)(b)(2) Access to Advanced Services.-- Access to advanced telecommunications and
information services should be provided in all regions of the Nation.
(706)(a) The Commission ... shall encourage the deployment on a reasonable and timely
basis of advanced telecommunications capability to all Americans (including, in
particular, elementary and secondary schools and classrooms)....
In discharging these responsibilities the FCC must address two inter-related issues: the
deployment and pricing of high-speed access technologies, and the availability of existing
services to rural and low-income communities as well as schools, libraries, and others. A
major aspect of the Commission's role will be to foster the development of market-based
solutions that make access to the Internet and other interactive services widespread and
affordable. Beyond the specific universal service mandates of the 1996 Act, the
Commission's primary focus should be to remove barriers to availability of high-bandwidth
technologies, and to bring parties together to develop solutions, rather than to mandate
particular deployment patterns.
Universal service policies benefit the Internet because they expand the scope of the
network. If more people can access the Internet, the value of connectivity will increase, and
demand for Internet-related hardware, software, and services will be stimulated.
A. Deployment and Pricing of High-Speed Access Technologies
Most residential Internet access today uses ordinary analog POTS lines. Although
POTS connections have fueled the explosive growth of residential Internet access in recent
years, the low bandwidth available on these lines substantially limits the services that can be
delivered to users, and reduces the value of the Internet experience as users have to wait for
information to be received. Several technologies that are either commercially available today
or in development promise to remove these limitations.
Figure 11 lists some of the major technologies that may deliver high-bandwidth Internet
access to end users. In almost every case, the actual throughput available to subscribers will
depend on the particular infrastructure and customer premises equipment used, in addition to
factors such as the location of the subscriber. The technologies listed are those which appear
likely to be able to deliver substantially greater bandwidth to a significant number of
subscribers over the next 2-4 years. Other systems, such as those that extend fiber optic
circuits to a small cluster of homes or even to each individual home, may eventually supplant
all these alternatives. Given current deployment plans and the expenses involved, however,
widespread implementation of such systems appears to be significantly farther in the future.

Figure 11 -- Major End-User Internet Access Technologies

Technology: POTS (analog voice telephony)
Downstream: 28.8 - 33.6 kbps (56 kbps in 1997)
Upstream: 28.8 - 33.6 kbps
Summary: 94% of homes have POTS service; requires no additional telco
investment and only a computer and (inexpensive) analog modem at the user
premises.

Technology: ISDN
Downstream: 56 - 128 kbps (230 kbps under development)
Upstream: 56 - 128 kbps (230 kbps under development)
Summary: Approximately 70% of access lines are now capable of supporting
ISDN, but less than 5% of Internet subscribers use ISDN. New pricing,
standardization, and marketing efforts may increase penetration in 1997.

Technology: xDSL
Downstream: 384 kbps (SDSL); 768 kbps (HDSL); 1.5 - 8 Mbps (ADSL)
Upstream: 384 kbps (SDSL); 768 kbps (HDSL); 12 - 500 kbps (ADSL)
Summary: Significant deployment of SDSL and HDSL today for corporate
networks and T1 service. Commercial ADSL deployment by most telcos planned
to begin in 1997. Actual deliverable bandwidth, especially for ADSL,
depends heavily on loop conditions.

Technology: Cable Modems
Downstream: 1.2 - 27 Mbps (shared capacity)
Upstream: 128 kbps - 10 Mbps (shared capacity), or POTS line used for
upstream
Summary: Several companies are deploying infrastructure (e.g. @Home,
Comcast, Time-Warner), with commercial availability in late 1996 or early
1997. Many technical questions remain.

Technology: Wireless
Downstream: 28.8 kbps (900 MHz); 1.5 Mbps (LMDS); 1.5 Mbps (MMDS)
Upstream: 28.8 kbps (900 MHz); 1.5 Mbps (LMDS); 1.5 Mbps (MMDS)
Summary: These are only some of the technologies under development that
could provide wireless Internet access (the NII/SUPERnet band and 2.3 GHz
auction may also open spectrum for this application). Actual bandwidth
will depend on environmental factors as well as details of deployment.

Technology: Satellite
Downstream: 400 kbps (DirectPC)
Upstream: POTS line used for upstream
Summary: Several other systems under development.
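To give a rough sense of what the downstream figures in Figure 11 mean in practice, the following illustrative sketch (not part of the original paper) computes idealized transfer times for a hypothetical 3.5-megabyte file at several of the listed rates. Real throughput would be lower due to protocol overhead, congestion, and loop conditions.

```python
# Illustrative sketch: idealized download time for a hypothetical file at
# several downstream rates from Figure 11. Protocol overhead, congestion,
# and loop conditions are ignored, so real transfers would be slower.

FILE_SIZE_BYTES = 3.5 * 1024 * 1024  # a hypothetical 3.5 MB file

# Downstream rates in bits per second, taken from Figure 11.
rates_bps = {
    "POTS (28.8 kbps)": 28_800,
    "ISDN (128 kbps)": 128_000,
    "ADSL (1.5 Mbps)": 1_500_000,
    "Cable modem (10 Mbps)": 10_000_000,
}

for name, bps in rates_bps.items():
    seconds = FILE_SIZE_BYTES * 8 / bps
    print(f"{name:24s} {seconds:8.1f} seconds")
```

The spread is roughly three orders of magnitude between analog POTS and a lightly loaded cable modem, which is why the paper treats the bandwidth of the access link as central to the value of the Internet to end users.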
B. The ISDN Case Study
ISDN is by far the most well-established and widely available higher-bandwidth access
technology. ISDN uses existing twisted pair copper phone lines to transmit data at up to 128
kbps. Unlike analog modems, ISDN creates an end-to-end digital connection path, which
also facilitates faster call setup times and additional options using a built-in out-of-band "D"
channel. In order to support ISDN, local exchange carriers must install digital line cards in
their central office switches, and subscribers must purchase new "digital modems" to operate
at their premises. Beyond this investment, however, ISDN does not require any significant
reconfiguration of LEC networks in order to support higher bandwidth than analog
transmission. ISDN technology has been commercially available for well over a decade, and
approximately 70% of existing local access lines in the United States are now configured to
support ISDN.
At the present time, however, despite growing interest in ISDN as an Internet access
technology, only a relatively small number of customers have ISDN lines in service.
According to one study, approximately 1.4% of modem users connected to the Internet using
ISDN in early 1996. One barrier to more widespread deployment of ISDN has been the
lack of standardization and the large number of site-specific parameters that must be
configured when an end user wishes to purchase an ISDN line. Users must often determine a
host of arcane configuration options, and telephone company personnel must be trained in the
various pricing and configuration options, in order for ISDN to be installed. Several steps are
now being taken to address these provisioning problems, including "one stop shopping"
efforts by vendors such as Motorola and Microsoft that provide customers with a central point
for ordering and obtaining information, and efforts by standards bodies and the local exchange
industry to simplify the process of installing ISDN. Vendors such as AT&T, 3Com, and
US Robotics have also launched efforts to make ISDN easier to install.
Many parties have argued that pricing is another barrier that has constrained ISDN
deployment. Rates charged by local exchange carriers for ISDN service are regulated by state
public utilities commissions, and these rates vary greatly from carrier to carrier. A March
1996 survey of ISDN tariffs showed a variation among major carriers between approximately
$30 per month and over $300 per month for equivalent usage levels. Some ISDN
supporters argue that even rates at the low end of this spectrum far exceed the incremental
cost to telephony companies of supporting ISDN service. In many states ISDN is tariffed
only as a business service, although residential ISDN offerings are increasingly available. In
addition to the monthly rates, virtually all local exchange carriers now charge some per-
minute fees for ISDN usage above a designated threshold, or charge a higher monthly rate for
a higher threshold or unlimited calling. Carriers argue that these usage-sensitive charges,
especially for peak-period usage, are essential to avoid overuse of network capacity, but
consumer groups and others claim that the costs of providing ISDN service are essentially
fixed, and do not vary substantially based on usage.
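The dispute over usage-sensitive ISDN charges can be illustrated with a simple bill comparison. The sketch below uses hypothetical rates, none drawn from any actual tariff, to show the crossover point at which a flat rate becomes cheaper for a subscriber than a measured tariff.

```python
# Illustrative sketch of flat-rate vs. usage-sensitive ISDN pricing.
# All rates here are hypothetical, not drawn from any tariff.

FLAT_MONTHLY = 60.00      # hypothetical unlimited-use monthly rate
MEASURED_MONTHLY = 30.00  # hypothetical base rate with a usage threshold
INCLUDED_MINUTES = 1_000  # minutes covered by the base rate
PER_MINUTE = 0.02         # hypothetical charge beyond the threshold

def measured_bill(minutes_used: float) -> float:
    """Monthly bill under the usage-sensitive tariff."""
    overage = max(0.0, minutes_used - INCLUDED_MINUTES)
    return MEASURED_MONTHLY + overage * PER_MINUTE

# Crossover: usage level at which the flat rate becomes the better deal.
crossover = INCLUDED_MINUTES + (FLAT_MONTHLY - MEASURED_MONTHLY) / PER_MINUTE

for minutes in (500, 1_500, crossover, 4_000):
    print(f"{minutes:7.0f} min: measured ${measured_bill(minutes):7.2f}"
          f"  vs. flat ${FLAT_MONTHLY:.2f}")
```

Under these hypothetical rates, light users pay less on the measured tariff while heavy users pay less on the flat rate; where carriers set the threshold and per-minute charge determines which side of the crossover typical Internet users fall on.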
An additional component of ISDN pricing is the federal subscriber line charge (SLC).
Although the vast majority of ISDN charges are encompassed by the monthly rates and usage
charges regulated by state commissions, ISDN users are also subject to the SLC, which
recovers some of the interstate allocated costs of subscriber loops. For residential customers,
the SLC is currently capped at $3.50 per line per month, and for multi-line businesses, the
cap is $6.00 per month. Because ISDN is a derived channel technology that, in addition to
providing greater data bandwidth, also allows multiple voice channels, the question has arisen
as to whether multiple SLCs should be assessed on each ISDN connection. The FCC has
requested comment on this question in the Access Reform NPRM, and has temporarily
refrained from imposing more than one SLC.
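The arithmetic at stake in the multiple-SLC question is straightforward. The sketch below uses the caps stated above; assessing one SLC per derived channel is the scenario on which the Commission has sought comment, not current practice.

```python
# Illustrative arithmetic for the multiple-SLC question. The caps are
# those stated in the text ($3.50 residential, $6.00 multi-line business).
# Charging one SLC per derived channel is the scenario under comment,
# not current FCC policy.

RESIDENTIAL_SLC = 3.50  # monthly cap per residential line
BUSINESS_SLC = 6.00     # monthly cap per multi-line business line
BRI_B_CHANNELS = 2      # an ISDN Basic Rate Interface carries two B channels

def monthly_slc(cap: float, channels: int, per_channel: bool) -> float:
    """SLC per month: one charge per line, or one per derived channel."""
    return cap * (channels if per_channel else 1)

print("Residential BRI, one SLC per line:   ",
      monthly_slc(RESIDENTIAL_SLC, BRI_B_CHANNELS, per_channel=False))
print("Residential BRI, one SLC per channel:",
      monthly_slc(RESIDENTIAL_SLC, BRI_B_CHANNELS, per_channel=True))
```

Assessing the SLC per derived channel would double the federal charge on a residential basic rate line, which is why the issue matters to ISDN's attractiveness as an Internet access technology.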
As Internet usage and demand for higher bandwidth to the home has accelerated, many
LECs have proposed new pricing structures for ISDN. In some cases, such as Bell Atlantic's
April 1996 proposal, these new structures involve rate decreases. In others, such as Pacific
Bell's January 1996 request to the California Public Utilities Commission, the new tariffs
include substantially higher rates in response to increases in ISDN usage and concerns about
additional costs to support this usage. Several state commissions are now reviewing LEC
residential ISDN tariffs, and are evaluating the incremental costs of offering ISDN service.
ISDN, however, is not a packet-based technology. Because of certain architectural
efficiencies and the design of ISDN line cards in most local exchange switches, ISDN may
place a less significant congestion burden on the network than analog connections.
Even so, although digital, ISDN was designed to conform to the existing architecture of the
circuit-switched voice network. Moreover, although ISDN provides greater bandwidth than
POTS, it is insufficient for full-motion video and many of the new multimedia applications
that are rapidly becoming available. The unanswered question at this point in time is whether
the window of opportunity for ISDN has passed, or whether ISDN, as the most mature and
most widely available higher-bandwidth service, will be used increasingly over the next
several years.
The FCC is interested in seeing higher bandwidth available to end users. However, the
Commission's role is not to endorse any particular technology, or to artificially subsidize the
deployment of such services generally. Instead, the Commission should investigate areas
where regulatory rules may either be preventing technologies from being deployed, or
distorting investment patterns and incentives for innovation. ISDN tariffs and the application
of the SLC to ISDN may fall within this category. More generally, the deployment of high-
bandwidth Internet access technologies may be constrained by the ability of competitors to
take advantage of the existing network, either by purchasing existing tariffed services from
local exchange carriers, or by leasing pieces of the network and combining them in new
ways.
The FCC's interconnection, access charge, and price cap rules will therefore influence
the deployment of higher bandwidth. In addition, the Commission is in the process of
developing a Notice of Inquiry on innovation, to seek comment on other ways that FCC rules
can provide incentives for both incumbents and competing providers to invest in their
networks and deploy new technologies. Ultimately, only the market will decide which of
these investments are wise and which technologies will succeed, but the FCC must provide a
level playing field for those market forces to operate.
C. Universal Service and Advanced Access Technologies
Section 254 of the 1996 Act sets forth a set of requirements designed to preserve and
advance universal service in an era of new technologies and new forms of competition. The
Commission has historically been committed to universal service in telecommunications, and
has promoted efforts to make telephone service available to all Americans. Universal service
has traditionally been conceived in terms of access to voice telephony. With the development
of the Internet and other interactive computer networks, the Commission and state regulators
must consider whether access to these newer services should also be included in the
conception of universal service. Although most Internet subscribers can access an ISP POP
through a local call, users in some remote and rural areas, or regions with small local calling
areas, must pay toll charges to reach an ISP, which may make it more difficult for those users
to take advantage of the Internet.
The Federal-State joint board on universal service, formed in accordance with the 1996
Act, recommended that providers of interstate information services not be required to
contribute to the new federal universal service fund. The joint board stated that, to the
extent that information service providers do not offer for a fee any of a listed set of
"telecommunications services," they are not "carriers that provide interstate
telecommunications services" as specified in the 1996 Act.
The joint board also recommended that Internet access not be considered a "core
service" subject to universal service support under section 254(c)(1). Core services under
the Act are limited to telecommunications services, and the Commission is required to
consider factors such as whether the service is available to a majority of residential
subscribers in the country. Despite the increasing levels of Internet usage, Internet access
today is not nearly as essential to most Americans as basic voice grade access to the local
phone network. In addition, because most users access the Internet through the phone
network, universal service subsidies to reduce local phone rates for rural, low-income, and
high-cost subscribers will effectively make Internet access more affordable as well.
Current data do not provide a good estimate of the percentage of rural subscribers that
cannot access an ISP through a local call. The major national ISPs each offer several
hundred POPs throughout the country, and usually provide access in other areas through a
toll-free number for an additional charge of approximately $5.00 per hour. There is anecdotal
evidence that many rural areas are served by smaller regional and local ISPs, even when
national ISPs do not find it economical to serve those areas. Further time and study will be
needed to understand whether market forces alone will be sufficient to ensure affordable
Internet access throughout the country. Given the rapid rate of growth and change in the
Internet industry, the affordability of Internet access today may not be an accurate indicator of
the situation in the future.
In addition to the requirements of Section 254, Sections 706 and 714 of the 1996 Act
direct the Commission and other regulatory bodies to take specific actions in order to make
advanced telecommunications technology widely available. Section 706 directs the
Commission, within thirty months of the passage of the Act, to initiate a notice of inquiry
concerning the availability of advanced telecommunications capability to all Americans, and
schools in particular. "Advanced telecommunications capability" is defined as "a high-
speed, switched, broadband telecommunications capability that enables users to originate and
receive high-quality voice, data, graphics, and video telecommunications using any
technology." If the Commission determines that such capability is not being deployed in a
reasonable and timely manner, the Commission is directed to take "immediate action" to
remove barriers to such deployment. Section 714 establishes the Telecommunications
Development Fund to promote the development and deployment of telecommunications
services, particularly by small businesses.
The 1996 Act contains specific requirements for the provision of services associated
with universal service at discount rates to schools, libraries, and rural health care providers,
and allows the Commission to designate other services to be covered under this
requirement. Studies have shown that advanced services such as Internet access are not yet
widely available in classrooms, especially in low-income areas. Only nine percent of all
instructional rooms (classrooms, labs, and library media centers) were connected to the
Internet as of early 1996. Schools with large proportions of students from poor families
are half as likely to provide Internet access as schools with small proportions of such
students.
Internet access will also be important for rural health care facilities. Telemedicine
allows doctors in remote areas to share data with experts elsewhere in the country, greatly
enhancing the level of care. These services often involve transmission of high-resolution
images, and therefore require large amounts of bandwidth. The FCC has formed a
Telemedicine Task Force which has made recommendations for making this bandwidth
available to health care providers.
The joint board recommended a system of discounts, ranging from 20 to 90 percent, for schools
and libraries that purchase telecommunications and other services under this provision, to be
funded by a fund of up to $2.25 billion per year. Under the joint board's
recommendations, ISPs would be able to provide these services, and receive subsidies. The
joint board concluded that it would be impractical to separate the "conduit" services offered
by ISPs and online service providers from "content," even though universal service subsidies
are designed to fund only the connectivity portion of the service. The recommendations,
however, leave open the question of whether a system in which ISPs need not contribute to
the universal service funding mechanism, but may benefit from it, creates a competitive
distortion.
Even if services are provided at discount rates, schools and libraries will desire the most
economical means of providing Internet connectivity. For example, the wireless
NII/SUPERnet system may, in some areas, provide more cost-effective network access for
school campuses than wired local area networks. Thus, the general issues about the
economics of high-bandwidth access technologies will be important in this area as well.
VI. Conclusion
This working paper has reviewed many difficult and complex issues that have arisen as
the Internet has grown to prominence. I have attempted to identify government policy
approaches that would have a positive influence on the development of the Internet. This
final section seeks to place the challenges described throughout this paper into a broader
context.
A. The Internet and Competition in Telecommunications
The movement toward deregulation and local competition in telecommunications in the
United States may be the single most significant development for the future of the Internet.
The decisions that the FCC, state regulators, and companies make about how to create a
competitive marketplace will determine the landscape in which the Internet evolves. The
shape of local competition will influence what types of companies are able to provide Internet
access to what categories of users, under what conditions, and at what price. The removal of
barriers between different industries -- such as the prohibition on BOCs offering in-region
long-distance service -- will accelerate the convergence that is already occurring as a result of
digitalization and other technological trends.
Internet providers are potentially both substantial customers of circuit-switched voice
carriers, and competitors to them. It is ultimately in the interests of both ISPs (who depend
on the PSTN to reach their customers) and LECs (who derive significant revenue from ISPs)
to have pricing systems that promote efficient network development and utilization. If the
costs of Internet access through incumbent LEC networks increase substantially, users will
have even stronger incentives to switch to alternatives such as competitive local exchange
carriers, cable modems, and wireless access.
Dial-up Internet access today tends to be priced on a flat-rated basis, for both the PSTN
portion of the connection and the transmission of packets through Internet backbones. By
contrast, interexchange telephone service tends to be charged on a per-minute basis.
However, both networks run largely over the same physical facilities. There is some evidence
that Internet and long-distance pricing are beginning to move towards each other. This
paper has discussed some of the arguments about usage pricing for Internet connections
through the PSTN; similar debates are occurring among Internet backbone providers in
response to congestion within the Internet. With the development of differentiated quality of
service mechanisms on Internet backbones, usage pricing seems likely to become more
prevalent on the Internet, although usage in this context may be measured by metrics other
than minutes.
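The difference between minute-based and packet-based metering can be sketched in a few lines. The Python fragment below is purely illustrative -- both rates are invented, and reflect no actual tariff -- but it shows why a lightly used, long-held dial-up connection looks very different under the two measures:

```python
# Hypothetical comparison of two ways of metering the same session.
# Both rates are invented for illustration and reflect no actual tariff.

def per_minute_charge(minutes, cents_per_minute=10):
    """Telephony-style metering: charge for connection time, in cents."""
    return minutes * cents_per_minute

def per_megabyte_charge(megabytes, cents_per_mb=5):
    """Packet-style metering: charge for data actually moved, in cents."""
    return megabytes * cents_per_mb

# A 60-minute dial-up session that transfers only 5 MB of data:
print(per_minute_charge(60))   # bills idle time as well as traffic
print(per_megabyte_charge(5))  # bills only the packets delivered
```

Under minute-based metering the session costs the same whether the line sits idle or runs at capacity; under byte-based metering the bill tracks actual network load, which is why congestion-driven pricing proposals tend to favor the latter.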
In the telephone world, flat-rated pricing appears to be gaining ground. The FCC
established the subscriber line charge (SLC) because the fixed costs it represented were more
efficiently recovered on a flat-rated basis. The Access Reform proceeding raises questions
about whether other usage-sensitive charges (such as the Transport Interconnection Charge
and the Carrier Common Line Charge) should be replaced with flat-rated charges, and there
was substantial debate in the Interconnection proceeding about whether LEC switching
capacity should be sold on a flat-rated basis in the form of a "switch platform." Pressure
toward flat-rated pricing is also arising for business reasons -- for example, Southwestern Bell
has reportedly considered offering a flat-rated regional long-distance plan when it receives
interLATA authorization. Customers in the U.S. seem to prefer the certainty of flat-rated
pricing even where it winds up costing more for their particular level of usage.
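That preference can be made concrete with a simple break-even calculation. In the sketch below, the flat rate and per-minute rate are hypothetical figures invented only to illustrate the arithmetic: a light user pays substantially more under the flat plan, yet the flat bill is perfectly predictable.

```python
# Break-even between a hypothetical flat monthly rate and a hypothetical
# per-minute rate; all figures are invented and reflect no actual plan.

FLAT_MONTHLY_CENTS = 1995   # e.g. $19.95 per month, unlimited usage
CENTS_PER_MINUTE = 2        # e.g. 2 cents per minute of usage

def flat_bill(minutes):
    """Flat-rated plan: the bill is the same at any level of usage."""
    return FLAT_MONTHLY_CENTS

def metered_bill(minutes):
    """Usage-sensitive plan: the bill grows with minutes used."""
    return minutes * CENTS_PER_MINUTE

# Below this usage the metered plan is cheaper; above it, flat wins.
break_even = FLAT_MONTHLY_CENTS / CENTS_PER_MINUTE  # 997.5 minutes

# A light user at 300 minutes a month pays 600 cents metered but
# 1995 cents flat -- and may still choose flat for the certainty.
print(break_even, flat_bill(300), metered_bill(300))
```

The point of the sketch is that rational preference for flat rates does not require heavy usage; a predictable bill has value of its own, which helps explain the pressure toward flat-rated offerings described above.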
There are, of course, important differences in the architectures of the Internet and the
public switched telephone network. However, both of these architectures are evolving. There
will not be one universal pricing structure for the Internet or the telephone network, for the
simple reason that there will not be one homogeneous network or one homogeneous company
running that network. Technology and business models should drive pricing, rather than the
reverse.
Today, the vast majority of Internet users and ISPs must depend on incumbent LECs for
their connections to the Internet. These incumbent LECs have huge investments in their
existing circuit-switched networks, and thus may be reluctant, absent competitive pressure, to
explore alternative technologies that involve migrating traffic off those networks. The
economics of the Internet are uncertain, since the market is growing and changing so rapidly.
Competition will enable companies to explore the true economics and efficiencies of different
technologies. The unbundling mandated by the 1996 Act will allow companies to leverage
the existing network to provide new high-bandwidth data services.
Competition can lead to instability or confusion, especially during periods of transition.
Monopolies provide certainty of returns that, by definition, cannot be achieved in a
competitive market. With many potential players, forecasting the future of the industry can
be difficult. Companies must choose between different technologies and business models, and
those companies that do not choose wisely will see the impact on their bottom lines.
Yet, as the Internet demonstrates, uncertainty can be a virtue. The Internet is dynamic
precisely because it is not dominated by monopolies or governments. Competition in the
Internet industry, and the computer industry that feeds it, has led to the rapid expansion of the
Internet beyond anything that could have been foreseen. Competition in the communications
industry will facilitate a similarly dynamic rate of growth and innovation.
B. The Right Side of History
The legal, economic, and technical underpinnings of the telecommunications
infrastructure in the United States have developed over the course of a century, while the
Internet as a service for consumers and private businesses is less than a decade old, and the
national framework for competition in local telecommunications markets was
adopted scarcely more than a year ago. Challenges that seem insurmountable today may
simply disappear as the industry and technology evolve.
As significant as the Internet has become, it is still near the beginning of an immense
growth curve. America Online, the largest ISP, has grown from under a million subscribers
to eight million in roughly four years. But those eight million subscribers represent only a
fraction of the eighty million households served by AT&T. The revenues generated by the
Internet industry, although growing rapidly, pale in comparison to those generated by
traditional telephony. Only about 15% of the people in the United States use the Internet
today, and less than 40% of households even have personal computers. A decade from now,
today's Internet may seem like a tiny niche service. Moreover, as Internet connectivity is
built into cellular phones, television sets, and other household items, the potential number of
Internet hosts will mushroom beyond comprehension. Computers are now embedded in
everything from automobiles to cameras to microwave ovens, and all of these devices may
conceivably be networked together. The Internet may exert the greatest influence on society
once it becomes mundane and invisible.
The growth potential of the Internet lends itself to both pessimistic and optimistic
expectations. The pessimist, having struggled through descriptions of legal uncertainties,
competitive concerns, and bandwidth bottlenecks, will be convinced that all these problems
can only become worse as the Internet grows. The optimist, on the other hand, recognizes
that technology and markets have proven their ability to solve problems even faster than they
create them.
The global economy increasingly depends on networked communications, and
communications industries are increasingly shifting to digital technologies. Bandwidth is
expanding, but so is demand for bandwidth. None of these trends shows signs of
diminishing. As long as there is a market for high-speed connections to the Internet,
companies will struggle to make those high-speed connections available in an affordable and
reliable manner. Once a sufficiently affordable and reliable network is built, new services
will emerge to take advantage of it, much as the World Wide Web took off once the
Internet had reached a certain level of development.
Difficulties and confusion may arise along the way, but improvements in
communications technology will continue to provide myriad benefits for individuals,
businesses, and society. In the long run, the endless spiral of connectivity is more powerful
than any government edict.
APPENDIX A -- INTERNET ARCHITECTURE
The diagram on the following page illustrates some of the mechanisms for
accessing the Internet. Although this illustration provides greater detail than the
conceptual diagrams in Section II, it remains greatly simplified in comparison to
the actual architecture of the Internet.