Protocols for the Post-Internet Era.
Peer networks – collections of computing devices that operate without
central control – have proven to be a disruptive technology with the
potential to revolutionize information systems. The ability to create
large networks on-the-fly has enabled new application services in
support of content distribution, file sharing, video streaming, and
interactive teleconferences. But could the role of peer networking be
even greater? Is it conceivable that peer network protocols become the
foundation for a new architecture that is based entirely on the
concepts of self-organizing networks? Can peer network protocols evolve
into a follow-on technology to the Internet protocols? In our research,
we address these questions by determining the potential and the
fundamental limits of the peer networking approach.

We envision a network architecture characterized by the coexistence of
virtually unlimited numbers of peer networks that can quickly grow to
arbitrarily large sizes and adapt to changes in the number of peers and
in the substrate network. The network architecture must be capable of
supporting a much broader set of platforms than the Internet protocols
do. Our main vehicle for developing, evaluating, and deploying
solutions is an overlay network system, called HyperCast, which my
research group has developed over the past several years.
HyperCast is an open source software system (of about 100,000 lines of code) for
self-organizing application-layer overlay networks. Initially conceived
for the empirical evaluation of large-scale reliable multicasting, the
software has evolved into a programming platform for application-layer
overlay networks that accommodates different types of overlays and
permits a variety of message semantics. HyperCast consists of a set of
software-based protocols that support ad hoc creation of peer groups
for point-to-point, point-to-multipoint, and multipoint-to-point
delivery among peers. HyperCast has recently been enhanced to include
protocols for mobile ad hoc networks. The HyperCast software has been
used to build complex software systems, for example, a situation
awareness system for supporting emergency responders, video streaming
systems, and a sensor network application for area protection.
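To make the style of dissemination concrete, consider one of the overlay topologies studied in this work, the logical hypercube. The following sketch is not the HyperCast API; it is a minimal, self-contained illustration of why such overlays scale: a broadcast in a d-dimensional hypercube reaches all 2^d nodes without duplicates, with each node forwarding only to neighbors along higher dimensions.

```python
# Illustrative sketch (not the actual HyperCast API): multicast over a
# logical d-dimensional hypercube overlay. Node labels are d-bit
# integers; neighbors differ in exactly one bit. A message is forwarded
# only along dimensions higher than the one it arrived on, so every
# node receives it exactly once (no duplicate suppression is needed).

def hypercube_broadcast(d, root=0):
    """Return the delivery order of a broadcast from `root` in a d-cube."""
    delivered = []
    # Each queue entry: (node, lowest dimension this node may forward on).
    frontier = [(root, 0)]
    while frontier:
        node, dim = frontier.pop(0)
        delivered.append(node)
        for i in range(dim, d):
            # Flip bit i to reach the neighbor in dimension i.
            frontier.append((node ^ (1 << i), i + 1))
    return delivered

order = hypercube_broadcast(3)
# All 8 nodes of the 3-cube are reached, each exactly once.
assert sorted(order) == list(range(8))
```

Because every node forwards to at most d neighbors, the broadcast completes in d overlay hops, which is the logarithmic scaling behavior that makes hypercube overlays attractive for very large peer groups.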
A critical part of future research efforts lies in moving peer
networking closer to the hardware, possibly embedding it directly in
the processors of mobile devices, routers, and switches. This could
avoid some of the performance penalties of current peer networks. With
a state-of-the-art peer network system in hand, we can
explore how to exploit peer networking technologies in the
lower layers of the system architecture. An open question is the
viability of the existing HyperCast system as a platform for the
interconnection of sensors and mobile devices. Also open are the design
principles of protocols that can support networks of arbitrarily large
sizes, meet or exceed the throughput and delay performance of existing
networks, satisfy security requirements in a mobile and dynamically
changing environment, and demonstrate the benefits of the architecture
in the context of demanding real-time applications. Recently,
experiments have shown that a HyperCast overlay network with 10,000
overlay sockets running on 100 Internet hosts can be built in less than
one minute. The goal for the next two years is to build significantly
larger peer networks of up to one million nodes.
This research aims to enable new advanced-technology applications. For
example, an array of thousands of sensor devices placed in northern
Ontario to monitor environmental factors could form a peer network that
connects to the backbone infrastructure of CA*net4, Canada’s optical
network, and enables climate researchers at the University of Toronto
to monitor and visualize the sensors’ measurements in real time.
- “An Overlay Approach to Data Security in Ad-Hoc Networks,” J. Liebeherr and G. Dong, Ad Hoc Networks Journal (to appear).
- “Overlay Networks with Overlay Sockets,” J. Liebeherr et al., Proceedings of the 5th COST 264 International Workshop on Networked Group Communications (NGC 2003), Munich, Germany, Pages 242–253, September 2003.
- “Multicast with Delaunay Triangulations,” J. Liebeherr, M. Nahas, W. Si, IEEE Journal on Selected Areas in Communications, Special Issue on Network Support for Multicast Communication.
- “A Protocol for Maintaining Multicast Group Members in a Logical Hypercube Topology,” J. Liebeherr, T. K. Beam, Proceedings of the First International Workshop on Networked Group Communication, Pisa, Italy, Springer Verlag, LNCS 1736, 1999.
We believe that the development of revolutionary new networks is
hampered by the lack of methods to evaluate the performance of
radically different designs of network architectures, protocols, and
applications. We assert that the development of a stochastic network
calculus, that is, a probabilistic extension of the network calculus
conceived in the 1990s, can lead to simple models and fast
computational methods for evaluating communication networks that are
very different from the networks and protocols used today. The
long-term goal of our research is to develop new theoretical concepts
and algorithms for predicting the delay and throughput performance of
future networks. Towards this goal, we focus on:
- Development of the stochastic network calculus theory: We explore the capabilities and limitations of the stochastic network calculus approach and its application to general networks.
- Applications of the calculus: We investigate applications of stochastic network calculus techniques to understand the role of scheduling in high-speed networks, to verify service level agreements between network service providers, and to analyze feedback-based buffer management and congestion control algorithms.
- Development of computational methods: We develop computational algorithms for the stochastic network calculus that can be applied without requiring familiarity with the theory of the calculus itself.
- Integration with related analytical approaches: We attempt to leverage and integrate the results of related approaches for modeling and analyzing networks with statistical multiplexing, such as queueing networks, linear systems theory, and network decomposition.
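To make the underlying machinery concrete: the deterministic network calculus that the stochastic variant extends derives delay and backlog bounds from arrival and service curves. The sketch below assumes a token-bucket arrival curve α(t) = σ + ρt and a rate-latency service curve β(t) = R(t − T)⁺, for which the classic bounds are T + σ/R (delay) and σ + ρT (backlog); the code verifies these closed forms numerically.

```python
# Deterministic network-calculus bounds (the calculus that the
# stochastic version extends probabilistically). Assumed setting:
#   arrival curve  alpha(t) = sigma + rho * t      (token bucket)
#   service curve  beta(t)  = R * max(t - T, 0)    (rate-latency)
# For rho <= R the classic bounds are
#   delay   <= T + sigma / R      (max horizontal deviation)
#   backlog <= sigma + rho * T    (max vertical deviation)

def bounds(sigma, rho, R, T, horizon=100.0, step=0.01):
    """Numerically compute delay and backlog bounds on a time grid."""
    assert rho <= R, "stability requires rho <= R"
    alpha = lambda t: sigma + rho * t
    beta = lambda t: R * max(t - T, 0.0)
    grid = [i * step for i in range(int(horizon / step) + 1)]
    # Vertical deviation: max_t alpha(t) - beta(t)  ->  backlog bound.
    backlog = max(alpha(t) - beta(t) for t in grid)
    # Horizontal deviation: the smallest delay d with
    # beta(t + d) >= alpha(t) is max(0, T - t + alpha(t)/R).
    delay = max(max(0.0, T - t + alpha(t) / R) for t in grid)
    return delay, backlog

d, b = bounds(sigma=2.0, rho=1.0, R=4.0, T=0.5)
# Closed forms: delay = T + sigma/R = 1.0, backlog = sigma + rho*T = 2.5
assert abs(d - 1.0) < 1e-9 and abs(b - 2.5) < 1e-9
```

The stochastic calculus replaces these worst-case curves with probabilistic envelopes, trading the hard bounds above for bounds that hold with high probability.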
- “A Network Service Curve Approach for the Stochastic Analysis of Networks,” F. Ciucu, A. Burchard, and J. Liebeherr, ACM Sigmetrics ’05, June 2005 (Best Student Paper Award). Journal version in IEEE Transactions on Information Theory, 52(6):2300–2312, June 2006.
- “Per-Flow Service Bounds in a Network with Aggregate Provisioning,” J. Liebeherr, S. D. Patek, and A. Burchard, Proceedings of IEEE Infocom.
- “A Min-Plus Calculus for Statistical Service Guarantees,” A. Burchard, J. Liebeherr, S. D. Patek, IEEE Transactions on Information Theory, 52(9):4105–4114, September 2006.
- “Service Assurances for Traffic Scheduling Algorithms,” R. Boorstyn, A. Burchard, J. Liebeherr, C. Oottamakorn, IEEE Journal on Selected Areas in Communications, Special Issue on Internet QoS, December 2000.
Deterministic Guarantees. A significant part of my research efforts in
the 1990s addressed traffic control algorithms for networks with
deterministic guarantees. This work has included the derivation of best
possible, that is, necessary and sufficient, admission control tests
for the Earliest Deadline First (EDF) and Static Priority (SP)
scheduling algorithms, and a proof of the optimality of EDF for a
deterministic service. Other work has included the design of new
scheduling algorithms (RPQ, RPQ+) that can approximate the EDF scheme
arbitrarily closely, but with a lower implementation complexity than
EDF. I also worked on traffic characterizations of compressed video
traffic, such as MPEG video. With these characterizations, and with
actual MPEG video traces as benchmark traffic, it became feasible to
determine the maximum utilization at which deterministic service
guarantees can be maintained for MPEG video traffic.
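As an illustration of this kind of admission control test, the sketch below implements the textbook schedulability condition for EDF with leaky-bucket constrained flows, ignoring packetization effects: the aggregate demand must satisfy Σⱼ αⱼ(t − dⱼ) ≤ C·t for all t ≥ 0, which only needs to be checked at the deadline breakpoints plus a long-run rate condition, since the left-hand side is piecewise linear.

```python
# EDF admission control sketch (ignoring packetization effects).
# Flow j is leaky-bucket constrained: its arrivals over any interval of
# length s are at most alpha_j(s) = sigma_j + rho_j * s (0 for s < 0),
# and it requires delay bound d_j on a link of rate C. The flow set is
# schedulable under EDF iff
#     sum_j alpha_j(t - d_j) <= C * t   for all t >= 0.
# The left side is piecewise linear with breakpoints at t = d_j, so it
# suffices to check those points plus the rate condition sum_j rho_j <= C.

def edf_admissible(flows, C):
    """flows: list of (sigma, rho, deadline) tuples; C: link rate."""
    if sum(rho for _, rho, _ in flows) > C:
        return False  # link overloaded in the long run
    for t in sorted({d for _, _, d in flows}):
        demand = sum(sigma + rho * (t - d)
                     for sigma, rho, d in flows if t >= d)
        if demand > C * t:
            return False
    return True

# Two flows on a link of rate 10: each with burst 4 and rate 3.
assert edf_admissible([(4, 3, 1.0), (4, 3, 2.0)], C=10)
# Tightening the first deadline to 0.3 makes the set infeasible:
# at t = 0.3 the burst of 4 already exceeds C * t = 3.
assert not edf_admissible([(4, 3, 0.3), (4, 3, 2.0)], C=10)
```

The test is both necessary and sufficient for this fluid model, which is what makes EDF the benchmark against which approximations such as RPQ and RPQ+ are measured.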
- “Priority Queue Schedulers with Approximate Sorting in Output-Buffered Switches,” J. Liebeherr, D. E. Wrege, IEEE Journal on Selected Areas in Communications, Special Issue on Next Generation IP Switches and Routers, June 1999.
- “Admission Control in Continuous-Media Networks with Bounded Delay,” J. Liebeherr, D. E. Wrege, D. Ferrari, IEEE/ACM Transactions on Networking.
- “Delay Bounds for VBR Video in Packet-Switching Networks: Fundamental Limits and Practical Tradeoffs,” D. E. Wrege, E. W. Knightly, H. Zhang, J. Liebeherr, IEEE/ACM Transactions on Networking, June 1996.
Class-Based Guarantees. Since the late 1990s, Internet researchers have
shown interest in class-based QoS architectures that support service
guarantees for traffic aggregates, rather than for individual flows.
Class-based architectures trade weaker service guarantees for reduced
complexity. In his PhD research, Nicolas Christin explored the limits
of such a service and raised the question: Which service guarantees can
be given in a class-based service architecture? The main contribution
of this work is a new traffic control algorithm, JoBS (Joint Buffer
Management and Scheduling), which makes scheduling and buffer
management decisions in a single step. This is realized by a feedback
control loop at the output link. Nicolas Christin implemented JoBS in
the FreeBSD Unix kernel, and showed that JoBS is viable to run on
modern PC hardware. The implementation of JoBS has been integrated into
the ALTQ and KAME packages, both of which are distributed with the
FreeBSD, NetBSD, and BSDI Unix variants.
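A much-simplified sketch of the kind of rate-allocation step JoBS performs (this is not the actual JoBS algorithm, which also predicts future delays and makes packet-drop decisions): given a snapshot of per-class backlogs and proportional delay targets, choosing service rates proportional to backlog divided by delay weight meets the targets exactly in a single step.

```python
# Simplified rate-allocation step in the spirit of JoBS (not the actual
# algorithm). Snapshot model: class i has backlog b_i at an output link
# of capacity C and a proportional delay weight w_i (delays should
# satisfy d_i / d_j = w_i / w_j). Serving class i at rate r_i drains it
# in d_i = b_i / r_i, so choosing r_i proportional to b_i / w_i meets
# the proportional targets exactly while using the full capacity.

def allocate_rates(backlogs, weights, C):
    """Return per-class service rates summing to C."""
    ratios = [b / w for b, w in zip(backlogs, weights)]
    total = sum(ratios)
    return [C * x / total for x in ratios]

backlogs = [6.0, 10.0]   # bits queued per class
weights = [1.0, 2.0]     # class 2 tolerates twice the delay of class 1
rates = allocate_rates(backlogs, weights, C=8.0)
delays = [b / r for b, r in zip(backlogs, rates)]
# The resulting delays are in the 1:2 target ratio.
assert abs(delays[1] / delays[0] - 2.0) < 1e-9
```

In the real system this computation is repeated continuously inside the feedback loop, with traffic drops used when no feasible rate allocation can meet absolute delay or loss bounds.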
- “Enhancing Class-Based Service Architectures with Adaptive Rate Allocation and Dropping Mechanisms,” N. Christin, J. Liebeherr, and T. F. Abdelzaher, IEEE/ACM Transactions on Networking (to appear).
- “The QoSbox: A PC-Router for Quantitative Service Differentiation in IP Networks,” N. Christin and J. Liebeherr, Computer Networks (to appear).
- “A QoS Architecture for Quantitative Service Differentiation,” N. Christin, J. Liebeherr, IEEE Communications Magazine, Special Issue on Scalability in IP-Oriented Networks.
- “A Quantitative Assured Forwarding Service,” N. Christin, J. Liebeherr, T. Abdelzaher, Proceedings of IEEE Infocom 2002, New York.
Since my PhD thesis research, I have maintained a strong interest in
media access protocols (MAC) for local and metropolitan area networks.
I have worked on two media access protocols, Distributed Queue Dual Bus
(DQDB) and Hybrid Fiber-Coax (HFC). In particular, I have worked on the
design and evaluation of mechanisms to improve fairness or provide
priority access in these protocols. Realizing fairness and service
differentiation in MAC protocols continues to pose difficult research
problems, as evidenced by the recently proposed MAC protocols Resilient
Packet Ring (IEEE 802.17) and WiFi (IEEE 802.11e). In my PhD thesis, I
presented and analyzed a media access protocol that quickly reaches a
fair distribution of bandwidth. A follow-up study showed that, in the
presence of multi-priority traffic, the IEEE 802.6 standard could not
provide preemptive priorities, and proposed a solution to overcome this
problem.
The work on HFC networks ("cable modem networks") has had impact on the
IEEE 802.14 standardization effort. The multi-priority media access
protocol that was developed by Mark Corner together with researchers at
NIST became the basis for the priority scheme of the IEEE 802.14
standard effort. (The IEEE 802.14 standard was never completed, since
the competing DOCSIS effort became the de-facto standard for HFC
networks.)
- “A Priority Scheme for the IEEE 802.14 MAC Protocol for Hybrid Fiber-Coax Networks,” M. D. Corner, N. Golmie, J. Liebeherr, C. Bisdikian, D. Su, IEEE/ACM Transactions on Networking, 8(2):200–211, April 2000.
- “Pre-Emptive Priorities in Dual Bus Metropolitan Area Networks,” J. Liebeherr, I. F. Akyildiz, A. N. Tantawi, Proceedings of ACM Sigcomm ’92, August 1992.
In the mid-1990s, we experimented extensively with MBONE tools and
multimedia collaboration tools. We developed a system, called the
grounds-wide tele-tutoring system (gwTTS), for tele-tutoring on a
campus network. The gwTTS system we developed has been commercialized
by the Virginia-based company Litton-Fibercom (CAMVision 7610 Distance
Learning Controller). Paco Hope developed an (unfortunately
unpublished) tele-orchestra application for MIDI data over IP
multicast.
- “Interactive Distance Learning Application with Networking,” J. Liebeherr, S. R. Brown, R. Albertson, 11(2):211–229, June 2000.