Larging it for the Grid: Big Networking for Big Science
Prof. Jon Crowcroft (Cambridge)
Dr Saleem Bhatti (UCL)
In terms of historical development, we are seeing today the first signs of an important change in the network provisioning scenario. In the 1970s, the network was quite "flat" in terms of access rates, routing hierarchy and administrative domains. There were relatively few networks, interconnected by relatively low-speed point-to-point links, with administrative relationships and addressing set up on an ad hoc basis. This was, of course, the starting point of what has today become the Internet. The 1980s saw the arrival of multi-megabit LAN technologies (Ethernet and Token Ring, for example). The links interconnecting these LAN clouds, however, generally remained at speeds an order of magnitude lower.
In the late 1980s and early 1990s, two important events occurred. The first was that the Internet "arrived" for the general public, and academics and researchers from outside the original user domain began to use it more widely. The second was that "broadband" networking gathered momentum, originally in the form of ATM, followed in the mid-1990s by the appearance of 100Mb/s Ethernet. LAN speeds thus became commensurate with WAN speeds. Also, very importantly, the number of users was increasing rapidly.
Today, we have reached a point where multi-gigabit wide-area connectivity exists and access speeds are commonly multi-megabit. Desktop connectivity at 1Gb/s is both possible and affordable, and before long research users will be able to use that capacity with Grid applications, for example. The recent emergence of products supporting 10Gb/s technology at only three times the price of 1Gb/s access has served to lower the cost per network port even further. And the number of users continues to increase. We are moving towards a situation where the traffic from the access networks has the potential to swamp the core WAN links. In the past, providers have relied on over-provisioning to cope with changing traffic patterns, but it seems unlikely that this method of network capacity planning will remain viable for much longer.
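The risk of access traffic swamping the core can be seen with some back-of-the-envelope arithmetic. The figures below are hypothetical (they are not from the talk): a few hundred 1Gb/s desktop ports behind a single multi-gigabit WAN uplink already implies an oversubscription ratio that over-provisioning alone cannot absorb.

```python
# Illustrative oversubscription arithmetic (all figures are assumed examples).
access_ports = 500        # hypothetical campus desktops, each with 1Gb/s access
access_rate_gbps = 1.0    # per-port access rate
core_link_gbps = 10.0     # hypothetical 10Gb/s core WAN uplink

# Worst-case aggregate demand if every port bursts at full rate.
aggregate_demand_gbps = access_ports * access_rate_gbps

# Ratio of potential access demand to core capacity.
oversubscription = aggregate_demand_gbps / core_link_gbps

print(f"Aggregate access demand: {aggregate_demand_gbps:.0f} Gb/s")
print(f"Oversubscription ratio:  {oversubscription:.0f}:1")
```

In practice, statistical multiplexing means most ports are idle most of the time, which is why over-provisioning worked historically; the talk's point is that as per-user rates and user numbers both grow, this margin shrinks.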
Academic users are also starting to run more capacity-hungry and QoS-sensitive applications for both teaching and research, for example voice and video conferencing and distributed data-processing using large computer clusters. As networks and networking components become more sophisticated, building, simulating, testing, managing and controlling such networks becomes increasingly difficult. When complex system components and protocols are connected together and driven by application traffic, they exhibit emergent behaviour that can be hard to model and predict - the outcome of putting network elements together is more than just the sum of the parts. This is compounded by the application-level infrastructure that users now favour to support virtual organisations and the formation of dynamic communities. Indeed, some of these issues, relating to communities, complexity in systems and building adaptability into systems, are also highlighted in a recent document from the National eScience Centre*.
The speakers describe various research projects in which they are involved, and explain how they hope these projects will help answer the problems of providing high-speed, flexible, dynamically controllable networking for eScience.