Local Sharing Saves Bandwidth on BitTorrent/P4P Tests
August 20, 2008
Thomas Mennecke
It's no secret by now that Internet traffic puts a serious strain on available bandwidth. There's only so much infrastructure to accommodate an insatiable population. If you believe the ISPs and network bandwidth management companies, you'll also believe that file-sharing protocols such as BitTorrent, Gnutella, eDonkey2000, and Usenet make up a majority of Internet traffic.

It's not outside the realm of reason to believe that file-sharing technology has become the supreme communications medium of the Internet. For all its faults, it remains the best method for transferring files both large and small. BitTorrent and Usenet are best for large files, while Gnutella does a good job for small MP3s. As broadband becomes more commonplace in US households, more people are sharing larger files. So it stands to reason that ISPs are seeing an impressive percentage of their bandwidth used by file-sharing.

Unfortunately, the available bandwidth inventory remains weak. American ISPs have failed to keep up with the rest of the industrialized world; the average broadband speed in the US is only about 6 megabits per second. That equates to approximately 750 kilobytes per second. In other words, for every second that goes by, the most you can hope to download is 75% of a megabyte. That's a full-size MP3 in about 10 seconds, or a 750 megabyte XviD movie in roughly 17 minutes.
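
To put numbers on that, here's a quick back-of-the-envelope calculation (a minimal sketch in Python, assuming the full 6 Mbit/s line rate and ignoring protocol overhead):

```python
# Idealized download times at the quoted 6 Mbit/s average broadband speed,
# ignoring protocol overhead and congestion.

MBIT_PER_SEC = 6
MB_PER_SEC = MBIT_PER_SEC / 8        # ~0.75 megabytes per second

def transfer_seconds(file_size_mb: float) -> float:
    """Transfer time in seconds at the full line rate."""
    return file_size_mb / MB_PER_SEC

print(f"7.5 MB MP3:  {transfer_seconds(7.5):.0f} seconds")       # ~10 seconds
print(f"750 MB XviD: {transfer_seconds(750) / 60:.0f} minutes")   # ~17 minutes
```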

Granted, those times aren't bad. But they assume optimal conditions, which largely exist only on Usenet and private BitTorrent trackers. For the rest of the Internet community, bandwidth bottlenecks quickly start to rear their ugly heads.

Since catching up with Japan's 60 megabit per second average is little more than a pipe dream, new and innovative alternatives have scattered themselves throughout the history of P2P. Of particular note is P2P caching, a technology used by bandwidth companies such as PeerApp. P2P caching requires an ISP to run a caching server that stores the most frequently requested files. For example, if “A great song.mp3” is sucking up a lot of bandwidth and is popular with the file-sharing community, the caching server will keep a copy of it. The next time a user wants to download the file, it will come from the caching server rather than from someone outside the ISP’s network. Rerouting traffic so that transfers stay within the network keeps external bandwidth use down, and correspondingly, the cost. The amount of bandwidth available to the end user remains the same; rather, the ISP keeps traffic from other ISPs, and the associated connection charges, at bay.
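
As a rough illustration of the caching idea (a sketch only, with hypothetical names and thresholds rather than PeerApp's actual product), the logic boils down to a simple lookup: serve popular files from a box inside the ISP's network, and only cross the expensive upstream links on a cache miss.

```python
# Illustrative sketch of ISP-side P2P caching. All names and thresholds
# are hypothetical; this is not PeerApp's implementation.

def fetch_from_external_peers(file_hash: str) -> bytes:
    """Stand-in for a real transfer from peers outside the ISP's network."""
    return b"<file contents>"

class P2PCache:
    def __init__(self, popularity_threshold: int = 100):
        self.store = {}            # file hash -> cached file contents
        self.request_counts = {}   # file hash -> number of requests seen
        self.popularity_threshold = popularity_threshold

    def fetch(self, file_hash: str) -> bytes:
        self.request_counts[file_hash] = self.request_counts.get(file_hash, 0) + 1

        if file_hash in self.store:
            # Cache hit: the transfer never leaves the ISP's network.
            return self.store[file_hash]

        # Cache miss: pay for external transit once to get the file...
        data = fetch_from_external_peers(file_hash)

        # ...and keep a copy if it's popular enough to be worth storing.
        if self.request_counts[file_hash] >= self.popularity_threshold:
            self.store[file_hash] = data
        return data
```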

P4P, the new file-sharing buzzword floating around town, has ambitions similar to P2P caching. Like P2P caching, the goal of P4P is to keep traffic local and avoid the costs involved when files travel into and out of the network. However, instead of caching servers, the idea takes a distributed approach. It requires cooperation and communication between the ISP and the file-sharing client, which at this point is like asking a divorced couple to embark on a two-week Atlantic cruise on a small boat. The ISP is supposed to tell the file-sharing client the path of least bandwidth resistance and keep traffic within its network.

Sounds great, right? Conspiracy theories aside, the concept has potential. And according to a recent test by researchers at the University of Washington and Yale University, keeping P2P traffic local and off the major arteries greatly improved bandwidth allotment and completion rates for BitTorrent and other P2P traffic. The study notes that P2P applications are “network oblivious”, meaning that clients don’t care where their information comes from, just as long as it arrives.

P4P hopes to change that. Each ISP would maintain an “iTracker”, which would keep track of network congestion and stay in contact with the P2P client when a file request is made. Once LimeWire is ready to download “A great song.mp3”, the ISP’s iTracker will tell the client where a local copy of that song is, and the download will begin. The theory is that, since the ISP provides a direct, short-distance route for the transfer, bottlenecking will be greatly alleviated.
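
A minimal sketch of what that might look like from the client's side, assuming a hypothetical iTracker query interface (the actual P4P work defines its own message formats): instead of picking peers at random, the client asks the ISP how "expensive" each candidate peer is and connects to the cheapest, most local ones first.

```python
# Sketch of iTracker-guided peer selection. The interface is hypothetical;
# the real P4P proposal defines its own messages. The ISP scores candidate
# peers by network cost, and the client prefers on-net peers over a purely
# random, network-oblivious choice.

def itracker_cost(peer_ip: str) -> int:
    """Stand-in for a query to the ISP's iTracker: lower = closer/cheaper.
    Here, peers sharing our (made-up) on-net prefix count as local."""
    return 0 if peer_ip.startswith("192.0.2.") else 10

def pick_peers(candidates: list[str], how_many: int = 4) -> list[str]:
    # A network-oblivious client would pick peers at random; a P4P-aware
    # client sorts by the iTracker's cost and fills its slots locally first.
    return sorted(candidates, key=itracker_cost)[:how_many]

swarm = ["192.0.2.10", "203.0.113.7", "192.0.2.44", "198.51.100.9", "192.0.2.73"]
print(pick_peers(swarm))   # the on-net 192.0.2.x peers come first
```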

According to the study’s simulations, the idea seems to have merit. "Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW research assistant professor of computer science and engineering. "At the same time, speeds are increased by about 20 percent."

The study is well documented, and its evidence indicates that P4P technology indeed has tremendous potential. Unfortunately, P4P is incompatible with encrypted clients, which are quickly becoming the norm in P2P society. The project’s success depends largely on ISPs and P2P developers working together, and if the trend towards encryption is any indication, that level of cooperation is almost non-existent. It would require ISPs such as Comcast to give up on “delaying” traffic and take a radically new approach to bandwidth management, and P2P developers to forgo encryption. With deep mistrust enveloping the ISP/P2P consumer relationship, we might well see 60 megabit connections become the norm before P4P is given any serious consideration by either side. And that's too bad.

