P4P, the new file-sharing buzzword floating around town, has ambitions similar to those of P2P caching. Like P2P caching, the goal of P4P is to keep bandwidth local and avoid the costs of files transferring into and out of the network. Instead of caching servers, however, the idea takes a distributed approach. It requires cooperation and communication between the ISP and the file-sharing client, which at this point is like asking a divorced couple to embark on a two-week Atlantic cruise on a small boat. The ISP is supposed to point the file-sharing client toward the path of least bandwidth resistance and keep traffic within its network.
Sounds great, right? Conspiracy theories aside, the concept has potential. And according to a recent test by researchers at the University of Washington and Yale University, keeping P2P traffic local and off the major arteries greatly improved bandwidth utilization and completion rates for BitTorrent and other P2P traffic. The study notes that P2P applications are “network oblivious”, meaning that clients don’t care where their information comes from, just as long as the information is obtained.
P4P hopes to change that. Each ISP would be required to maintain an “iTracker”, which would keep track of network congestion and stay in contact with the P2P client when a file request is made. Once LimeWire is ready to download “A great song.mp3”, the ISP’s iTracker would tell the client where a local copy of that song is, and the download would begin. The theory is that, since the ISP has provided a direct, short-distance route for the transfer, bottlenecking will be greatly alleviated.
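To make the mechanics concrete, here’s a minimal sketch of what iTracker-guided peer selection might look like, assuming the iTracker simply hands the client a congestion cost for each candidate peer. The Peer fields, the pick_peers function, and the scoring are illustrative assumptions, not the actual P4P protocol.

```python
# A minimal, hypothetical sketch of iTracker-guided peer selection.
# The iTracker interface, field names, and scoring below are
# illustrative assumptions, not the actual P4P specification.

from dataclasses import dataclass

@dataclass
class Peer:
    address: str      # where to fetch the file from
    network_id: str   # which ISP network the peer sits in
    link_cost: float  # congestion cost hinted by the ISP's iTracker

def pick_peers(candidates: list[Peer], local_network: str, want: int) -> list[Peer]:
    """Prefer peers inside the client's own ISP, then the least congested links.

    A "network oblivious" client would effectively pick candidates at
    random; here the iTracker's hints steer the download toward local,
    uncongested sources.
    """
    ranked = sorted(
        candidates,
        key=lambda p: (p.network_id != local_network, p.link_cost),
    )
    return ranked[:want]

# Three candidate sources for "A great song.mp3" (made-up addresses):
candidates = [
    Peer("203.0.113.7", network_id="far-away-isp", link_cost=0.2),
    Peer("198.51.100.4", network_id="my-isp", link_cost=0.6),
    Peer("198.51.100.9", network_id="my-isp", link_cost=0.3),
]
# Picks the two in-network peers, cheapest link first.
print(pick_peers(candidates, local_network="my-isp", want=2))
```

The real proposal is richer than this, of course; the point is just that a few cheap hints from the ISP are enough to flip peer selection from oblivious to locality-aware.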
According to the study’s simulations, the idea seems to have merit. "Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW research assistant professor of computer science and engineering. "At the same time, speeds are increased by about 20 percent."
The study is well documented, and its evidence indicates that P4P technology indeed has tremendous potential. Unfortunately, P4P is incompatible with encrypted clients, which are quickly becoming the norm in P2P society. The project’s success depends largely on ISPs and P2P developers working together, and if the trend toward encryption is any indication, that level of cooperation is almost non-existent. It would require ISPs such as Comcast to give up on “delaying” traffic and adopt a radically new approach to bandwidth management, and P2P developers to forgo encryption. With deep mistrust enveloping the ISP/P2P consumer relationship, it’s possible we’ll see 60-megabit connections become the norm before P4P gets serious consideration from either side. In the meantime, that’s too bad.