Comcast's Network Management Experiment

News Posted: Wed, Jun 4 2008 11:03 AM

This week marks the beginning of Comcast's month-long experiment with protocol-agnostic network management practices in Chambersburg, PA, and Warrenton, VA. Comcast has come under fire recently for throttling certain types of traffic on its network, such as traffic from P2P clients. This alternative approach focuses purely on how much bandwidth is being consumed, not on the type of data being transmitted.

"This new technique does not look at particular protocols or applications. Instead, it will focus on the bandwidth consumption activity of individual customers who are contributing to congestion on Comcast's network. The technique measures only aggregate bandwidth consumption, not the protocol or content being used by customers."

This is a potentially more democratic approach, one that attempts to ensure that all customers have equal access to the same data pipes: 

"The new network management technique will result in delayed response times for Internet traffic only for those customers who are using more than their fair share of available Internet resources at the time. The network management technique manages those customers' Internet traffic until their usage falls below established bandwidth usage thresholds or until network congestion ends."

This is a different approach to managing high-bandwidth consumers than the one Time Warner Cable is currently taking. Time Warner Cable is also conducting a bandwidth-limiting trial, but instead of delaying response times for heavy users, it will levy surcharges on users who go over their monthly allotments, much like going over the minutes on a cell phone plan. Comcast's statement raises a question, however: how does it define a customer's "fair share of available Internet resources"?

By explicitly stating their network management policies, both Comcast and Time Warner Cable give customers a reasonable expectation of the services being provided. This should take some pressure off providers who are accused of discriminating against users of certain types of data, such as P2P. Throttling P2P traffic is an inexact science that can inadvertently affect traffic other than the intended target. Also, P2P traffic, which has traditionally been associated with illegal file sharing, is increasingly being used for legitimate purposes. Continuing to throttle it would likely alienate too many paying customers.

By self-regulating with a practice that does not discriminate against particular types of data, the broadband service providers are also attempting to placate lawmakers who want to take a more controlling interest in public access to the Internet via net neutrality legislation.

Last, but possibly most important, Comcast and Time Warner Cable are both cable television providers. HD video consumes a significant amount of the available bandwidth on their respective networks, and available video bandwidth competes directly with available data bandwidth. Managing how much data is transmitted over broadband connections allows the companies to ensure that there is enough bandwidth left to support the quickly growing and lucrative HD video distribution business.

The most likely shakeout of all this will be tiered service, with high-bandwidth consumers paying more. The future of unlimited Internet access is looking awfully gloomy.



rapid1 replied on Wed, Jun 4 2008 12:57 PM

They should have some kind of metering indicator for customers with anything they do. I mean, come on, do you think a typical home Internet user can tell, anywhere close to reality, how much Internet they use? Either way, I think this coming HD implementation in February will actually help a lot. They will have to upgrade their entire backbone to support it, so there will be more bandwidth available on the network, because I know how they'll do it: they will try to get the max bandwidth for the lowest cost, which will easily more than double their current data capacity, if not quadruple it. They are also supposed to be stepping up to DOCSIS 3, which makes this somewhat comical. How are they going to limit DOCSIS 3, with the bandwidth seen in its current implementations, if they can't control it on DOCSIS 2?

OS:Win 7 Ultimate 64-bit
MB:ASUS Z87C
CPU:Intel(R) Core(TM) i7 4770 ***
GPU:Geforce GTX 770 4GB
Mem:Kingston 16384MB RAM