Google's SPDY Incorporated Into Next-Gen HTTP, Offers TCP Enhancements

This post has 7 Replies | 1 Follower

Top 10 Contributor
Posts 26,504
Points 1,196,940
Joined: Sep 2007
ForumsAdministrator
News Posted: Tue, Jan 24 2012 5:22 PM
Google's efforts to improve Internet efficiency through the development of the SPDY (pronounced "speedy") protocol got a major boost today when the chairman of the HTTP Working Group (HTTPbis), Mark Nottingham, called for it to be included in the HTTP 2.0 standard. SPDY is a protocol that's already used to a certain degree online; formal incorporation into the next-generation standard would improve its chances of being generally adopted.

SPDY's goal is to reduce web page load times through the use of header compression, packet prioritization, and multiplexing (meaning combining multiple requests into a single connection). By default, a web browser opens an individual connection for each and every page request, which can lead to tremendous inefficiencies.
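To get a rough feel for why header compression pays off, here's a toy sketch that deflates a typical set of request headers with zlib (SPDY's actual compressor is zlib seeded with a preset dictionary, which does even better; the headers below are made up):

```python
import zlib

# Typical HTTP request headers; across requests to the same site these
# repeat almost verbatim, which is what makes header compression pay off.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) Gecko/20100101\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Accept-Encoding: gzip, deflate\r\n"
    b"Cookie: session=abc123; prefs=dark\r\n\r\n"
)

compressed = zlib.compress(headers)
print(f"{len(headers)} bytes -> {len(compressed)} bytes")
```

And since each browser request repeats most of these bytes, the savings compound across a page's dozens of subresource fetches.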


[Chart: SPDY's performance as compared to HTTP]

Whether or not the proposal will pass is up in the air. HTTPbis's original task was to draft and approve the HTTP 1.1 standard; taking on HTTP 2.0 is a significant addition. Debate over the proposal broke out almost immediately, including challenges to the name HTTP 2.0 as being too similar to the widely loathed "Web 2.0." As one commenter noted: "Well, if we announce one year, maybe we'll manage to succeed in 3."

Google has also fielded a proposal to accelerate and streamline the venerable TCP protocol. TCP grew out of a paper written in 1974 by Vint Cerf and Bob Kahn, and, like nearly everything from that decade, it's not aged all that well. Google's research shows that web browsers typically retrieve content through "several dozen parallel TCP connections. This strategy overcomes inherent TCP limitations but results in high latency in many situations and is not scalable."

Google engineer Yuchung Cheng suggests that the situation could be significantly improved by reducing the initial retransmission timeout (the time a sender waits before deciding a packet was lost and retransmitting it) from 3s to 1s and adopting the TCP Fast Open (TFO) standard. Google claims that using TFO could reduce web page load times by 10% on average and "over 40% in many situations."
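The TFO savings come from letting the HTTP request ride along on the SYN packet, so repeat connections skip the handshake round trip. A back-of-the-envelope sketch (the link speed and connection count are hypothetical):

```python
def page_load_overhead_ms(rtt_ms, connections, fast_open=False):
    """Rough handshake cost for a page fetched over several TCP connections.

    Without TCP Fast Open, each connection spends one full round trip on
    the SYN/SYN-ACK handshake before any HTTP data flows; with TFO the
    request rides along on the SYN, so on repeat visits that round trip
    is effectively free.
    """
    handshake_rtts = 0 if fast_open else 1
    return rtt_ms * handshake_rtts * connections

# Six parallel connections on a 100 ms mobile link:
print(page_load_overhead_ms(100, 6))                  # 600 ms spent on handshakes
print(page_load_overhead_ms(100, 6, fast_open=True))  # 0 ms
```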

Cheng also recommends increasing the TCP initial congestion window from three packets to ten. The initial congestion window is the number of packets a sender may have outstanding at the start of a connection, before any acknowledgments arrive. The advantage of starting small ("slow start") is that it limits network congestion by capping the unacknowledged packets that can pile up in the network. The disadvantage is that most web transfers are short-lived and finish before the window has grown, so they pay the full slow-start penalty. Google's proposed change would improve performance for those short-lived connections while only minimally increasing the chance of losing a packet.
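The effect on short transfers is easy to count in round trips. A simplified model of slow start (window doubles each RTT, no losses, receiver never limits the sender) shows the proposed window shaving RTTs off a small response:

```python
def rtts_to_send(segments, initcwnd):
    """Round trips needed to deliver `segments` TCP segments under an
    idealized slow start: the window doubles each RTT, no losses, and
    the receiver never limits the sender."""
    rtts, window, sent = 0, initcwnd, 0
    while sent < segments:
        sent += window
        window *= 2
        rtts += 1
    return rtts

# A ~15 KB response is roughly 10 full-size (1460-byte) segments.
print(rtts_to_send(10, 3))   # 3 RTTs with the old initial window of 3
print(rtts_to_send(10, 10))  # 1 RTT with Google's proposed window of 10
```

On a high-latency mobile link, each RTT saved is hundreds of milliseconds off the page load.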

One of the most significant questions raised about Google's proposed TCP optimizations is how they'd affect users in areas where Internet connectivity is less than ideal, and what sort of fallback options might be available to deal with these scenarios. Improving page load latencies and lowering network congestion are both important, but there needs to be a way to maintain connectivity when connecting over weak wireless or even satellite.
  • | Post Points: 65
Top 150 Contributor
Posts 495
Points 4,825
Joined: Jan 2012
Location: Brighton, MA

I'll share this with my brother; he's sick of dealing with all that HTML stuff. From what I see in that graph, those are pretty nice enhancements from SPDY over HTTP.

  • | Post Points: 5
Not Ranked
Posts 3
Points 30
Joined: Jan 2012

You have some information that's not quite right.

HTTP/1.0 opens a connection for every request. HTTP/1.1 (which was ratified in 1999 and pretty much what any contemporary browser uses) does not open a connection for each web request.

Any contemporary browser maintains open connections to a contemporary web server via the HTTP/1.1 specification.

Furthermore, browsers are conservative in the number of connections they'll open against a web server. Contemporary web browsers will generally open six TCP connections, so it's not quite as bad as you think.

Finally, there's something few (technical) people know about: the HTTP/1.1 spec includes something called "HTTP pipelining," which allows an HTTP client, e.g., a browser, to request multiple elements at once.

Out of the major browsers, only Firefox supports this (but it's off by default). Neither Chrome, Opera, nor IE readily supports HTTP Pipelining. In a nutshell, if a web page has N elements (references to images, Cascading Style Sheets, external JavaScript files, etc.), Firefox can ask for multiple elements at a time instead of one by one.

To enable HTTP Pipelining in Firefox, go to the URL bar and type "about:config", then enter "pipe" in the filter. In the results you'll see "network.http.pipelining"; change it to "true" (by double-clicking the "false" value). While you're at it, crank maxrequests up from 4 to 7. That's the number of elements to fetch at once.
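"Pipelining" just means writing several requests onto one socket before reading any response. A toy sketch of what the resulting bytes on the wire look like (the hostname and paths are made up):

```python
def build_pipelined_requests(host, paths):
    """Concatenate several HTTP/1.1 GETs so they can be written to one
    socket in a single burst, instead of one request per round trip."""
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n\r\n"
        )
    return "".join(requests).encode("ascii")

burst = build_pipelined_requests(
    "www.example.com", ["/", "/style.css", "/app.js", "/logo.png"]
)
print(burst.count(b"GET"))  # 4 requests sent in one write
```

The server then answers the responses back-to-back in the same order, which is why all parties must speak HTTP/1.1 correctly for this to work.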

Yes, SPDY would be faster if all web servers spoke it... but they don't. Only Google's web servers have any notable presence on the web as far as talking the SPDY protocol.

On the other hand, Firefox lets you leverage HTTP Pipelining everywhere, meaning Firefox can fetch web pages faster than any other browser on the web at large (FETCH, not render, not JavaScript execution; though Firefox has made great strides as far as JavaScript speed goes).

Lastly, HTTP Pipelining makes an even *BIGGER* difference if you're on a higher-latency connection, e.g., you're tethering through your phone or using some 3G Aircard. That's because you're not waiting for round trip times on each and every distinct web resource mentioned on an HTML page you just hit. Firefox will just say, give me M elements, give me another M elements, etc. (maxrequests) vs. asking for things one at a time.
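The round-trip math above is simple to work through. Assuming an idealized model where each batch of requests costs one RTT and payload transfer time is ignored (the resource count and latency below are hypothetical):

```python
import math

def fetch_time_ms(resources, rtt_ms, pipeline_depth=1):
    """Idealized time to request `resources` page elements when up to
    `pipeline_depth` requests share each round trip; payload transfer
    time is ignored, so this isolates the latency cost."""
    return rtt_ms * math.ceil(resources / pipeline_depth)

# 28 page elements over a 300 ms tethered 3G link:
print(fetch_time_ms(28, 300))                    # 8400 ms, one at a time
print(fetch_time_ms(28, 300, pipeline_depth=7))  # 1200 ms with maxrequests=7
```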

See Chrome fanboys, it's not as über as you thought...

  • | Post Points: 20
Not Ranked
Posts 1
Points 20
Joined: Jan 2012
demilemi replied on Thu, Jan 26 2012 4:35 AM

It was a great post until this:

>See Chrome fanboys, it's not as über as you thought...

Maybe it's not as shitty as you want it to be.

http://www.tomshardware.com/news/google-chrome-http-pipelining-browser-wars-firefox,13768.html

  • | Post Points: 20
Not Ranked
Posts 1
Points 20
Joined: Jan 2012

Why use the TCP protocol? I suggest using the SCTP protocol instead. I mean, one could tweak some things in TCP, but looking at the future, SCTP might be a better solution. SCTP supports multiple streams over one connection and has many other advantages over TCP.

  • | Post Points: 20
Not Ranked
Posts 3
Points 30
Joined: Jan 2012

It's about time... Firefox has had HTTP Pipelining since 2004... and truth be told, I do use Chrome, just nowhere as much as I do Firefox.

  • | Post Points: 5
Not Ranked
Posts 3
Points 30
Joined: Jan 2012
Beteljuice replied on Thu, Jan 26 2012 10:23 PM

@GromBeestje

Because SCTP would require rewriting all web browsers and web servers, and the only company that cared about SCTP was Sun, and you know where they're at nowadays (R.I.P.). In other words, none of the major client platforms come with SCTP.

SPDY just builds upon what's already there, i.e., TCP, which is the right decision.

  • | Post Points: 5
Not Ranked
Posts 1
Points 5
Joined: Nov 2012
sarath replied on Wed, Nov 21 2012 2:01 PM

The main advantage of SPDY is that no modification of the current network infrastructure is required.

Facebook, WordPress, and Twitter have already started to use and implement it.

http://slashroot.in/spdy-speedy-protocol-developed-google

  • | Post Points: 5
Page 1 of 1 (8 items) | RSS