
#1 Google: "The web is too slow."

Posted: Fri Nov 13, 2009 6:19 pm
by rhoenix
Popular Science wrote:Google has scarcely stopped for a breather since launching its cloud-based Chrome OS as an alternative to PC and Mac operating systems. Now its Chromium group has announced an effort to replace the traditional HTTP web protocol with a new one that supposedly speeds up Internet browsing by up to 55 percent.

HTTP is currently the protocol used by all web servers and browsers, hence the "http" in front of web addresses. But, as noted by Ars Technica, HTTP becomes inefficient when transferring the many small files that make up modern websites.

By contrast, Google's cleverly named SPDY protocol (pronounced SPeeDY, get it?) can compress and handle the individual requests via one connection that's SSL-encrypted. That allows higher-priority files to slip through immediately without becoming backed up behind large files.

SPDY has shown up to 55 percent faster web page loading when tested under lab conditions, and the Google team has released its source code for public feedback.


But Ars Technica raises some points of caution: the mandatory SSL encryption demands more processing power from small devices and computers alike, and requiring SSL could also worsen the problem of server operators neglecting proper SSL certificates, unintentionally encouraging people to ignore warnings about unsecured websites.

Still, Google's team recognizes these problems and has already proposed workarounds. An open approach has already proven a smashing success with Google's Android operating system, but redesigning the Internet's architecture will undoubtedly prove trickier in the days to come.
ArsTechnica wrote:
On the Chromium blog, Mike Belshe and Roberto Peon write about an early-stage research project called SPDY ("speedy"). Unhappy with the performance of the venerable hypertext transfer protocol (HTTP), researchers at Google think they can do better.

The main problem with HTTP is that today, it's used in a way that it wasn't designed to be used. HTTP is very efficient at transferring an individual file. But it wasn't designed to transfer a large number of small files efficiently, and this is exactly what the protocol is called upon to do with today's websites. Pages with 60 or more images, CSS files, and external JavaScript are not unusual for high-profile Web destinations. Loading all those individual files mostly takes time because of all the overhead of separately requesting them and waiting for the TCP sessions HTTP runs over to probe the network capacity and ramp up their transmission speed. Browsers can either send requests to the same server over one session, in which case small files can get stuck behind big ones, or set up parallel HTTP/TCP sessions where each must ramp up from minimum speed individually. With all the extra features and cookies, an HTTP request is often almost a kilobyte in size, and takes precious dozens of milliseconds to transmit.
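(As a rough, back-of-the-envelope illustration of that "almost a kilobyte" figure, here is a small Python sketch that just counts the bytes of one request. Every header and cookie value below is invented, not taken from the article; it only mimics the shape of what a 2009-era browser sends for a single small page asset once a few cookies are set.)

# Rough illustration of the "almost a kilobyte per request" point above.
# All header and cookie values are made up; real cookies are often several
# hundred bytes on their own, which is why the overhead adds up quickly.

request_lines = [
    "GET /static/img/logo.png HTTP/1.1",
    "Host: www.example.com",
    "User-Agent: Mozilla/5.0 (Windows NT 6.1; en-US) AppleWebKit/532.5 "
    "(KHTML, like Gecko) Chrome/4.0.249.0 Safari/532.5",
    "Accept: image/png,image/*;q=0.8,*/*;q=0.5",
    "Accept-Language: en-us,en;q=0.5",
    "Accept-Encoding: gzip,deflate",
    "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7",
    "Referer: http://www.example.com/index.html",
    "Cookie: SID=9a7c31f0d2e84b6f; PREF=ID=5f2e7c18:LD=en:TM=1258071900; NID=28k3j2h1",
    "",
    "",
]

raw_request = "\r\n".join(request_lines)
print(len(raw_request), "bytes of overhead, repeated for every small file on the page")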

In an attempt to avoid these issues, SPDY uses a single SSL-encrypted session between a browser and a server, and then compresses all the request/response overhead. The requests, responses, and data are all put into frames that are multiplexed over the one connection. This makes it possible to send a higher-priority small file without waiting for the transfer of a large file that's already in progress to terminate.

Compressing the requests is helpful in typical ADSL/cable setups, where uplink speed is limited. For good measure, unnecessary and duplicated headers in requests and responses are done away with. SPDY also includes real server push and a "server hint" feature.
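(To make the framing idea concrete, here is a toy Python sketch of prioritized, header-compressed frames multiplexed over one connection. It illustrates the concept only, with invented frame fields and priority numbers; it is not the actual SPDY wire format.)

import heapq
import itertools
import zlib

# Toy model of the framing idea described above: each request becomes a small
# frame tagged with a stream id and a priority, and all frames share one
# connection. Illustration only, not the SPDY wire format.

_counter = itertools.count()  # tie-breaker so heapq never compares dicts

class Connection:
    def __init__(self):
        self._queue = []  # (priority, seq, frame); lower number = higher priority

    def send_request(self, stream_id, headers, priority):
        # Header blocks are compressed, as SPDY does for request/response overhead.
        block = zlib.compress("\r\n".join(f"{k}: {v}" for k, v in headers.items()).encode())
        frame = {"stream": stream_id, "type": "SYN_STREAM", "payload": block}
        heapq.heappush(self._queue, (priority, next(_counter), frame))

    def flush(self):
        # High-priority small requests go out ahead of lower-priority ones,
        # instead of queueing behind them or opening extra TCP connections.
        while self._queue:
            priority, _, frame = heapq.heappop(self._queue)
            yield priority, frame

conn = Connection()
conn.send_request(1, {"method": "GET", "url": "/big-background.jpg"}, priority=3)
conn.send_request(3, {"method": "GET", "url": "/style.css"}, priority=0)
for prio, frame in conn.flush():
    print(prio, frame["stream"], len(frame["payload"]), "bytes of compressed headers")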

On the SPDY white paper page, the Google researchers show a speed increase of up to 50 percent.

So should we all praise Google and switch to SPDY forthwith? Not quite yet. With the mandatory SSL encryption and gzip compression, SPDY will hit server and client CPUs much harder than traditional HTTP. Of course HTTP also runs over SSL in many cases, but there's also lots of content out there that doesn't need encryption. Making SSL mandatory is a strange move that has the potential to increase the number of people who don't bother getting a proper certificate for their server, meaning that users will become even more blasé about ignoring the resulting security warnings. This, in turn, would pave the way for more man-in-the-middle attacks.
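(A quick Python sketch of the trade-off being described: compressing a request's headers with zlib saves bytes on a slow uplink but costs CPU time on both ends, and SSL adds further cost on top. The header text below is invented for illustration.)

import time
import zlib

# Rough sketch of the compression trade-off discussed above. The header text
# is made up; real savings depend heavily on cookies and header repetition.

headers = (
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/532.5 Chrome/4.0\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Language: en-us,en;q=0.5\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Cookie: session=0f8c2b7e9a; prefs=lang-en.tz-utc; tracker=7741\r\n"
).encode()

start = time.perf_counter()
compressed = zlib.compress(headers, 6)
elapsed = time.perf_counter() - start

print(f"{len(headers)} -> {len(compressed)} bytes, {elapsed * 1e6:.0f} microseconds of CPU per request")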

On small devices, SSL slows down the communication significantly, and because it can't be cached, SSL-protected sites are often slower on big machines as well. The extra CPU cycles also mean that more servers are needed to handle the same number of clients.

It also looks like this protocol is designed by Web people, rather than network people. How the IETF applications area will respond to this effort is a big unknown. For instance, one thing that isn't mentioned in the protocol specification is how a browser knows that it should set up a SPDY connection rather than an HTTP connection. Are we going to see SPDY:// in URLs rather than HTTP:// ? That wouldn't work with browsers that don't support the new protocol.

It's for reasons like this that the IETF isn't a big fan of replacing protocols wholesale. It's much more in line with the IETF way of doing things to add the new features proposed in SPDY to a new—but backward-compatible—version of HTTP. Designing a new protocol that does everything better than an existing protocol usually isn't the hard part. The real difficulty comes in providing an upgrade path that allows all the Internet users to upgrade to the new protocol in their own time such that everything keeps working at every point along that path.

This is something the SPDY developers recognize. There are proposals for running HTTP over SCTP, a protocol similar to TCP, but with the ability to multiplex several data streams within a single session. That would have some of the same advantages as SPDY. Unfortunately, most home gateways don't know about SCTP and can only handle TCP and UDP, so HTTP over SCTP would face a long, uphill battle, not unlike IPv6, but without the ticking clock that counts down the available IPv4 addresses.

That said, it's good to see interest in improving the underlying technologies that power the Web, and Google should be applauded for taking the discussion in a new direction. There's still a lot to be done in this space.

This is just funny to me. I do agree that the protocol itself could use updating, but I'm not sure about this step.

EDIT: added an ArsTechnica article