Rick van Rein
Published Sun 16 July 2017

NGI 2: The poverty of HTTP

Give a technician a hammer, and soon she'll see nails everywhere. Why use other tools if a hammer works so well? This is pretty much the position HTTP is in, and it is far from well-deserved. A healthy Internet requires a plethora of protocols, each optimised for its particular purpose.

This is part of a series of technical articles in response to the European Commission's initiative to explore the Next Generation Internet, an initiative that we wholeheartedly support.

Most of the things we do online run over HTTP, aka "the web". We tend to think of this as a properly standardised protocol. In reality, this is only true in a very limited sense; HTTP is at its heart a protocol for pushing documents to and pulling them from remote locations, larded with little more than a MIME type, a filename and a language.
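
To see how little that is, here is a minimal sketch in Go (fetching the placeholder URL https://example.org/) that prints essentially all the metadata HTTP attaches to a pulled document: a MIME type, an optional filename hint and an optional language. Everything beyond that is left to the application.

```go
// A minimal sketch of HTTP's document-pull model: fetch a resource and print
// the little that HTTP itself says about it. The URL is a placeholder.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("https://example.org/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// This is essentially all that HTTP tells us about the document:
	fmt.Println("MIME type:", resp.Header.Get("Content-Type"))        // e.g. text/html; charset=UTF-8
	fmt.Println("filename: ", resp.Header.Get("Content-Disposition")) // often empty
	fmt.Println("language: ", resp.Header.Get("Content-Language"))    // often empty
}
```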

Contrast that with the modern tendency to pass all sorts of data over HTTP, including dynamic data. We use representations like XML and JSON, and pull it all together with a lovely JavaScript application.

This model is poor in many ways, and it leads to problems that are a testament to the use of an unsuitable protocol; these problems have been patched with things like WebSockets and Server-Sent Events to keep it all afloat. At its heart, however, HTTP is not suitable for much of what it is made out to be today. Or has anyone spotted offline webmail reading, cross-site integration of travel bookings, or editing of local files?

Let's start with JavaScript. This allows us to run applications straight in our browser. Though this saves us from installing a real application on our system, it also rules out using a real application in many cases. JavaScript is basically a custom-made wrapper for undefined data. That is what you end up writing when there is no specification to work from. Document push/pull is so abstract that many details need to be filled in to make it into, say, a chat application. Notwithstanding the fact that actual chat applications are available everywhere, people are eager to rebuild chat over HTTP, and they end up being incompatible with the chat applications that already existed. The entrance is zero-effort, but the price is paid in communication freedom.

Shrink-wrapping data with code can also be seen as a way to circumvent proper specification of the data. And that is expensive for users: it means that they cannot use their own software to operate on the data, not unless they are willing to figure it all out by themselves and risk future breakage when the data format changes. The end result is that web-based access to a data source leads to one-sided automation, something that is prevented when properly specified, purpose-specific protocols are used. Since HTTP is not specific to the purpose at hand, it should not be used for everything. HTTP is as good a tool as any hammer, but not everything is a nail; there are no one-size-fits-all protocols.
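
To illustrate that breakage risk, here is a minimal sketch in Go; the JSON shapes and field names are made up and stand in for some reverse-engineered web API. The client guessed the fields by inspecting traffic, and when the site silently renames one, decoding still "succeeds" while the value quietly disappears.

```go
// A minimal sketch of the breakage risk that comes with reverse-engineered,
// unspecified data formats. The JSON shapes below are made up for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

// Guessed by inspecting traffic, not read from any specification.
type Balance struct {
	Amount   float64 `json:"amount"`
	Currency string  `json:"currency"`
}

func main() {
	yesterday := []byte(`{"amount": 12.50, "currency": "EUR"}`)
	today := []byte(`{"amount_cents": 1250, "currency": "EUR"}`) // silent rename upstream

	for _, raw := range [][]byte{yesterday, today} {
		var b Balance
		// Unknown fields are ignored and missing fields become zero values,
		// so the upstream change is not even reported as an error.
		if err := json.Unmarshal(raw, &b); err != nil {
			fmt.Println("decode error:", err)
			continue
		}
		fmt.Printf("amount=%v %s\n", b.Amount, b.Currency)
	}
}
```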

There are many counter-examples, all of which are being tried over HTTP and all of which have led to problems; these problems have been addressed in a somewhat generic manner, and the result is still less useful than anything purpose-specific:

  • chat is best done using XMPP or the older IRC protocol (problem: HTTP pulls documents, but messages may need to be pushed downstream; see the sketch after this list)
  • telephony, with or without video, works best over SIP (problem: real-time traffic is best sent outside of connections like the one HTTP maintains)
  • data is best passed over LDAP (problem: data definitions and syntaxes are local and undefined when using JSON)
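
To make the first bullet concrete, here is a minimal sketch in Go of the polling workaround that HTTP's pull model forces on a chat client; the https://chat.example.org/messages endpoint and the X-Last-Seen cursor header are hypothetical. A purpose-specific protocol such as XMPP instead keeps one connection open and pushes each message down it as it arrives.

```go
// A minimal sketch of the polling workaround for HTTP's lack of downstream
// push. The endpoint and the X-Last-Seen cursor header are hypothetical.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	since := "0"
	for {
		// Every batch of "incoming" messages costs a full request/response
		// round trip, and arrives only as fast as the polling interval allows.
		resp, err := http.Get("https://chat.example.org/messages?since=" + since)
		if err != nil {
			time.Sleep(5 * time.Second) // back off and retry
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("polled: %s\n", body)

		if cursor := resp.Header.Get("X-Last-Seen"); cursor != "" {
			since = cursor
		}
		time.Sleep(2 * time.Second) // polling interval: latency versus server load
	}
}
```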

JavaScript is a very, very dangerous innovation. It made HTTP the generic application wrapper that it is today, but precisely that generic nature makes it damaging to its users. Advertisements can have adverse effects on privacy (think keystroke logging), and cross-site scripting leads to many a security shock, yet it is hallowed for its usefulness in enabling certain new tricks, as if opening up generic holes does not come with the responsibility of guarding what else comes through them.

And yet, a powerful movement drives more and more facilities into JavaScript, including sensitive ones. But this is a platform where anyone can run code, and where the average session includes a few sources that do not have the best interest of the user at heart, be they advertisements or the website's choice of behavioural tracking.

It is already common to handle credentials in JavaScript, and a new movement is even trying to place key material in the hands of this platform. In general, keys and passwords, as well as your local files, are the sort of information that should never land in foreign and unverified hands, and yet this is exactly what we are doing. Why, I wonder? In the case of credentials it is a testament to the poverty of HTTP authentication, but we will talk about that in a separate posting.

WebSockets were invented to allow applications, written in JavaScript and running in a browser, to access "real" protocols through an indirection over HTTP. Not only does this import the potential problems of HTTP itself, it also misses out on useful configuration hints that may be present in DNS (though wrapping that knowledge into code "solves" this, locally).

There is a growing tendency to specify APIs on top of HTTP. This does indeed address the lack of specification that stems from HTTP's generic nature, at least when the API is properly worked out. But it comes loaded with the properties of HTTP, and those are not always advantageous:

  • the need to escape characters
  • the requirement to parse with a security mindset
  • the lack of reasonable support for binary strings (see the sketch after this list)
  • the unnecessarily strict request/response lockstep
  • a very low entry threshold, possibly leading to lower-quality specifications
  • openness to "not invented here" variation: incompatibility without a need
  • very often, localisation of a service is not standardised
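
To make the escaping and binary-string bullets concrete, here is a minimal sketch in Go with a made-up payload: a JSON-over-HTTP API cannot carry raw bytes, so they are base64-wrapped into text, the payload grows by roughly a third, and every consumer must know out of band that this particular string field is really binary.

```go
// A minimal sketch of the binary-string problem in JSON-over-HTTP APIs:
// raw bytes cannot be carried as-is, so they are escaped into base64 text.
// The payload below is made up for illustration.
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte{0x00, 0xff, 0x10, 0x80, 0x7f} // arbitrary binary data

	// JSON has no binary type, so the bytes must be wrapped as text ...
	msg := map[string]string{
		"payload": base64.StdEncoding.EncodeToString(raw),
	}
	wire, _ := json.Marshal(msg)
	fmt.Printf("on the wire: %s (%d bytes to carry %d bytes of data)\n",
		wire, len(wire), len(raw))

	// ... and every consumer must decode it again, knowing out of band that
	// this particular string is not really a string at all.
	var parsed map[string]string
	json.Unmarshal(wire, &parsed)
	back, _ := base64.StdEncoding.DecodeString(parsed["payload"])
	fmt.Printf("decoded back: %v\n", back)
}
```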

The problem of localisation of a service has more impact than one may expect at first sight. It is not easy, for example, to use identities under one's own control (such as under one's own domain name) and to announce under them the HTTP-based interface that we currently rely on, which may need to change when, say, a user agreement changes to our disadvantage. The generic nature of HTTP and its resulting versatility make it unlikely that such facilities will indeed be made generally available; but localisation facilities have been defined for most purpose-specific protocols. This is usually done through DNS records, which are relatively easy to define given full awareness of the purpose at hand.
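
As a concrete illustration, here is a minimal sketch in Go, using example.org as a placeholder domain, of how a purpose-specific protocol such as XMPP localises its service: the client asks DNS for the SRV record that the domain owner published, and only then connects. Moving to another provider then comes down to changing that one DNS record, while an identity like user@example.org stays the same.

```go
// A minimal sketch of service localisation through DNS SRV records, as used
// by purpose-specific protocols such as XMPP and SIP. example.org is a placeholder.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Ask DNS: "where does chat for this domain live?" The answer is published
	// by the domain owner and may point at any host, including a large-scale
	// hosting provider, without changing the user's identity.
	_, addrs, err := net.LookupSRV("xmpp-client", "tcp", "example.org")
	if err != nil {
		fmt.Println("no SRV record found:", err)
		return
	}
	for _, srv := range addrs {
		fmt.Printf("connect to %s:%d (priority %d, weight %d)\n",
			srv.Target, srv.Port, srv.Priority, srv.Weight)
	}
}
```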

The HTTP approach to the localisation problem is simply to have one "central" service and use it for the whole world. This leads to centralisation of the Internet and its protocols, which is generally bad for users and their privacy; their communications now belong to the website owner, and it is usually not possible to escape without breaking communication bonds. Though HTTP may be pleasant as a wrapper for special purposes, it is easily beaten by the higher level of automation and the retained control of purpose-specific protocol implementations. This control over one's own communication usually does not cease when using a large-scale deployer, simply because that is not how purpose-specific protocols are designed; they support large-scale hosting, but not selling one's soul.

In conclusion, HTTP is a powerful tool for exchanging documents as-is, but it is generic in nature. That is perfect for that level of abstraction, but it turns into poverty when we try to stretch the model beyond its capacities. Even though HTTP has been put on steroids with many new extensions, these too are kept general where specificity would be desirable, and at the same time HTTP can be too specific to capture a protocol's requirements well. The tendency to abandon purpose-specific protocols for hacks on top of HTTP is not a sign of proper engineering.
