Give a technician a hammer, and soon she'll see nails everywhere.
Why use other tools, if a hammer works so well?
This is pretty much the position that HTTP is in, and it is a position
that is far from deserved. A healthy Internet requires a plethora of
protocols, each optimised for its particular purpose.
Most of the things we do online run over HTTP, aka "the web". We tend to think of it as a properly standardised protocol. In reality, this is only true in a very limited sense; HTTP is at its heart a protocol for pushing and pulling documents from remote locations, larded with little more than a MIME type, a filename and a language.
This model is poor in many ways, and leads to problems that testify to the use of an unsuitable protocol; these problems have been patched with things like WebSockets and Server-Sent Events to keep the model afloat. But at its heart, HTTP is not suitable for much of what it is made out to be today. Or has anyone spotted offline webmail reading, cross-site integration of travel bookings, or comfortable editing of local files?
Shrink-wrapping data with code can also be seen as a way to circumvent proper specification of the data. And that is expensive for users: it means they cannot use their own software to operate on the data, not unless they are willing to figure it all out by themselves and risk future breakage when the data format changes. The end result is that web-based access to a data source leads to one-sided automation, a situation that properly specified, purpose-specific protocols prevent. Since HTTP is not specific to the purpose at hand, it should not be used for everything. HTTP is as good a tool as any hammer, but not everything is a nail; there are no one-size-fits-all protocols.
There are many counter-examples, all of which are being tried over HTTP and all of which have led to problems; these problems have been addressed in a somewhat generic manner, and the results are still less useful than anything purpose-specific:
- chat is best done using XMPP or the older IRC protocols (problem: HTTP pulls documents, but messages may need to be pushed downstream)
- telephony, with or without video, works best over SIP (problem: realtime traffic is best sent outside of connections like the one HTTP maintains)
- data is best passed over LDAP (problem: data definitions and syntaxes are local and undefined when using JSON)
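The JSON point can be made concrete. Below is a minimal sketch, with made-up payloads, of how two HTTP services may expose "the same" record without any shared data definition; a directory protocol like LDAP avoids this with standardised attribute types and syntaxes:

```python
import json

# Hypothetical payloads: two services publish "the same" person record,
# but JSON imposes no shared schema, so names, types and formats diverge.
service_a = json.loads('{"name": "Ada Lovelace", "born": "1815-12-10"}')
service_b = json.loads('{"fullName": "Ada Lovelace", "birthYear": 1815}')

# A client written against one service cannot read the other without
# per-service glue code -- the one-sided automation described above.
print(sorted(service_a))  # ['born', 'name']
print(sorted(service_b))  # ['birthYear', 'fullName']
```

Each new service means new glue code, written once per client, per service; a specified data model would have made the clients interchangeable.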
There is a growing tendency to specify APIs on top of HTTP. This does indeed address the lack of specification that stems from HTTP's generic nature; at least when properly worked out. But it comes loaded with the properties of HTTP, which are not always advantageous:
- the need to escape characters
- the requirement to parse with a security mindset
- lacking reasonable support for binary strings
- the unnecessarily strict request/response lockstep
- a very low threshold for defining new interfaces, possibly leading to lower-quality specifications
- openness to "not invented here" variation: incompatibility without need
- very often, the localisation of a service is not standardised
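Two of the items above, character escaping and the lack of binary strings, combine in practice: binary data must be base64-wrapped before it fits into a JSON-over-HTTP API. A small illustration (the payload is arbitrary):

```python
import base64
import json

# An arbitrary 256-byte binary payload; JSON strings cannot carry it raw.
blob = bytes(range(256))

# The usual workaround is base64, which costs an extra encode/decode step
# on both ends and inflates the payload by roughly a third.
encoded = base64.b64encode(blob).decode("ascii")
wrapped = json.dumps({"data": encoded})

print(len(blob), len(encoded))  # 256 344
```

A binary-clean protocol would simply carry the 256 bytes, length-prefixed, with no escaping and no size penalty.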
The problem of service localisation (that is, discovering where the service for a given domain name is hosted) has more impact than one may expect at first sight. It is not easy, for example, to use identities under one's own control (such as one's own domain name) and announce under them the HTTP interface we currently rely on; an interface which may change when, say, a user agreement changes to our disadvantage. The generic nature of HTTP, and its resulting versatility, make it unlikely that such facilities will ever be made generally available; for most purpose-specific protocols, however, localisation facilities have been defined. This is usually done through DNS records, which are relatively easy to define given full awareness of the purpose at hand.
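As a sketch of such DNS-based localisation: SRV records (RFC 2782) locate a service through a well-known name built from the service, the transport and the domain, so a client can find the chat or telephony server for any domain without hardcoding a provider. The domain example.org below is just a placeholder:

```python
# Sketch of RFC 2782 service localisation. A client builds the well-known
# SRV owner name and would then resolve it in DNS to find host and port;
# example.org is a placeholder domain.
def srv_name(service: str, proto: str, domain: str) -> str:
    return f"_{service}._{proto}.{domain}"

# The XMPP chat service for example.org:
print(srv_name("xmpp-client", "tcp", "example.org"))
# _xmpp-client._tcp.example.org

# The SIP telephony service for the same domain:
print(srv_name("sip", "udp", "example.org"))
# _sip._udp.example.org
```

Because the owner name is derived purely from the user's own domain, the service can be re-hosted elsewhere by updating a DNS record, with no change to any client.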
The HTTP approach to the localisation problem is simply to have one "central" service and use it for the whole world. This leads to centralisation of the Internet and its protocols, which is generally bad for users and their privacy; their communications now belong to the website owner, and it is usually not possible to escape without breaking communication bonds. Though HTTP may be pleasant as a wrapper for special purposes, it is easily beaten by the higher level of automation, and the retained control, of purpose-specific protocol implementations. This control over one's own communication does not usually cease when using a large-scale deployer, simply because that is not how purpose-specific protocols are designed; they support large-scale hosting, but they do not demand selling one's soul.
In conclusion, HTTP is a powerful tool for exchanging documents as-is, but it is generic in nature. This is perfect for that level of abstraction, but it turns into poverty when we try to stretch the model beyond its capacities. Even though HTTP has been put on steroids with many new extensions, these too are kept general where specificity would be desired, while at the same time they can be too specific to capture a protocol's requirements well. The tendency to abandon purpose-specific protocols for hacks on top of HTTP is not a sign of proper engineering.