Rick van Rein
Published Thu 04 February 2016


New Web Era 1: Frontends and Backends

We have published articles on pernicious developments on the web; now it is time to explain how we expect this to improve under the InternetWide Architecture. As usual, our approach is practical, but we don’t shy away from adopting new standards if they improve the overall situation.

This document is part of an article series on Web Architecture.

We often hear remarks such as “everything should be done over TLS”, but this discussion barely touches upon actually securing the web infrastructure. As an example, hardly anyone realises that anyone can pose as a TLS client. As a weak protection against this we have grown used to passwords, but those are barely a challenge to a determined attacker. So what can we do to actually make life better, while keeping the whole system practical and usable?

Statics and Backends

The web began with static pages, later extended with dynamic pages, until entire web sites became server-generated; nowadays, clients embellish a static base site by rendering small pieces of dynamic data into it. This dynamic data often comes from a separate component that is plugged into the web server. The web server can then be simplified to serving static pages, and only be involved with dynamic content by proxying requests to purpose-specific backends.

Recall our intent to split domain hosting into identity hosting and domain-related services. In this architecture, the proxy web server is best located at the identity host, and plugin services are a perfect fit for the service hosts. This way, the backends can offer configuration, browsing and whatever else they like through the identity host’s frontend web server. It also enables the combination of multiple services in that one frontend.

Crossing the Channel

One problem to solve in this infrastructure is the connectivity between the proxy/frontend and the backend. Since these are located at different parties, they may cross long distances of the Internet. Moreover, they may cross between independent realms of trust, so there must be some form of authentication and perhaps encryption. If we followed popular demand we would simply “do it all over TLS”, but we can be smarter than that.

The static pages are relatively simple to handle; any filesystem caching mechanism will do, with emphasis on publishing updates quickly. A very simple mechanism could be authorised uploads from the backend to the frontend/proxy, using a tool like sftp or rsync.
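
As a minimal sketch of such an authorised upload, assuming the backend holds an SSH key that the frontend accepts (the host name and paths below are made up for the example), a push over rsync could look like this:

    import subprocess

    # Hypothetical paths and host; replace with real values for your
    # backend's document tree and the frontend's upload area.
    LOCAL_STATIC = "/srv/backend/static/"
    FRONTEND_TARGET = "upload@frontend.example.com:/srv/www/static/"

    def publish_static():
        """Push the static site to the frontend/proxy over SSH.

        -a preserves permissions and timestamps, -z compresses in
        transit, and --delete removes files that disappeared locally,
        so the frontend mirrors the backend exactly.
        """
        subprocess.run(
            ["rsync", "-az", "--delete", LOCAL_STATIC, FRONTEND_TARGET],
            check=True,
        )

    if __name__ == "__main__":
        publish_static()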

The dynamic parts of a website usually take the shape of JSON or XML snippets; backends rarely work with large blocks of information at a time, but their responsiveness directly influences the feeling of interactivity. This means that the backend must be designed for this purpose. This guided us towards a number of interesting choices:

  • Connections between a frontend and backend are bulk connections. They carry traffic for lots of sessions at once, preferably everything that links the frontend and backend. This evades the setup delay for new connections, especially the extra exchanges and heavy-duty computations of cryptographic authentication and key agreement;

  • Connections between a frontend and backend are made over SCTP. On a bulk channel, the separation of frames is helpful. In addition, most dynamic frames will be smaller than SCTP’s 64k frame limit, in which case they can be sent without care for delivery order, but still reliably; this avoids the head-of-line blocking to which TCP can be subjected (a sketch of such a multiplexed channel follows this list).

  • One connection between a frontend and backend can carry more than just web traffic. Additional streams can be made for logging and monitoring, and for general service management.
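
To illustrate what such a bulk channel could look like, here is a sketch in Python using the third-party pysctp library; the backend address and the per-session stream assignment are our own example choices, and the exact pysctp API may differ between versions:

    import socket
    import sctp  # third-party pysctp package; API assumed as documented

    # Hypothetical backend address for the long-lived bulk connection.
    BACKEND = ("backend.example.com", 9000)

    # One one-to-one SCTP association carries all sessions at once.
    sk = sctp.sctpsocket_tcp(socket.AF_INET)
    sk.connect(BACKEND)

    def send_for_session(session_id: int, payload: bytes):
        """Send one message on the SCTP stream pinned to a session.

        Separate streams mean a lost packet in one session does not
        delay delivery in the others: no cross-session head-of-line
        blocking. The stream number must stay below the stream count
        negotiated when the association was set up, hence the modulus.
        """
        sk.sctp_send(payload, stream=session_id % 8)

    send_for_session(3, b'{"widget": "inbox", "action": "refresh"}')
    send_for_session(7, b'{"widget": "calendar", "action": "today"}')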

Example: HTML, CSS, JavaScript, JSON

Imagine a web server publishing static files containing HTML, CSS and JavaScript. Users may log on to this system and access dynamically generated content, insofar as they are authorised for it. The backend defines authorisation levels for users and service administrators, and the frontend is configured to map its identities to these authorisation levels.

  • The protocol run over the SCTP connection between frontend and backend is supposed to be neutral to programming languages and other technological choices. A surprisingly potent mechanism to this end is the relatively old FastCGI de-facto standard protocol. It is already in wide use for backend connectivity within a web server, but with a fresh TCP connection for each backend access, which would work badly over long distances; we should instead pass it over an SCTP stream. The nice thing about FastCGI is that it was specified with multiplexing capabilities, thus leveraging the flexibility that stems from an SCTP carrier (a record-framing sketch follows this list).

  • The web proxy handles various forms of access control; our favourite example is TLS-KDH, because it is a single sign-on system that can be used across realms and across application protocols. Based on the authenticated identity, the frontend can assign one of the authorisation levels. In addition, it may apply pseudonymity and roles if the users so desire.

  • Any dynamic portions of the web site are passed to the frontend as JSON fragments over such authenticated HTTPS connections. This means that the frontend knows they can be trusted, but the backend must still be convinced.

  • Now assume that one or more secret keys have been agreed between the frontend and backend. These keys can be symmetric, to keep their interaction really simple. Using the appropriate key, the frontend can wrap the JSON request into a standard JSON Web Token, using signing and/or encryption in a one-to-one fashion (see the token sketch after this list). The symmetric mechanisms skip the hard work of public-key crypto, making them lightweight while still providing the desired protection against rogue access to the backend.
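
To make the FastCGI remark above concrete: every FastCGI record starts with a fixed 8-byte header, and its requestId field is what lets many requests share one carrier. The header layout follows the FastCGI specification; the framing helper is a sketch, not a complete client:

    import struct

    FCGI_VERSION_1 = 1
    FCGI_STDIN = 5  # record type; the specification defines several more

    def fcgi_record(rec_type: int, request_id: int, content: bytes) -> bytes:
        """Frame one FastCGI record.

        Header layout (8 bytes): version, type, requestId (16-bit
        big-endian), contentLength (16-bit), paddingLength, reserved.
        The requestId multiplexes concurrent requests over a single
        carrier, such as one SCTP stream.
        """
        header = struct.pack("!BBHHBB", FCGI_VERSION_1, rec_type,
                             request_id, len(content), 0, 0)
        return header + content

    # Two interleaved requests on the same connection, told apart
    # solely by their request IDs.
    wire = (fcgi_record(FCGI_STDIN, 1, b'{"query": "inbox"}') +
            fcgi_record(FCGI_STDIN, 2, b'{"query": "calendar"}'))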
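
And to illustrate the symmetric wrapping itself, a minimal sketch with the PyJWT library, signing with HMAC-SHA256 under a shared key; the key and the claim names are placeholders of our own choosing:

    import jwt  # the PyJWT package

    # Placeholder secret, agreed out-of-band between frontend and
    # backend; being symmetric, it needs no public-key operations.
    SHARED_KEY = b"frontend-backend-shared-secret"

    def wrap_request(user: str, level: str, payload: dict) -> str:
        """Frontend side: sign the request so the backend can verify
        that it passed through the trusted frontend."""
        claims = {"sub": user, "authz": level, "req": payload}
        return jwt.encode(claims, SHARED_KEY, algorithm="HS256")

    def unwrap_request(token: str) -> dict:
        """Backend side: reject anything not signed with the shared key."""
        return jwt.decode(token, SHARED_KEY, algorithms=["HS256"])

    token = wrap_request("alice", "user", {"widget": "inbox"})
    print(unwrap_request(token)["req"])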

Solutions ought to be Generic

An approach based on JSON Web Tokens provides a good cryptographic mechanism, and it is relatively easy to get going. What it is not, is general. It covers JSON applications, but a parallel mechanism for XML-based AJAX would introduce the severe complexities of XML Digital Signatures, and many other backend formats are not covered at all. Moreover, non-web applications are not covered either. This is why we chose a different, more general approach for the InternetWide Architecture.

We let the backend register with the frontend using GSS-API over one of the bulk SCTP streams (a sketch follows below). This means that the backend logs on as part of the Kerberos5 realm, and services are created for paths like HTTP/www.example.com/svc/mail. Similar patterns can be defined for non-web services such as monitoring and logging. The backend would execute administrative commands over different paths than user commands, and rely on the web proxy to apply access control to these paths. The proxy contains no application code, so it is far less likely to be corrupted than the backend with all its application logic. Meanwhile, GSS-API protects the link to the backend from rogue access just like JSON Web Tokens would, or better, since all symmetric keys are regularly replaced under GSS-API.
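
As a sketch of the backend side of such a registration, using the python-gssapi bindings; the service principal follows the example above, and whether a KDC accepts such multi-component principals, as well as the exact call signatures, are assumptions on our part:

    import gssapi

    # Path-like service principal from the example above (assumed to
    # be registered in the Kerberos5 realm, e.g. via a keytab).
    SERVICE = gssapi.Name("HTTP/www.example.com/svc/mail",
                          gssapi.NameType.kerberos_principal)

    # The backend logs on with the credentials of its service principal
    # and accepts a security context from the frontend.
    creds = gssapi.Credentials(name=SERVICE, usage="accept")
    ctx = gssapi.SecurityContext(creds=creds, usage="accept")

    def register_step(token_from_frontend: bytes):
        """Run one step of the GSS-API handshake; the tokens travel
        over one of the bulk SCTP streams to the frontend."""
        return ctx.step(token_from_frontend)

    # Once ctx.complete is True, traffic can be protected with session
    # keys that Kerberos replaces regularly:
    #   wrapped = ctx.wrap(b'{"cmd": "status"}', encrypt=True).message
    #   plain = ctx.unwrap(wrapped).message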

As far as end users are concerned, their separate passwords for the various sites they use are all replaced by the single sign-on benefits of Kerberos. This makes security much easier to manage, especially because Kerberos tickets last only for a day or so, and a derived ticket is tailor-made for each web site visited.

The internal structure of InternetWide web services is more complex than the current localised LAMP stack, but we have seen the progress of the LAMP platform come to a stifling halt since its inception, to the point where the distributive nature of the Internet is systematically in danger. The added complexity of the InternetWide Architecture exists solely to achieve the flexibility that arises from separating the roles of identity providers and service providers, which is meant to create a flourishing market for service offerings. Where an identity provider would focus chiefly on arranging access control, privacy and security, there is an additional revenue option for service providers if they focus on specialised plugin facilities that can be used under any hosted identity on the Internet. The current “central” services on the Internet would co-exist with new, independent services that can connect liberally among each other because they adhere to standard protocols. And that is a future Internet that we believe is worth the investment of some added complexity.
