While we are tightening our infrastructure, we may also consider
letting guests into our setup. As you might expect, we intend to
do this in the tightest possible manner.
During the SecureHub phase, we settled on a number of authentication mechanisms, notably the TLS-KDH mechanism that we built into our TLS Pool so it would be incredibly easy to use. We also started off on realm crossover and are following up on that in the next project phase, IdentityHub.
Authorisation is an entirely different matter than authentication. Once we are handed reliably validated remote identities, what can we do to assure desired access is granted? As it turns out, there are two kinds of authorisation in the InternetWide Architecture to deal with.
Authorisation for Resource Access
This is the most well-known form of authorisation, precisely because most projects are looking inward only. (This is why we use and need a second kind of authorisation, but more on that later.) The general idea to keep in mind is that resources are generally made available internally.
Note that public-access resources are still considered internal in the following; it just so happens that the internal world has become rather large for that particular resource.
The way we authorise resource access is as follows:
- We have obtained a REMOTE_USER identity from an authentication mechanism.
- Given the resource being asked for, the paths being followed, and so on, there may be a call for an intermediate AUTHZ_USER, or a user that the resource-accessing party wants to pose as. We generally allow this when the arrows of identity inheritance permit it. We then continue with the AUTHZ_USER as the new REMOTE_USER. Note how this step is optional, but fatal when it breaks the arrows of the inheritance diagram.
- We now look up an AUTHZ_RESOURCE, or perhaps a more generally configured AUTHZ_POLICY, which applies to the resource being used. This resource would provide us with ACLs for a number of purposes, such as Writers and Originators. Based on matches of the REMOTE_USER against each of these, the permissions to be granted can now be computed. Note that some overrule others, such as the Writers that are all Readers too.
- While determining the privileges, we may find that the identities to grant them all are in fact not the REMOTE_USER but even more specific ones. Perhaps a user changes to a group member before it can continue. When this is needed to fulfil the ACL, the REMOTE_USER will be updated accordingly, and the result is passed back to the authorisation requester as its identity to use. TODO: When we return a list of rights, this might be ambiguous! But it oughtn't be -- we generally don't want to write with another user name than the one used to read the same data. To be resolved.
We now have a set of rights, together with a REMOTE_USER name that can be used to exercise these rights. We are ready to go!
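The steps above can be sketched in a few lines of Python. Everything here is illustrative: the IMPERSONATION map, the POLICIES table and the crude selector test all stand in for real IdentityHub data and for proper DoNAI Selector matching; none of these names come from a published ARPA2 interface.

```python
IMPERSONATION = {
    # arrows of the identity inheritance diagram: who may pose as whom
    "john@example.org": {"sales@example.org"},
}

POLICIES = {
    # AUTHZ_POLICY per resource: (selector, rights, more specific identity)
    "/shared/report": [
        ("sales@example.org", "rw", "sales+john@example.org"),
        ("@example.org",      "r",  None),
    ],
}

def authorise(remote_user, resource, authz_user=None):
    # Optional step: switch to AUTHZ_USER; fatal outside the arrows.
    if authz_user is not None:
        if authz_user not in IMPERSONATION.get(remote_user, ()):
            raise PermissionError("inheritance arrows forbid this switch")
        remote_user = authz_user
    # Look up the applicable policy and compute rights from ACL matches;
    # a match may refine REMOTE_USER to a more specific identity.
    for selector, rights, identity in POLICIES[resource]:
        concrete_hit = selector == remote_user
        catchall_hit = selector.startswith("@") and remote_user.endswith(selector)
        if concrete_hit or catchall_hit:
            return rights, identity or remote_user
    return "", remote_user          # no match: no rights granted
```

Note how the caller gets back both the rights and the identity to exercise them with, exactly as described above.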
The entries on each ACL are formatted as DoNAI Selectors, which means that they may capture anything from a perfectly concrete address to catch-alls for a whole domain, or even everyone, anywhere, anytime. If you like, you may care to review some examples of DoNAI Selectors.
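To give an impression, a simplified matcher for such selectors might look as follows. This is a sketch that covers only the broad strokes of the actual DoNAI Selector grammar; for instance, it treats a leading dot as a plain suffix match.

```python
def selector_matches(selector: str, donai: str) -> bool:
    """Crude sketch of DoNAI Selector matching, not the full grammar.

    Examples of selector forms:
      john@example.org   concrete address
      @example.org       any user at one domain
      john@.             one user at any domain
      @.                 anyone, anywhere
    """
    sel_user, sel_dom = selector.split("@", 1)
    user, dom = donai.split("@", 1)
    if sel_user and sel_user != user:
        return False                      # user part must match when given
    if sel_dom == ".":
        return True                       # any domain
    if sel_dom.startswith("."):
        return dom.endswith(sel_dom)      # domain suffix, e.g. '.org'
    return sel_dom == dom                 # exact domain
```

So `selector_matches("john@.", "john@example.org")` holds, while the fully concrete `john@example.org` never matches anyone else.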
Authorisation for Communication
The InternetWide Architecture is, well, Internet-wide. This means that it also handles authentication and authorisation between realms. Not everyone is our friend, and so not everyone may be welcome to talk to us.
To be able to communicate between realms, we employ black and white lists, with authorisation outcomes that may be black, white or sometimes gray.
Outbound communication is always permitted; when contacting a new remote peer, its address will silently be added to the white list. The peer may contact us with replies, and we should be open to it, at least initially. Do note that only the exact address of the peer is added: not its domain, not its aliases, and not its marketing mail generator's addresses.
Inbound communication is filtered through the black and white lists. Like other ACLs, each list holds DoNAI Selectors with the patterns of senders that are welcomed. More on that can be found here.
Depending on the presence or absence of black and white lists, there are a few approaches:
- Neither list is present: Communication is prohibited.
- Only a white list is present: Communication defaults to rejection, but the white list may overrule that.
- Only a black list is present: Communication defaults to acceptance, but the black list may overrule that.
- Both are present: Find the most concrete matches on each list; discard those that have more concrete entries on the other list; hope to be left with either black or white entries.
- When left with both entries, proceed through gray listing.
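The decision table above could be sketched like this; `matches()` and `more_concrete()` are deliberately naive stand-ins for real DoNAI Selector logic, and the default of rejecting when neither list matches is our own assumption.

```python
def matches(sel, donai):
    # naive selector match: empty user part or dot-domain acts as wildcard
    su, sd = sel.split("@", 1)
    u, d = donai.split("@", 1)
    return su in ("", u) and (sd == "." or sd == d or d.endswith(sd))

def concreteness(sel):
    # per-dimension score: (user named?, domain: any < suffix < exact)
    su, sd = sel.split("@", 1)
    return (su != "", 0 if sd == "." else (1 if sd.startswith(".") else 2))

def more_concrete(a, b):
    ca, cb = concreteness(a), concreteness(b)
    return ca[0] >= cb[0] and ca[1] >= cb[1] and ca != cb

def decide(sender, blacklist, whitelist):
    if not blacklist and not whitelist:
        return "reject"                            # neither list present
    b = [s for s in blacklist if matches(s, sender)]
    w = [s for s in whitelist if matches(s, sender)]
    if not blacklist:
        return "accept" if w else "reject"         # white list only
    if not whitelist:
        return "reject" if b else "accept"         # black list only
    # both present: discard matches overruled by a strictly more
    # concrete match on the other list
    bkeep = [s for s in b if not any(more_concrete(t, s) for t in w)]
    wkeep = [s for s in w if not any(more_concrete(t, s) for t in b)]
    if bkeep and wkeep:
        return "gray"                              # resolve via gray listing
    if bkeep:
        return "reject"
    return "accept" if wkeep else "reject"
```

For instance, with `@.net` on the black list and `john@.` on the white list, a message from `john@example.net` ends up gray, because neither entry is more concrete than the other.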
The topic of "most concrete matches" is concerned with how abstract the DoNAI Selectors are. Very abstract forms are like john@., and they are all more abstract than a fully concrete firstname.lastname@example.org, so they would cancel against it. This can be used to remove entries from the white list against the black list, as well as in the other direction, but it may also be used within a list.
Note that a list may not have a single most concrete value. Both the values john@. and @.net are abstractions of john@example.net, yet they are not ordered relative to one another. The same may apply when comparing entries between black and white lists. In fact, there are two dimensions of abstraction, namely the user name and the domain name, so up to two entries may be left on each list.
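This two-dimensional ordering can be made concrete in a few lines; the scoring below is our own assumption, chosen only to capture the idea.

```python
def abstraction(sel):
    # (user dimension, domain dimension); lower means more abstract
    user, dom = sel.split("@", 1)
    return (0 if user == "" else 1,
            0 if dom == "." else (1 if dom.startswith(".") else 2))

def comparable(a, b):
    # True iff one selector abstracts the other in BOTH dimensions
    ua, da = abstraction(a)
    ub, db = abstraction(b)
    return (ua <= ub and da <= db) or (ub <= ua and db <= da)
```

Here `comparable("john@.", "@.net")` is False: `john@.` is more concrete in the user dimension, `@.net` in the domain dimension, so neither abstracts the other, even though both abstract `john@example.net`.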
When something remains on both the black and white list, there is a need for gray listing, which is meant to resolve the uncertainty by trying to get the local user to communicate with the remote peer. The current best method is to contact the user over XMPP with a contact request, perhaps after trying to get one approved by the initiating remote peer, which should of course have opened up for us already. While all this is going on, communication is on hold; this may mean that it is deferred (SMTP gray listing), ignored (interactive protocols like SIP), or not consumed yet (AMQP queue processing). There may well be a timeout triggered in response to this, which is why we think that XMPP is the best device for doing this. The hope is to always end up with a concrete entry on the black or white list, so that future uncertainty will not arise.
The middleware discussion pointed out the TLS Pool for authentication, and an open block for authorisation. The logic for that block has hereby been given more clarity. Do let us know how you feel about the directions taken, and the choices made!
Clearly, the implementation should work out in advance the various complexities that are caused by the model, especially the inheritance diagram.
Responses are probably going to be provided in a number of forms. Our Diameter format over SCTP is a likely one. Another may well be the Auth Request module in Nginx, which makes a simple HTTP subrequest and interprets the response code as an authorisation choice.
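For the Nginx route, the standard auth_request wiring would look roughly like this; the backend address and the extra header are placeholders for whatever the authorisation service ends up expecting:

```nginx
location /protected/ {
    # Nginx issues a subrequest to /authz for every request here;
    # a 2xx response allows it, 401/403 denies it.
    auth_request /authz;
}

location = /authz {
    internal;
    proxy_pass              http://127.0.0.1:8080;  # authorisation service (placeholder)
    proxy_pass_request_body off;                    # the decision needs headers only
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
```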
Something not decided on yet is whether we should also support the OpenStack Keystone API, and would therefore service OpenStack components beyond Swift, which may be of interest to the hosting providers that we want to help forward.
The reason for doubt is that we currently think of OpenStack authorisation as tenant-level, so perhaps domain-level authentication, rather than a direct match for all the users, aliases and groups that we develop under the InternetWide Architecture. We suspect that this is also going to be the level at which any other components from OpenStack are going to be of use to the hosting provider — domains own things like virtual machines, and in turn those virtual machines hold accounts and groups to welcome the plethora of user identities from the InternetWide Architecture. Here's an example /etc/passwd:
john      :x:1000:1000:John Examplar:user :/home/john      :/bin/bash
john+singr:x:1000:1000:John Singer  :alias:/home/john/singr:/bin/bash
john+sales:x:1000:1100:John Salesman:user :/home/john/sales:/bin/bash
mary      :x:1001:1001:Mary Examplar:user :/home/mary      :/bin/bash
mary+sales:x:1001:1001:Mary Saleswoo:user :/home/mary/sales:/bin/bash
sales     :x:1100:1100:Sales Account:group:/home/sales     :/sbin/nologin
sales+john:x:1101:1100:Salesman John:grpmb:/home/sales/john:/bin/bash
sales+mary:x:1102:1100:Saleswoo Mary:grpmb:/home/sales/mary:/bin/bash
...and this is a portion of an example /etc/group:
john :x:1000:john
mary :x:1001:mary
sales:x:1100:john,mary
...showing that very funny things can be done; we can set the same user id for an alias, but end up in another root directory. Does this stuff work? Not completely sure, but it ought to. Anyhow, that's just a quick impression of the level where it does seem to make more sense to pick up on users and groups, much more than at the level of OpenStack Keystone. For now, we assume that we will implement some form of Keystone, but just for domain names annex realms.
It is very, very likely that we will add support for existing IdP systems, such as OAuth, OAuth2, OpenID Connect, SAML-based authentication and authorisation, and perhaps Mozilla Persona; we would simply use our own mechanisms as local access mechanisms to get to the internal IdP, and then make that flood out to everywhere the user wants to surf today.
Image credit: Azmie Kasmy — the work has been edited to fill a horizontal bar.