
So why do we need Refresh tokens in OAuth, after all?

Surely every programmer working with OAuth 2.0 has wondered: why do we need Refresh tokens at all, aren't Access tokens enough? 64 KB - that should be enough for everyone!

This topic is discussed quite actively: there is the question on Stack Overflow, and it is also being discussed on Habr. Actually, it was the discussion on Habr that prompted me to write this.

All the opinions offered by the commenters and authors concern the security of the two-token approach. And that is how it should be, because security is the main thing for an authorization / authentication framework! But let's be frank: in many cases the two-token approach gives no security gain at all compared to the simple and blunt single-token approach. Or at least the gain is not immediately visible...
“The Refresh token can be stored more securely!” - it can and should be, although almost nobody does.
“The Access token travels over the network more often, so the probability of it leaking is higher” - oh come on, we always use TLS, right?
“Leaking an Access token is just as terrible as leaking a Refresh token” - yes, that is also true, which is why the Refresh token should never end up in the browser at all...

There are many nuances, and many usage scenarios in which having two different tokens turns out to be useful; you just can't see them right away!

But there is one more argument that for some reason I have never seen mentioned, although in my opinion it fully explains why the Refresh token is needed and why you absolutely, positively cannot get by with the Access token alone.

Performance.

And we are not talking about a website or application with a mere million or even ten million users, no!

Let's think about how the services of Google, Facebook, and Twitter have to handle OAuth authentication.

So here is the backend of such a Big Service, and an Access token has just arrived. How does it check that the token is valid?
Well, it could look the token up in the database. But since we are talking about a Service with a capital letter, that Database will have to be big and powerful, also with a capital letter.

Yes, it looks like this Database is going to be a problem, doesn't it? Let's put it in memory, let's add a few replicated copies... Can we cope? Probably...
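To make that baseline concrete, here is a minimal sketch of the "look the token up on every request" approach; the sqlite3 table and column names are made up for illustration, not taken from any real service.

```python
# A sketch of per-request token lookup: every API call costs a trip to the token store.
import sqlite3

db = sqlite3.connect("tokens.db")  # hypothetical token database

def check_access_token(access_token: str):
    row = db.execute(
        "SELECT user_id, rights, expires_at FROM access_tokens WHERE token = ?",
        (access_token,),
    ).fetchone()
    if row is None:
        return None  # unknown or revoked token
    user_id, rights, expires_at = row
    return {"user_id": user_id, "rights": rights, "expires_at": expires_at}
```

Under heavy load this lookup happens on every single request, which is exactly why the Database has to be so big and fast.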

Or, you know what? Let's store all the information about the token inside the token itself! This is what is called a self-contained token. We will, of course, encrypt and sign it when creating it, and decrypt it and verify the signature when we receive it, then extract everything we need: the user name, the user's rights, and the token's expiration time.
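As a rough illustration, here is what issuing and checking such a self-contained token might look like with the PyJWT library; the signing key, claim names, and one-hour lifetime are assumptions for the sketch, and for brevity the token is only signed, not additionally encrypted.

```python
# A minimal sketch of a self-contained (signed) token using PyJWT.
import time
import jwt  # pip install PyJWT

SIGNING_KEY = "replace-with-a-real-secret"  # illustrative, not a real key

def issue_access_token(user_id: str, rights: list) -> str:
    claims = {
        "sub": user_id,                   # the user name
        "rights": rights,                 # the user's rights
        "exp": int(time.time()) + 3600,   # validity: one hour
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_access_token(token: str) -> dict:
    # Signature and expiration are checked locally; no database involved.
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
```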

Yes, this approach really seems to work! No need for a monstrously fast and responsive Token Database! Any of our servers (and we have a lot of them, we are distributed!) can easily check the token's validity and extract the necessary information about rights and access from it.

So... what do we do if the user changes their password? Shouldn't all the old tokens be invalidated? And what do we do if an administrator blocks the user or changes their rights?

So we can't do without a database after all! Fine, then let's check the user's rights not on every request, but only occasionally, at some reasonable interval. Say, at the moment the tokens are refreshed, that is, once an hour!

Yes, that is exactly how it works. Programmers implementing OAuth on the authorization-server side have a choice: they can check the token owner's rights every time an Access token comes in, or they can do it only occasionally, when creating a new pair of tokens.
The advantages and disadvantages of both approaches are obvious; the Big Services, however, have no real choice, and they work according to the self-contained token scheme.
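As a sketch of that second approach, here is roughly what a token-refresh handler could look like. It reuses the hypothetical issue_access_token() helper from the snippet above, and the in-memory dicts stand in for the real user and refresh-token databases.

```python
# A sketch of "check the user's rights only when refreshing tokens".
import secrets
from dataclasses import dataclass, field

@dataclass
class User:
    id: str
    rights: list = field(default_factory=list)
    blocked: bool = False

users = {"alice": User(id="alice", rights=["read", "write"])}  # stand-in user database
refresh_store = {}                                             # stand-in refresh-token store

def grant_refresh_token(user_id: str) -> str:
    token = secrets.token_urlsafe(32)
    refresh_store[token] = user_id
    return token

def refresh_tokens(refresh_token: str) -> dict:
    # The only place we hit the database: once per refresh, not once per request.
    user_id = refresh_store.pop(refresh_token, None)
    if user_id is None:
        raise PermissionError("unknown or revoked refresh token")

    user = users[user_id]            # re-read the user's current state and rights
    if user.blocked:
        raise PermissionError("user is blocked")

    # A fresh self-contained Access token carries the user's *current* rights.
    return {
        "access_token": issue_access_token(user.id, user.rights),
        "refresh_token": grant_refresh_token(user.id),  # rotate the refresh token
    }
```

The resource servers never touch these stores at all; they only verify the Access token's signature, which is where the performance win comes from.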

So it turns out this is yet another advantage of the complex, but oh-so-flexible, OAuth 2.0 framework!

Source: https://habr.com/ru/post/327702/

