During a recent security assessment, we found a critical authentication bypass which at first glance looked like a classic JSON Web Token (JWT) issue: no cryptographic signature verification and, as a result, the ability to forge valid tokens. A blackbox assessment would probably have called it a day and reported a lack of cryptographic signature verification, which would be a legitimate finding. However, since the assessment included a whitebox code review, it was possible to dive deeper into the application's logic.
The application was implemented on a recent version of .NET and accepted cookies carrying the JWT as the authentication credential. A quick JWT recap:
A JWT consists of three parts separated by dots: the header, the payload, and the signature.

Header
The header typically consists of two parts: the token type, which is JWT, and the signing algorithm being used, such as HMAC SHA256 (HS256) or RSA (RS256). This JSON is then Base64Url encoded to form the first part of the JWT.
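For example, a typical decoded header looks like this:

    {
      "alg": "HS256",
      "typ": "JWT"
    }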

Payload
The payload contains the claims, which are JSON fields containing statements about an entity (typically the user) and additional data. There are multiple kinds of claims, such as iss (issuer), exp (expiration time), sub (subject) and iat (issued at timestamp). These are “registered claims” defined by the JWT standard. There are also “private claims”, which are custom fields defined by the application. The example below shows registered and private claims forming the payload. This data will be further processed by the application.
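A minimal decoded payload mixing both kinds of claims might look like this (values are made up for illustration; role is a private claim):

    {
      "sub": "1234567890",
      "iat": 1516239022,
      "exp": 1516242622,
      "role": "user"
    }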

The payload is also Base64Url encoded to form the second part of the JWT.

Signature
The signature is used to verify that the sender of the JWT is who it claims to be and to ensure that the message wasn't tampered with along the way. To create the signature, you take the encoded header, the encoded payload, and a secret, and sign them with the algorithm specified in the header.
For example, using HMAC SHA256, the signature is created like this:
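    HMACSHA256(
      base64UrlEncode(header) + "." + base64UrlEncode(payload),
      secret)

where base64UrlEncode denotes the Base64Url encoding described above.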

The secret key should be known only to the server. This signature ensures the token's integrity. If a malicious user changes the header or payload, the signature will become invalid. A JWT playground may be found here: https://www.jwt.io/.
Moving on to the real application: first, a test was made to check whether signatures are validated at all. This is trivial to test, as the signature can simply be modified in a web proxy like Burp Suite by changing a single character; if the application still accepts the token, the issue is obvious. Arguably, it's the easiest JWT vulnerability to spot.
While the signature wasn't necessary at all for the application to process the token and grant access to its API, certain fields were required to be present in the token. The quickest way to check this is to take a valid, application-generated token and remove fields one by one until the application stops responding correctly. An example request with a stripped-down token accepted by the application (censored for anonymity reasons):
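For illustration, such a request has roughly the following shape - host, path and token parts are placeholders, and the signature part is simply left empty:

    GET /api/some-endpoint HTTP/1.1
    Host: redacted.example
    Cookie: accessTokenCore=<base64url-header>.<base64url-payload>.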

A cookie named accessTokenCore is then passed down into .NET's authentication middleware. The token's payload may be decoded as follows:
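The real values are censored, but the stripped-down payload essentially boils down to two private claims (placeholders below):

    {
      "unique_name": "<username>",
      "uid": "<user id>"
    }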

It may be noted that there are no registered claims - whether they are required depends entirely on the application logic. However, the application may also require valid private claims, just like in the example above: the unique_name and uid fields were essential for the application to accept the token. Note how such fields often represent user entities in a database. Could we perhaps impersonate another user?
As it turns out, yes. The application responds with the audytor5 user data:
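The response is censored as well; its shape was roughly the following, with placeholder field names and values:

    {
      "id": "<the uid value taken from the forged token>",
      "userName": "audytor5",
      ...
    }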

but the ID is different. In the database, the ID parameter is equivalent to the uid claim sent in the JWT payload.
At this point, the application has been taken over and any user’s token may be forged provided that the following conditions are met:
- the unique_name exists and is valid - the attacker chooses the victim here
- the uid exists in the database and is valid, but it may belong to any user - such identifiers are often disclosed by the application, for example when visiting someone else's profile, so an attacker would easily be able to acquire one.
Okay, but does the application simply not validate the signature and just expect valid claims in the JWT payload? Sort of, but let's dive deeper into the code.
It was a business requirement to accept tokens sent as cookies rather than in an Authorization: Bearer header. This is not a problem by itself if handled properly. The .NET middleware is able to process cookies and pass on the authentication decision:
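The snippet below is a reconstruction sketch of what such a setup typically looks like, not the application's original code - the cookie and variable names (accessTokenCore, cookieWithToken, jwtToken) come from the description that follows, everything else is assumed:

    // using Microsoft.AspNetCore.Authentication.JwtBearer;
    // using System.IdentityModel.Tokens.Jwt;
    builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.Events = new JwtBearerEvents
            {
                OnMessageReceived = context =>
                {
                    // Pull the JWT out of the cookie instead of the Authorization header.
                    var cookieWithToken = context.Request.Cookies["accessTokenCore"];
                    if (!string.IsNullOrEmpty(cookieWithToken))
                    {
                        // ReadJwtToken only parses the token - it does NOT validate the signature.
                        var jwtToken = new JwtSecurityTokenHandler().ReadJwtToken(cookieWithToken);
                        // ... the custom logic described below operates on jwtToken ...
                    }
                    return Task.CompletedTask;
                }
            };
        });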

Here, the OnMessageReceived event means an HTTP request was received and picked up by the application. It is an event exposed by the JwtBearerEvents class in .NET, allowing the authentication middleware to hook into incoming HTTP requests.
The context is an object containing information about the HTTP request - in this case, the accessTokenCore cookie is extracted from it.
Then, a jwtToken object is created from the cookieWithToken variable. So far so good, the token is just being passed around. What happens next is disturbing.
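The claim handling looked roughly like the sketch below - again a reconstruction, continuing inside OnMessageReceived; the claim names and the user variable come from the description, the data-access call is hypothetical:

    // Still inside OnMessageReceived - the signature has not been validated at this point.
    var uid = jwtToken.Claims.First(c => c.Type == "uid").Value;
    var uniqueName = jwtToken.Claims.First(c => c.Type == "unique_name").Value;

    // The user record is fetched purely on attacker-controlled claims.
    var user = userRepository.GetByUidAndName(uid, uniqueName); // hypothetical data access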

Let's take a look. We have the claims mentioned before, uid and unique_name. Based on these, the application fetches the user data from the database into the user variable. Unfortunately, no signature validation takes place. This is a mistake: the signature should always be validated before anything else is done with the token. Perhaps the assumption was that the framework does it automagically. This is the root cause of a critical issue. However, there is more.
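The re-signing step described next might look roughly like this - the role lookup, the HS256 algorithm and context.Token come from the description, the helper and configuration names are assumptions:

    // Roles are fetched for the user identified by the forged claims.
    var roles = userRepository.GetRoles(user); // hypothetical data access

    // Server-side key and HS256 signing credentials.
    var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(configuration["Jwt:Secret"]));
    var creds = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

    // A brand-new token is minted for whoever the forged claims pointed at...
    var newToken = new JwtSecurityToken(
        claims: BuildClaims(user, roles),      // hypothetical claims builder
        expires: DateTime.UtcNow.AddHours(1),
        signingCredentials: creds);

    // ...and handed back to the middleware as the token to validate.
    context.Token = new JwtSecurityTokenHandler().WriteToken(newToken);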

This is where the magic happens. It will look weird from the attacker's perspective, but bear with us a little longer. Roles are fetched from the database, and signing credentials are correctly created with the HS256 signing algorithm on board. A new token is also created and signed - interesting.
The newly generated, valid token is assigned to the authentication context (context.Token). It will then be passed on to the subsequent validation stages within the .NET framework. Since this token was just created and signed by the server, it will always pass validation successfully. This way, the forged token sent as an HTTP cookie is replaced with a new, fully trusted token.

The magic? Everything happens in memory while the HTTP request is being processed. The attacker never gets to see the newly created token - they only see that the application authenticates users based on forged tokens.
How to prevent such mistakes?
- always validate the cryptographic signature of a token before processing it
- take care to properly configure cryptographic libraries and don't trust the alg header of an incoming token, as it may be forged too - require a chosen signature scheme instead of letting the client pick it (see the configuration sketch after this list)
- write security-related code explicitly - it's not worth saving a few lines of code in the hope that a framework abstraction will interpret the logic correctly
- trust your framework with cryptographic operations, but verify the logic
- run security tests - a blackbox assessment would have found the vulnerability too.
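A minimal sketch of such explicit, restrictive validation with the standard JwtBearer middleware - the key source (Jwt:Secret) and the issuer URL are assumptions:

    builder.Services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.TokenValidationParameters = new TokenValidationParameters
            {
                // Reject anything not signed with our key...
                ValidateIssuerSigningKey = true,
                IssuerSigningKey = new SymmetricSecurityKey(
                    Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Secret"])),
                // ...and pin the algorithm server-side instead of trusting the alg header.
                ValidAlgorithms = new[] { SecurityAlgorithms.HmacSha256 },
                ValidateIssuer = true,
                ValidIssuer = "https://issuer.example",
                ValidateAudience = false, // or set ValidAudience accordingly
                ValidateLifetime = true,
                ClockSkew = TimeSpan.FromMinutes(1)
            };
        });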




