10 comments
  • This does kind of drive home some points. Obviously, once malware is running with your full user permissions, all bets are off. But there are some things that could have mitigated harm here.

    The malware wasn’t just mining cryptocurrency—it was also stealing as much sensitive information as possible. It collected:

    • SSH keys from ~/.ssh/

    If you protect your SSH keys with a decent passphrase, it helps address this. Now, the problem is that any software that can get at your SSH keys probably also has a shot at setting up some kind of keylogger, but at least it's no longer a one-step process.
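
    As a minimal sketch (assuming an ed25519 key at the default path; adjust to taste):

        # Add a passphrase to an existing, currently-unprotected key:
        ssh-keygen -p -f ~/.ssh/id_ed25519

        # Load it into an agent once per session so the passphrase
        # isn't retyped constantly (which would invite weak ones):
        eval "$(ssh-agent)"
        ssh-add ~/.ssh/id_ed25519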

    • Shell history from .bash_history and .zsh_history

    Avoiding putting sensitive data in command-line arguments is a good habit to be in. On a normal system, arguments are visible system-wide to all processes (via /proc/*/cmdline), which already creates a meaningful leak on multiuser systems, and various pieces of command-line software go out of their way to avoid having passwords and similar secrets passed on the command line.

    In this case, I assume part of the goal is finding other hosts that the user might be SSHing to, but best not to have credentials for those compromised here as well.
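
    For the history side, a small sketch of both habits (bash-flavored; "some-tool --token-stdin" is a stand-in for whatever your actual tool's read-from-stdin option is):

        # Don't record commands that start with a space
        # ("ignoreboth" also drops consecutive duplicates):
        export HISTCONTROL=ignoreboth
        # zsh equivalent:
        #   setopt HIST_IGNORE_SPACE

        # Prompt for a secret rather than putting it in argv/history:
        read -rs -p 'API token: ' TOKEN
        printf '%s' "$TOKEN" | some-tool --token-stdin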

    • AWS and Azure credentials from ~/.aws/ and ~/.azure/

    I'm not familiar with the current forms of these, but I bet they provide some way to avoid storing long-lived, unencrypted credentials there.
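
    AWS at least does: the v2 CLI can use IAM Identity Center (SSO) so that only short-lived tokens ever touch disk, instead of a permanent key pair sitting in ~/.aws/credentials. A sketch (the profile name is a placeholder):

        # One-time interactive setup; writes a profile to ~/.aws/config
        # rather than a static access key to ~/.aws/credentials:
        aws configure sso

        # Day to day: a browser login mints short-lived credentials.
        # (A cached token still lands under ~/.aws/sso/cache, but it expires.)
        aws sso login --profile work
        aws s3 ls --profile work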

    • Environment variables and system information

    Environment variables are a really good place to avoid putting sensitive data, at least when talking about variables exported to all processes run by a user. Software that crashes and uploads a crash dump to God-knows-where will tend to dump its environment variables along with it, since they're important debugging information. Storing credentials in an environment variable is not a good idea.
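
    If something genuinely has to arrive via the environment, at least scope it to a single invocation instead of exporting it shell-wide; a sketch ("mytool" and the token path are hypothetical):

        # Bad: every child of every future shell inherits this, and it
        # shows up in crash dumps and in /proc/<pid>/environ:
        #   echo 'export API_TOKEN=...' >> ~/.bashrc

        # Less bad: set it for one process only, read from a file
        # that's chmod 600:
        API_TOKEN="$(cat ~/.config/mytool/token)" mytool deploy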

    This experience was a harsh reminder to never blindly trust PoC exploits, especially ones that include random files like PDFs.

    I feel like one thing that might help is software making it really easy to create a container that, by default, runs in isolation with minimal access to the rest of the system, and then lets a user easily add individual permissions. I'll sometimes use firejail, but it's a "default-insecure" model, which really isn't great for dealing with this sort of thing. Maybe use iptables or something to detect network access attempts and let the user approve network access per host; you can't simply block outbound network access for this sort of software, which is presumably demonstrating some kind of network-based exploit.
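
    Until something like that exists, firejail can at least be pushed in a mostly-deny direction by hand; a sketch ("./poc.sh" is the untrusted thing; flags worth checking against your version's man page):

        # No network, throwaway home, dropped capabilities, seccomp filter:
        firejail --net=none --private --private-tmp --private-dev \
                 --caps.drop=all --seccomp --noroot ./poc.sh

        # If the PoC legitimately needs the network, giving it its own
        # namespace at least makes it a separate interface to firewall:
        firejail --net=eth0 --private --caps.drop=all --seccomp ./poc.sh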

  • Never, EVER, do anything security-related while sleep-deprived, drunk, high, having sex, or all of the above.

    After that... no, don't trust. Zero trust.

    There are basic hygiene measures for running anything related to any exploit (including "just" PoCs), scaled to how risky a total pwn would be:

    • container (see the sketch after this list)
    • VM
    • separate machine
    • airgapped
    • airgapped in a Faraday cage (tinfoil works wonders to kill WiFi; a cheap DVB stick turned SDR is great for making sure).
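
    For the container rung of that ladder, even stock Docker gets reasonably far if the network is cut and nothing writable is mounted; a sketch (the image choice and paths are just examples, and remember that a container escape is still cheaper than a VM escape):

        # Throwaway container: no network, all capabilities dropped,
        # the PoC mounted read-only, nothing from $HOME in sight:
        docker run --rm -it --network none \
            --cap-drop ALL --security-opt no-new-privileges \
            -v "$PWD/poc:/poc:ro" -w /poc \
            debian:stable bash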

    Reading through the code is nice, and should be done anyway from an educational point of view... but even when "sure", basic hygiene still applies.

    Keeping tokens in one VM (or a few) while running the exploit in another is also a good idea. Stuff like "Windows → WSL2 → Docker" works wonders (but beware of VSCode's pass-through containers). Bonus points if passkeys and a fingerprint reader get involved. Extra bonus points for logging out before testing (if it asks to unlock any passkey... well, don't), then logging out again afterwards.


    What I'm not so sure about is deleting the siphoned data without alerting the potential victims. Everyone kind of failed at security, but still. A heads-up to rotate all keys would be nice.
