This is the second part of my series about a challenge I developed for the WPCTF. In the first article (Infection Chain – Behind the scene), I talked about my experience participating in the WPCTF from a different perspective, not as a player, but as a challenge creator. I introduced the idea behind my challenge and explained why I wanted to design something more defensive-focused, moving away from the usual “exploit and own the machine” approach. In this second part, I’ll focus on the feedback I received from players and share some thoughts on what I could have improved.
Overall, I think the challenge turned out very well. It was something different from the usual CTF format and, more importantly, it pushed players out of their comfort zone. Instead of following a familiar exploitation path, they had to slow down, change perspective, and think more like defenders.
What really confirmed this feeling was the reaction from the players after the competition. Many came to talk to me looking for hints or explanations on how to find specific flags, while others were simply curious about the story behind the challenge and the design choices I made. Those conversations were incredibly valuable, because they showed genuine engagement, not just with solving the challenge, but with understanding why it was built the way it was.
Now it’s time to talk about the less positive aspects. Not everything went smoothly, but that’s part of the game, and in fact, it made things even more interesting, because it gave me a chance to see how problems are discovered, managed, and resolved in real time.
Before the competition, my biggest fear was a last-minute issue: a misconfiguration in the OVA file, download problems due to its size (a few gigabytes), or some other unexpected technical problem. And of course, a couple of hours after the competition started, the exact scenario I feared actually happened: a team couldn’t start the VM due to a compatibility issue, but thankfully we were able to fix it quickly.
That experience gave me some valuable insights. For example, I learned that many CTF players use QEMU, which seems to be something of a “gold standard” in the community, and for some reason the file I provided conflicted with its configuration. Another thing I learned was about the file size: waiting 30 to 40 minutes to download several gigabytes is far from ideal in an environment where every minute counts. A better approach could have been to distribute the file a few days earlier in encrypted form, and release the decryption key only at the start of the CTF. These were all things that didn’t come to my mind during the preparation phase, but they’re definitely lessons I’ll take with me for the future.
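To make the encrypted-distribution idea concrete, here is a minimal sketch in Python using the third-party cryptography package. Everything in it, including the file names, is hypothetical; it illustrates the workflow, not what I actually did:

```python
# Hypothetical sketch of the "encrypt early, release the key at kickoff" idea.
# Requires the third-party cryptography package: pip install cryptography
from cryptography.fernet import Fernet

# --- Days before the CTF: encrypt the OVA and let teams download it early ---
key = Fernet.generate_key()                 # keep secret until the CTF starts
with open("challenge.ova", "rb") as f:
    token = Fernet(key).encrypt(f.read())
with open("challenge.ova.enc", "wb") as f:
    f.write(token)
print("Key to publish at kickoff:", key.decode())

# --- At kickoff: players decrypt locally with the published key ---
with open("challenge.ova.enc", "rb") as f:
    data = Fernet(key).decrypt(f.read())
with open("challenge.ova", "wb") as f:
    f.write(data)
```

One caveat: Fernet loads the whole file into memory, so for a multi-gigabyte OVA a streaming tool (openssl enc, age, or plain AES processed in chunks) would be more practical. The sketch only shows the overall flow.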
The only thing that remained a mystery to me was the low engagement with the challenge until the last couple of hours. Of course, this could have been due to the intimidating file size, the “out-of-comfort-zone” factor, or perhaps a less appealing challenge description.
Moving on to more specific, flag-related feedback, I realized something important: what feels obvious from the creator’s point of view isn’t always intuitive for someone actually playing the challenge. A good example of this was the delivery method flag. Some players told me it was too obvious, while others said it wasn’t intuitive at all. More specifically, a few participants struggled to locate the flag because the URL pointed to a non-existent domain, and they didn’t think to look at the query parameters. Here is an example:
hxxp://evilsite[.]com/auth=V1BDVEZ7UGgxc2hfZDB3bmxfYmxvYjd9=
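For anyone curious, the decoding step itself is trivial once you notice the parameter. A quick sketch, using only the Python standard library:

```python
# Decoding the auth= value from the defanged URL above (standard library only).
import base64

url = "hxxp://evilsite[.]com/auth=V1BDVEZ7UGgxc2hfZDB3bmxfYmxvYjd9="
value = url.split("auth=", 1)[1]

# Normalize the padding: trailing '=' signs often get mangled when URLs
# are copied around, so strip them and re-pad to a multiple of four.
value = value.rstrip("=")
value += "=" * (-len(value) % 4)

print(base64.b64decode(value).decode())  # WPCTF{Ph1sh_d0wnl_blob7}
```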
I’m actually quite happy with this outcome. My goal was to introduce a bit of confusion and avoid handing players a straightforward, step-by-step solution, and in the end, that’s very much in the spirit of a CTF.
The only part that truly disappointed me was the persistence flag. It wasn’t as well connected to the story or to the artifacts left on the machine as I originally intended, and that’s my fault. I simply couldn’t come up with a better solution within the constraints of the challenge.
What stood out even more was that a few players completely bypassed the intended path for this flag. The intended approach was to inspect the running processes on the system, following the idea of an active compromise. Instead, some players solved it the “easy” way: by checking the common startup applications folder, finding the only suspicious file there, and submitting the flag without even realizing that the process was actually running. It wasn’t a dumb workaround: there was logic behind it, and those players demonstrated that they knew where to look for persistence mechanisms, even if it wasn’t the exact way I had in mind.
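To illustrate the difference between the two routes, here is a rough Python sketch, assuming a Windows guest; the file name suspicious.exe is a placeholder, not the real artifact:

```python
# Rough sketch of both solution paths, assuming a Windows guest.
# "suspicious.exe" is a placeholder name, not the actual challenge artifact.
import os
import subprocess

# The "easy" route: enumerate the common (all-users) Startup folder.
startup = r"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp"
for entry in os.listdir(startup):
    print("startup item:", entry)

# The intended route: confirm the implant is actually running right now.
running = subprocess.run(["tasklist"], capture_output=True, text=True).stdout
print("is it live?", "suspicious.exe" in running.lower())
```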
On the other hand, the third flag is the one I’m happiest with. From an environment perspective, it represented the perfect conclusion to the investigation: everything players had found on the machine somehow led there, so it tied together the environment, the artifacts, and the overall narrative I had built. It felt like a real closing chapter, where the full picture finally made sense. In real-world scenarios, scheduled tasks are a common persistence and execution mechanism, and I wanted players to become comfortable inspecting them, questioning non-standard configurations, and thinking about how such mechanisms could be abused outside of a CTF context.
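As a closing illustration, this is roughly the kind of scheduled-task triage the flag was meant to encourage. Again a hedged sketch for a Windows guest, using the built-in schtasks utility and a deliberately crude heuristic:

```python
# Dump every scheduled task in verbose CSV form and do a crude first pass:
# tasks whose action lives in a user-writable path deserve a closer look.
import subprocess

out = subprocess.run(
    ["schtasks", "/query", "/fo", "CSV", "/v"],
    capture_output=True, text=True,
).stdout

for line in out.splitlines():
    if "\\Users\\" in line or "\\Temp\\" in line:
        print(line)
```

A real investigation would parse the CSV properly and look at triggers and run-as accounts too, but even this rough filter surfaces the kind of non-standard configuration I wanted players to question.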