chrismorgan 13 minutes ago [-]
Meta: this was submitted with the article’s title “The CTF scene is dead” which I found very easy to understand. It has just been updated to use the subtitle’s first sentence, “Frontier AI has broken the open CTF format”. I find that much harder to grasp, rather like a garden-path sentence. My immediate thoughts were that “Frontier” was a company name, and that there was some file format named CTF. If you don’t know about Capture The Flag contests, the change doesn’t help. If you do, I think the change makes it worse.
baq 21 minutes ago [-]
Replace ‘CTF’ with ‘high school’ or ‘university’ and you’ve described the total slow-motion collapse of education; the only saving grace is that most of it still requires in-person presence.
We’ve figured out the human replacement pipeline, it seems, but we haven’t figured out the education part. LLMs can be wonderful teachers, but the temptation to just tell it ‘do it for me’ is almost impossible to resist.
mold_aid 2 minutes ago [-]
>LLMs can be wonderful teachers
Are they, or aren't they?
daymanstep 16 minutes ago [-]
Wonderful teachers that give unreliable information with total confidence?
Bawoosette 5 minutes ago [-]
To be fair, that was much of my actual experience with human professors in university.
k__ 10 minutes ago [-]
Anti-intellectualism is at it again, huh?
pjc50 8 minutes ago [-]
"Education is just a CTF for the valuable flag of a credential. In this essay I will --"
himata4113 59 minutes ago [-]
I was writing an obfuscator recently. I had the model deobfuscate and optimize the code back to the original, and I kept improving the obfuscator until it couldn't. The funny thing is that after all this I also ended up with a really strong deobfuscator and optimizer, which is probably more capable than most commercial tools.
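This hardening loop can be sketched as a toy. Everything here (`obfuscate`, `toy_solver`, `harden`) is invented for illustration; in the real setup an LLM sits where `toy_solver` does, and the obfuscator is strengthened until the model fails:

```python
import base64

def obfuscate(payload: bytes, rounds: int) -> bytes:
    # Toy obfuscator: each round adds one reversible base64 layer.
    for _ in range(rounds):
        payload = base64.b64encode(payload)
    return payload

def toy_solver(blob: bytes, budget: int) -> bytes:
    # Stand-in for the model: peels layers, but only up to a fixed budget.
    for _ in range(budget):
        try:
            blob = base64.b64decode(blob, validate=True)
        except Exception:
            break
    return blob

def harden(secret: bytes, solver_budget: int) -> int:
    # Keep strengthening the obfuscator until the solver can no longer
    # recover the original payload; return the rounds that were needed.
    rounds = 1
    while toy_solver(obfuscate(secret, rounds), solver_budget) == secret:
        rounds += 1
    return rounds

# A solver that can peel at most 5 layers is defeated by 6 rounds.
print(harden(b"flag{example}", solver_budget=5))  # 6
```

The side effect the comment describes falls out naturally: making the loop terminate forces you to build an ever-stronger solver, too.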
The obvious solution is just to make CTFs harder, but when do CTFs become too hard? Maybe the problem is that 'hard' CTFs are fundamentally too 'simple': just a logic chain and an exhaustive brute force towards a solution, since there really are limited ways to express a solution in plain sight.
Or maybe human creativity has been exhausted and we're not so limitless as we thought. Only time will tell.
I had another idea spring to mind: we could hide two flags, one of which could only be found by AI agents, not by humans or tools written by humans.
koolala 42 minutes ago [-]
A portion could require astral projection and computers can't do that. Or maybe just a VR mini-game like the 90s always imagined.
himata4113 32 minutes ago [-]
Bringing CTF solutions into the real world is a really good idea! I didn't even think of this until you mentioned it.
We have very powerful simulation tools, so something like "project a pattern at these angles" wouldn't really work, as you could simulate that.
I guess something cool is that we can make simulating the solution very expensive, while in the real world it would be free, since it's analog... As long as simulations take longer than it takes a human to find the solution, it would be a pretty good way to deal with it. I am sure people smarter than me can come up with something.
Maybe I was too early to dismiss human creativity.
SirHumphrey 5 minutes ago [-]
The competitive programming scene has always included offline competitions, and with AI they are becoming more important (and in general they were fairer even before). If CTFs are to survive, they should probably adopt this strategy.
You could even go so far that anything loaded onto your computer is fair game, but nothing more than that (certain competitive programming competitions, for example, allow an unlimited amount of paper material; for CTFs you probably need much more than that, hence electronic).
virtualritz 4 minutes ago [-]
Chess and Go are not dead just because AI got better than humans at these games.
What am I missing here?
motbus3 8 minutes ago [-]
I think there will soon be ways to trick these models, and when that happens it will be yet another layer, like ASLR.
These models seem completely unbeatable only in the ads. There are hundreds of cases where someone puts Hindi Yoda-speak in Morse code and the model goes nuts.
The reason they are pushing so hard on PR and marketing for this is that they know it is only a matter of time.
SoylentOrange 20 minutes ago [-]
Great article, well written, and good analogy to chess. I’ve been playing competitive chess most of my adult life and I think that the solution lies in how chess dealt with this problem:
Explicit Elo measurements with some cheat detection, and AI assistance wholly banned. As you climb the Elo ladder, detection gets more onerous. At the top level during online events, anti-cheating teams require the use of both monitoring software and multiple cameras.
The idea is that you can cheat pretty easily at the lowest levels, but it gets harder the higher you go. This allows for better feeding into the truly elite competitions.
I think chess’s very firm stance that AI is never allowed in competition (neither online nor in person), rather than CTF’s acceptance, was the right call.
hoyd 18 minutes ago [-]
«That feedback loop is breaking. If the visible scoreboard is dominated by teams using AI, a beginner is pushed toward using AI before they have built the instincts the AI is replacing. That is an anti-pattern. It prevents active learning, and active struggle is the bit that actually teaches you. It is also completely demotivating to put in real effort and see no visible progress because the ladder above you has been automated.»
This stands out to me, and perhaps speaks more broadly than the article itself? I’m sure this has been in the spotlight before, but I think it’s well put, and applies to many areas.
still has no mention of AI, but that will likely change as they increasingly dominate competition.
amingilani 59 minutes ago [-]
I don’t think CTFs are dead; they’ll just evolve. The difficulty level will need to be increased or the rules locked down. Sports and racing persist despite the existence of performance-enhancing drugs and rocket technology, after all.
I just did a CTF where I was in the top 10. It was the first CTF I completed and I used AI because the rules permitted it. That said, I couldn’t solve all challenges.
But yes, it was significantly easier now than the last time I attempted one. Even solving manually, with AI-assisted assembly interpretation, was much easier.
mort96 53 minutes ago [-]
Increasing the difficulty level is a terrible solution. The problem with CTFs isn't that they're too easy. Making them harder just makes them even less accessible to people who don't cheat. It'd be like seeing people put hidden electric motors in their bikes during the Tour de France and concluding, "oh, we just need longer distances and steeper hills".
rurban 60 minutes ago [-]
I don't do CTFs, but I took part in a security workshop for fun ~2 years ago with my Android phone only. I was first with the first simple challenge, but then couldn't continue because my phone was just too limited. But I watched what the others did, and a young Indian guy did everything with ChatGPT. I found it silly, but amusing, because he actually got second. There was no Codex nor Claude then. Nowadays it must be dead for real, because I would solve everything with my agents, as I do in the real world.
raphman 57 minutes ago [-]
Interesting and well written article that mirrors/foreshadows how LLMs do and will change other scenes.
As I don't know much about the CTF scene, I looked for other takes on this topic.
Here's an article from 2015 about how tool-assistance already changed CTFs:
> Individual skill will undoubtedly be a factor next year. But, I'm left wondering whether next year's DEFCON CTF will tell us anything more than how well-developed each team's tools are (and how well they can interpret the results).
https://fuzyll.com/2015/ctf-is-dead-long-live-ctf/
But there are quite a few recent (2026) articles with the same core message as in the original article, e.g., https://blog.includesecurity.com/2026/04/ctfs-in-the-ai-era/ or https://k3ng.xyz/blog/ctf-is-dead
And here's someone explaining how Claude Max allowed them to win CTFs:
> I had always been interested in CTF as one of the only ways people could compete and show off their skill in coding/problem solving on a global scale. It was just too difficult and didn't make sense for me to learn the fundamentals as an electrical engineer. As time went on, I got better and better, and it was hard to tell whether it was because of experience or if it was because of improvements in AI.
> I accomplished my goals, and for that reason I'm quitting CTF, at least for now. [...] I'd like to think I highlighted the problem before it became a bigger issue. So, how do we fix this? Teams and challenge authors losing motivation is not good. CTF dying is not good. AI bad. Or is it?
https://blog.krauq.com/post/ctf-is-dying-because-of-ai
The only article that saw LLMs as a non-negative force for CTFs was this one. Fittingly, it sounds like LLM output ("Let's be honest", "This is where things get interesting.") and only contains hallucinated references.
https://caverav.cl/posts/ctfs-not-dead/ctfs-not-dead/
I have normally found any sort of timed technical competition intimidating. Even so, about 6 or 7 years ago, after being persuaded by a colleague, I participated in a few CTFs. I am glad I did, back when this type of thing still meant something. I have kept a screenshot from one of the CTFs that I am quite fond of: https://susam.net/files/blog/ctf-2019.png
kevinsimper 1 hour ago [-]
You could make it offline and with provided laptops only, just like with the competitive CS2 scene.
sheept 35 minutes ago [-]
Offline CTFs could also incorporate physical security challenges, like lockpicking
tylerchilds 13 minutes ago [-]
I do like the idea of escape the room games becoming the cybersecurity employable competition meta
hsbauauvhabzb 45 minutes ago [-]
CTFs need preparation and unconstrained internet; even if you block domains, it’s possible to tunnel out.
sheept 33 minutes ago [-]
Presumably if you block domains, you wouldn't be able to use AI to find a way around the block. So doing so demonstrates at least some human skill.
hsbauauvhabzb 18 minutes ago [-]
Or forethought; I’m sure you could ask an AI how to circumvent any blocks.
belabartok39 36 minutes ago [-]
Use a jumpbox to access the CTF. Disable all wireless for the playing hall.
hsbauauvhabzb 19 minutes ago [-]
I think you’re forgetting hotspots, or laptops with built-in 4G/5G.
eastbound 44 minutes ago [-]
Since real-life situations involve AI, banning AI would make CTFs just a simple game, not a demonstration of capabilities and talent.
mort96 40 minutes ago [-]
What do you mean? Solving a CTF challenge demonstrates way more capabilities and talent than just asking a chat bot to solve a CTF challenge.
loeg 40 minutes ago [-]
They always were just a game?
r4indeer 24 minutes ago [-]
I'm conflicted on the use of AI in CTFs. On the one hand, they are supposed to mirror real-life scenarios, so of course you should be able to use any tool that would be available to you in real life.
On the other hand, CTFs are fundamentally a game and a competition, which are supposed to be fun and to compare and improve one's skills. So when I let an LLM generate the entire solution for me, what's the point anymore? I did not learn anything. I did not work for that place on the leaderboard; I just copied the solution. And worst of all, I did not have any fun. It's boring.
So how does using AI as a solver not feel like cheating?
Grimburger 59 minutes ago [-]
Very impressed that OP has gone from starting university in 2021 to becoming a Senior Security Engineer.
It's an incredibly exciting time in security research in my humble old man opinion.
I think the cadence of new exploits is perhaps a better measure of that than anyone's subjective thoughts, regardless of experience.
eecc 1 hour ago [-]
“solve”, why not solution? Like “spend” and not expenditure, why use the verb as a noun and not care about grammar?
sheept 29 minutes ago [-]
These examples that you're calling "verbs as nouns" are standard grammar. You can't just invent simplified rules about a language and declare it wrong when the rules fall apart.
iainmerrick 58 minutes ago [-]
They’re shorter.
Why so pedantic?
chvid 1 hour ago [-]
What is CTF? And why is the cyber security world filled with silly gaming references?
mort96 1 hour ago [-]
Capture The Flag is a cybersecurity game where the organizers set up a bunch of intentionally vulnerable computer systems with a "flag" on them: a string that's "supposed to be" secret but is accessible by exploiting the vulnerabilities. This may be a line in /etc/passwd, a string in memory, a field in a database, whatever. The goal of the game is to hack into the computer systems, find ("capture") the flag, then copy/paste it into the organizers' scoreboard website to prove that you solved that particular challenge.
It's pretty fun. Or at least it was, back when you had some sense that your competitors were competing on an even playing field and just beat you because they were better than you.
I wouldn't say the name is a "gaming reference", it's just a descriptive name for a game. It's a war game reference, I guess?
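A toy sketch of the flag mechanic described above (everything here is invented for illustration; real challenges are far more involved): the "intended" path never grants access, and capturing the flag means abusing the bug instead.

```python
# A deliberately vulnerable toy "challenge": the flag is a secret string,
# and the bug (eval on user input) is the way to capture it.
FLAG = "flag{toy_example}"

def check(expr: str) -> str:
    # Bug: the guess is eval()'d, so arbitrary expressions run with
    # access to this module's globals, including FLAG.
    return "granted" if eval(expr) else "denied"

print(check("1 == 2"))               # denied: the intended use never wins
# Capturing the flag means submitting an expression that leaks it:
print(check("print(FLAG) or True"))  # prints flag{toy_example}, then granted
```

In a real event, the leaked string would then be pasted into the scoreboard to prove the solve.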
I used to see some really good CTF videos show up on YouTube, and now nothing like that shows up in the feed.
walletdrainer 1 hour ago [-]
>I started playing CTFs in 2021
>and the old game is not coming back
For many people the CTF scene was already dead in 2021 because it had turned into something unrecognisable.
In reality it’s just different.
lukan 1 hour ago [-]
Well, I had to google what CTF means (capture the flag, a hacking competition), so surely cannot judge here, but the text indicates that with AI some things are very different today:
"That makes open CTFs pay-to-win. The more tokens you can throw at a competition, the faster you can burn down the board. Specialised cybersecurity models like alias1 by Alias Robotics are becoming less relevant compared to general frontier LLMs. The competition is turning into "who can afford to run enough agents, with enough context, for long enough.""
walletdrainer 27 minutes ago [-]
There are two different schools of thought:
1) It’s OK to do just about anything to win a CTF, including installing malware on the organisers’ computers months before the actual event so you’ll have an easy time stealing the flags.
2) It’s not OK to try to win the CTF with a solution the authors did not intend.
Recently the #2 crowd has been winning, because the hacking scene has turned corporate and boring. People started to partake in CTFs in the hopes of landing a job(!)
CTFs are indeed ruined for those people, I personally don’t mind.
For the people in group #1 LLMs change little. Attacking the challenges directly was always a last resort.
mock-possum 1 hour ago [-]
Isn’t that the bitter lesson in a nutshell? “Specialised cybersecurity models … are becoming less relevant compared to general frontier LLMs.”
Grimburger 58 minutes ago [-]
>Learning about eternal September in May 2026
Hits different, doesn't it?
3qw128 20 minutes ago [-]
The article is the thickest of AI slop. Don't believe anything.
sevindob 12 minutes ago [-]
ikr, if bro can't be bothered to write an article himself then anything he says is automatically suspect
vasco 1 hour ago [-]
My first ever was the Stripe CTF, in 2012 I think. I still wear the shirt I got (now super faded) for passing some challenges.
I was a student in Portugal, and I remember receiving the shirt for it and thinking: maybe those Americans aren't any better than me, and I can compete at the same level.
I never got super into security, but it gave me the confidence to play on the same field and lose the stupid aura I had that somehow "rich Americans" would be better than me at everything because they had better universities, or because of Hollywood or something.
Sad that another cool thing is lost to AI but I guess kids will learn in other ways.
deafpolygon 1 hour ago [-]
Unrelated, but does anyone find this site incredibly hard to read?
walletdrainer 1 hour ago [-]
Bizarre font and poor contrast, yep.
The text itself being exceedingly long for no obvious reason doesn’t help.
lukan 1 hour ago [-]
Poor contrast? White on black?
And if you think it was too long, what part would you have shortened? I never knew about the scene and found it interesting to read this personal take on it.
utopiah 46 minutes ago [-]
Right, the same way that car racing has "broken" jogging. This is so dumb. /s
The whole point of competitions is to provide a safe environment thanks to a set of rules all participants AGREE on in order to progress together.
If new tools "break" the competition, we change the rules and that's A-OK.
CTF isn't a natural phenomenon; if tools change, rules change. Simple.