I wonder what made them imagine an LLM could answer questions that involve reasoning about firewall configurations. That’s just not the kind of thing LLMs are good at: they can’t parse a specialist format like that, build a model of it, and then simulate applying the rules one by one to work out the end state. An LLM will improvise a plausible-sounding answer based on similar-looking questions and answers in its training data, and that’s not trustworthy at all.
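To be concrete about what "applying the rules one by one" means, here is a minimal sketch of first-match-wins rule evaluation, the kind of deterministic procedure a real firewall performs and an LLM merely imitates. The `Rule` format and field names are hypothetical illustrations, not any actual firewall syntax:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str               # "allow" or "deny"
    proto: str                # "tcp", "udp", or "any"
    dst_port: Optional[int]   # None matches any destination port

def evaluate(rules: list[Rule], proto: str, dst_port: int,
             default: str = "deny") -> str:
    """Walk the rules in order; the first matching rule decides the outcome."""
    for rule in rules:
        if rule.proto not in ("any", proto):
            continue  # protocol doesn't match, try the next rule
        if rule.dst_port is not None and rule.dst_port != dst_port:
            continue  # port doesn't match, try the next rule
        return rule.action  # first match wins
    return default  # implicit default policy when nothing matches

# Hypothetical ruleset: block telnet, allow all other TCP.
rules = [
    Rule("deny", "tcp", 23),
    Rule("allow", "tcp", None),
]
```

The point is that the correct answer depends on rule order and an exhaustive match against every field, which is exactly the kind of step-by-step state tracking that pattern-matching on training data does not reliably reproduce.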