From my understanding, the impetus was that F5 submitted a CVE for a vulnerability in an optional, “beta” feature that can be enabled. Dounin did not think a CVE should be submitted, since he did not consider it to be a “production” feature.
That said, the vulnerability is in shipping code, optional or not, so per standard industry practice it should either be patched or removed entirely to resolve the issue.
The messaging around this so far doesn't make me want to follow the fork in production. As a sysadmin, I'm not rushing out to swap my reverse proxy.
The problem is (and I'm speculating here) that the developer seems to have been willing to keep developing only on the condition that he retained control over nginx decision-making.
So right now, as an nginx user, it looks like the CVE registration is protecting me through open communication. From a security standpoint, a security researcher probably also needs that CVE for the report to count toward a bug bounty.
From the developer's perspective, F5 broke the pact that decision-making control rested with the developer. But for me, I would rather it be registered so that I'm informed, even if I know my configuration doesn't use the feature.
Again, I'm assuming a lot here. But I agree with F5. That feature, even as a beta, could be running in a dev or test environment. That's reason enough to want to know.
Edit: Long term, I don't know where I'll land. Personally I'd rather side with the developer, except that I need to trust that the solution is open not just in source, but in communication. It's a weird situation.
Frivolous CVEs aren't a good thing for security. This bug was a possible DoS (not, e.g., a privilege escalation) in a disabled-by-default experimental feature. It wasn't a security issue and should have been fixed with a patch instead of raising a false alarm and damaging trust.
It is WAY better to over-report than under-report. I don't want vendors to have much leeway to say "nope, that's not a security problem, sweep it under the rug".
"A liar who lies repeatedly won't be believed" is definitely equivalent to "A company conservatively warned that one of their products was dangerous in some specific situations."
That's... not the point either. The point is that "reporting false positives isn't a bad thing" is only true up to a point. The discussion, then, is whether this case falls before or after that point, which, given the context of the bug, isn't really a given. But I don't want to have that discussion with you anymore because you're annoying.
Girl, you're saying you trust software that documents security vulnerabilities that don't apply to you less than one that doesn't document those vulnerabilities?
A CVE isn't a black mark on a project's reputation.
Because of the way you misused terms, I'm guessing you're not particularly familiar with cybersecurity. It's an ever more important field for sysadmins and devs. I recommend taking the time to learn more.
It's not worthy of a CVE, and whether it applies to me is irrelevant. I didn't say a CVE is a black mark. Frivolous reporting of CVEs damages trust in the system's usefulness for identifying critical vulnerabilities. This is a known issue, related to résumé padding by newcomers to the cybersecurity industry.
Uh, no. But thanks for guessing. It's frivolous because it violates several principles of responsible disclosure. Yes, the scope of impact is relevant; the availability of methods of remediation is relevant; and the development/patch lifecycle is relevant. The feature being off by default and labeled experimental speaks indirectly to the scope of impact and the availability of remediation, and the experimental label also speaks to the state of the development lifecycle. Per the developer(s)' words, this is a bug that carried limited risk and was scheduled to be fixed as part of the normal development schedule. Escalating every such bug, the vast majority of which go without a CVE, would quickly drown out the notices people actually care about. A CVE is not a bug report.
“Uh, no. The CVE is valid, but it's not about that,” you say, scrambling. “The dev cycle! The fix was already scheduled for release, so it's not necessary to disclose. If everyone disclosed security bugs, we'd have too much information and we wouldn't be able to filter for the notices we care about,” you retort, not realizing that you had already conceded this wasn't about whether you care about the CVE, and are instead arguing that less information is better rather than building tools to cope with the growing number of CVEs, relevant to you personally or not.
I didn't say the CVE was valid; I explained why it was a mistake. I didn't say "disclosing security bugs" is, in general, a bad thing; I said raising undue alarm about a specific class of bugs is bad. It's not a matter of "less or more information," because, as I said, a CVE is not a bug report. It is not simply "acknowledgment of information." If you think my argument has no merit and there is no reason why "more information" could be worse, you're free to talk to someone who gives a shit.
> It’s not a matter of “less or more information[...]”

> Escalating every such bug [...] would quickly drown out notices that people actually care about.
If your argument is that a specific class of security bugs isn't worth CVEs, then make that argument. Instead, you're saying the CVE isn't valid while making an argument about risk assessment and the development lifecycle (as if those aren't part of a CVE), not about the class of security bug.
I have, this entire time, said it's a valid CVE that you don't care about, and that you shouldn't be working as a cybersecurity professional. You have conceded the first point and continued to demonstrate the latter.