My own MaraDNS has been extensively audited now that we’re in the age of AI-assisted security audits.
Not one single serious security bug has been found since 2023. [1]
The only bugs auditors have been finding are things like “Deadwood, when fully recursive, will take longer than usual to release resources when getting this unusual packet” [2] or “This side utility included with MaraDNS, which hasn’t compiled since 2022, has a buffer overflow, but only if one’s $HOME is over 50 characters in length” [3].
I’m actually really pleased with just how secure MaraDNS is now that it’s getting real, in-depth security audits.
[1] https://samboy.github.io/MaraDNS/webpage/security.html
[2] https://github.com/samboy/MaraDNS/discussions/136
[3] https://github.com/samboy/MaraDNS/pull/137
Well, since you bundle Lua 5.1 (as Lunacy) instead of building it as a library and loading it, and you bundled the 2012 version, you're probably affected by CVE-2014-5461 and others. Lua hasn't been free of security fixes.
Unless the service accepts Lua code from the internet (and that would be a completely insane thing to do), CVE-2014-5461 will not apply. And while I have not reviewed every Lua CVE, I bet most (all?) of them require specifically crafted code, or at least highly complex user input (such as arbitrary JSON).
It's important to look at the actual vulnerability in context, and not just list any CVE which matches by version.
If I can find a CVE that _may_ affect the stack in five minutes, what _actual_ problems lurk there?
You vendor Lua - thus, it _is_ your responsibility to review every Lua CVE. You've set yourself up as the maintainer by vendoring.
"Well, sure, this component is insecure but an attacker can't reach it" is like a 50%+ positive signal for an unexpected privilege elevation bug.
I have several libraries that I've written. Not one single serious security bug in them has been found since 1991. Granted, nobody uses my libraries...
Not to diminish your team's achievement! :D But it's important to contextualize claims like this with information about what your userbase looks like.
The question is a matter of impact, because of how widely used the software is.
I remember being delighted finding maradns as an alternative to the “do everything” of dnsmasq way back when I set up a dns server, and more importantly, I haven't had to think about it since then.
dnsmasq has served me well for like an eternity in multiple setups for different use cases. Like all software it has bugs, and once located those get fixed. Its author is also easy to communicate with.
Why should I switch over to something way less proven? I'm quite sure your software also has bugs, many still not located. Maybe because it's less popular/less well known nobody cares to hunt for those bugs? Which means even if the number of bugs found in your software is lower at the moment, and it may look more audited for this reason, it may actually be way less secure.
Must they prove their software to you? They're offering an alternative, not bargaining for a deal.
When you offer up an alternative as technically superior in some manner, then yes, it is on you to demonstrate such a claim in a convincing manner. "No bugs in 3 years in this software with a much smaller audience and also look AI audits!" comes across as off-topic, shameless self-promotion. At least if an insightful technical discussion ensued, the subthread might prove worthwhile, but so far it's just the usual tired shit flinging.
I have far more evidence of a very good security record with MaraDNS than “No bugs in 3 years in this software with a much smaller audience and also look AI audits!”
• The software has been around for 25 years
• The software is popular enough to have been subjected to dozens of security code audits, including two audits in the post-AI era
• In those 25 years, only two remote “packet of death” bugs have been found
• Also, in those same 25 years, only one single bug report of remotely exploitable memory leaks has been found
This isn’t something which, as implied here, has a lot of security bugs only because no one has used or audited the software. This is a long-term, mature code base which has only had a few serious security bugs in that timeframe.
Here is my evidence:
https://samboy.github.io/MaraDNS/webpage/security.html
If this evidence isn’t “convincing” to you, I don’t know what evidence would be “convincing”.
For what it's worth I didn't know about maradns prior to this. Maybe it actually sees fairly wide use? Whether or not I accept your evidence would hinge on that. Regardless I think my point stands - if you don't lead with a convincing line of reasoning all that's left is an empty assertion. Unless I happen to recognize you as an authority in the field that's not going to do anything for me since by default you're some stranger on the internet that might be a dog for all I know.
To illustrate the issue with an extreme example, consider that a disused repository on github full of security holes is highly unlikely to have any CVEs regardless of age. The software has to present a worthwhile target (ie have a substantial long term userbase) before anyone will bother to look for exploits. (I guess that might change in the near future thanks to AI but I don't think we're there just yet.)
"All software has bugs" is the most meaningless statement ever. It is just used for bonding with fellow bug writers who sit at a virtual campfire and muse about inevitabilities.
Demonstrably some software has fewer bugs, and its authors are often hated, especially if they are a lone author like Bernstein. Because it must not happen!
Projects with useless churn and many bug reports are more popular because only activity matters, not quality.
I haven’t noticed antipathy, but I have noticed skepticism. I assume people with outlier records in any field get some extra inspection.
If it becomes jealousy-fueled nit-picking, those people are insecure jerks. But unusual track records are worth understanding.
"All software has bugs" so "be wary of the one trying to say they haven't had any in 3 years" not so "I guess all are equal". For extremely low security bug rates either the scope is extremely narrow, the claim is dubious, or the project is a massive effort which the community talks about directly in posts rather than plugs (e.g. curl).
DJB, with Qmail and DjbDNS (as well as Publicfile, which didn’t catch on in an era of CGI scripts), showed that one could have (mostly) security bug free software without the scope being “extremely narrow”, and without the claim being “dubious”.
It’s not normal for software to be so poorly written that one doubts the claim that a security bug hasn’t been found in over three years. If one thinks the claim of no security bugs of consequence in three years is dubious, feel free to do a security audit of MaraDNS (or DjbDNS, which I also will take responsibility for even though my software is, if you will, a “competitor” to DjbDNS), and report any bugs you find.
Speaking of DJB, DjbDNS has had a few security bugs over the years (but not that many), but I’m maintaining a fork of DjbDNS with all of the security bugs I know about fixed:
https://github.com/samboy/ndjbdns
I am saying all this as someone who has had significant enough issues with DJB’s software that I ended up writing my own DNS server so I didn’t have to use his server (I might not have done so if DjbDNS was public domain in 2001, but oh well).
(As a matter of etiquette, it’s a little rude to claim someone is saying something “dubious”, especially when the claim is backed up with solid evidence [multiple audits didn’t find anything of significance in the last year, as I documented above], unless you have solid evidence the claim is dubious, e.g. a significant security hole more recent than three years old)
People here don't know that MaraDNS was already popular on extremely critical security mailing lists that basically hated anything but qmail and postfix. If you introduce more bugs and blog about them, it will probably gain in popularity. :)
> It’s not normal for software to be so poorly written that one doubts the claim that a security bug hasn’t been found in over three years.
Can you back that claim up with at least some sort of theory? Because it doesn't match my perception of the real world, nor does it match my mental model of how CVEs happen.
https://samboy.github.io/MaraDNS/webpage/DNS.security.compar...
Also, my sister post: https://news.ycombinator.com/item?id=48112042
Is that not begging the question? You have asserted X and now you point to a particular track record to back the claim of X up but the track record only serves as valid evidence of X if we already accept your assertion that X is the case.
I think this is the breaking point where replacing our code written in C with code written in memory-safe languages is becoming urgent.
The vast majority of vulnerabilities found recently are directly related to being written in memory-unsafe languages. It's very difficult to justify that a DNS/DHCP server can't be written in Rust or Go without using unsafe (well, maybe a few unsafe calls are still needed, but these will be a very small number)...
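To illustrate the kind of bug class at stake (this is my own sketch, not code from MaraDNS, dnsmasq, or any project discussed here; parseHeader is a hypothetical helper): parsing the fixed 12-byte DNS header in Go, where a too-short packet becomes a returned error rather than an out-of-bounds read.

    package main

    import (
        "encoding/binary"
        "errors"
        "fmt"
    )

    // header is the fixed 12-byte DNS message header (RFC 1035 section 4.1.1).
    type header struct {
        id, flags, qdCount, anCount, nsCount, arCount uint16
    }

    // parseHeader rejects short packets up front; slice bounds are checked by
    // the language, so a slip-up panics instead of silently corrupting memory.
    func parseHeader(pkt []byte) (header, error) {
        if len(pkt) < 12 {
            return header{}, errors.New("packet too short for DNS header")
        }
        return header{
            id:      binary.BigEndian.Uint16(pkt[0:2]),
            flags:   binary.BigEndian.Uint16(pkt[2:4]),
            qdCount: binary.BigEndian.Uint16(pkt[4:6]),
            anCount: binary.BigEndian.Uint16(pkt[6:8]),
            nsCount: binary.BigEndian.Uint16(pkt[8:10]),
            arCount: binary.BigEndian.Uint16(pkt[10:12]),
        }, nil
    }

    func main() {
        // A truncated packet is rejected cleanly rather than read out of bounds.
        _, err := parseHeader([]byte{0x12, 0x34, 0x01})
        fmt.Println(err)
    }

The same logic in C needs every one of those bounds checks written and maintained by hand, which is exactly where the "packet of death" class of bugs tends to come from.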
Maybe this is the kick in the ass Debian needs to upgrade the embarrassingly ancient dnsmasq in "stable" because while I can't think of any new features, the latest versions contain many non-CVE bug fixes.
But I doubt it, they will lazily backport these patches to create some frankenstein one-off version and be done with it.
Before anyone says "tHaT's wHaT sTaBlE iS fOr": they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable". They would rather ship useless, broken code than something too new. It's crazy.
They're not going to put a newer version in stable. The way stable gets newer versions of things is that you get the newer version into testing and then every two years testing becomes stable and stable becomes oldstable, at which point the newer version from testing becomes the version in stable.
The thing to complain about is if the version in testing is ancient.
That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?
And the disadvantage is that backporting is manual, resource intensive, and prone to error - and the projects that are the most heavily invested in that model are also the projects that are investing the least in writing tests and automated test infrastructure - because engineering time is a finite resource.
On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
> That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time.
That's not what it's about.
What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled. You don't want that as an automatic update because it will break in production for anyone who is actually using it. So instead the change goes into the testing release and the user discovers that in their test environment before rolling out the new release into production.
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport.
They're not alternatives to each other. The stable release gets the backported patch, the next release gets the refactor.
But that's also why you want the stable release. The refactor is a larger change, so if it breaks something you want to find it in test rather than production.
You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...
So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.
And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.
So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.
There are two different kinds of updates. One is security updates and bug fixes. These need to fix the problem with the smallest change to minimize the amount of possible breakage, because the code is already vulnerable/broken in production and needs to be updated right now. These are the updates stable gets.
The other is changes and additions. They're both more likely to break things and less important to move into production the same day they become public.
You don't have to wait until testing is released as stable to run it in your test environment. You can find out about the changes the next release will have immediately, in the test environment, and thereby have plenty of time to address any issues before those changes move into production.
That's where you're wrong. They're not one and the same.
Debian stable often defers non-security bug fixes for up to two years by playing this game.
I'm not interested in new features unless they make things actually work.
Debian stable time and again favors broken over new. Broken kernels, broken packages. At least they're stable in their brokenness.
Hence my complaint.
You definitely need different channels for high priority fixes and normal releases, stable and testing releases and all that.
But two years is impractical and Debian gets a ton of friction over it. Web browsers and maybe one or two other packages are able to carve out exceptions, because those packages are big enough for the rules to bend and no one can argue with a straight face that Debian is going to somehow muster up the manpower to do backports right.
But everyone else, whether they have to deal with Debian shipping ancient dependencies or they're upstream package maintainers fielding bug reports from ancient versions, is expected to just suck it up, because no one else is big enough and organized enough to say "hey, it's 2026, we have better ways and this has gotten nutty".
Maybe the new influx of LLM-discovered security vulnerabilities will start to change the conversation, I'm curious how it'll play out.
> ...upstream package maintainers who are expected to deal with bug reports from ancient versions...
They are not expected to deal with this. This is the responsibility of the Debian package maintainer.
If you (as an upstream) licensed your software in a manner that allows Debian to do what it does, and they do this to serve their users who actually want that, you are wrong to then complain about it.
If you don't want this, don't license your software like that, and Debian and their users will use some other software instead.
If package maintainers were always fine upstanding package maintainers as you imagine them to be I wouldn't be complaining, but I have in fact had Debian ship my software and screw it up and gotten a flood of bug reports, so... :)
I think you need to chill out. Relicensing the way you suggest would be _quite_ the hostile act, and I'm not going to do that either. But I am an engineer, so of course I'm going to talk about engineering best practices when it comes up.
You don't have to take it as an attack on your favorite distro, but that sort of thing really does pee in the pool of the upstream/downstream relationship between distros and their upstreams.
> I am an engineer, so of course I'm going to talk about engineering best practices when it comes up.
The trouble is you seem to be assuming that best practices for you, in your opinion, also apply to everyone else. They don't. Not everyone sees things the way you do or is facing the same issues or is making the same set of tradeoffs. There are downsides to what debian does but there are also upsides.
At this point, given the plethora of high-quality options available, as well as how easy it is to mix and match them on the same system thanks to container-related utilities and common practices, I really don't think there's any room for someone who doesn't like the Debian model (i.e. in general, as opposed to targeted objections) to complain about how they do things. If you want a cutting-edge userspace on Debian stable at this point you have at least 3 options between Nix, Guix, and Gentoo. There's also flatpak and snap which come built in.
We're in the middle of a huge spike in LLM-discovered security vulnerabilities, which means not everything will get assigned a CVE, a lot of people are watching repositories to look for exploitable bugs, and in the frenzy of backporting that people are now having to do, things will get missed.
I wager it's only a matter of time before we see a mass rooting event that hits Debian hard while everyone running something more modern has already been patched.
I think that might be what cuts down on the grandstanding about "freedoms" and "that's how we've always done things". You're certainly free to keep doing things the way you always have, up until it becomes a public nuisance.
No one is grandstanding about freedom here though? I claimed that the approach debian takes has both upsides and downsides. I stand by that. Personally I pull my networked services from testing while running stable on the host. I absolutely do not want constant churn of the filesystem code or drivers on my devices but I would also prefer not to run some franken build of ssh or apache or what have you. However I can also sympathize with others who need a more structured process and substantial lead time in staging prior to making major changes to production.
Why would you expect LLMs not to be simultaneously leveraged to catch backports that were missed or inadvertently broken?
Given recent headlines I think it's far more likely that we see a mass rooting event hit one or more of the bleeding edge rolling release distros or language ecosystems due to supply chain compromise. Running slightly out of date software has never been more attractive.
OpenBSD in particular can use competent developers to fix their dogshit filesystem.
Good grief, you are not forced to use Debian! Please leave the only stable distro alone, and just use one more suited to your style.
I assure you, enormous numbers of people prefer Debian the way it is. I do not, ever, want "new stuff" in stable. I have better things to do than fight daily change in a distro; it's beyond a waste of time and just silly.
If you want new things, leave stable alone, and just run Debian testing! It updates all the time, and is still more stable than most other distros.
Debian is the way it is on purpose; it is not a mistake, not leftover reasoning, and nothing you said seems relevant in this regard.
For example, there is no better way than backporting, when it comes to maintaining compatibility. And that's what many people want.
If you don't like the Debian model, don't use Debian. There are people that like the Debian model; it seems like you aren't one of them, though. That doesn't make them wrong.
> You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...
Doing terrible work every 2 years is better than doing it every day?
Personally I'd rather have a manageable stream of little bad things consistently over time rather than suddenly having a mountain of bad things one day.
Clearly you disagree with the debian stable perspective. That's fine, it's not for everyone. You can just run debian unstable or debian testing, depending on where exactly you draw the line.
If you want a rolling-release-style distro, just run Debian unstable. That's what you get. It's on par with all the other constantly updated distros out there. Or just run one of those.
Also, Debian stable has a lifetime a lot longer than 2 years, see https://www.debian.org/releases/. Some of us need distros like stable, because we are in giant orgs that are overworked and have long release cycles. Our users want stuff to "just work" and stable promises if X worked at release, it will keep working until we stop support. You don't add new features to a stable release.
From a personal perspective: Debian Stable is for your grandparents or young children. You install Stable, turn on auto-update and every 5-ish years you spend a day upgrading them to the next stable release. Then you spend a week or two helping them through all the new changes and then you have minimal support calls from them for 5-ish years. If you handed them a rolling release or Debian unstable, you'd have constant support calls.
...or just leave grandparents on the previous version of Stable until they get a new computer. Honestly not a huge fan of upgrading software at all, if I'm the one supporting the machines.
Just depends on if that's something grandparents/kids can/want to afford.
Personally, if the hardware is working great, it seems like a waste of money replacing it just to upgrade software. Especially with Debian oldstable -> Debian stable, where it's usually quite easy and painless.
> You don't want that as an automatic update because it will break in production for anyone who is actually using it
The problem with this take is that it’s stuck in the early 2000’s, where all servers are pets to be cared for and lovingly updated in place.
It’s also circular: you have the same problem with the current model if you don’t have a test environment. And if you do have a test environment, releases can be tested and validated at a much higher cadence.
If you want that, you don't want Debian. Other people do.
Debian patches defaults in OpenSSH code so it behaves differently than upstream.
They shouldn't legally be allowed to call it OpenSSH, let alone lecture people about it.
Let them call their fork DebSSH, like they have to do with "IceWeasel" and all the other nonsense they mire themselves into.
When you break software to the point you change how it behaves you shouldn't be allowed to use the same name.
Some people will even run Debian on the desktop. I would never, but some people get real upset when anything changes.
Debian does regularly bring newer versions of software: they release about every two years. If you want the latest and greatest Debian experience, upgrade Debian on week one.
From your description, you seem to want Arch but made by Debian?
> From your description, you seem to want Arch but made by Debian?
Isn't that essentially Debian unstable (with potentially experimental enabled)? I've been running Debian unstable on my desktops for something like 20 years.
But that does nothing for people who write and support code Debian wants to ship - packaging code badly can create a real mess for upstream.
Refactoring and rewrites prove time and time again that they also introduce new bugs and changes in behaviour that users of stable releases do not want.
There are other distributions for what you want. Debian also has stable-backports, which does what you want.
No need to rage on distributions that also provide exactly what their users want.
Don't get me wrong, I use and encourage extensive automated testing. However, only extensive manual testing by people looking for things that are "weird" can really find all bugs. (though it remains to be seen what AI can do - I'm not holding my breath)
FWIW the fixes referenced here are already fixed in trixie: https://security-tracker.debian.org/tracker/source-package/d...
Yeah, was about to comment: parent says "if it is ancient", and it is not. So the root comment is a nothingburger. Stable is one release cycle old, and depending on how things play out, testing may have 2.93 or later anyway.
You don't have to use Debian stable, if you'd prefer Ubuntu every 6 months, or Fedora (6 months? 9 months?), or even Arch Linux updated daily ...
I use Arch on my laptop; when I got it 2 years ago the AMD GPU was a bit new, so it was prudent to get the latest kernel, mesa, everything. Since I use it daily it's not bad to update weekly and keep on top of occasional config migrations.
I use Debian stable on my home server; it's been in-place upgraded 4-ish times over 10 years. I can install weekly updates without worrying about config updates and such. I set up most stuff I wanted many years ago, and haven't really wanted new features since, though I have installed tailscale and jellyfin from their separate Debian package repos so they are very current. It does the same jobs I wanted it to do 8 years ago, with super low maintenance.
But if you don't want Debian stable, that's fine. Just let others enjoy it.
That's what stable is for though. Like, sure, stable's policy is ludicrous and you would have to be insane to run stable. But the remedy for that isn't to try to change Debian policy, it's to get people to stop running stable. Maybe once no-one uses it Debian will see sense.
Irrelevant strawman, since you're not accusing the dnsmasq package in Debian stable of being straight-up broken.
What if the new release which contains the fixes has new dependencies and those also have new dependencies? I assume they have to Frankenstein packages sometimes to maintain the borders of the target app while still having major vulns patched right in stable.
Never liked using dnsmasq. Always felt like too much in one tool. A local caching resolver, dhcp server, and tftp/pxe boot setup were always things I preferred to configure separately.
It's more of a good thing that, in most cases, it's on devices that won't send it any packets unless a client first authenticates to a Wi-Fi station or physically plugs into an Ethernet port.
Answer: no, but they're working on it.
https://forum.openwrt.org/t/dnsmasq-set-of-serious-cves/2500...
10/10, no regrets, would recommend.
"a remote attacker capable of asking DNS queries or answering DNS queries can cause a large OOB write in the heap."
Malformed DNS response causes "infinite loop and dnsmasq stops responding to all queries."
Malicious DHCP request can cause buffer overflow.
The AI bug report tsunami is not in all projects. As the top comment notes, MaraDNS didn't have any. I assume djbdns and tinydns didn't either, otherwise they'd shout it from the rooftops.
I never understood why some projects get extremely popular and others don't. I also suspect by now that the reports by tools that are "too dangerous to release" scan all projects but selectively only contact those with issues, so that they never have to admit that their tool didn't find anything.
It's in popular projects.
It is a distorted view, because projects become popular by allowing indiscriminate commits, bugs, maintainers.
If I'd start a new project I'd allow anyone in and blog about 100 exploits every year, because that is exactly what people want. I'm serious.
What else can they do, assuming the computers behind the router are all patched up?
They can block traffic to update servers so the computers behind the router aren't all patched up, then exploit them. They also get access to all the IoT devices on the internal network. They can also use your router as a proxy so their scraping/attack traffic comes from your IP address instead of theirs.
If you blindly TOFU ssh sessions, those can be pwned easily in many common use cases. Legacy software configurations like NFS with IP authentication will be bypassed. Realistically, the most likely scenario is your home connection being used as a VPN or a DDoS node.
They could try and exploit any device on your network, and since they see which servers you connect to and how often you communicate with each one, they can write phishing emails which are tailored just for you.
It's definitely bad.
Welcome to the new world order.
Why can't machine-learning write a product from scratch that is flawless?
The CVEs here have their fair share of silly C problems, but also more rigid input validation and handling. These more rigid validations exclude stuff which may even be valid by the spec, but entirely problematic in practice.
As examples, take a look how many valid XML documents are practically considered unsafe and not parsed, for example due to recursive entity expansion. This renders the parsers not flawless and in fact not in spec.
Or, my favorite bait - there should be a maximum length limit on passwords. Why would you ever need a kilobyte-sized password?
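For what it's worth, the usual argument for a cap isn't about storage: password hashing is deliberately expensive, so unbounded input is a cheap way to make a server burn CPU. A minimal sketch of the idea in Go (my own illustration; hashPassword is a hypothetical helper, and it assumes the golang.org/x/crypto/bcrypt package; bcrypt itself only looks at the first 72 bytes of input anyway):

    package main

    import (
        "errors"
        "fmt"

        "golang.org/x/crypto/bcrypt"
    )

    // bcrypt only considers the first 72 bytes of input, so accepting more
    // than that just lets a client make the server do pointless work.
    const maxPasswordBytes = 72

    var errPasswordTooLong = errors.New("password too long")

    func hashPassword(pw []byte) ([]byte, error) {
        if len(pw) > maxPasswordBytes {
            return nil, errPasswordTooLong
        }
        return bcrypt.GenerateFromPassword(pw, bcrypt.DefaultCost)
    }

    func main() {
        // A kilobyte-sized "password" is rejected before any hashing happens.
        if _, err := hashPassword(make([]byte, 1024)); err != nil {
            fmt.Println(err)
        }
    }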
LLMs certainly make it more feasible to rewrite a product in a memory-safe language, eliminating a whole class of bugs.
Flawless software is hard for an LLM to write, because all the programs they have been trained on are flawed as well.
As a fun exercise, you could give a coding agent a hunk of non-trivial software (such as the Linux kernel, or postgresql, or whatever), and tell it over and over again: find a flaw in this, fix it. I'm pretty sure it won't ever tell you "now it's perfect" (and do this reproducibly).
Whatever the answer to that conundrum might be, LLMs are trained on these patterns and replicate them pretty faithfully.
Just because something is good at finding bugs doesn't mean it finds all the bugs. Finding a bug only tells you there was one bug you found; it doesn't tell you whether the rest is solid.
Have you ever met a security engineer? I’ve never met one who was also a good engineer (not saying they don’t exist, I just haven’t met one). Do they find vulnerabilities? Sure. Could they write the tools they use to find vulnerabilities? Most probably not.