> resolvconf(8) is a shell script which does not validate its input. A lack of quoting meant that shell commands passed as input to resolvconf(8) may be executed.
The fix consists of implementing an XXX comment that has been present since the code was added:
```c
/*
 * XXX validate that domain name only contains valid characters
 * for two reasons: 1) correctness, 2) we do not want to pass
 * possible malicious, unescaped characters like `` to a script
 * or program that could be exploited that way.
 */
```
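For illustration, the check that comment asks for can be sketched in plain sh with a pattern match. The function name and the exact character set here are illustrative and not taken from the actual patch:

```shell
# Hypothetical sketch of the validation the XXX comment asks for, in plain
# POSIX sh: accept only characters that can appear in a domain name, and
# reject everything else before the string reaches another script or program.
is_valid_domain() {
    case "$1" in
        # Empty input, or any character outside [A-Za-z0-9.-]: reject.
        "" | *[!A-Za-z0-9.-]*) return 1 ;;
        *) return 0 ;;
    esac
}

is_valid_domain "example.com" && echo "accepted"
is_valid_domain 'evil.com`reboot`' || echo "rejected"
```

The backquotes in the second call are exactly the kind of "malicious, unescaped characters" the comment warns about; with this check in place they never reach a shell that would interpret them.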
It is wild that it was in that state for so long. It probably took just about as long to write that comment as it would have to implement the proper solution.
This actually makes me happy! I must be getting old!
It truly is a bad one, but I really appreciate Kevin Day for finding/reporting this, and all the volunteer work fixing it.
All I had to do was "freebsd-update fetch install && reboot" on my systems and I could continue my day. Fleet management can be that easy for both pets and cattle. I do however feel for those who have deployed embedded systems. We can only hope the firmware vendors are on top of their game.
My HN addiction is now vindicated as I would probably not have noticed this RCE until after Christmas.
This makes me very grateful and gives me a warm fuzzy feeling inside!
Even better, the reboot wasn't needed as the kernel didn't get bumped on this one. Just restart the rtsold service if you're using it and sanity check your resolv.conf and resolvconf.conf.
As for noticing it quickly, add `freebsd-update cron` to crontab and it will email you the fetch summary when updates are available
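For example, a system crontab line like the following (the schedule is illustrative) makes freebsd-update check nightly and mail root when patches are waiting:

```
# /etc/crontab: run the check at 03:00 daily; freebsd-update mails root a
# summary when new updates have been fetched
0	3	*	*	*	root	freebsd-update cron
```

The `cron` subcommand sleeps for a random interval before fetching, so many machines firing at the same minute don't stampede the update servers.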
Where "major dependency" means everything that even indirectly touches the network. It doesn't really matter whether the thing that gives everyone access to your systems is major or not.
It's amazing the number of people that think shell scripts should be anything other than throwaway single-person hacks.
They should probably go through their whole system and verify that there aren't more shell scripts being used, e.g. in the init system. Ideally a default distro would have zero shell scripts.
Probably not a joke. In the same way people want to get away from the C language due to its propensity for memory vulnerabilities, shell scripts have their own share of footguns, the most common being a variable not being quoted when it should be (which is exactly the issue described in this advisory).
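A minimal demonstration of that footgun (the variable contents are made up): an unquoted expansion is re-split into words by the shell, and the same mistake inside an eval or backquote context turns data into executed code:

```shell
# The classic unquoted-variable footgun: $input holds one value, but the
# unquoted expansion is split into separate words again by the shell.
input="two words"

set -- $input          # unquoted: re-split into two arguments
unquoted=$#

set -- "$input"        # quoted: stays a single argument
quoted=$#

echo "unquoted args: $unquoted, quoted args: $quoted"

# With eval (or backquotes) in the mix, the same missing quotes let
# attacker-controlled text be parsed as shell syntax instead of data.
```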
It doesn't mean getting away from scripting languages; it means getting away from shell scripts in particular (the parent poster said specifically "zero shell scripts"). If the script in question was written in Lua, or heck even Javascript, this particular issue most probably wouldn't have happened, since these scripting languages do not require the programmer to manually quote every single variable use.
That's fine; I just thought it was weird to say that we should check to see whether any shell scripts are used in the BSD init system. We know there are; it was a deliberate design decision at the time, even if we might now wish for it to be different.
Not a joke. I knew they used to use a pile of janky shell scripts for their init system. I didn't know they still do. That's disappointing.
And cesarb is correct - the issue isn't scripts; it's shell scripts, especially Bash and similar. Something like Deno/Typescript would be a decent option for example. Nushell is probably acceptable.
Even Python - while a terrible choice - is a better option than shell scripts.
The issue is POSIX standardizing legacy stuff like shells, thereby tempting people to write "portable" software, leading these technologies to ossify and stick with us for half a century and counting. Someone comes along and builds something better but gets threatened for not following "the UNIX way".
This is a very good point. I wonder how hard it would be to get POSIX to standardise a scripting language that isn't awful.
Probably never going to happen. There is a dearth of good scripting languages, and I would imagine any POSIX committee is like 98% greybeard naysayers who think 70s Unix was the pinnacle of computing.
POSIX does not specify the init/rc script system, so it's not a factor here at all. A POSIX-compliant system could use Python scripts. macOS (which is UNIX 03 certified) uses launchd. A POSIX system has to ship the shell, not use it.
And FreeBSD isn't actually POSIX-certified anyway!
The real consideration here is simply that there are tons of existing rc scripts for BSDs, and switching them all would be a large task.
Unfortunately your joke has wooshed over quite a few heads but what you say is true. The shell should be one of the most reliable parts of your operating system. Why on earth would you NOT trust the primary interface of your OS? Makes no sense.
I'm not sure I follow you but it wasn't a joke. Shell scripts are notoriously error-prone. I absolutely do not trust shell script authors to get everything right.
Also the shell isn't even "the primary interface of your OS". For Linux that's the Linux ABI, or arguably libc.
Unless you meant "human interface", in which case also no - KDE is the primary interface of my OS.
I've always believed sh, csh, bash, etc, are very bad programming languages that require excessive efforts to learn how to write code in without unintentionally introducing bugs, including security holes.
> vulnerable to remote code execution from systems on the same network segment
Isn't almost every laptop these days autoconnecting to known network names like "Starbucks" etc, because the user used it once in the past?
That would mean that every FreeBSD laptop in proximity of an attacker is vulnerable, right? Since the attacker could just create a hotspot with the SSID "Starbucks" on their laptop and the victim's laptop will connect to it automatically.
As far as I know, access points only identify via their SSID. Which is a string like "Starbucks". So there is no way to tell if it is the real Starbucks WiFi or a hotspot some dude started on their laptop.
There is nothing wrong with using public networks. It's not 2010 anymore. Your operating system is expected to be fully secure[1] even when malicious actors are present in your local network.
[1] except availability, we still can't get it right in setups used by regular people.
And when you connect to a non-public WiFi for the first time - how do you make sure it is the WiFi you think it is and not some dude who spun up a hotspot on their laptop?
Why does it matter? I mean, I guess it did in this case, but that was considered a top-priority bug and quickly fixed.
I guess my point is that, the way the internet works, your traffic goes through a number of unknown and possibly hostile actors on its way to the final destination. Having a hostile actor presenting a spoofed WiFi access point should not affect your security stance in any way. Either the connection works and you have the access you wanted, or it does not. If you used secure protocols they are just as secure, and if you used insecure protocols they are just as insecure.
Now, having said that, I will contradict myself: we are used to having our first hop be a high-security trusted domain and tend to be a little sloppy there even when it is not. But still, in general it does not matter. A secure connection is still a secure connection.
Hmm. Are you sure that your stack wouldn't accept these discovery packets until after you've successfully authenticated (which is what those chains are for) ?
Take eduroam, which is presumably the world's largest federated WiFi network. A random 20 year old studying Geology at Uni in Sydney, Australia will have eduroam configured on their devices, because duh, that's how WiFi works. But that also works in Cambridge, England, or Paris, France, or New York, USA, or basically anywhere their peers would be, because common sense - why not have a single network?
But this means their device actively tries to connect to anything named "eduroam". Yes it is expecting to eventually connect to Sydney to authenticate, but meanwhile how sure are you that it ignores everything it gets from the network even these low-level discovery packets?
We had a soccer player in NL who was wildly popular, and he had these funny remarks every now and then which got him nicknamed the best-known Dutch philosopher. One of these was 'every advantage has its disadvantage'. I guess this is one of those.
Can we be done with the house of cards that are shell scripts everywhere?
Anyways, this feels like a big issue for "hidden" FreeBSD installs, like pfSense or TrueNAS (if they are still based on it though). Or for servers on hosting providers where they share a LAN with their neighbors in the same rack.
Sure, as long as the solution isn't to just bolt on another distinct DNS monolith. The root of the problem IMO is that no libc, AFAIK, exports an API for parsing, let alone composing or manipulating, resolv.conf formatted data. The solutions have either been the same as FreeBSD (openresolv, a portable implementation of Debian's resolvconf tool), or just freezing resolv.conf (notwithstanding occasional new libc features) and bolting atop (i.e. keeping in place) the existing infrastructure a monolithic resolver service with their own bespoke configs, such as macOS and Linux/systemd have done. But resolv.conf can never go away, because it's the only sane and portable way for your average userland program to load DNS configuration, especially async resolver libraries.
It's a coordination problem. Note that the original notion of resolvconf, IIUC, was that it was only stitching together trusted configuration data. That's no excuse, of course, for not rigorously isolating data from execution, which is more difficult in shell scripts--at least, if you're not treating the data as untrusted from the get go. It's not that difficult to write shell code to handle untrusted data, you just can't hack it together without keeping this in mind. And it would be much easier if the resolver infrastructure in libc had a proper API for dealing with resolv.conf (and others), which could be exported by a small utility which in turn could be used to slice and dice configurations from shell scripts.
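As a sketch of that discipline (the hostile value below is made up): keep untrusted data in argument positions the shell never re-parses, and never feed it through eval:

```shell
# Treat network-supplied data as untrusted from the get-go. Passed as a
# quoted argument, the hostile backquotes below stay inert data.
domain='corp.example`reboot`'   # attacker-controlled value, illustrative

# Dangerous pattern (left commented out): eval re-parses the string, so
# the backquoted command would actually run:
#   eval "echo search domain $domain"

# Safe pattern: the value travels as a single argument to printf and is
# only ever printed, never interpreted as shell syntax.
printf 'search domain is: %s\n' "$domain"
```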
The problem with the new, alternative monoliths is they very quickly run off into the weeds with their crazy features and configuration in ways that create barriers for userland applications and libraries to rely upon, beyond bootstrapping them to query 127.0.0.1:53. At the end of the day, resolv.conf can never really go away. So the proper solution, IMO, is to begin to carefully build layers around the one part that we know for a fact won't go away, rather than walking away with your ball to build a new playground. But that requires some motivated coordination and cooperation with libc developers.
> Sure, as long as the solution isn't to just bolt on another distinct DNS monolith
Why not? And I don't mean this in tongue-in-cheek, but as a genuine interrogation: why not go the macOS/systemd route?
DNS is a complex topic. Much more complex than people admit it is, and that can definitely not be expressed fully with resolv.conf. I do agree that it is too late to get rid of it (and was not my concern actually), but it is too limited to be of actual use outside of the simple "I have a single DNS server with a single search domain". IMHO, a dedicated local daemon with its own bespoke config definitely has value, even if it solely provides a local cache for applications that don't have one already (like most of them outside of browsers). And for more complex cases, simple integration with the network configuration daemon provides actual value in e.g. knowing that a specific server is reachable through a specific interface that has a specific search domain. That is, native routing to the correct servers to avoid the timeout dance as soon as you have split networks.
Also, for the local ad-hoc configuration part: we already have nsswitch, which is its own can of worms that pretty much nobody has ever heard of, let alone touched its configuration. Heck, I've written DNS servers but only looked once at nsswitch. resolved's configuration is integrated in the systemd ecosystem, has an approachable and well documented configuration, and is pretty useful in general.
Anyways, the main gripe I had was not really at the mess that is DNS on Linux, but the general stance in the UNIX-like world against anything that's not a lego of shell scripts because "that's not the unix philosophy". Yeah you can write an init system fully with sh, have their "units" also all be written in sh, but oh lord has stuff like systemd improved the situation for the init + service part. Having a raw string from a network packet land in a shell script is a recipe for disaster, seeing how much quoting in scripts is famously difficult.
> The problem with the new, alternative monoliths is they very quickly run off into the weeds with their crazy features and configuration
Agreed for the crazy features. systemd is a godsend for the modern linux world, but I'm skeptical when I see the likes of systemd-home. The configuration is not where I'd pick at those systems, though, because they tend to be much more configurable. They are opinionated, yes, but the configuration is an actual configuration and not a patchwork of shell scripts somewhere in /etc, when they're not direct patches to the foundational shell scripts!
> in ways that create barriers for userland applications
How so? In the specific example of resolved, I'd argue it's even less work for applications, because they don't need to query multiple DNS servers at once (it'll handle it for them), don't need to try resolution with and without search domain, etc.
In the end, I find that resolved's approach of symlinking its stub resolv.conf is the most elegant approach with our current setups.
PS: I talk a lot about resolved because that's the one I know best, not the one I think is the best! It has loads of shortcomings too, yet it's still a net improvement to whatever was in place before.
> DNS is a complex topic. Much more complex than people admit it is, and that can definitely not be expressed fully with resolv.conf. I do agree that it is too late to get rid of it (and was not my concern actually), but it is too limited to be of actual use outside of the simple "I have a single DNS server with a single search domain".
resolv.conf is limited, but it's also been highly stable for decades, and it's sufficient if not ideal for controlling how getaddrinfo works (at least for on-the-wire requests), including controlling things like EDNS0, parallel requests, etc. Most if not all libc resolvers support things like parallel querying and other simple knobs which are configurable (if at all--see musl libc) through resolv.conf, demonstrating that it's expressive enough for most if not all common requirements shared among various client-side stub resolvers.
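For reference, a resolv.conf exercising some of those knobs might look like this (addresses are documentation examples; option support varies by libc, and musl ignores most of `options`):

```
nameserver 192.0.2.1
nameserver 2001:db8::1
search example.net corp.example.net
options edns0 timeout:2 attempts:2
```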
> And for more complex cases, simple integration with the network configuration daemon
But which one? Are you suggesting integration by way of loading its configuration(s) (which puts us back at square 0), or by a modified query protocol, or by interfacing with the broader but even more diverse native configuration systems? None of the options seem remotely practical from the perspective of most open source projects, unless they're specifically targeting a single environment like Linux/systemd/resolved. I don't see a viable pathway to make that happen. By contrast, embracing and hopefully improving resolv.conf as an integration point could be done piecemeal, environment by environment. The syntax is already effectively universal across systems, with the options directive providing most of the knobs. We could even make an initial push through POSIX by officially standardizing the syntax, which may even convince musl libc to make its resolver actually configurable.
> In the specific example of resolved, I'd argue it's even less work for applications, because they don't need to query multiple DNS servers at once (it'll handle it for them), don't need to try resolution with and without search domain, etc.
Yes, in most cases it's sufficient for userland applications to just make simple requests to the locally managed resolver service defined in resolv.conf. But the cases and projects needing more control over how they do their requests, using their own resolvers, only grows, especially with the proliferation of DNS schemes - see, e.g., the various recent HTTP-related DNS records which often require multiple queries and can benefit from parallel queries managed internally. A prime example is getaddrinfo itself, some implementations of which do parallel queries for A/AAAA lookups. Which brings us back to my main point: resolv.conf is the only common centralized point across almost all environments (Windows being the major exception) for configuring basic DNS services.
I'm not arguing for improving resolv.conf integration as a way to replace local DNS services or their configuration. Just that for decades the staleness of resolv.conf has been a conspicuous and growing pain point from both a system configuration and userland integration perspective, and a little coordinated love & attention across the ecosystem, if only firmly committing to what's already there (especially for glibc and FreeBSD) as a reliable and more easily leveraged source of truth for code that needs it, would go a long way.
No, I don't think you are understanding this right, but there are some good questions you are asking. Where is the flag button?
If you are a real human, the most interesting question you're bringing up is What about all the appliances backed by FreeBSD? Yes, they are obsolete if they use IPv6 and accept RAs and if they don't get updates.
That was my first thought, if this is an embedded system without an update path this will be super hard to solve. People usually are not even aware of what OS their appliances run under the hood and whether or not they are updated automatically and how to update them if they are not.
IMHO you do not need "active" IPv6. Most LANs (unless you have some switch-level filtering that blocks router advertisements from "unauthorized" nodes) can transport such IPv6 packets. Then it just takes being connected to the LAN and being able to send an arbitrary ICMP6 packet (which probably means being root on the attacker machine, not a very high barrier I'd say).
Are you referring to the OMB IPv6 mandate? That only applies to federal networks, and even there it requires only 80% adoption. It has zero relevance to normal commercial/private networks.
https://www.freebsd.org/security/patches/SA-25:12/rtsold.pat...
> This makes me very grateful and gives me a warm fuzzy feeling inside!
You should go into comedy, this would kill at an open mic!
> As for noticing it quickly, add `freebsd-update cron` to crontab and it will email you the fetch summary when updates are available
Always makes sense to subscribe to the security-announce mailing list of major dependencies (distro/vendor, openssh, openssl etc.) and oss-security.
> That would mean that every FreeBSD laptop in proximity of an attacker is vulnerable, right? Since the attacker could just create a hotspot with the SSID "Starbucks" on their laptop and the victim's laptop will connect to it automatically.
Joking, but not that much :)
> As far as I know, access points only identify via their SSID. Which is a string like "Starbucks". So there is no way to tell if it is the real Starbucks WiFi or a hotspot some dude started on their laptop.
Aka an "unknown" or "public" network... don't do that.
> Anyways, this feels like a big issue for "hidden" FreeBSD installs, like pfSense or TrueNAS (if they are still based on it though). Or for servers on hosting providers where they share a LAN with their neighbors in the same rack.
And it's a big win for jailbreaking routers :D
So far tailscale magicdns just works on FreeBSD.
I second that systemd is great, for services. Anything beyond that? Just a gargantuan opaque buggy overreach.
> IPv6 users that do not configure the system to accept router advertisement messages, are not affected.
Maybe I'm missing something, but isn't that a workaround?
"PC or computers or hardware that uses OS that consume FreeBSD, has a faulty software for the router's firmware?"
"The router's software performs ad distributions?"
"The version of internet, the router uses, is updated, whereas, the target machine, or the user's machine is still running a old version"
"The security patch works for the modern but not the precursor version?"
"This leaves older systems obsolete in the market?"
"is this a step-by-step instructions to business owners to introduce new products, selling that older products are obsolete" ?
Google tracks IPv6 adoption at almost 50% globally and over 50% in the USA (https://www.google.com/intl/en/ipv6/statistics.html)
IPv6 is mainstream.