Authors are from STMicroelectronics, Politecnico di Torino, Freie Universität Berlin, and Inria. The paper examines writing firmware for an IoT sensor platform. From the abstract:
> Two teams concurrently developing the same functionality (one in C, one in Rust) are analyzed over a period of several months. A comparative analysis of their approaches, results, and iterative efforts is provided. The analysis and measurements on hardware indicate no strong reason to prefer C over Rust for microcontroller firmware on the basis of memory footprint or execution speed. Furthermore, Ariel OS is shown to provide an efficient and portable system runtime in Rust whose footprint is smaller than that of the state-of-the-art bare-metal C stack traditionally used in this context. It is concluded that Rust is a sound choice today for firmware development in this domain.
> Rust is evolving far too fast to be used in code which needs to run for years to decades down the line.
Code doesn’t stop running on existing hardware when the language changes in a future compiler. You can still use the same old toolchain.
I’ve done a lot of embedded development in a past life. Keeping old tool chains around for each old platform was standard.
I would much rather go through the easy process of switching to an older Rust tool chain to build something than all of the games we played to keep entire VMs archived with a snapshot of a vendor tool chain that worked to build something.
I remember a coworker having to fight with an old platform's build not working because our user/group IDs were bigger than 2^16. I can't remember which utility was causing the problem, I'd have to guess tar. This is when we learned to play the archive a VM game.
I know a defence company that has a bunch of vaxes stored in low oxygen environments because they legally have to be able to provide software updates to firmware they’ve written for the next 20 or so years and it was written on a vax.
They had some great stories trying to get something or other running again where they had to fly one of the original designers over to hand solder a board back into action.
How we do that today is a bit of an interesting problem I don't think they've convincingly solved: basically maintaining nightly builds forever. A couple of 1Us of Kubernetes in deep storage ain't gonna do it; you're not gonna be able to solder a Xeon back to life.
I know I'd rather be trying to get a load of C99 rebuilt for some MIPS or other after 20 years than some random version of Rust.
> I know a defence company that has a bunch of vaxes stored in low oxygen environments because they legally have to be able to provide software updates to firmware they’ve written for the next 20 or so years and it was written on a vax.
So uh, will these ever make it to an auction site you think?
I can't imagine there's much overlap between "we will need to update this firmware for the next decade" and "let's bet the farm on the documentation being perfect and all the downloads still being available".
The good news is that C also seems contaminated with the "move fast, break things" philosophy. The modern code writer is not able to make things that last more than a couple of months.
What you are describing happens all the time. Usually the toolchain provider will continue updating a list of known issues for some time after EOL. Beyond that you have third parties that do it for decades, if the platform is big enough. They collect bug reports from the industry, investigate them, then create lists that you subscribe to. Those lists include detailed examples, explanations, and usually linter rules to detect code that could trigger the bug.
The truth is: If the toolchain was good enough to ship your product, has time to go EOL, and then you do a patch that surfaces an esoteric toolchain bug, then the odds are that you'll know exactly what triggered the bug and you can work around it by writing different code.
Because even if the newer shinier compiler/toolchain had the issue fixed, most companies wouldn't upgrade to it at that point. It's almost never desirable to change your toolchain for a shipping product, you're just introducing more unknowns.
Rust uses "Editions" (e.g., 2015, 2018, 2021, 2024) to introduce breaking changes without splitting the ecosystem. Every edition remains supported by newer compiler versions _indefinitely_. The only churn is on projects targeting "nightlies", but there's no reason you can't target a stable release for projects that need that stability.
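For what it's worth, the edition is just a per-crate field in the manifest, and since Rust 1.56 you can also declare the minimum compiler version you support; a sketch (crate name and versions are made up):

```toml
[package]
name = "sensor-fw"      # hypothetical crate
version = "0.1.0"
edition = "2018"        # this crate keeps compiling under the 2018 rules
rust-version = "1.56"   # MSRV; newer cargo flags dependencies that exceed it
```

A 2018-edition crate like this can freely depend on 2021- or 2024-edition crates and vice versa; editions are per-crate, not per-program.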
Do you recall which libraries? Use of nightly fell off a cliff after 2018. Looking at the bottom of https://lib.rs/stats#rustc-usage, ~8% of all crates.io requests came from a nightly newer than the one corresponding to 1.86. That's an upper bound, as using a nightly compiler doesn't mean that a nightly compiler was needed. The prevalence of nightly is also niche-specific: if you're in embedded it is likely you need some nightly-only features that haven't been stabilized, but if you have an OS underneath, chances are that you don't.
> That's an upper bound, as using a nightly compiler doesn't mean that a nightly compiler was needed.
To be fair it's not even a lower bound, as using a stable compiler doesn't imply the absence of nightly only feature (as in Cargo features, the ones you can enable on crates you depend on).
For the purposes of this discussion the question is not whether or not a crate exposes optional features that require a nightly compiler, but whether or not a crate makes use of the nightly compiler mandatory, which has become extremely rare in my experience. Perhaps it's more common in some embedded use cases, but if people want to make that assertion, I would ask that they either mention which libraries they're specifically talking about or which nightly features they're specifically referring to.
I think the divide is apps vs libraries: a library that requires their dependants to set an environment variable opting out of stability guarantees is unlikely to gain adoption, but applications that do so are more common, like Firefox.
You have the same issue with C, no? C gets new versions, compilers have changed, hardware evolves, and some things from the past aren't supported as well anymore.
The code won't magically stop running because the Rust community continued evolving the language. The old toolchains will be available if there's a compatibility change.
Probably just depends on what you are doing. Library support could move forward and new features / security updates for libraries that are not part of core Rust could possibly be an issue if they don't work on older versions.
Might not matter for a lot of embedded, but if you are doing something like exposing functionality via a webserver or something that would be network-connected, then security updates in third-party libraries may be important.
For example, it would be really easy for me to run old code that's pinned to something like Python 3.7, but if libraries have moved on to newer Python 3.x versions without backwards compatibility, then I'm stuck using the out-of-date versions or backporting myself.
I'm curious why I've seen this sentiment repeated in so many places, I learned Rust once 5 years ago and I haven't had to learn any new idioms and there have been no backwards incompatible changes to it that required migrating any of my code.
I think people don't like the JavaScript treadmill. People want to think about using tools and getting proficient with them rather than relearning tools. I'm not saying Rust is like that, but I do feel that way about Python and JavaScript. Those are dynamic languages, but that's what all this editions stuff evokes: an "if it were stable, it wouldn't be changing" sort of thing.
> using tools and getting proficient with them rather than relearning tools
This attitude works in carpentry, but not in software. You need to get proficient, but your tools will keep evolving, like everything else in the software world.
This attitude doesn't even work in carpentry: depending on the timeframe you look at, tools have changed over time. You can still use a hand saw where a table saw would be just as suitable, or get a SawStop(tm) and reduce the likelihood of losing a finger.
That's exactly the point. This is not normal even in software.
You can, in fact, learn C exactly once. Or any number of other languages. The entire argument being made here is that the world you're suggesting is a problem. Software developers should not have to continually relearn their tools and it is abnormal to suggest they should.
To be very fair, there are legitimate gripes here. They're small, but they are worth covering. And then there's the huge nonsense.
1. The edition system allows Rust to literally mutate the language. The 2024 edition (what you get if you begin a new Rust project today) has different rules from the 2021 edition, the 2018 edition, and the Rust 1.0 "2015 edition". These changes aren't exactly huge, but they are real, and at corporate scale you would probably want to add, say, a one-day internal seminar on what's new before adopting a new edition. For example, we hope the 2027 edition will swap out the 1..=10 syntax to be sugar for the new core::range::RangeInclusive<i32> rather than today's core::ops::RangeInclusive<i32>, and this swap delivers some nice improvements.
2. Unlike C++, the Rust stdlib unconditionally grows for everybody in new compiler releases. So even if you stuck with the 2015 edition all the time since Rust 1.0, when you use a brand-new Rust compiler you get the standard library as it exists today in 2026, not as it was in 2015 when you began coding. If you decided you needed a "strip_suffix" method for the string slice type, you might have written a Rust trait, say ImprovedString, and implemented it for str to give it your strip_suffix method. Meanwhile, in Rust 1.45 the standard library gained an inherent method with the same name for the same purpose, so your plain method calls now resolve to the std method instead (inherent methods take priority over trait methods); when such an addition lands on a std trait rather than a type, as when Iterator::flatten collided with itertools' flatten, you get an outright ambiguity error. Either way, you may need to modify your software for it to keep working as intended on Rust 1.45 and later.
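A minimal sketch of that collision, with a hypothetical ImprovedString trait as described; on current Rust the plain method call quietly resolves to the std inherent method:

```rust
// Hypothetical extension trait, as one might have written pre-1.45,
// back when str had no strip_suffix of its own.
trait ImprovedString {
    fn strip_suffix(&self, suffix: &str) -> Option<&str>;
}

impl ImprovedString for str {
    fn strip_suffix(&self, suffix: &str) -> Option<&str> {
        if self.ends_with(suffix) {
            Some(&self[..self.len() - suffix.len()])
        } else {
            None
        }
    }
}

fn main() {
    let s = "hello.txt";
    // Since Rust 1.45 this resolves to the inherent std method,
    // shadowing the trait impl above.
    assert_eq!(s.strip_suffix(".txt"), Some("hello"));
    // Reaching the trait method now requires explicit disambiguation:
    assert_eq!(ImprovedString::strip_suffix(s, ".txt"), Some("hello"));
}
```

Here both implementations happen to agree, so the shadowing is harmless; if your trait method had different semantics, behavior could change silently on a compiler upgrade.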
3. Because Rust is a language with type inference, changes which seem quite subtle and of no consequence for existing code can make something old you wrote ambiguous, because what once had a single obvious type no longer does. This is more surprising than the previous case, because now it seems as though this should never have compiled at all: types A and B already existed, the compiler used to infer A, and now it insists B might also be possible, yet it may be quite a tangle to discover why B was not a possibility until this new version of Rust. If the compiler had rejected your code as ambiguous when you wrote it in 2015, you'd have grunted and written what you meant; at this distance in time it may be hard to remember whether you meant B here.
Now the nonsense: there's a vague superstition that Rust is constantly changing while good old C is absolutely stable. Neither is remotely true. If you really need certainty, you should freeze actual hardware and software, or at the very least build a VM; then nothing changes, because you changed nothing. If you'd have been comfortable upgrading to a new C compiler version, you shouldn't be scared of upgrading the Rust tools.
I'm curious what the concern is with the Rust editions mechanics in place. Each crate gets to define the language edition it is compiled with. Even if dependencies upgrade to later editions, they can still be linked against by crates on an older edition.
As for the broader crate ecosystem, if crates you depend on drop support for APIs you depend on, that could cause you to get stuck on older unsupported releases. Though that is no different of a problem than any other language.
I only tried Rust for small hobby projects, but I did experience weird code rot: you just leave the code there and after a while it does not compile. Might have something to do with how Cargo manages dependencies.
Do you remember more specifics? I've seen four cases:
- a project with no Cargo.lock, where there have been breaking changes in a dependency that wasn't pinned specifically enough in Cargo.toml; fixing this requires some finessing of dependencies, but it's possible to get the project building without any code changes
- a project with a proper dependency tree specified, but where a std change caused inference to break specific older versions of a crate in your tree (time 0.3.35 comes to mind); this requires similar changes to the above
- a project relies on UB in stable code that should always have been disallowed and has since been fixed; this is tricky: in a dependency, an updated version will likely exist, while in your own project you'd have to either change your code or use the older toolchain, knowing that the code might not be doing what you want it to do (this happened a handful of times pre-1.20)
- an older project, with the proper dependency versions specified, being built on a newer platform; I saw this with someone trying to build a project untouched since 2018 on an ARM Mac: the toolchain for that target didn't exist back then, and the macOS-specific lib they were using didn't have any knowledge of it either. Newer versions of the library do, of course, but that required updating to a set of libs that would be mutually compatible too.
All of these cases are quite rare. You could encounter all of them at the same time, and that would be annoying, enough to have someone doing it for fun say "fuck it" and drop it. You can also get hit by lightning.
But between Cargo.lock which should allow your project to build on newer toolchains, and access to all prior toolchains, your project should continue to build forever on the same platform.
I'd add pinning a rust toolchain version (using rust-toolchain.toml or similar) in addition to Cargo.lock
Rustc does have fairly frequent (every ~18 months or so) minor breaking changes between versions. These are often related to type inference, usually only affect a very small number of crates, and are usually mitigated by publishing patch versions of those crates that don't run into the issue. But if you have the patch version locked with a lockfile, then that won't help you and there is an increased likelihood of the build failing, so it's best to lock down the rustc version too.
Luckily pinning the rustc version is very easy to do.
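A sketch of such a pin via rustup's standard mechanism (the channel, components, and target below are examples, not recommendations):

```toml
# rust-toolchain.toml at the repository root; rustup picks it up automatically
[toolchain]
channel = "1.86.0"                    # an exact version, not "stable"
components = ["rustfmt", "clippy"]
targets = ["thumbv7em-none-eabihf"]   # e.g. a Cortex-M4F embedded target
```

Anyone building the project through rustup then gets exactly this toolchain, which together with Cargo.lock pins both the compiler and the dependency tree.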
---
On regular projects this kind of issue can usually also be fixed by upgrading to the latest rustc and running `cargo update`. But conservative embedded projects may have legitimate reasons for not wanting to upgrade rustc to the latest version, and parts of the ecosystem's disregard for MSRVs mean that running `cargo update` on an older rustc has a high chance of causing build breakage due to MSRV issues.
I've had issues compiling Python 3.12 on Arch Linux when the Python 3.12 -> 3.13 transition happened and a few important packages broke, so I had to compile an older version of GCC and build Python 3.12 myself.
So, it can happen in any programming language, and to any large projects.
Rust allows me to handle this easily with a rust-toolchain file, so this concern is kinda overblown imo.
> Might have something to do with how Cargo manages dependencies
Build against the lockfile to use the same versions.
Unless they were pulled from upstream, they won’t suddenly stop building against the same compiler version. Rustup makes it easy to switch compiler versions to get back to the same one you used, too.
Even if a crate is yanked, if you have the version in a lock file it will still download and build. (This was done precisely after seeing the left-pad incident.)
I'm sure you have ways to entirely purge a crate. And the situation will arise that you need to do so. In which case all the old code will, indeed, break.
Vendoring is the only solution to this, and it's culturally less common in Rust-land, but there is first-party support: `cargo vendor` is a built-in Cargo subcommand, with third-party tools filling the remaining gaps. Compare that to Go-land, where `go mod vendor` gets you 95-100% of the way there.
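For reference, a rough sketch of the built-in flow; the config snippet is the one `cargo vendor` itself prints when it finishes:

```shell
# Copy the sources of every dependency into ./vendor
cargo vendor

# Tell Cargo to use them by putting the printed snippet into .cargo/config.toml:
#   [source.crates-io]
#   replace-with = "vendored-sources"
#   [source.vendored-sources]
#   directory = "vendor"

# Builds are now reproducible without touching the network:
cargo build --locked --offline
```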
This is not a Rust issue but an inherent issue with dependencies in all languages. External dependencies rot.
For Rust code for serious industrial use cases or firmwares, it's always best to minimize dependencies as much as possible to avoid this. Making local copies of dependencies is also a thing for certain use cases.
There is a difference in C and Rust culture. Embedded C projects rarely have external dependencies, and in rare cases when there are dependencies (e.g. most projects use vendor SDKs nowadays), they are pinned and there is an expectation of API compatibility anyway
Rust on the contrary incentivises using dependencies, and especially embedded software is hard to write without using external packages (e.g. cortex-m-rt, bytemuck and many others)
Firefox explicitly opts out of stability guarantees by using nightly features on a stable compiler in an unsupported manner, not dissimilar to using an unstable GNU extension in C. But good example of the caveat that if you're not using stable, then yes, you have no stability guarantees.
Code in all languages bitrots. Even if your dependencies are "done", the language is unchanging, the toolchain mature, a vendor can introduce a new platform and all of a sudden your code won't compile anymore, because IBM introduced a new RISC server platform, or macOS changed the definition of time_t, or Windows blocked direct win32.DLL access (I know, a stretch), that your older libraries didn't know about.
Stretch or not, MDAC can no longer be installed on Windows. (The Microsoft Data Access Components are a rollup of database interface libraries from when they seemed to do around one a year, to the point that Spolsky remarked on it[1].) This means a significant corpus of old but still 32-bit line-of-business apps no longer runs, like anything written in VB6 or VBA that needs to access a database.
We have Rust code in a living code base that is more than 5 years old and it's required maybe one touch in the last 5 years to fix some issues due to stricter rules. It was simple enough it could have been automated.
I've often found that trying to compile decade-old C code with a current toolchain and current libraries will have issues. It isn't always clear what versions the code is expecting (no equivalent to a lockfile), newer C compilers or standards can break old code, and newer libraries especially can break old code. It might still build if you could recreate exactly what it expects, but it becomes decreasingly possible to do that if you weren't compiling it a decade ago and archived off exactly what worked then.
Good article! I will give you my 2c, as someone in this space mostly for hobbies, but with one active work project:
Rust is fantastic for embedded. There are no hard obstacles. The reason to do it, IMO, is not memory safety, but because holistically the language and tools are (to me) nicer: enums, namespacing, no headers, `cargo run --release` "just works". (I have found, at least in OSS, that compiling a C embedded project is a mess: linker errors, needing a certain OS with certain dependencies, many scripts, etc.) Good error messages, easy ways to structure your programs reliably, and so on. Overall, I just find it to be a better-designed language.
I have found the most fundamental tooling for Rust on Espressif's RISC-V, and on Cortex-M ARM for various STM32 variants, to be great. The cortex-m crate, defmt, probe-rs, and the PAC projects are fantastic.
On the down side, I have had to build my own tooling. I wrote and maintain my own HAL for STM32, and have had to write my own libraries for every piece of hardware. This comes with the territory of a new language, and I suspect it will gradually improve this time, especially with vendor support. Because the fundamental libraries are around, this is just reading datasheets and making Rust functions/structs that do the MMIO as described in the datasheets. It can be tedious (especially if building a complete library instead of implementing only what you need for a given project), but it is not an obstacle.
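As a tiny illustration of that datasheet-to-Rust work, here is roughly what a hand-rolled MMIO helper looks like (the register address in the comment is illustrative, not taken from any particular HAL):

```rust
use core::ptr::{read_volatile, write_volatile};

/// Set a single output pin high via read-modify-write on an
/// output data register. `unsafe` because it dereferences a raw
/// pointer that the caller promises is a valid MMIO address.
unsafe fn set_pin_high(odr: *mut u32, pin: u8) {
    let val = read_volatile(odr);
    write_volatile(odr, val | (1 << pin));
}

fn main() {
    // Exercised against ordinary memory so the sketch runs on a host;
    // on hardware `odr` would be something like 0x4002_0014 as *mut u32,
    // transcribed from the reference manual.
    let mut fake_odr: u32 = 0;
    unsafe { set_pin_high(&mut fake_odr, 5) };
    assert_eq!(fake_odr, 1 << 5);
}
```

A real HAL wraps these raw accesses in typed register structs, but the underlying work is exactly this: transcribing addresses and bit fields from the datasheet.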
My most complicated rust embedded firmware was a FPV-style UAS. I did it without an RTOS, using interrupt-based control flow.
We passed on Rust in favor of Ada/SPARK 2014 to write bare-metal code on Cortex-M processors for real-time, high-integrity, verifiable mission-critical software. Rust is making strides toward being a future competitor, but it's new to formal verification tooling and lacks any real-world legacy in our domain. Ada's latest spec is 2022. Other than AdaCore's verified Rust compiler, Rust still does not have a stable language specification like C/C++, Lisp, or Ada/SPARK 2014. I have no doubt that it will start ticking all the boxes that Ada/SPARK tick right now with their decades of legacy in high-integrity, mission-critical applications. The mandate to use memory-safe software that took effect this past Jan 1, 2026 puts some wind in Rust's sails, but this domain is about more than memory safety. Plus, I do not enjoy Rust, though Cargo is nice. We're looking at Lean for further assistance in verifying our work. I think there was and is a lot of Rust evangelism that will carry it forward and boost its popularity even more.
Rust is not really memory safe if you combine it with external libraries: too many "unsafe" keywords, and a lack of tooling for code analysis and verification.
Edit:
With C, you can do memory safety analysis on all system libraries and the entire Linux kernel. Some OS kernels, libs, and languages do not have dynamic memory allocation at all!
Some languages are memory safe!
Learn more about embedded programming!
Yes, and AdaCore's tooling is formally verified and produces reports already familiar to aerospace, railway, and automotive auditors for verifying certifications, making it attractive to this industry segment of high-integrity apps. Memory safety is taken care of mainly through the features Ada/SPARK 2014 offers for creating safe, high-integrity programs, correct.
> It is concluded that Rust is a sound choice today for firmware development in this domain.
This conclusion was reached with a single experiment.
> Two teams concurrently developing the same functionality — one in C, one in Rust — are analyzed over a period of several months.
> Furthermore, Ariel OS is shown to provide an efficient and portable system runtime in Rust whose footprint is smaller than that of the state-of-the-art bare-metal C stack traditionally used in this context.
> The authors thank Davide Aliprandi and Davide Sergi of the STAIoTCraft team, and the wider Ariel OS team.
So one team had Ariel OS developer support, and it's unclear what support the other team had. Seems fair.
In Figure 12, they simply stop optimizing the code once the desired rate is reached. Just at the end of the project, the Rust firmware gets an over-one-third performance boost, most likely from their OS developers.
Additionally, there is a claim that "Ariel OS is shown to provide an efficient and portable system runtime", but no real tests for portability are conducted. Worse still:
> Where C-based projects require a separate project setup and manual code copying per target, Rust on Ariel OS consolidates everything within a single project [..]
This claim is just not true. It sounds like somebody who is not that familiar with C.
> In Figure 12, they simply stop optimizing the code once desired rate is reached.
Yes. The goal was to handle the maximum data rate of the used sensor, and stop there. Time was limited on both ends.
> Just at the end of the project the Rust firmware gets over a third performance boost, most likely from their OS developers.
The ST intern found those boosts all by himself. They compared the exact MCU and peripheral initialization of the C and Rust firmwares, tightened I2C timings (where STM Cube has vendor-tuned and qualified values), and enabled the MCU's instruction cache, which somehow is not the default in Embassy's HAL. We were quite impressed, actually; the last days before the deadline were quite productive, optimization-wise.
> Yes. The goal was to handle the maximum data rate of the used sensor, and stop there. Time was limited on both ends.
I understand, and I understand that there were limits to what could be done with the resources there were. What irks me is the strength of the claim made without enough evidence to make it.
> The ST intern found those boosts all by himself. They compared the exact MCU and peripheral initialization of the C and Rust firmwares, tightened I2C timings (where STM Cube has vendor-tuned and qualified values), and enabled the MCU's instruction cache, which somehow is not the default in Embassy's HAL. We were quite impressed, actually; the last days before the deadline were quite productive, optimization-wise.
Fair enough, hats off to the intern. This kind of thing is common in MCUs, even on low-end CPUs weird defaults can be selected. But the involvement and influence of the OS developers remains unclear.
Again, there's just not enough data to make such strong claims. I think the paper could easily make recommendations, it could say that at least in some cases (as evidenced) Rust could be a reasonable choice, and it could make an argument for further work.
> This conclusion was reached with a single experiment.
No shit. This is the conclusion reached at the conclusion of this experiment. This part of your comment can be removed with no loss of clarity, I think.
I think you miss my point. I don't think that this conclusion can be reached with the (singular) experiments performed because there is a lack of data to draw it.
If I ran an experiment where I gave a cancer patient bread, and then they recovered from cancer, I couldn't then say: "It is concluded that <bread> is a sound choice today for <cancer treatment> in this domain.". You would rightfully jump up and down and demand further experiments to increase the confidence of the result before drawing the conclusion.
It could have been concluded instead that there is a case for further experiments to be conducted, or that Rust could be approaching a maturity where it could be considered for some firmware projects. But as it stands, the conclusion is far too strong given the experiments performed.
Really strange that the C JSON parser has to use malloc where the Rust version does not, as if it were not possible to write a JSON parser in C that does not use malloc. I presume that the syntax of the commands the device will accept is known, and then there is no reason why you have to build a DOM of the JSON before you can process it. Apparently, the Rust version can do it. I really begin to question the abilities of the two teams if one team failed to implement a JSON parsing solution without using memory allocations.
Yeah, you can comfortably work with JSON in C directly on top of the string buffer containing it. Your representation for any JSON entity will just be a const char pointer. It's possible to implement JSON path on top of this, and all kinds of niceties, and it's not slow.
You mean with the "two teams" that were tasked to develop the C / Rust versions?
Yeah, of course. Then again, they were one-person teams, where the C "team" had years of experience in STM32 / embedded C / STM32 Cube development and churned out that handwritten state machine in just days. The Rust "team" was a pre-masters intern with only minimal embedded Rust experience. They ran into all the pitfalls of (async) embedded Rust, but corrected towards the end.
That does not seem like anywhere close to a fair comparison and makes me wonder how valid the conclusion is. Effectively this is two times n=1; if you use 'teams' when you actually mean 'individuals', then that's not really proper reporting.
I do applaud you for having the same work done twice but it would have been far more meaningful to have two actual teams of seasoned developers do this sort of thing side-by-side. The biggest item on the checklist would be the number of undiscovered UB or UB related bugs in the C codebase and to compare that with the Rust codebase on 'defect escape rate' or some other meaningful metric.
I think there's another hidden issue here: testing how new devs use the language vs. how seasoned devs do. I expect someone with a few months of experience would prefer Rust (fewer footguns), but someone with more experience would prefer C (the sharper knife). The flavour of the thing changes as we age.
The problem with C, and I'm saying this as a life-long C programmer and not exactly a fan of Rust, is that C is indeed very sharp, but it will cut other people just as easily, even when they are far downstream of the original programmer, as well as the users of those programs. And it is extremely hard not to accidentally fall into one of the many pitfalls of C.
I've got my own set of restrictions for when I'm coding in C based on many nights spent poring over various pieces of code and trying to find a way to do it better and safer without outright switching languages. I do believe it is possible. But at the end of all that you have essentially redefined the language in a way that probably no other C programmer would like or agree with, and it would still require very good discipline.
So having languages with fewer footguns is good, as long as the lack of one kind of footgun isn't replaced by other kinds of footguns. It is one of the reasons I'm interested in the Fil-C project.
Yeah, a common stupid requirement. Perhaps a selling point for any solution would be to deploy a common serialization/de-serialization package that can be used on both the cloud and end point side.
Why? In IoT stuff, it's very useful if you can talk to your devices via standard internet protocols; otherwise you have to introduce some pointless 'gateway' node for that.
I mean sometimes efficiency matters a lot, but a lot of other times, interoperability is more important.
Text based IO with microcontrollers over tty has been quite a standard thing even decades ago.
Note: I'm not using the same tooling, but CAN and I2S have worked well for years on STM32/Rust. You just need to interface with the STM32's SAI (digital audio peripheral) and CAN. There are high-quality portable libs for both the legacy bxCAN and FD-CAN, which will work on any STM32 variant. The SAI will have to be HAL-specific, but I have used it on both G4 and H7 variants for PDM mic arrays.
1. So Ariel OS is based on Embassy. IIUC, I2S and CAN have some support upstream, which can be used already, although not via Ariel's usually fully portable APIs.
2. Well, ST has released official Rust drivers for a bunch of their sensors. They're built on embedded-hal(-async), so can directly be used with Ariel OS. There is probably more.
I read the paper looking for what kinds of static analysis, fuzzing, sanitizers, formal tools, HIL testing, binary analysis were used - didn’t see anything.
I’d guess that’s an area where C tooling is pretty far ahead of Rust tooling at present?
I'll start off by saying I really hate C (also love it) and welcome improvements, but I have a few criticisms:
- "Sensor agent" is such a rancid name for a remote sensor that I feel a need to publicly say so. Please don't use marketing names for things that already have more descriptive names.
- Rust uses a full RTOS and C uses the mediocre ST HAL (vendor-specific). Immediately apples to oranges. Also, I've never heard of the C JSON library, and it looks sketchy at a glance, so that will also hurt the comparison.
- Streaming slow sensor data with a 160MHz, 786KB/2MB MCU is not a good test in the slightest. You could probably use something like MicroPython here and be done; no one is reaching for bare-metal C here. Also, no one serious about performance is using JSON serdes. If you're using bare-metal C, you're likely trying to push the limits of your hardware, or doing something so simple that you won't be tempted to reach for terrible third-party libraries.
- Does the Rust code base use 'unsafe' anywhere, including the RTOS? If so, it's not memory safe without additional formal verification.
Overall I'd say this paper has approximately zero value wrt its stated goal of comparison.
I'm a big fan of Rust on embedded (and think embassy in particular is awesome, haven't tried this Ariel OS.)
I would say however that there are still toolchain issues here. There are all kinds of MCUs that simply don't/won't have a viable compiler toolchain that would support Rust.
e.g. I recently came from a job where they built their own camera board around an older platform because it offered a compelling bundle of features (USB peripheral support and MIPI interface, mainly). We were stuck with C/C++ as the toolchain there, as there was no reasonable way to make this work with Rust on a much older ARM ISA.
I find this a bit disappointing. Why not publish it with the preprint? Now we have no way to establish the quality of the two solutions, or whether it is even possible to improve one of them. I wonder why the C variant could not implement a JSON parser without malloc and free, while the Rust variant could.
How we do that today is a bit of an interesting problem I don’t think they’ve convincingly solved; basically maintaining nightly builds forever. A couple of 1Us of Kubernetes in deep storage ain’t gonna do it; you’re not gonna be able to solder a Xeon back to life...
I know I’d rather be trying to get a load of C99 rebuilt for some MIPS or other after 20 years than some random version of Rust.
So uh, will these ever make it to an auction site you think?
If you keep your old computer around, yes.
The good news is that C also seems contaminated with the "move fast, break things" philosophy. The modern code writer is not able to make things that last more than a couple of months.
Unless you find out the compiler was buggy and was producing faulty binaries, but the new compiler can no longer compile the old code.
The truth is: If the toolchain was good enough to ship your product, has time to go EOL, and then you do a patch that surfaces an esoteric toolchain bug, then the odds are that you'll know exactly what triggered the bug and you can work around it by writing different code.
Because even if the newer shinier compiler/toolchain had the issue fixed, most companies wouldn't upgrade to it at that point. It's almost never desirable to change your toolchain for a shipping product, you're just introducing more unknowns.
Now, I’ve not extensively used Rust, but almost every time I did, it ended up needing nightly to use some library or other.
To be fair, it's not even a lower bound, as using a stable compiler doesn't imply the absence of nightly-only features (as in Cargo features, the ones you can enable on crates you depend on).
Where's the problem exactly?
Might not matter for a lot of embedded, but if you are doing something like exposing functionality via a webserver or something that would be network-connected, then security updates in third-party libraries may be important.
For example, it would be really easy for me to run old code that's pinned to something like Python 3.7, but if libraries have updated to Python 3.x without backwards compatibility, then I'm stuck using the out of date versions or just backporting myself.
I'm curious why I've seen this sentiment repeated in so many places, I learned Rust once 5 years ago and I haven't had to learn any new idioms and there have been no backwards incompatible changes to it that required migrating any of my code.
- a lot of code now uses a mix of witness types and const generics
- with the new borrow checker release they will do new iterators 2.0
Seems like coding on 5 year old Rust is like C++ 98.
This attitude works in carpentry, but not in software. You need to get proficient, but your tools will keep evolving, like everything else in the software world.
You can, in fact, learn C exactly once. Or any number of other languages. The entire argument being made here is that the world you're suggesting is a problem. Software developers should not have to continually relearn their tools and it is abnormal to suggest they should.
L1: The edition system allows Rust to literally mutate the language. 2024 edition (if you begin a new Rust project today) has different rules from 2021 Edition, from 2018 edition and the Rust 1.0 "2015 edition". These changes aren't exactly huge, but they are real and at corporate scale you would probably want to add say a one day internal seminar to learn what's new in a new edition if you want to adopt that edition. For example we hope 2027 edition will swap out the 1..=10 syntax to be sugar for the new core::range::RangeInclusive<i32> not today's core::ops::RangeInclusive<i32> and this swap delivers some nice improvements.
L2: Unlike C++ the Rust stdlib unconditionally grows for everybody in new compiler releases. So even if you stuck with 2015 Edition, all the time since Rust 1.0, when you use a brand new Rust compiler you get the standard library as it exists today in 2026, not how it was in 2015 when you began coding. If you decided you needed a "strip_suffix" method for the string slice reference type &str you might have written a Rust trait, say, ImprovedString and implemented it for &str to give it your strip_suffix method. Meanwhile in Rust 1.45 the Rust standard library &str also gained a method for the same purpose with the same name and so now what you've written won't compile due to ambiguity. You will need to modify your software to compile it on Rust 1.45 and later.
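A minimal sketch of a name collision of this kind, using two hypothetical extension traits (the stdlib case involves an inherent method rather than a second trait, but trait-vs-trait collisions show the same class of breakage and the same fix):

```rust
// `MyExt` plays the role of the user's hypothetical `ImprovedString`;
// `UpstreamExt` stands in for a method later added elsewhere with the
// same name. Both names are made up for this sketch.
trait MyExt {
    fn shout(&self) -> String;
}
trait UpstreamExt {
    fn shout(&self) -> String;
}
impl MyExt for str {
    fn shout(&self) -> String {
        self.to_uppercase()
    }
}
impl UpstreamExt for str {
    fn shout(&self) -> String {
        format!("{}!", self)
    }
}

fn main() {
    let s = "hello";
    // With both traits in scope, `s.shout()` is rejected as ambiguous
    // ("multiple applicable items in scope"). Fully qualified syntax
    // disambiguates and gets the code compiling again:
    assert_eq!(MyExt::shout(s), "HELLO");
    assert_eq!(UpstreamExt::shout(s), "hello!");
}
```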
L3: Because Rust is a language with type inference, changes to what's possible which seem quite subtle and of no consequence for existing code may make something old you wrote now ambiguous because what once had a single obvious type is now ambiguous. This is more surprising than the L2 case because now it seems as though this should never have compiled at all. Type A and B already existed, before it inferred type A, now it insists B might be possible, but it may be quite a tangle to discover why B was not a possibility until this new version of Rust. If the compiler had rejected your code when you wrote it in 2015 as ambiguous you'd have grunted and written what you meant, but at this distance in time it may be hard to remember, did you mean B here?
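A tiny, ahistorical illustration of the underlying mechanism (not one of the actual regressions): when more than one impl could satisfy an inferred type, the compiler demands an annotation, so code that compiled while only one candidate existed can stop compiling when a second appears.

```rust
fn main() {
    // `parse` is generic over its return type, and many types implement
    // FromStr, so this line alone would fail with "type annotations needed":
    // let n = "42".parse().unwrap();

    // An explicit annotation pins the type and resolves the ambiguity:
    let n: u32 = "42".parse().unwrap();
    assert_eq!(n, 42);
}
```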
Now the nonsense: There's a vague superstition that Rust is constantly changing while good old C is absolutely stable. Neither is true by orders of magnitude. If you really need certainty you should freeze actual hardware and software, or at the very least build a VM and then nothing changes because you changed nothing. If you'd have been comfortable upgrading to a new CC version, you shouldn't be scared about upgrading the Rust tools.
As for the broader crate ecosystem, if crates you depend on drop support for APIs you depend on, that could cause you to get stuck on older unsupported releases. Though that is no different a problem than in any other language.
That statement deserves support.
- a project with no Cargo.lock, where there have been breaking changes in a dependency that wasn't specific enough in Cargo.toml; fixing this requires some finessing of dependencies but is possible to get the project building without any code changes
- a project with a proper dependency tree specified, but where a std change caused inference to break specific older versions of a crate in your tree (time 0.35 comes to mind); this requires similar changes to the above
- a project relies on UB in stable code that should always have been disallowed and has since been fixed; this is tricky: on a dependency, an updated version will likely exist; on your own project you'd have to either change your code or use the older toolchain, knowing that the code might not be doing what you want it to do (this happened a handful of times pre-1.20)
- an older project, with the proper dependency versions specified, being built on a newer platform; I saw this with someone trying to build a project untouched since 2018 on an ARM Mac: the toolchain for it didn't exist back then, and the macOS-specific lib they were using didn't have any knowledge of the platform either. Newer versions of the library do, of course, but using them required updating a set of libs that would be compatible too.
All of these cases are quite rare. You could encounter all of them at the same time, and that would be annoying, enough to have someone doing it for fun say "fuck it" and drop it. You can also get hit by lightning.
But between Cargo.lock which should allow your project to build on newer toolchains, and access to all prior toolchains, your project should continue to build forever on the same platform.
Rustc does have fairly frequent (every ~18 months or so) minor breaking changes between versions. These are often related to type inference, usually affect only a very small number of crates, and are typically mitigated by publishing patch versions of those crates that don't run into the issue. But if you have the patch version locked with a lockfile then that won't help you, and there is an increased likelihood of the build failing, so it's best to lock down the rustc version too.
Luckily pinning the rustc version is very easy to do.
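For instance, rustup reads a `rust-toolchain.toml` file committed at the project root and selects the pinned compiler automatically; a sketch (the version and target below are illustrative, not a recommendation):

```toml
# rust-toolchain.toml — rustup picks this up automatically whenever
# cargo/rustc is invoked inside the project directory.
[toolchain]
channel = "1.75.0"
components = ["rustfmt", "clippy"]
targets = ["thumbv7em-none-eabihf"]
```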
---
On regular projects this kind of issue can usually also be fixed by upgrading to the latest rustc and running `cargo update`. But conservative embedded projects may have legitimate reasons for not wanting to upgrade rustc to the latest version, and parts of the ecosystem's disregard for MSRVs means that running `cargo update` on an older rustc has a high chance of causing build breakage due to MSRV issues.
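The MSRV mechanism being disregarded here is the `rust-version` field in Cargo.toml: a crate that declares it lets cargo refuse to build on an older compiler with a clear error instead of failing mid-compile. A sketch (crate name and versions are made up):

```toml
# Cargo.toml (excerpt) — declaring a minimum supported Rust version.
[package]
name = "sensor-fw"
version = "0.1.0"
edition = "2021"
rust-version = "1.75"
```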
So, it can happen in any programming language, and to any large project.
Rust allows me to handle this easily with a rust-toolchain.toml file, so this concern is kinda overblown imo.
Build against the lockfile to use the same versions.
Unless they were pulled from upstream, they won’t suddenly stop building against the same compiler version. Rustup makes it easy to switch compiler versions to get back to the same one you used, too.
Vendoring is the only solution to this, but it's much less common in Rust-land than in Go. Cargo does have a first-party `cargo vendor` subcommand, though it takes some manual config to wire up, and there are third-party tools on top. Compare that to Go-land, where `go mod vendor` gets you 95-100% of the way there.
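For reference, `cargo vendor` copies all dependencies into a local `./vendor` directory and prints a config snippet to paste into `.cargo/config.toml`, which is the "manual wiring" step; the snippet looks like this (directory names are cargo's defaults):

```toml
# .cargo/config.toml — redirects crates.io sources to the local
# ./vendor directory produced by `cargo vendor`, so builds are offline.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```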
For Rust code for serious industrial use cases or firmwares, it's always best to minimize dependencies as much as possible to avoid this. Making local copies of dependencies is also a thing for certain use cases.
Rust, on the contrary, incentivises using dependencies, and embedded software especially is hard to write without external packages (e.g. cortex-m-rt, bytemuck, and many others).
imo it's just so much easier
[1] https://www.joelonsoftware.com/2002/01/06/fire-and-motion/
We have Rust code in a living code base that is more than 5 years old and it's required maybe one touch in the last 5 years to fix some issues due to stricter rules. It was simple enough it could have been automated.
Rust is fantastic for embedded. There are no hard obstacles. The reason to do it IMO is not memory safety, but because holistically the language and tools are (to me) nicer: enums, namespacing, no headers, `cargo run --release` "just works". (I have found, at least in OSS, that compiling a C embedded project is a mess: linker errors, needing a certain OS with certain dependencies, many scripts, etc.) Good error messages, easy ways to structure your programs reliably, and so on. Overall, I just find it to be a better-designed language.
I have found the fundamental tooling for Rust on Espressif's RISC-V chips, and on Cortex-M ARM for various STM32 variants, to be great. The cortex-m crate, defmt, probe-rs, and the PAC project are fantastic.
On the downside, I have had to build my own tooling. I wrote and maintain my own HAL for STM32, and have had to write my own libraries for every piece of hardware. This comes with the territory of a new language, and I suspect it will gradually improve this time around, especially with vendor support. Because the fundamental libraries are around, this is just reading datasheets and making Rust functions/structs etc. that do the MMIO as described in the datasheets. It can be tedious (especially if building a complete library instead of implementing only what you need for a given project), but it is not an obstacle.
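The datasheet-to-driver work described above mostly boils down to volatile reads and writes at register addresses. A host-runnable sketch of the read-modify-write idiom (on hardware, `reg` would be a peripheral address taken from the datasheet; here it points at a local variable so the sketch runs anywhere):

```rust
use core::ptr::{read_volatile, write_volatile};

// Read-modify-write of a memory-mapped register: volatile accesses
// stop the compiler from reordering or eliding the hardware I/O.
fn set_bit(reg: *mut u32, bit: u32) {
    unsafe {
        let v = read_volatile(reg);
        write_volatile(reg, v | (1 << bit));
    }
}

fn main() {
    // Stand-in for e.g. a GPIO output data register.
    let mut fake_reg: u32 = 0;
    set_bit(&mut fake_reg, 3);
    assert_eq!(fake_reg, 0b1000);
}
```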
My most complicated rust embedded firmware was a FPV-style UAS. I did it without an RTOS, using interrupt-based control flow.
Edit: With C, you can do memory-safety analysis on all system libraries and the entire Linux kernel. Some OS kernels, libs, and languages do not have dynamic memory allocation at all!
Some languages are memory safe! Learn more about embedded programming!
My understanding is that both of these things are in the works, and that neither exists yet.
This conclusion was reached with a single experiment.
> Two teams concurrently developing the same functionality — one in C, one in Rust — are analyzed over a period of several months.
> Furthermore, Ariel OS is shown to provide an efficient and portable system runtime in Rust whose footprint is smaller than that of the state-of-the-art bare-metal C stack traditionally used in this context.
> The authors thank Davide Aliprandi and Davide Sergi of the STAIoTCraft team, and the wider Ariel OS team.
So one team had Ariel OS developer support, and it's unclear what support the other team had. Seems fair.
In Figure 12, they simply stop optimizing the code once the desired rate is reached. Just at the end of the project, the Rust firmware gets a performance boost of over a third, most likely from their OS developers.
Additionally, there is a claim that "Ariel OS is shown to provide an efficient and portable system runtime", but no real tests for portability are conducted. Worse still:
> Where C-based projects require a separate project setup and manual code copying per target, Rust on Ariel OS consolidates everything within a single project [..]
This claim is just not true. It sounds like somebody who is not that familiar with C.
Yes. The goal was to handle the maximum data rate of the sensor used, and stop there. Time was limited on both ends.
> Just at the end of the project the Rust firmware gets over a third performance boost, most likely from their OS developers.
The ST intern found those boosts all by himself. They compared the exact MCU & peripheral initialization of the C and Rust firmwares, tightened I2C timings (where STM Cube has vendor tuned & qualified values), and enabled the MCU's instruction cache, which somehow is not default in Embassy's HAL. We were quite impressed actually, the last days before the deadline were quite productive, optimization wise.
I understand, and I understand that there were limits to what could be done with the resources there were. What irks me is the strength of the claim made without enough evidence to make it.
> The ST intern found those boosts all by himself. They compared the exact MCU & peripheral initialization of the C and Rust firmwares, tightened I2C timings (where STM Cube has vendor tuned & qualified values), and enabled the MCU's instruction cache, which somehow is not default in Embassy's HAL. We were quite impressed actually, the last days before the deadline were quite productive, optimization wise.
Fair enough, hats off to the intern. This kind of thing is common in MCUs, even on low-end CPUs weird defaults can be selected. But the involvement and influence of the OS developers remains unclear.
Again, there's just not enough data to make such strong claims. I think the paper could easily make recommendations, it could say that at least in some cases (as evidenced) Rust could be a reasonable choice, and it could make an argument for further work.
No shit. This is the conclusion reached at the conclusion of this experiment. This part of your comment can be removed with no loss of clarity, I think.
If I ran an experiment where I gave a cancer patient bread, and then they recovered from cancer, I couldn't then say: "It is concluded that <bread> is a sound choice today for <cancer treatment> in this domain.". You would rightfully jump up and down and demand further experiments to increase the confidence of the result before drawing the conclusion.
It could have been concluded instead that there is a case for further experiments to be conducted, or that Rust could be approaching a maturity where it could be considered for some firmware projects. But as it stands, the conclusion is far too strong given the experiments performed.
Megatools is an example of such code: https://xff.cz/megatools/ / https://xff.cz/git/megatools/tree/lib/sjson.c
"Author's" is possessive; "authors" is plural.
Yeah, of course. Then again, they were one-person teams, where the C "team" had years of experience in STM32 / embedded C / STM32 Cube development and churned out that handwritten state machine in just days. The Rust "team" was a pre-masters intern with only minimal embedded Rust experience. They ran into all the pitfalls of (async) embedded Rust, but corrected towards the end.
I do applaud you for having the same work done twice but it would have been far more meaningful to have two actual teams of seasoned developers do this sort of thing side-by-side. The biggest item on the checklist would be the number of undiscovered UB or UB related bugs in the C codebase and to compare that with the Rust codebase on 'defect escape rate' or some other meaningful metric.
I've got my own set of restrictions for when I'm coding in C based on many nights spent poring over various pieces of code and trying to find a way to do it better and safer without outright switching languages. I do believe it is possible. But at the end of all that you have essentially redefined the language in a way that probably no other C programmer would like or agree with, and it would still require very good discipline.
So having languages with fewer footguns is good, as long as the lack of one kind of footgun isn't replaced by other kinds of footguns. It is one of the reasons I'm interested in the Fil-C project.
https://fil-c.org/
I mean sometimes efficiency matters a lot, but a lot of other times, interoperability is more important.
-> paper is not final. And IIUC ST will be releasing the code at some point.
https://info.arxiv.org/help/faq/whytex.html