Laws of Software Engineering

(lawsofsoftwareengineering.com)

312 points | by milanm081 3 hours ago

46 comments

  • GuB-42 9 minutes ago
    > Premature optimization is the root of all evil.

    There are few principles of software engineering that I hate more than this one, though SOLID is close.

    It is important to understand that it is from a 1974 paper, computing was very different back then, and so was the idea of optimization. Back then, optimizing meant writing assembly code and counting cycles. It is still done today in very specific applications, but today, performance is mostly about architectural choices, and it has to be given consideration right from the start. In 1974, these architectural choices weren't choices, the hardware didn't let you do it differently.

    Focusing on the "critical 3%" (which implies profiling) is still good advice, but it will mostly help you fix "performance bugs", like an accidentally quadratic algorithm, work done in a loop that doesn't need to be, etc. But once you have dealt with those, that's when you notice that you spend 90% of the time in abstractions and it is too late to change that now, so you add caching, parallelism, etc., making your code more complicated and still slower than if you had thought about performance at the start.
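    To make the "performance bug" case concrete, here is a sketch (not from the thread) of the accidentally quadratic pattern that profiling tends to catch:

```python
def dedupe_quadratic(items):
    """O(n^2): the `in` check scans the whole list on every iteration."""
    seen = []
    for x in items:
        if x not in seen:  # linear scan inside a loop
            seen.append(x)
    return seen

def dedupe_linear(items):
    """O(n): same result, but membership checks hit a set instead."""
    seen, out = set(), []
    for x in items:
        if x not in seen:  # O(1) average-case lookup
            seen.add(x)
            out.append(x)
    return out
```

    Both return the same output; only the data structure behind the membership test changes, which is exactly the kind of local fix profiling enables.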

    Today, late optimization is just as bad as premature optimization, if not more so.

    • austin-cheney 1 minute ago
      The most misunderstood statement in all of programming by a wide margin.

      I really encourage people to read the Donald Knuth essay that features this sentiment. Pro tip: You can skip to the very end of the article to get to this sentiment without losing context.

      Here ya go: https://dl.acm.org/doi/10.1145/356635.356640

      Basically: don't spend effort increasing performance in an unmeasured way before it's necessary, except for those 10% of situations where you know in advance that crucial performance is absolutely necessary. That is the sentiment. I have seen people twist this into a bizarre law of their own creation that says never measure anything, typically because the given developer cannot measure things.

    • tananaev 4 minutes ago
      With modern tools it should be pretty easy to build scalable solutions. I take premature optimization as going out of your way to optimize something that's already reasonable. Not that you should write really bad code as a starting point.
  • Aaargh20318 1 hour ago
    I’m missing Curly’s Law: https://blog.codinghorror.com/curlys-law-do-one-thing/

    “A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance, and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing, and should mean it all of the time.”

    • inetknght 1 hour ago
      > It must not be both a floor polish and a dessert topping.

      I worked as a janitor for four years near a restaurant, so I know a little bit about floor polishing and dessert toppings. This law might be a little less universal than you think. There are plenty of people who would happily try out floor polish as a dessert topping if they're told it'll get them high.

      • rapnie 31 minutes ago
        Borax is an example of a substance that is simultaneously used for skin care, household cleaning, as soldering flux, and as ant killer. But I guess it is a constant with variable effects. Hard to find in local shops anymore.
      • otterley 23 minutes ago
        It’s a reference to a very old SNL sketch called “Shimmer”. https://www.youtube.com/shorts/03lLPUYkpYM

        It probably won’t be up very long but it’s a classic.

      • js8 8 minutes ago
        I thought that you were about to write: "as a janitor in a restaurant, the dessert topping is sometimes used as a floor polish".
      • aworks 29 minutes ago
        I worked for a while as a janitor in a college dorm. Not an easy job but it definitely revealed a side of humanity I might not have otherwise seen. Especially the clean out after students left for the year.
        • rapnie 25 minutes ago
          We had a large green plant growing in an unused fridge. Fungus yes, but this was a new experience. As students we learned a lot.
    • ipnon 1 hour ago
      I usually invoke this by naming POSIWID.
  • conartist6 2 hours ago
    Remember that these "laws" contain so many internal contradictions that when they're all listed out like this, you can just pick one that justifies whatever you want to justify. The hard part is knowing which law to break when, and why.
    • jimmypk 1 hour ago
      Postel's Law vs. Hyrum's Law is the canonical example. Postel says be liberal in what you accept — but Hyrum's Law says every observable behavior of your API will eventually be depended on by someone. So if you're lenient about accepting malformed input and silently correcting it, you create a user base that depends on that lenient behavior. Tightening it later is a breaking change even if it was never documented. Being liberal is how you get the Hyrum surface area.

      The resolution I've landed on: be strict in what you accept at boundaries you control (internal APIs, config parsing) and liberal only at external boundaries where you can't enforce client upgrades. But that heuristic requires knowing which category you're in, which is often the hard part.
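      The "strict at boundaries you control" half of that heuristic might be sketched like this (all names hypothetical):

```python
# Internal boundary you control: reject anything unexpected loudly,
# so no one can start depending on silently ignored typos.
ALLOWED_KEYS = {"host", "port", "timeout"}

def parse_internal_config(raw: dict) -> dict:
    unknown = set(raw) - ALLOWED_KEYS
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return {
        "host": raw["host"],
        "port": int(raw["port"]),
        "timeout": float(raw.get("timeout", 30.0)),
    }
```

      Failing fast here keeps the Hyrum surface area small: no caller can come to rely on a misspelled key being quietly dropped.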

      • zahlman 1 hour ago
        I've always thought of Hyrum's Law more as a Murphy-style warning than as actionable advice.
      • throwaway173738 1 hour ago
        I look at Postel’s law more as advice on how to parse input. At some point you’re going to have to upgrade a client or a server to add a new field. If you’ve been strict, then you’ve created a big coordination problem, because the new field is a breaking change. But if you’re liberal, then your systems ignore components of the input that they don’t recognize. And that lets you avoid a fully coordinated update.
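        That reading — tolerate fields you don't recognize so that adding a new one isn't a breaking change — might look like this (hypothetical message shape):

```python
from dataclasses import dataclass

@dataclass
class Heartbeat:
    node_id: str
    uptime_s: int

KNOWN_FIELDS = {"node_id", "uptime_s"}

def parse_heartbeat(msg: dict) -> Heartbeat:
    # Drop fields added by newer peers instead of erroring, so old and
    # new versions can interoperate without a fully coordinated update.
    known = {k: v for k, v in msg.items() if k in KNOWN_FIELDS}
    return Heartbeat(**known)
```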
    • AussieWog93 2 hours ago
      DRY is my pet example of this.

      I've seen CompSci guys especially (I'm EEE background, we have our own problems but this ain't one of them) launch conceptual complexity into the stratosphere just so that they could avoid writing two separate functions that do similar things.

      • busfahrer 2 hours ago
        I think I remember a Carmack tweet where he mentioned in most cases he only considers it once he reaches three duplicates
        • michaelcampbell 50 minutes ago
          The "Rule of 3" is a pretty well known rule of thumb; I suspect Carmack would admit it predates him by a fair bit.
        • mcv 1 hour ago
          I once heard of a counter-principle called WET: Write Everything Twice.
        • whattheheckheck 1 hour ago
          Why 3? What is this baseball?

          Take the 5 Rings approach.

          The purpose of the blade is to cut down your opponent.

          The purpose of software is to provide value to the customer.

          It's the only thing that matters.

          You can also philosophize about why people with blades needed to cut down their opponents, along with why we have to provide value to the customer, but that's beyond the scope of this comment.

          • ta20240528 1 hour ago
            "The purpose of software is to provide value to the customer."

            Partially correct. The purpose of your software, to its owners, is also to provide future value to customers competitively.

            What we have learnt is that software needs to be engineered: designed and structured.

            • nradov 6 minutes ago
              And yet some of the software most valuable to customers was thrown together haphazardly with nothing resembling real engineering.
      • aworks 23 minutes ago
        I worked for a company that also had hardware engineers writing RTL. Our software architect spent years helping that team reuse/automate/modularize their code. At a minimum, it's still just text files with syntax, despite rather different semantics.
      • zahlman 1 hour ago
        I've heard that story a few times (ironically enough) but can't say I've seen a good example. When was over-architecture motivated by an attempt to reduce duplication? Why was it effective in that goal, let alone necessary?
        • dasil003 42 minutes ago
          Buy me a beer and I can tell you some very poignant stories. The best ones are where there is a legitimate abstraction that could be great, assuming A) everyone who had to interact with the abstraction had the expertise to use it, B) the details of the product requirements conformed to the high level technical vision, now and forever, and C) migrating from the current state to the new system could be done in a bounded amount of time.

          My view is over-engineering comes from the innate desire of engineers to understand and master complexity. But all software is a liability, every decision a tradeoff that prunes future possibilities. So really you want to make things as simple as possible to solve the problem at hand as that will give you more optionality on how to evolve later.

        • mosburger 18 minutes ago
          I think there is often tension between DRY and "thing should do only one thing." E.g., I've found myself guilty of DRYing up a function, but the use is slightly different in a couple places, so... I know, I'll just add a flag/additional function argument. And you keep doing that and soon you have a messed up function with lots of conditional logic.

          The key is to avoid the temptation to DRY when things are only slightly different and find a balance between reuse and "one function/class should only do one thing."
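          The failure mode described above, in miniature (hypothetical names): each round of "DRYing up" adds a flag, and the single function slowly accretes conditional logic that two single-purpose functions would have avoided.

```python
# After a few rounds of "DRYing up", one function grows flags...
def render_name(user, short=False, admin_badge=False, uppercase=False):
    name = user["first"] if short else f"{user['first']} {user['last']}"
    if admin_badge and user.get("is_admin"):
        name += " [admin]"
    return name.upper() if uppercase else name

# ...where two small functions that each do one thing stay simpler:
def display_name(user):
    return f"{user['first']} {user['last']}"

def short_name(user):
    return user["first"]
```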

        • caminante 32 minutes ago
          IMHO, it comes down to awareness/probability about the need to future proof or add defensive behavior.

          The spectrum is [YAGNI ---- DRY]

          A little less abstract: designing a UX comes to mind. It's one thing to make something workable for you, but to make it for others is way harder.

      • pydry 2 hours ago
        DRY is misunderstood. It's definitely a fundamental aspect of code quality; it's just one of about four, and maximizing it to the exclusion of the others is where things go wrong. Usually that comes at the expense of loose coupling (which is equally fundamental).

        The goal ought to be to aim for a joint optimum across all of these qualities.

        Some people just want to toss DRY away entirely, though, or be uselessly vague about when to apply it ("use it when it makes sense"), and that's not really much better than being a DRY fundamentalist.

        • layer8 2 hours ago
          DRY is misnamed. I prefer stating it as SPOT — Single Point Of Truth. Another way to state it is this: If, when one instance changes in the future, the other instance should change identically, then make it a single instance. That’s really the only DRY criterion.
          • xnorswap 1 hour ago
            I like this a lot more, because it captures whether two things are necessarily the same or just happen to be currently the same.

            A common "failure" of DRY is coupling together two things that only happened to bear similarity while they were both new, and then being unable to pick them apart properly later.

          • mosburger 15 minutes ago
            I said this elsewhere in the comments, but I think there's sort of a fundamental tension that shows up sometimes between DRY and "a function/class should only do one thing." E.g., there might be two places in your code that do almost identical things, so there's a temptation to say "I know! I'll make a common function, I'll just need to add a flag/extra argument..." and if you keep doing that you end up with messy "DRY" functions with tons of conditional logic that tries to do too much.

            Yeah there are ways to avoid this and you need to strike balances, but sometimes you have to be careful and resist the temptation to DRY everything up 'cuz you might just make it brittler (pun intended).

          • Silamoth 58 minutes ago
            That’s how I understand it as well. It’s not about an abstract ideal of duplication but about making your life easier and your software less buggy. If you have to manually change something in 5 different places, there’s a good chance you’ll forget one of those places at some point and introduce a bug.
          • mcv 1 hour ago
            That's how I understood it. If you add a new thing (constant, route, feature flag, property, DB table) and it immediately needs to be added in 4 different places (4 seems to be the standard in my current project) before you can use it, that's not DRY.
            • mjr00 49 minutes ago
              > If you add a new thing (constant, route, feature flag, property, DB table) and it immediately needs to be added in 4 different places (4 seems to be the standard in my current project) before you can use it, that's not DRY.

              The tricky part is that sometimes "a new thing" is really "four new things" disguised as one. A database table is a great example because it's a failure mode I've seen many times. A developer adds one and has to write what they perceive as the same thing four times: the database table itself, the internal DB->code translation (e.g. ORM mapping), the API definition, and maybe a CRUD UI widget. The developer thinks, "oh, this isn't DRY" and looks to tools like Alembic and PostgREST or Postgraphile to handle this end-to-end; now you only need to write to one place when adding a database table, great!

              It works great at first, then more complex requirements come down: the database gets some virtual generated columns which shouldn't be exposed in code, the API shouldn't return certain fields, the UI needs to work off denormalized views. Suddenly what appeared to be the same thing four times is now four different things, except there's a framework in place which treats these four things as one, and the challenge is now decoupling them.

              Thankfully most good modern frameworks have escape valves for when your requirements get more complicated, but a lot of older ones[0] really locked you in and it became a nightmare to deal with.

              [0] really old versions of Entity Framework being the best/worst example.
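              The decoupled end state described above can be sketched with plain dataclasses — one type per layer, mapped explicitly, so the layers stay free to diverge (all names hypothetical):

```python
from dataclasses import dataclass

@dataclass
class UserRow:            # mirrors the database table, internal columns included
    id: int
    email: str
    password_hash: str    # must never leak past this layer

@dataclass
class UserApiModel:       # what the API returns; deliberately narrower
    id: int
    email: str

def to_api(row: UserRow) -> UserApiModel:
    # The explicit mapping is the escape valve: when requirements pull
    # the layers apart, only this function has to change.
    return UserApiModel(id=row.id, email=row.email)
```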

              • mcv 41 minutes ago
                I believe that was the point of Ruby on Rails: that you really had to just create the class, and the framework would create the table and handle the ORM. Or maybe you still had to write the migration; it's been a while. That was pretty spectacular in its dedication to DRY, but also pretty extreme.

                But the code I'm talking about is really adding the same thing in 4 different places: the constant itself, adding it to a type, adding it to a list, and there was something else. It made it very easy to forget one step.

          • pydry 1 hour ago
            Renaming it doesn't change the nature of the problem.

            There should often be two points of truth because having one would increase the coupling cost more than the benefits that would be derived from deduplication.

      • iwontberude 1 hour ago
        Why do they have to be so smart but so annoying at the same time?
    • blandflakes 1 hour ago
      This was also true of Amazon's Leadership Principles. They are pretty reasonable guidelines, but in a debate, it really came down to which one you could most reasonably weaponize in favor of your argument, even to the detriment of several others.

      Which maybe is also fine, I dunno :)

      • rustyhancock 1 hour ago
        It's because they are heuristics intended to be applied by knowledgeable and experienced humans.

        It can be quite hard to explain when a student asks why you did something a particular way. The truthful answer is that it felt like the right way to go about it.

        With some thought you can explain it partly - really justify the decision subconsciously made.

        If they're asking about a conscious decision, that's rarely much more helpful than having to say that's what the regulations or guidelines say.

        Where they really learn is seeing those edge cases and gray areas.

    • rapnie 20 minutes ago
      I like alternatives to formal IT lawfare, like CUPID [0] properties for Joyful coding by Dan North, as alternative to SOLID principles.

      [0] https://cupid.dev/

    • diehunde 19 minutes ago
      I guess that's why confirmation bias is also listed?
    • ghm2180 2 hours ago
      This is doubly true in machine learning engineering. Knowing what methods to avoid is just as important as knowing what might work well and why. A bunch of data science techniques — and I use data science in the sense of making critical team/org decisions — matter just as much, for which you should understand a bit of statistics, not only data-driven ML.
      • Silamoth 56 minutes ago
        Statistics is absolutely fundamental to data science. But I’m not sure this relates to the above idea of “laws” being internally contradictory?
  • austin-cheney 18 minutes ago
    My own personal law is:

    When it comes to frameworks (any framework) any jargon not explicitly pointing to numbers always eventually reduces down to some highly personalized interpretation of easy.

    It is more impactful than it sounds because it implicitly points to a distinction of ultimate goals: the selfish developer or the product they are developing. It is also important to point out that before software frameworks were a thing, the term framework just identified a defined set of overlapping abstract business principles to achieve a desired state. Software frameworks, on the other hand, provide a library that determines a design convention rather than the desired operating state.

  • Kinrany 11 minutes ago
    SOLID being included immediately makes me have zero expectation of the list being curated by someone with good taste.
  • dataviz1000 2 hours ago
    I did not see Boyd’s Law of Iteration [0]

    "In analyzing complexity, fast iteration almost always produces better results than in-depth analysis."

    Boyd invented the OODA loop.

    [0] https://blog.codinghorror.com/boyds-law-of-iteration/

    • Silamoth 51 minutes ago
      That’s such a good one! I wish more people understood this. It seems management and business types always want some upfront plan. And I get it, to an extent. But we’ve learned this isn’t a very effective way to build software. You can’t think of all possible problems ahead of time, especially the first time around. Refactoring to solve problems with a flexible architecture is better than designing yourself into a rigid architecture that can’t adapt as you learn the problem space.
  • RivieraKid 2 hours ago
    Not a law but a design principle that I've found to be one of the most useful ones, and also little known:

    Structure code so that in an ideal case, removing a functionality should be as simple as deleting a directory or file.
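    One pattern that approximates this (a sketch with made-up names): each feature lives in its own module and registers itself with a central registry, so deleting the module's file removes the feature without touching anything else.

```python
# registry.py (hypothetical): the only shared piece.
HANDLERS = {}

def feature(name):
    """Decorator: a feature module opts in by registering itself."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

# export_csv.py (hypothetical): delete this file and the feature is
# gone; nothing else in the codebase refers to it by name.
@feature("export_csv")
def export_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)
```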

    • layer8 1 hour ago
      Functionalities aren’t necessarily orthogonal to each other; features tend to interact with one another. “Avoid coupling between unrelated functionalities” would be more realistic.
    • skydhash 5 minutes ago
      Now I tend towards the C idiom of having few files and not a deep structure, and away from the one-class-one-file style of Java. Fewer files to rename when refactoring, and fewer files to open when trying to understand an implementation.
    • danparsonson 1 hour ago
      Features arise out of the composition of fundamental units of the system, they're not normally first class units themselves. Can you give an example?
    • kijin 1 hour ago
      What's the smallest unit of functionality to which your principle applies?

      For example, each comment on HN has a line on top that contains buttons like "parent", "prev", "next", "flag", "favorite", etc. depending on context. Suppose I might one day want to remove the "flag" functionality. Should each button be its own file? What about the "comment header" template file that references each of those button files?

      • jpitz 1 hour ago
        I think that if you continue along the logical progression of the parent poster, then maybe the smaller units of functionality would be represented by simple ranges of lines of text. Given that, deleting a single button would ideally mean a single contiguous deletion from a file, versus deleting many disparate lines.
      • sverhagen 1 hour ago
        Maybe the buttons shouldn't be their own files, but the backend functionality certainly could be. I don't do this, but I like the idea.
  • macintux 22 minutes ago
    Some similarly-titled (but less tidily-presented) posts that have appeared on HN in the past, none of which generated any discussion:

    * https://martynassubonis.substack.com/p/5-empirical-laws-of-s...

    * https://newsletter.manager.dev/p/the-unwritten-laws-of-softw..., which linked to:

    * https://newsletter.manager.dev/p/the-13-software-engineering...

  • fenomas 1 hour ago
    Nice to have these all collected in one nicely shareable place. For the amusement of HN, let me add one I've become known for at my current work, for saying to juniors who are overly worried about DRY:

    > Fen's law: copy-paste is free; abstractions are expensive.

    edit: I should add, this is aimed at situations like when you need a new function that's very similar to one you already have, and juniors often assume it's bad to copy-paste so they add a parameter to the existing function so it abstracts both cases. And my point is: wait, consider the cost of the abstraction, are the two use cases likely to diverge later, do they have the same business owner, etc.

  • noduerme 36 minutes ago
    I'd like to propose a corollary to Gall's Law. Actually it's a self-proving tautology already contained within the term "lifecycle": any system that lasts longer than a single lifecycle oscillates between (reducing to) simplicity and (adding) complexity.

    My bet is on the long arc of the universe trending toward complexity... but in spite of all this, I don't think all this complexity arises from a simple set of rules, and I don't think Gall's law holds true. The further we look at the rule-set for the universe, the less it appears to be reducible to three or four predictable mechanics.

  • dassh 1 hour ago
    Calling them 'laws' is always a bit of a stretch. They are more like useful heuristics. The real engineering part is knowing exactly when to break them.
  • ozgrakkurt 2 hours ago
    For anyone reading this. Learn software engineering from people that do software engineering. Just read textbooks which are written by people that actually do things
  • tmoertel 2 hours ago
    One that is missing is Ousterhout’s rule for decomposing complexity:

        complexity(system) =
            sum(complexity(component) * time_spent_working_in(component)
                for component in system).
    
    The rule suggests that encapsulating complexity (e.g., in stable libraries that you never have to revisit) is equivalent to eliminating that complexity.
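    As a toy calculation with made-up weights: sealing a complex component behind a stable interface shrinks its time share, and hence its contribution to overall system complexity.

```python
def system_complexity(components):
    # complexity(system) = sum of complexity(component) * time share,
    # where time share is the fraction of dev time spent inside it.
    return sum(c * t for c, t in components)

# A gnarly parser (complexity 9) you work inside half the time...
before = system_complexity([(9, 0.5), (3, 0.5)])   # 6.0
# ...vs the same parser sealed behind a stable library you rarely open:
after = system_complexity([(9, 0.02), (3, 0.98)])  # 3.12
```

    The parser's intrinsic complexity is unchanged; only your exposure to it drops, which is exactly Ousterhout's point.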
    • stingraycharles 2 hours ago
      That’s not some kind of law, though. And I’m also not sure whether it even makes sense, complexity is not a function of time spent working on something.
      • tmoertel 2 hours ago
        First, few of the laws on that site are actual laws in the physics or mathematics sense. They are more guiding principles.

        > complexity is not a function of time spent working on something.

        But the complexity you observe is a function of your exposure to that complexity.

        The notion of complexity exists to quantify the degree of struggle required to achieve some end. Ousterhout’s observation is that if you can move complexity into components far away from where you must do your work to achieve your ends, you no longer need to struggle with that complexity, and thus it effectively is not there anymore.

        • wduquette 1 hour ago
          And in addition, the time you spend making a component work properly is absolutely a function of its complexity. Once you get it right, package it up neatly with a clean interface and a nice box, and leave it alone. Where "getting it right" means getting it to a state where you can "leave it alone".
      • CuriouslyC 1 hour ago
        I think the intent is that if you can cleanly encapsulate some complexity so that people working on stuff that uses it don't have to understand anything beyond a simple interface, that complexity "doesn't exist" for all intents and purposes. Obviously this isn't universal, but a fair percentage of programmers these days don't understand the hardware they're programming against due to the layers of abstractions over them, so it's not crazy either.
      • Brian_K_White 1 hour ago
        It's showing that all the complexity in the components is someone else's problem. Your only complexity is your own top layer and your interface with the components.
  • netdevphoenix 1 hour ago
    "This site was paused as it reached its usage limits. Please contact the site owner for more information."

    I wish AWS/Azure had this functionality.

  • r0ze-at-hn 2 hours ago
    Love the detail sub-pages. Over 20 years I collected a little list of specific laws, or really observations (https://metamagic.substack.com/p/software-laws), and thought about turning each into specific detailed blog posts, but it has been more fun chatting with other engineers, showing the page, and watching as they scan the list and inevitably tell me a great story. For example, I could do a full writeup on the math behind this one, but it is way more fun hearing the stories about trying and failing to get second rewrites of code.

    9. Most software will get at most one major rewrite in its lifetime.

  • ebonnafoux 30 minutes ago
    There is a small typo in The Ninety-Ninety Rule

    > The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.

    It should be 90% code - 10% time / 10% code - 90% time

    • Edman274 25 minutes ago
      It sounds like you are unfamiliar with the idea that software engineering efforts can be underestimated at the outset. The humorous observation here is that the total is 180 percent, which means it took longer than expected, which is very common.
      • ebonnafoux 10 minutes ago
        Oh OK, that is something I learned today.
  • dgb23 1 hour ago
    I like this collection. It's nicely presented and at least at a glance it adds some useful context to each item.

    While browsing it, I of course found one that I disagree with:

    Testing Pyramid: https://lawsofsoftwareengineering.com/laws/testing-pyramid/

    I think this is backwards.

    Another commenter WillAdams has mentioned A Philosophy of Software Design (which should really be called A Set of Heuristics for Software Design) and one of the key concepts there are small (general) interfaces and deep implementations.

    A similar heuristic also comes up in Elements of Clojure (Zachary Tellman) as well, where he talks about "principled components and adaptive systems".

    The general idea: You should greatly care about the interfaces, where your stuff connects together and is used by others. The leverage of a component is inversely proportional to the size of that interface and proportional to the size of its implementation.

    I think the way that connects to testing is that architecturally granular tests (down the stack) are a bit like pouring molasses into the implementation, rather than focusing on what actually matters, which is what users care about: the interface.

    Now of course we as developers are the users of our own code, and we produce building blocks that we then use to compose entire programs. Having example tests for those building blocks is convenient and necessary to some degree.

    However, what I want to push back on is the implied idea of having to hack apart or keep apart pieces so we can test them with small tests (per method, function etc.) instead of taking the time to figure out what the surface areas should be and then testing those.

    If you need hyper granular tests while you're assembling pieces, then write them (or better: use a REPL if you can), but you don't need to keep them around once your code comes together and you start to design contracts and surface areas that can be used by you or others.
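    In concrete terms (hypothetical function): a test that pins down only the public contract leaves the implementation free to change underneath it.

```python
def top_n(scores, n):
    """Public surface: names of the n highest scores, best first."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:n]]

def test_top_n():
    # Pins down observable behavior only; the sort-based implementation
    # could become a heap or an index without breaking this test.
    assert top_n({"a": 3, "b": 9, "c": 5}, 2) == ["b", "c"]
    assert top_n({}, 3) == []
```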

    • nazgul17 1 hour ago
      I think the general wisdom in that scenario is to keep them around until they get in the way. Let them provide a bit of value until they start being a cost.
  • mojuba 2 hours ago
    > Get it working correctly first, then make it fast, then make it pretty.

    Or develop the skill to make it correct, fast, and pretty in one or two passes.

    • AussieWog93 2 hours ago
      I recently had success with a problem I was having by basically doing the following:

      - Write a correct, pretty implementation

      - Beat Claude Code with a stick for 20 minutes until it generated a fragile, unmaintainable mess that still happened to produce the same result but in 300ms rather than 2500ms. (In this step, explicitly prompting it to test rather than just philosophising gets you really far)

      - Pull across the concepts and timesaves from Claude's mess into the pretty code.

      Seriously, these new models are actually really good at reasoning about performance and knowing alternative solutions or libraries that you might have only just discovered yourself.

      • mojuba 1 hour ago
        However, a correct, pretty and fast solution may exist that neither of you have found yet.

        But yes, the scope and breadth of their knowledge goes far beyond what a human brain can handle. How many relevant facts can you hold in your mind when solving a problem? 5? 12? An LLM can take thousands of relevant facts into account at the same time, and that's their superhuman ability.

    • theandrewbailey 1 hour ago
      Modern SaaS: make it "pretty", then make it work, then make it "pretty" again in the next release. Make fast? Never.
  • Sergey777 1 hour ago
    A lot of these “laws” seem obvious individually, but what’s interesting is how often we still ignore them in practice.

    Especially things like “every system grows more complex over time” — you can see it in almost any project after a few iterations.

    I think the real challenge isn’t knowing these laws, but designing systems that remain usable despite them.

  • herodotus 32 minutes ago
    Knuth's Optimization Principle: The computer scientist Rod Burstall had a pithy way of saying this: "Efficiency is the enemy of clarity"
  • WillAdams 3 hours ago
    Visual list of well-known aphorisms and so forth.

    A couple are well-described/covered in books, e.g., Tesler's Law (Conservation of Complexity) is at the core of _A Philosophy of Software Design_ by John Ousterhout

    https://www.goodreads.com/en/book/show/39996759-a-philosophy...

    (and of course Brooks's Law is from _The Mythical Man-Month_)

    Curious if folks have recommendations for less well-known books which cover these, other than the _Laws of Software Engineering_ book which the site is an advertisement for.

  • wesselbindt 1 hour ago
    Two of my main CAP theorem pet peeves happen on this page:

    - Not realizing it's a very concrete theorem applicable in a very narrow theoretical situation, and that its value lies not in the statement itself but in the way of thinking that goes into the proof.

    - Stating it as "pick any two". You cannot pick CA. Under the conditions of the CAP theorem it is immediately obvious that CA implies you have exactly one node. And guess what, then you have P too, because there's no way to partition a single node.

    A much more usable statement (which is not a theorem but a rule of thumb) is: there is often a tradeoff between consistency and availability.
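    That rule of thumb can be made concrete with a toy sketch (the class and method names here are ours, not from any real system): a two-replica register that, during a partition, must either refuse a write (staying consistent) or accept it locally (staying available):

```python
# Toy two-replica register sketching the consistency/availability tradeoff.
# During a partition, a write must either fail (consistent but unavailable)
# or be accepted locally (available but possibly divergent).
class Replica:
    def __init__(self, name):
        self.name = name
        self.value = None
        self.partitioned = False  # True when the peer replica is unreachable

    def write(self, value, prefer_consistency):
        if self.partitioned and prefer_consistency:
            # Refuse the write: stay consistent, sacrifice availability.
            raise RuntimeError("unavailable during partition")
        # Accept locally: stay available, replicas may now diverge.
        self.value = value
        return self.value
```

    With `prefer_consistency=True` the node behaves like a CP system while partitioned; with `False` it behaves like an AP one. Absent a partition there is no tradeoff at all, which is the narrow sense in which the theorem actually applies.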

    • urxvtcd 39 minutes ago
      Well, ackchyually, you can choose not to pick P, it's just not cheap. You could imagine a network behaving like a motherboard, really.
  • arnorhs 1 hour ago
    Since the site is down, you can use the archive.org link:

    https://web.archive.org/web/20260421113202/https://lawsofsof...

  • tfrancisl 2 hours ago
    Remember, just because people repeated it so many times that it made it to this list does not mean it's true. There may be some truth in most of these, but none of these are "Laws". They are aphorisms: punchy one-liners with the intent to distill something as complex as human interaction and software design.
  • bpavuk 1 hour ago
    > This site was paused as it reached its usage limits. Please contact the site owner for more information.

    ha, someone needs to email Netlify...

  • Symmetry 1 hour ago
    On my laptop I have a yin-yang with DRY and YAGNI replacing the dots.
  • cogman10 26 minutes ago
    Uhh, I knew I wasn't going to like this one when I read it.

    > Premature Optimization (Knuth's Optimization Principle)

    > Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.

    This example is the exact example I'd choose where people wrongly and almost obstinately apply the "premature optimization" principles.

    I'm not saying that you should write a custom hash table whenever you need to search. However, I am saying that there's a 99% chance your language has an inbuilt and standard data structure in its standard library for doing hash table lookups.

    The code to use that data structure vs using an array is nearly identical and not the least bit hard to read or understand.

    And the reason you should just do the optimization is because when I've had to fix performance problems, it's almost always been because people put in nested linear searches turning what could have been O(n) into O(n^3).

    But further, when Knuth was talking about actual premature optimization, he was not talking about algorithmic complexity. In fact, that would have been exactly the sort of thing he wrapped into "good design".

    When Knuth wrote about not doing premature optimizations, he was living in an era where compilers were incredibly dumb. A premature optimization would be, for example, hand-unrolling a loop to avoid a branch instruction, or hand-inlining functions to avoid method call overhead. That does make code nastier and harder to deal with. That is to say, the specific optimizations Knuth was talking about are the optimizations compilers today do by default.

    I really hate that people have taken this to mean "Never consider algorithmic complexity". It's a big reason so much software is so slow and kludgy.
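    As a minimal illustration of the point above (the function names are ours, not from the article), here is how nearly identical the two versions are in Python:

```python
def common_linear(xs, ys):
    # `x in ys` scans the list each time: O(len(xs) * len(ys)) overall.
    return [x for x in xs if x in ys]

def common_hashed(xs, ys):
    # One-time O(len(ys)) set build, then O(1) average membership tests:
    # O(len(xs) + len(ys)) overall, for barely any extra code.
    ys_set = set(ys)
    return [x for x in xs if x in ys_set]
```

    Both return the same result; only the asymptotics differ, which is why reaching for the built-in hash structure up front is rarely "premature".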

    • bigfishrunning 4 minutes ago
      Yeah, for every Knuth there are 10000 copies of schlemiel the painter
  • serious_angel 2 hours ago
    Great! Do principles fit too? If so, considering the presence of "Bus Factor", I believe "Chesterton's Fence" should be listed as well.
  • Waterluvian 42 minutes ago
    I think it would be cool to have these shown at random as my phone’s “screensaver”
  • bronlund 1 hour ago
    Pure gold :) I'm missing one though: "You can never underestimate an end user."
  • grahar64 3 hours ago
    Some of these laws are like gravity, inevitable things you can fight but that will always exist, e.g. increasing complexity. Others are laws that, if you break them, people will yell at you or at least respect you less, e.g. leave it cleaner than you found it.
    • stingraycharles 2 hours ago
      Lots of them are also only vaguely related to software engineering, e.g. Peter Principle.

      It’s not a great list. The good old c2.com has many more, better ones.

    • layer8 1 hour ago
      Physical laws vs human laws.
  • d--b 2 hours ago
    It's missing:

    > Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

    https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

    • tgv 1 hour ago
      Shouldn't it also be able to read email? I think that was a law too.

      Anyway, the list seems like something AI scraped and has a strong bias towards "gotcha" comments from the likes of reddit.

  • duc_minh 1 hour ago
    Is it just me seeing the following?

    Site not available This site was paused as it reached its usage limits. Please contact the site owner for more information.

    • rtrigoso 1 hour ago
      not just you, I am getting the same error
  • James_K 1 hour ago
    I feel that Postel's law probably holds up the worst out of these. While being liberal with the data you accept can seem good for the functioning of your own application, the broader social effect is negative. It promotes misconceptions about the standard into informal standards of their own to which new apps may be forced to conform. Ultimately being strict with the input data allowed can turn out better in the long run, not to mention be more secure.
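    A small sketch of the strict alternative (a hypothetical port parser, not from any particular codebase): reject anything outside the spec at the boundary instead of guessing, so no informal dialect of the format can take hold:

```python
def parse_port_strict(value: str) -> int:
    # Strict: accept only plain decimal digits -- no surrounding whitespace,
    # no hex, no silent defaults. Malformed input fails loudly at the edge.
    if not value.isdigit():
        raise ValueError(f"port must be decimal digits, got {value!r}")
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

    A "liberal" parser might trim whitespace or accept hex here; each such leniency tends to become behavior that producers start relying on, which is exactly the informal-standard problem the comment describes.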
  • IshKebab 2 hours ago
    Calling these "laws" is a really really bad idea.
  • andreygrehov 46 minutes ago
    `Copy as markdown` please.
  • _dain_ 2 hours ago
    I have a lot of issues with this one:

    https://lawsofsoftwareengineering.com/laws/premature-optimiz...

    It leaves out this part from Knuth:

    >The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their “optimized” programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn’t bother making such optimizations on a one-shot job, but when it’s a question of preparing quality programs, I don’t want to restrict myself to tools that deny me such efficiencies.

    Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.

    Moreover:

    >Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.

    I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.

    For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed systems problems, cache problems, lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.

    Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliche.

    • hliyan 26 minutes ago
      I absolutely cannot stand people who recite this quote but have no knowledge of the sentences that come before or after it: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
    • dgb23 1 hour ago
      > In many cases, simpler code is faster, and fast code makes for simpler systems. (...)

      I wholeheartedly agree with you here. You mentioned a few architectural/backend issues that emerge from bad performance and introduce unnecessary complexity.

      But this also happens in UI: Optimistic updates, client side caching, bundling/transpiling, codesplitting etc.

      This is what happens when people always answer performance problems by adding stuff rather than removing stuff.

    • Xiaoher-C 1 hour ago
      [dead]
  • bakkerinho 1 hour ago
    > This site was paused as it reached its usage limits. Please contact the site owner for more information.

    Law 0: Fix infra.

    • andrerpena 1 hour ago
      This looks like a static website that could be served for free from Cloudflare Pages or Vercel, with a nearly unlimited quota. And still... It's been hugged to death, which is ironic, considering it's a software engineering website :).
      • mghackerlady 58 minutes ago
        Hell, something like this probably doesn't even need that. Throw it on a debian box running nginx or apache and you'll probably be set (though, with how hard bots have been scraping recently it might be harder than that)
    • asdfasgasdgasdg 1 hour ago
      Law 1: caching is 90-99% of performance.
      • arnorhs 1 hour ago
        are you saying performance is 90-99% caching? If so, that is so obviously untrue.

        If you are saying you _can_ fix 90-99% of performance bottlenecks eventually with caching, that may be true, but doesn't sound as nice

    • jvanderbot 31 minutes ago
      Prior probability of a prompt-created website: 50%.

      Posterior probability of a prompt-created website: 99%.

    • the_arun 1 hour ago
      Laws are there to be broken.
    • kurnik 1 hour ago
      So somebody who doesn’t know how to properly host a static website wants to teach me about software engineering. Cool. 99% sure it’s a vibecoded container for AI slop anyway.
    • milanm081 1 hour ago
      Fixed, thanks!
    • esafak 1 hour ago
      "Performance doesn't matter!"
  • jdw64 7 minutes ago
    [dead]
  • milanm081 3 hours ago
    [dead]
  • threepts 2 hours ago
    I believe there should be one more law here, telling you to not believe this baloney and spend your money on Claude tokens.
  • Antibabelic 2 hours ago
    Software engineering is voodoo masquerading as science. Most of these "laws" are just things some guys said and people thought "sounds sensible". When will we have "laws" that have been extensively tested experimentally in controlled conditions, or "laws" that will have you in jail for violating them? Like "you WILL be held responsible for compromised user data"?
    • horsawlarway 2 hours ago
      At least for your last point... ideally never.

      Look, I understand the intent you have, and I also understand the frustration at the lack of care with which many companies have acted with regards to personal data. I get it, I'm also frustrated.

      But (it's a big but)...

      Your suggestion is that we hold people legally responsible and culpable for losing a confrontation against another motivated, capable, and malicious party.

      That's... a seriously, seriously different standard than holding someone responsible for something like not following best practices or good policy.

      It's the equivalent of killing your general when he loses a battle.

      And the problem is that sometimes even good generals lose battles, not because they weren't making an honest effort to win, or being careless, but because they were simply outmatched.

      So to be really, really blunt - your proposal basically says that any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their data. That's not good policy, period.

      • fineIllregister 32 minutes ago
        We have HIPAA in the US for health care data. There have been no disastrous consequences to holding people and organizations responsible for breaches.
        • horsawlarway 16 minutes ago
          Sure, and in cases of negligence this is fine. The law even explicitly scales the punishment based on perceived negligence, and it is almost always prosecuted only in cases where the standard's expectations aren't followed.

          Ex - MMG for 2026 was prosecuted because:

          - They failed to notify in response to a breach.

          - They failed to complete proper risk analysis as required by HIPAA

          They paid 10k in fines.

          It wasn't just "They had a data breach" (OP's proposal...); it was "They failed to follow standards, which led to a data breach where they then acted negligently."

          In the same way that we don't punish an architect if their building falls over. We punish them if the building falls over because they failed to follow expected standards.

      • Antibabelic 1 hour ago
        Incidents happen in the meat world too. Engineers follow established standards to prevent them to the best of their ability. If they don't, they are prosecuted. Nobody has ever suggested putting people in jail for Russia using magic to get access to your emails. However, in the real world, there is no magic. The other party "outmatches" you by exploiting typical flaws in software and hardware, or, far more often, in company employees. Software engineering needs to grow up, have real certification and standards bodies, and start being rigorously regulated, unless you want to rely on blind hope that your "general" has been putting in an "honest effort" and showing basic competence.
        • horsawlarway 29 minutes ago
          We already have similar legal measures in software for following standards. These match very directly to engineering standards in things like construction and architecture. These are clearly understood, ex SOC 2, PCI DSS, GDPR, CCPA, NIST standards, ISO 27001, FISMA... etc... Delve is an example (LITERALLY RIGHT NOW!) of these laws being applied.

          What we don't do in engineering is hold the engineer responsible when Russia bombs the bridge.

          What you're suggesting is that we hold the software engineer responsible when Russia bombs their software stack (or more realistically, just plants an engineer on the team and leaks security info, like NK has been doing).

          Basically - I'm saying you're both wrong about lacking standards, and also suggesting a policy that punishes without regard for circumstance. I'm not saying you're wrong to be mad about general disregard for user data, but I'm saying your "simple and clear" solution is bad.

          ... something something... for every complex problem there is an answer that is clear, simple, and wrong.

          France killed their generals for losing. It was terrible policy then and it's terrible policy now.

      • jcgrillo 33 minutes ago
        > any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their data

        No. Not the company, holding companies responsible doesn't do much. The engineer who signed off on the system needs to be held personally liable for its safety. If you're a licensed civil engineer and you sign off on a bridge that collapses, you're liable. That's how the real world works, it should be the same for software.

        • horsawlarway 27 minutes ago
          Define "safety".
          • jcgrillo 19 minutes ago
            Obviously if someone dies or is injured a safety violation has occurred. But other examples include things like data protection failures--if for example your system violates GDPR or similar constraints it is unsafe. If your system accidentally breaks tenancy constraints (sends one user's data to another user) it is unsafe. If your system allows a user to escalate privileges it is unsafe.

            These kinds of failures are not inevitable. We can build sociotechnical systems and practices that prevent them, but until we're held liable--until there's sufficient selection pressure to erode the "move fast and break shit" culture--we'll continue to act negligently.

            • horsawlarway 13 minutes ago
              None of those are what OP proposed. Frankly, we also cover many of these practices just fine. What do you think SOC 2 type 2 and ISO 27001 are?

              It seems like your issue is that we don't hold all companies to those standards. But I'm personally ok with that. In the same way I don't think residential homes should be following commercial construction standards.

              • jcgrillo 4 minutes ago
                > None of those are what OP proposed.

                That doesn't worry me overly much.

                > What do you think SOC 2 type 2 and ISO 27001 are?

                They're compliance frameworks that have little to no consequences when they're violated, except for some nebulous "loss of trust" or maybe in extreme cases some financial penalties. The problem is the expectation value of the violation penalty isn't sufficient to change behavior. Companies still ship code which violates these things all the time.

                > It seems like your issue is that we don't hold all companies to those standards.

                Yes, and my issue is that we don't hold engineers personally liable for negligent work.

                > I don't think residential homes should be following commercial construction standards.

                Sure, there are different gradations of safety standards, but often residential construction plans require sign-off by a professional engineer. In the case when an engineer negligently signs off on an unsafe plan, that engineer is liable. Should be exactly the same situation in software.