The conversation is usually: "Devs can write their own tests. We don't need QA."
And the first part is true. We can. But that's not why we have (had) QA.
First: it's not the best use of our time. I believe dev and QA are separate skillsets. Of course there is overlap.
Second, and most important: it's a separate person, an additional person who can question the ticket, and who can question my translation of the ticket into software.
And lastly: they don't suffer from the curse of knowledge on how I implemented the ticket.
I miss my QA colleagues. When I joined my current employer there were 8 or so. Initially I was afraid to give them my work, afraid of bad feedback.
Never have I met such graceful people, who took the time to understand something and talked to me to figure out where there was a mismatch.

And then they were deemed not needed.
In my mind a good QA understands the feature we're working on, deploys the correct version, thoroughly tests the feature understanding what it's supposed and not supposed to do, and if they happen to find a bug, they create a bug ticket where they describe the environment in full and what steps are necessary to reproduce it.
For automation tests, very few are capable of writing tests that test the spec rather than the implementation, follow sound technical practices, and properly address flakiness.
For example, it's very common to see a test that clicks the login button and, instead of waiting for the login, waits 20 seconds. Which is both too long and, 1% of the time, too short.
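To make that concrete, here is a minimal sketch of the two styles in Playwright-flavored TypeScript; the page, selectors, and URLs are invented for illustration:

```typescript
import { test, expect } from '@playwright/test';

// Anti-pattern: an arbitrary sleep. Too slow on a good day, too short on a bad one.
test('login (flaky)', async ({ page }) => {
  await page.goto('https://example.test/login');
  await page.getByRole('button', { name: 'Log in' }).click();
  await page.waitForTimeout(20_000); // the "wait 20 seconds" from above
  expect(page.url()).toContain('/dashboard');
});

// Better: wait on the actual post-login condition, not the clock.
test('login (robust)', async ({ page }) => {
  await page.goto('https://example.test/login');
  await page.getByRole('button', { name: 'Log in' }).click();
  await page.waitForURL('**/dashboard');
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```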
Whenever I worked with devs, they almost always managed to do all this. Sometimes they needed a bit of guidance, but that's it. Very, very few QA ever did (not that they seemed too bothered by that).
A lot of QA have expressed that devs 'look down' on them. I can't comment on that, but the signal-to-noise ratio of bug tickets is so low that often you have to do their job and repeat everything yourself.
This has been a repeated experience for me at multiple companies. A lot of places don't have proper feedback loops, so it doesn't even bother them: they're not affected by the poor quality of bug reports, but devs have to spend the extra time.
I've worked with a handful of excellent QA. In my opinion, the best QA is basically a product manager lite. They understand the user, and they act from the perspective of the user when evaluating new features. Not the "plan" for the feature; the actual implementation provided by development.
This means they clarify edge cases, call out spots that are confusing or tedious for a user, and understand & test how features interact. They help take a first draft of a feature to a much higher level of polish than most devs/PMs actually think through, and avoid all sorts of long-term problems with shipping features that don't play nicely.
I think it's a huge mistake to ask QA to do automation tests. Planning for them? Sure. Implementation? No. That's a dev's job; you should assign someone with that skillset (and pay them accordingly).
QA is there to drive quality up for your users; the value comes from the opinions they form after using what the devs provide (often repeatedly, like a user), not from automating that process.
Yes - devs are great at coding, so get them to write the tests, and then I, a good tester (not to be confused with QA), can work with them on what are good tests to write. With this in place I can confidently test to find the edge cases, usability issues, etc.
And when I find them, we can analyze how the issue could have been caught sooner.
Coz while devs with specialties usually get paid more than a generalist, for some reason testing as a specialty means getting a pay cut and a loss in respect and stature.
Hence my username.
I wouldn't ever sell myself as a test automation engineer, but whenever I join a project, the most broken technical issue in need of fixing is nearly always test automation.
I typically brand this work as architecture (and to be fair, there is overlap) and try to build infra and tooling that less skilled devs can use to write spec-matching tests.
Sadly, if I called it test automation, I'd have to take a pay cut and get paid less than those less skilled devs who need to be trained to do TDD.
> A lot of QA have expressed that devs 'look down' on them. I can't comment on that, but the signal-to-noise ratio of bug tickets is so low that often you have to do their job and repeat everything yourself.
When I was a lead, I pulled everyone (QA, devs, and managers) into a meeting and made a presentation called "No Guessing Games". I started with an ambiguous ticket with truncated logs...
And then in the middle I basically explained what the division of labor is: QA is responsible for finding bugs and clearly communicating what the bug is. Bugs were not to be sent to development until they clearly explained the problem. (I also explained what the exceptions were, because the rule only works about 99.9% of the time.)
(I also pointed out that dev had to keep QA honest and not waste more than an hour figuring out how to reproduce a bug.)

The problem was solved!
The biggest determinant is company culture: treating QA as an integral part of the team, and hiring QA that understands the expectations thereof. In addition, having regular 1:1s with both the TL and EM to help them stay integrated with the team, provide training and development, and make sure they're getting the environment in which they can be good QA.
And work to onboard bad QA just as we would a developer who is not able to meet expectations.
I used to work with a QA person who really drove me nuts. They would misunderstand the point of a feature, and then write pages and pages of misguided commentary about what they saw when trying to test it. We'd repeat this a few times for every release.
This forced me to start making my feature proposals as small as possible. I would defensively document everything, and sprinkle in little summaries to make things as clear as possible. I started writing scripts to help isolate the new behavior during testing.
...eventually I realized that this person was somehow the best QA person I'd ever worked with.
How did misunderstanding a feature and writing pages on it help? I'm not sure I follow the logic of why this made them a good QA person. Do you mean the features were not written well, and so writing code for them was going to produce errors?
The lack of respect and commensurate compensation at a lot of companies doesn't help. QA is often viewed as something requiring less talent, and often offshored, which layers communication barriers on top of everything. I've met QA people with decent engineering skills that end up having the most knowledge about how the application works in practice. Tell them a proposed change and they'll point out how it could go wrong or cause issues from a customer perspective.

> end up having the most knowledge about how the application works in practice

The best I've worked with had this quality, and were fearless advocates for the end-user. They kept everyone honest.
Companies think QA is shit, so they hire shit QA, and they get shit QA results.
Then they get rid of QA, and the devs get pissed because now support and dev have turned into QA, and customers are wondering how the hell certain bugs got out the door.
I'm not sure it's a separate skillset. You need the other side's skills all the time in each of those positions.
But it's certainly a separate mindset. People must hold different values in each of them. One just can't competently do both at the same time. And "time" is quantized here in months-long intervals, because it takes many days to internalize a mindset, if one is flexible enough to ever do it.

You can be both, but I have yet to meet someone who is equally good in both mindsets.
Like all other job functions tangential to development, it can be difficult to organize the labor needed to accomplish this within a single team, and it can be difficult to align incentives when the labor is spread across multiple teams.
This gets more and more difficult with modern development practices. Development benefits greatly from fast release cycles and quick iteration; the other job functions do not! QA is certainly included there.
I think that inherent conflict is what is causing developers to increasingly manage their own operations, technical writing, testing, etc.
I've worked in enterprise software development with the full lifecycle for over 30 years.
I have found QA to be mostly unnecessary friction throughout my career, and I've never been more productive than when QA and writing tests became my responsibility.
This is usually what happened during a release cycle:
1) Devs come up with a list of features and a timeline.
2) QA will go through the list and about 1/2 of the features will get cut because they claim they don't have time to test everything based on their timeline.
3) The cycle begins and devs will start adding features into the codebase and it's radio silence from the QA.
4) Throughout the release, QA will force more features to get dropped. By the end of the release cycle, another 1/4 of the original features get dropped, leaving about 1/4 of the features that were planned. "It will get done in a dot release."
5) Near the end of the release, everything gets tested and a mountain of bugs come in near the deadline and everyone is forced to scramble. The deadline gets pushed back and QA pushes the blame onto the devs.
6) After everything gets resolved, the next release cycle begins.
This was at quite a few enterprise software companies that most people in Silicon Valley have heard of, if you've been working for more than 10 years.
First of all, I've seen all types of teams be successful, ranging from zero QA at all to massive QA teams with incredible power (e.g. Format QA at Sony in Europe). I have absolutely seen teams with no QA deliver high quality, full stop; the title is nonsense.
My firm belief is that QA can raise the ceiling of quality significantly if you know what you're doing, but there is also a huge moral hazard of engineers dropping the ball on quality at implementation time and creating a situation where adding more QA resources doesn't actually improve quality, just communication churn and ticket activity. By the way, the same phenomenon can happen with product people as well (and I've also seen teams without product managers do better than teams with them in certain circumstances).
The most important anchor point for me is that engineering must fundamentally own quality. This is because we are closer to the implementation and can anticipate more failure modes than anyone else. That doesn't mean other roles don't contribute significantly to quality (product, design, QA, ops absolutely do), but it means we can't abdicate our responsibility to deliver high quality code and systems by leaning on some other function and getting lazy about how we ensure we are building right.
What level of testing is appropriate for engineers to do is quite project and product specific, but it is definitely greater than zero. This goes double in the age of AI.

This is huge. I was selling software to help QA. I saw a CEO demand a Head of QA guarantee their super buggy app be free of bugs by a certain date.
This is terrible. She didn't write the thing. Total responsibility without authority trap. She was, not at all to my surprise, fired.
I think the deal fell through and I don't know how else things ended up with them.
QA's job is signal. If you're getting clear signal, they're doing their job.
Most orgs I've worked for are so growth- and product-focused that if you try adjusting your estimates to include proper testing, you get pushback, and you have to ARGUE your case as to why a feature will take two weeks instead of one.
This is the thing I hate the most about work: having to ARGUE with PMs because they can't accept an estimate; there's often some back-and-forth. "What if you do X instead?" "Team Y (always uses hacks and adds technical debt with every single feature they touch) did something similar in two days." But we're just communicating and adding transparency, so that's good, and it certainly doesn't matter that it starts taking up 4+ hours of your time in Slack conversations and meetings of people 'level setting', 'getting on the same page', trying to help you 'figure out' how to 'reduce scope', etc. etc.
Also, I think testing via unit or integration tests should be standard regardless, and that isn't what I am thinking about here. I'm thinking about QA, the way QA does it. You hammer your feature with a bunch of weird bullshit: false and unexpected inputs, what happens if I refresh the page in strange ways, what happens if I make an update and force the cache to NOT clear, what happens if I drop my laptop in a dumpster while making the request from Firefox and Safari at the same time logged in as the same user, what happens if I turn off my internet in the middle of a file upload, and so on. When devs say that devs should be responsible for testing, they usually mean the former (unit and integration tests), and not this separate skillset of coming up with a bunch of weird edge cases for your code. And yes, unit tests SHOULD hit the edge cases, but QA is just better at it. You usually don't have engineers testing what happens when you try sending in Mandarin characters as input (unless they live in China, I guess). All of that effort should bring up your estimates because it is non-trivial. This is what getting rid of QA means, not happy-path end-to-end testing plus some unit and integration tests.
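As a sketch of what automating that QA mindset can look like, here is a table-driven pass over adversarial inputs; `validateDisplayName` is a hypothetical function under test and the cases are illustrative:

```typescript
// Hypothetical function under test.
function validateDisplayName(name: string): boolean {
  return name.trim().length > 0 && name.length <= 64;
}

// The kind of inputs a good QA tries by hand: empty, huge, non-Latin, hostile.
const weirdInputs: Array<[label: string, input: string]> = [
  ['empty', ''],
  ['whitespace only', '   '],
  ['very long', 'x'.repeat(10_000)],
  ['Mandarin characters', '测试用户'],
  ['right-to-left text', 'שלום'],
  ['emoji with ZWJ', '👩‍👩‍👧‍👦'],
  ['HTML injection', '<script>alert(1)</script>'],
  ['SQL-ish', "'; DROP TABLE users; --"],
];

for (const [label, input] of weirdInputs) {
  // Whether each case should be accepted or rejected is a spec question,
  // which is exactly the conversation QA forces. Here we at least require
  // "no crash".
  try {
    const ok = validateDisplayName(input);
    console.log(`${label}: ${ok ? 'accepted' : 'rejected'}`);
  } catch (e) {
    console.error(`${label}: CRASHED`, e);
  }
}
```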
PMs are generally the most irritating people to deal with in any organization. This is coming from someone who has been one: effective ones are very obviously effective, but the vast majority are glorified note takers and ticket pushers with very little ability to get anything done, whether due to lack of talent, lack of empowerment in the organization, or both. I find arguing with them pointless.
The ticket pushers also range from annoying or even threatening, to "helpful ticket pushers" who maybe don't understand everything, but they keep track of tickets, documents, links to other projects and so on and make sure nothing is forgotten.
> Most orgs I've worked for are so growth- and product-focused that if you try adjusting your estimates to include proper testing, you get pushback, and you have to ARGUE your case as to why a feature will take two weeks instead of one.
Yeah this one pisses me off too. No, PM, you do not know how long it should take to implement a feature I get paid to work on and you don't.
Good PMs take your feedback and believe you. Bad PMs do the opposite.
The stupid fast tempo of our industry grinds my gears.
When I worked defense we moved slowly and methodically. It almost felt too slow. Now in the private sector I move like triple the speed but we often need to go back and redo and refactor things. I think it averages out to a similar rate of progress but in defense at least I had my sanity.
While good points are made, I worry this gives the wrong impression. The paper doesn't say it is impossible, just hard. I have, very successfully, worked with dev-owned testing.
Why it worked: the team set the timelines for delivery of software, the team built their acceptance and integration tests around system inputs and outputs at the edges of their systems, the team owned being on-call, and the team automated as much as possible (no repeatable manual testing aside from sanity checks on first release).
There was no QA person or team, but there was a quality-focused dev on the team whose role was to ensure others kept the testing bar high. They ensured logs, metrics, and tests met the team bar. This role rotated.
There was a CI/CD team. They made sure the test system worked, but teams maintained their own CI configuration. We used Buildkite, so each project had its own buildkite.yml.
The team was expected by eng leaders to set up basic testing before development. In one case, our team had to spend several sprints setting up generators to make the expected inputs and sinks to capture output. This was a flagship project and lots of future development was expected. It very much paid off.
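A toy sketch of that generator/sink idea, under the assumption that the system is exercised purely at its edges: synthetic inputs go in one side, a sink captures what comes out the other. All names here are invented stand-ins:

```typescript
type Order = { id: string; amountCents: number };
type Shipment = { orderId: string; accepted: boolean };

// Generator: produce synthetic inputs shaped like production traffic.
function* orderGenerator(count: number): Generator<Order> {
  for (let i = 0; i < count; i++) {
    yield { id: `order-${i}`, amountCents: (i * 137) % 100_000 };
  }
}

// Sink: capture every output the system emits so the test can assert on it.
class ShipmentSink {
  readonly received: Shipment[] = [];
  push(s: Shipment): void {
    this.received.push(s);
  }
}

// Stand-in for the system under test, driven only through its edge.
function processOrder(order: Order, sink: ShipmentSink): void {
  sink.push({ orderId: order.id, accepted: order.amountCents > 0 });
}

// The test: every generated input should come out the other side exactly once.
const sink = new ShipmentSink();
for (const order of orderGenerator(1_000)) {
  processOrder(order, sink);
}
console.assert(sink.received.length === 1_000, 'every input produced exactly one output');
```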
Our test approach was very much "slow is smooth and smooth is fast." We would deploy multiple times a day. Tests were 10 or so minutes and very comprehensive. If a bug got out, tests were updated. The tests were very reliable because the team prioritized them. Eventually people stopped even manually verifying their code because if the tests were green, you _knew_ it worked.
Beyond our team, in the wider system, there was a lightweight acceptance test setup, and the team registered tests there, usually one per feature. This was the most brittle part, because a failed test could be caused by another team or by a system failure. But guess what? That is the same as production, if not more noisy. So we had the same level of logging, metrics, and alerts (limited to business hours). Good logs would tell you immediately what was wrong. Automated alerts generally alerted the right team, and that team was responsible for a quick response.
If a team was dropping the ball on system stability, that reflected badly on the team and they were to prioritize stability. It worked.

Hands down the best dev org I have been part of.
I've worked in a strong dev-owned testing team too. The culture was a sort of positive can-I-catch-you-out competitiveness that can be quite hard to replicate, and there was no concept of any one person taking point on quality.
If as a developer you want to be seen as someone advancing and taking ownership and responsibility, testing must be part of the process. Shipping an untested product, or a product that you as a software engineer do not monitor, essentially means you can never be sure you created an actually correct product. That is not engineering. If the org guidelines prevent it, some cultural piece prevents it.
Adding QA outside, which tests software regularly using different approaches, finding intersections etc. is a different topic. Both are necessary.
The problem in big companies is that as a developer, you are usually several layers of people removed from the people actually using the product. Yes you can take ownership and implement unit tests and integration tests and e2e tests in your pipeline, to ensure the product works exactly as you intended. But that doesn't mean it works as management or marketing or the actual user intended.
A nice piece that outlines all the challenges, the opportunities, and the cultural and social adjustments that need to be made within organizations to maximize the chance of left-shifted testing being successful.
IMPO, as a developer, I see QA's role as being "auditors" with a mandate to set the guidelines, understand the process, and assess the outcomes. I'm wary of the foxes being completely responsible for guarding the hen-house unless the processes are structured and audited in a fundamentally different way. That takes fundamental organizational change.

QA wants things to break.

What worked for me: devs write ALL the tests, and QA does selective code reviews of those tests, making devs write better tests.
I also wrote about the failure of dev-owned testing: "Tests are bad for developers" https://www.amazingcto.com/tests-are-bad-for-developers/
> It was clearly a top-down decision

Many, many things that are imposed like this will fail.
It's not even willful non-compliance; it's just that it's hard for people to do things differently while still being the same people in the same teams, making the same products, with the same timelines...
Context is key here. Lots of people see a thing that works well and think they can copy the activities of the successful team, without realising they need to align the mindset... and the activities will follow. The activities might be different, and that's OK! In a different context, you'd expect that.
I'd argue that in most contexts you don't need a QA team at all, and if you do have one, it will look a lot different from what you might think. For example, it would be put after a release, not before it... QA teams are too slow to deal with 2000+ releases a year. Not their fault, they are human... you need to reframe the value statement.
As a developer, I frequently tell higher ups that "I have a conflict of interest" when it comes to testing. Even though I fully attempt to make perfect software, often I have blind spots or assumptions that an independent tester finds.
That being said: depending on what you're making and what platform(s) you target, developer-owned testing is either feasible or not. For example, if you're making a cross-platform product, it's not feasible for a developer to regression test on Windows 10, Windows 11, macOS, and 10 distros of Linux. In contrast, if you're targeting a web API, it's feasible for a developer to write tests at the HTTP layer against a real database.
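A minimal sketch of that last point, testing at the HTTP layer against a real backing store; the endpoints, payload shape, and BASE_URL are assumptions, and the service is presumed to already be running (e.g. via docker compose):

```typescript
// Requires Node 18+ for the built-in fetch.
const BASE_URL = process.env.BASE_URL ?? 'http://localhost:3000';

async function main(): Promise<void> {
  // Create a resource through the public API only; no reaching into internals.
  const created = await fetch(`${BASE_URL}/users`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ name: '测试用户' }), // non-ASCII on purpose
  });
  console.assert(created.status === 201, `expected 201, got ${created.status}`);

  // Read it back: exercises routing, serialization, the real DB round trip,
  // and character encoding end to end.
  const { id } = (await created.json()) as { id: string };
  const fetched = await fetch(`${BASE_URL}/users/${id}`);
  const user = (await fetched.json()) as { name: string };
  console.assert(user.name === '测试用户', 'non-ASCII name survived the round trip');
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```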
This paper has 7 references, and 4 of them are to a single Google blog post that treats test flakiness as an unavoidable fact of life rather than a class of bug which can and should be fixed.
Aside from the red flag of one blog post being >50% of all citations, it is also the saddest blog post Google ever put their name to.
There is very little of interest in this paper.
My experience with this was great. It went really well. We also did our own ops within a small boundary of systems organized by domain. I felt total ownership of it, could fix anything in it, deploy anything with any release strategy, monitor anything, and because of that had very little anxiety about being on call for it. Best environment I ever worked in.
> The problem is not that dev-owned testing is a flawed idea, but that it is usually poorly planned
In our case there was zero plan. One day they just let our entire QA team go. Literally no direction at all on how to deal with not having QA.
It's been close to a year and we're still trying to figure out how to keep things from catching fire.
For a while we were all testing each other's work. They're mad that this is slowing down our "velocity", and now they're pushing us to test our own work instead...
Testing your own work is the kind of thing an imbecile recommends. I tested it while I wrote it. I thought it was good. Done. I have all the blind spots I had when I wrote it "testing it" after the fact.
No, they got it published in ACM SIGSOFT Software Engineering Notes.
That's one of the things that publication is for.
The paper is a well-supported (if not well-proofread) position paper, synthesizing the author's thoughts and others' prior work but not reporting any new experimental results or artifacts. The author isn't an academic, but someone at Amazon who has written nearly 20 articles like this, many reporting on the intersection of academic theory and the real world, all published in Software Engineering Notes.
As an academic (in systems, not software engineering) who spent 15 years in industry before grad school, I think this perspective is valuable. In addition academics don't get much credit for this sort of article, so there are a lot fewer of them than there ought to be.
If your review was based on features shipped, and your bosses let you send PRs with no tests, would you? And before you say "no" - would you still do that if your company used stack ranking, and you were worried about being at the bottom of the stack?
Developers may understand that "XYZ is better", but if management provides enough incentives for "not XYZ", they're going to get "not XYZ".

Rarely. Do people send PRs with just enough mostly useless tests, just to tick the DoD boxes? All the time.
That actually wasn't why I didn't write tests a lot of the time.
What stopped me was that after a year of writing tests, I was moved to a higher priority project, and the person who followed me didn't write tests.
So when I came back, many of the tests were broken. I had to fix all those in order to get new ones to not be a bother.
Repeat again, but this time I came back and the unit testing suite had fundamentally altered its nature. None of the tests worked and they all needed to be rewritten for a new paradigm.
I gave up on tests for that system at that point. It simply wasn't worthwhile. Management didn't care at all, despite how many times I told them how much more reliable it made that system, and it was the only system that survived the first giant penetration test with no problems.
That doesn't mean I quit testing. I still wrote tests whenever I thought it would help me with what I was currently working on. And that was quite often. But I absolutely didn't worry about old tests, and I didn't worry about making sure others could use my tests. They were never going to try.
The final straw, less than a year before I was laid off, was when they decided my "storybook" tests weren't worth keeping in the repo and deleted them. That made me realize exactly how much they valued unit tests.
That isn't to say they had no tests. There was a suite of tests written by the boss that we were required to run. They were all run against live or dev servers with a browser-control framework, and they were shaky for years. But they were required, so they were actually kept working. Nobody wrote new tests for it until something failed and caused a problem, though.
tl;dr - There are a lot of reasons that people choose not to write tests, and not just for job security.
Depends on how easily the failure is connected back to you personally. If you introduce a flaw this year and it breaks the system in two years, it won't fall back on you but the poor sap that triggered your bug.
It depends on the application but there are lots of situations where a proper test suite is 10x or more the development work of the feature. I've seen this most commonly with "heavy" integrations.
A concrete example would be adding, say, SAML+SCIM to a product; you can add a library, do a happy path test, and call it a day. Maybe add a test against a captive IdP in a container.
But testing all the supported flows against each supported vendor becomes a major project in and of itself if you want to do it properly. The number of possible edge cases is extreme and automating deployment, updates and configuration of the peer products under test is a huge drag, especially if they are hostile to automation.
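A back-of-envelope sketch of how that matrix explodes; the vendor, flow, and condition names are illustrative, not a real support list:

```typescript
const vendors = ['Okta', 'Entra ID', 'Ping', 'OneLogin', 'ADFS'];
const flows = ['SP-initiated login', 'IdP-initiated login', 'single logout', 'SCIM provision', 'SCIM deprovision'];
const conditions = ['happy path', 'clock skew', 'key rotation', 'malformed assertion'];

// Every cell of this matrix needs an automated, configured peer IdP to run against.
const scenarios = vendors.flatMap((v) =>
  flows.flatMap((f) => conditions.map((c) => `${v} / ${f} / ${c}`)),
);
console.log(`${scenarios.length} scenarios`); // 5 * 5 * 4 = 100
```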
Once, for a very very critical part of our product, apart from the usual tests, I ended up writing another implementation of the thing, completely separately from the original dev, before looking at his code. We then ran them side by side and ensured that all of their outputs matched perfectly.
The "test implementation" ended up being more performant, and eventually the two implementations switched roles.
The article argues that dev-owned testing isn't wrong, but all the arguments it presents support that it is.
I always understood shift-left as doing more tests earlier. That is pretty uncontroversial and where the article is still on the right track. It derails at the moment it equates shift-left with dev-owned testing - a common mistake.
You can have quality owned by QA specialists in every development cycle and it is something that consistently works.
You do everything the same as today. Then you turn it over to QA, who keep finding weird things that you never thought of. QA finds more than half your written bugs. (Of course I don't write a bug every time a unit test fails when doing TDD, but sometimes I find a bug in code I wrote a few weeks ago, and I write that up so I can focus on the story I'm doing today and not forget about the bug.)
QA should not be replacing anything a developer does; it should be a supplement, because you can't think of everything.
We also use QA because we are making multi-million dollar embedded machines. One QA can put the code of 10 different developers on the machine and verify it works as well in the real world as it does in software simulation.
They find all the things the devs and their automated tests missed, then they mentor the devs in how to test for these, and they work out how the bug could have been found earlier. Rinse and repeat until the tester is struggling to find issues and has coached the devs out of their job.
First they came with the NoOps movement, and you were happy cause those damned ops people were always complaining and slowing you down. I can manage my infra!
Then, they came with the dev-owned testing and fired all the QAs, and you were happy because they were always breaking your app and slowing you down. I can write my tests!
Now, they are coming with LLM agents and you don't own the product...

Like heck I was.
I have worked with bad ops people who didn't let anything get done, and good ops people who knew how to do tricky things and kept the system working so I didn't have to care. I have worked with good and bad QA testers. Guess who I'm glad are gone.
The paper highlights the problem in two words of the first sentence of the abstract: "shrink QA".
Corporations do it to save money, and accept the loss of quality as the cost of doing business. Therein lies part of the reason for the sad state of software today.