Proving program correctness is not easy. For example, one must prove the correctness of the entire dependency supply chain that the target program might directly or indirectly call. Sure, the proofs of those dependencies are to a degree reusable, but would likely need to be re-proved for each distinct set of parameters.
And on the other hand, TDD shows that having expectations of the results before coding the procedure is a good thing. It doesn't necessarily prove the implementation correct, but TDD is more likely to produce correct code.
Unit testing is very good at what it does in spite of not being exhaustive. You need the right examples, not all the examples.
Where testing is weak is at the level of entire systems. As you go up the integration scale, tests get flimsier.
Personally, I've found automated testing of any kind (unit, integration, God forbid UI) to be the biggest waste of time in modern software engineering practices. Never, not once in nearly 20 years of professional software development at companies big and small, well-known and not, have I seen it be worth the monumental amount of time it wastes.
I could go on and on about this, but damn, it just feels good to "say" it aloud.
There’s like this three-way war in testing between people who think they’re terrible at it and write tests that are self-fulfilling prophecies, people who think terrible tests aren’t bad and good tests are “too much work”, and a small minority of people who think different tests are just different work and only take a bit of practice to do decently.
It took me two mentors to feel like I could do tests worth having. Two more to feel I was good at it. And one more to realize everyone is terrible at them and I just suck less.
There is definitely something wrong with testing. But nobody has cracked the code, so we keep doing this even though we know it’s often suboptimal. I recently hunted down mentor #5 and asked him about my conclusion, and he agreed.
In my experience, tests help me refactor stuff and detect when I have broken some expected behavior.
For example, I was refactoring a SQL query builder and the tests told me my JOINs no longer contained their ON clauses. It might seem trivial, but multiply this across a large codebase and the benefits add up.
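Not the real builder, but a minimal sketch of the kind of regression test that caught it (the QueryBuilder here is a hypothetical toy stand-in, and it uses Node's built-in test runner):

    import { test } from "node:test";
    import assert from "node:assert/strict";

    // Toy stand-in for the real query builder -- just enough to show the shape of the test.
    class QueryBuilder {
      private joins: string[] = [];
      constructor(private table: string) {}

      join(table: string, on: string): this {
        this.joins.push(`JOIN ${table} ON ${on}`);
        return this;
      }

      toSQL(): string {
        return [`SELECT * FROM ${this.table}`, ...this.joins].join(" ");
      }
    }

    test("joins keep their ON clause after a refactor", () => {
      const sql = new QueryBuilder("orders")
        .join("customers", "customers.id = orders.customer_id")
        .toSQL();
      // A refactor that drops the ON clause fails this assertion.
      assert.match(sql, /JOIN customers ON customers\.id = orders\.customer_id/);
    });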
Another thing is that when I write functions with tests in mind, they tend to be simpler and more maintainable, because it's hard to test functions that do a ton of things and have many side effects.
About three to five times a year the tests help me make a feature much more useful because I realize that there’s an interesting corner case that’s easy to check and easy to implement.
Lots of times people don’t ask for things because they guess wrong about where the difficult parts will be and they learn from our pushing back to ask for less.
> God forbid UI
Especially for UI, the number of hours saved by e2e tests with Playwright was huge for me. Tests for iOS Safari and desktop Chrome, each with its own layout quirks from the unexpected ways a layout adjustment plays out on a 1x-DPI 1920x1080 screen with mouse clicks versus a 2.75x-DPI 900x1600 smartphone screen with touch input.
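For reference, a minimal sketch of how that can be set up with Playwright projects (the device names come from Playwright's built-in registry; the viewport override and the example spec are illustrative assumptions, not my actual suite):

    // playwright.config.ts -- run the same e2e specs against a desktop and a mobile profile.
    import { defineConfig, devices } from "@playwright/test";

    export default defineConfig({
      testDir: "./e2e",
      projects: [
        {
          name: "desktop-chrome",
          // 1x DPI, mouse input.
          use: { ...devices["Desktop Chrome"], viewport: { width: 1920, height: 1080 } },
        },
        {
          name: "mobile-safari",
          // High-DPI touch device; iPhone 13 is a stand-in for the phone described above.
          use: { ...devices["iPhone 13"] },
        },
      ],
    });

    // e2e/layout.spec.ts -- one spec, run once per project above.
    import { test, expect } from "@playwright/test";

    test("primary navigation stays visible", async ({ page }) => {
      await page.goto("https://example.com"); // placeholder URL
      await expect(page.getByRole("navigation")).toBeVisible();
    });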
Especially here in Japan I have seen whole firms hired to UI-test a million different variations of these for each release candidate, and that is "the biggest waste of time in modern software engineering" I have seen so far. You can drown in trying to automate all forms of testing, and human testing is important. But so is automating the basic sanity checks. Every tool can of course be misused in pursuit of an impractical extreme in either direction.
So you are saying that test driven development is:
a) not useful for you and your use cases
b) never useful in any use cases
c) sometimes useful but not for you
I am generally against the brainless insistence on just writing tests without thinking about the value vs cost (especially on unit tests, which are usually the lowest value and highest cost IMO), but I have found them extremely valuable on occasion. Sometimes for being able to hammer some bit of tricky code through a bunch of different scenarios while figuring out the edge cases, others for validating top-level requirements, and especially for helping prevent regressions while making other changes (which will 100% happen once you have more than one person working on a more than trivial codebase).
Sounds like you had to maintain tests with the wrong kind of coverage. IMO there is an art to crafting tests that are economical.
They should cover core features that pay for the business, ideally as coarsely as practical. Some coupling with the implementation is inevitable, so I prefer it be at the highest level that can be maintained.
Have you never seen tests catch bugs that otherwise would have gone into production?
You are doing it wrong.
I dump highly complex modified code straight into production and haven't had a single production bug for 6+ years.
The reason is automated tests.
I've found unit tests valuable as executable documentation for inner interfaces that manual testing can't reach easily.
Say I've got some algorithm like a binary search that you can implement in a page of code, but it's going to be buried three layers deep in the business logic. You could expose a debug command and test it manually, or you could throw a few unit tests at it to hit all the edge cases, make sure it gives the expected outputs for given inputs, and then you know there aren't major bugs there.
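Something like this, as a minimal sketch (the function and the cases are illustrative, using Node's built-in test runner):

    import { test } from "node:test";
    import assert from "node:assert/strict";

    // The page-of-code algorithm: index of target in a sorted array, or -1 if absent.
    function binarySearch(xs: number[], target: number): number {
      let lo = 0;
      let hi = xs.length - 1;
      while (lo <= hi) {
        const mid = (lo + hi) >> 1;
        if (xs[mid] === target) return mid;
        if (xs[mid] < target) lo = mid + 1;
        else hi = mid - 1;
      }
      return -1;
    }

    // A handful of unit tests hitting the edge cases: empty input, single element,
    // first and last positions, and a value that isn't there.
    test("binary search edge cases", () => {
      assert.equal(binarySearch([], 3), -1);
      assert.equal(binarySearch([3], 3), 0);
      assert.equal(binarySearch([1, 3, 5, 7], 1), 0);
      assert.equal(binarySearch([1, 3, 5, 7], 7), 3);
      assert.equal(binarySearch([1, 3, 5, 7], 4), -1);
    });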
"In X years .." is the surest sign of complacency, overconfidence, and excuses on the path towards hubris. And the "I've never needed a seatbelt before" fallacy is a really terrible habit.
There is nuance between Cucumber bureaucracy, and proper testing practices that check real bits.
Obviously?
A superficial article waxing on about vague nonsense like "clean", which is meaningless.
Necessary and sufficient tests are essential, otherwise code is of limited value. At a minimum, unit test complicated bits and integration test the big, common uses. Add tests for fixed bugs to never repeat them. Without tests, refactoring becomes really risky and damn hard. For extra confidence: benchmark, fuzz, and property test, and sometimes consider formal methods like theorem-proving to validate behavior falls within bounds where applicable. It's easy to go overboard on process or act as a cowboy coder who doesn't do things properly.
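On the property-testing point, a minimal sketch of what that looks like in practice, assuming the fast-check library and a sorting routine as the illustrative target:

    import fc from "fast-check";

    // Property: for any integer array, sorting yields a non-decreasing result of the
    // same length. fast-check generates random inputs and shrinks any counterexample.
    fc.assert(
      fc.property(fc.array(fc.integer()), (xs) => {
        const sorted = [...xs].sort((a, b) => a - b);
        const ordered = sorted.every((v, i) => i === 0 || sorted[i - 1] <= v);
        return ordered && sorted.length === xs.length;
      })
    );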
> Add tests for fixed bugs to never repeat them
In my experience of all the kinds of tests one can write, it's easiest to see that regression tests carry their own weight. But they have a tendency to bitrot; after enough refactors and rewriting of core elements and dropping of features they may need to be thrown out. Hopefully your bug tracker is in good shape so you can recapture the test's intent in your new context when you rewrite it, or else make the case that the bug is no longer relevant and discard the test for good.