
This is exactly my problem with TDD. Note that this problem is not limited to software. For any kind of development, you could start by designing tests; you can certainly do it for some hardware. But if you try to apply TDD to any other kind of development, you see very quickly what the problem is: you end up designing lots of tests that in the end will never be used. A total waste. TDD also tends to be centered on the quantity of tests rather than their quality.

What I find is a much, much better approach is what I call "detached test development" (DTD). The idea: 2 separate teams get the requirements; one team writes the code, the other writes the tests. They do not talk to each other! Only when a test fails do they have to discuss: is the requirement not clear enough? What is the part that team A thought about, but not team B? The assignment of tests and code can be mixed, so a team writes the code for requirements 1 through 100 and the tests for 101 through 200, or something like that. I have had very good results with this approach.



Who starts with designing just the tests? I have no idea how this got associated with TDD.

TDD is a feedback cycle: you write a small increment of a test before writing a small bit of code. You don't write a bunch of tests upfront; that'd be silly. The whole point is to integrate small amounts of learning as you go, which helps guide the follow-on tests, the actual implementation, and the questions you need to ask the broader business.
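To make that concrete, here is a minimal sketch of one such increment in Python with pytest (the Cart name and behaviour are hypothetical, not anything from the parent comment):

    # test_cart.py -- written first; it fails because Cart doesn't exist yet
    from cart import Cart

    def test_empty_cart_total_is_zero():
        assert Cart().total() == 0

    # cart.py -- written second, with just enough code to make the test pass
    class Cart:
        def total(self):
            return 0

The next failing test (say, adding an item and expecting a non-zero total) then drives the next small bit of implementation, and so on. Nothing like a full test suite ever exists ahead of the code.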

Your DTD idea has been tried a lot in prior decades. In fact, as a student I was on one of those testing teams. It's a terrible idea: throwing code over a wall like that is a great way to radically increase the latency of communication and to have a raft of things get missed.

I have no idea why there are such common misconceptions about what TDD is. Maybe folks are being taught some really bad ideas here?


> Also with TDD often it will be centered in quantity of tests and not so much quality.

100%. Metrics of quality are really, really hard to define in a way that is both productive and not gamed by engineers.

> What I find is much much better approach is what I call "detached test development" (DTD)

I'm a test engineer, and some companies do 'embed' an SDET within a team in the way you mention - it's not quite that clear-cut, since they can discuss, but it's still one person implementing and another testing.

I'm always happy to see people with thoughts on testing as a core part of good engineering rather than an afterthought/annoyance :)


What you described is quite a common role for a QA automation team, but it does not really replace TDD. A separate team working on a test can only rely on a remote contract (e.g. an API, UI, or database schema); they cannot test local contracts, like the public interface of a class, because that would require the code to already be written. In TDD you often write the code AND the test at the same time, integrating the test and the code at compile time.
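For example, a local-contract test like this Python sketch (with made-up names) imports the class directly, so it can only be written together with, or just ahead of, the code it exercises - a detached team without access to that code couldn't write it:

    # test_order_service.py -- exercises the public interface of a class,
    # so it has to import against the production code itself
    import pytest
    from order_service import OrderService, DuplicateOrderError

    def test_rejects_duplicate_order_ids():
        svc = OrderService()
        svc.place(order_id="A1", amount=10)
        with pytest.raises(DuplicateOrderError):
            svc.place(order_id="A1", amount=10)

A remote-contract test, by contrast, would only hit the HTTP API or the database schema, which is exactly what a separate QA team can write from the requirements alone.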


> 2 separate teams get the requirements; one team writes the code, the other writes the tests.

This feels a bit like when you write a layer of encapsulation to try to make a problem easier only to discover that all of the complexity is now in the interface. Isn't converting the PO's requirements into good, testable requirements the hard technical bit?



