I suppose that's one way to go about it. But a better way is to write only one test first. When it fails, you do the bare minimum to get it to pass. Then you write the next test.
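Here's a minimal sketch of that rhythm, assuming pytest and a made-up fizzbuzz function purely for illustration (neither comes from this conversation):

```python
# Hypothetical sketch of the one-test-at-a-time cycle, using pytest.

# 1. Write one test. With no fizzbuzz() written yet, running pytest fails: red.
def test_returns_number_as_string():
    assert fizzbuzz(1) == "1"

# 2. Do the bare minimum to make that single test pass: green.
def fizzbuzz(n):
    return str(n)

# 3. Only then write the next test. It stays red until we add the "Fizz"
#    branch, and the cycle repeats.
def test_returns_fizz_for_multiples_of_three():
    assert fizzbuzz(3) == "Fizz"
```

The point is that only one test is red at a time, and you run the whole suite after every step.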
I've paired with a guy who says that what I'm suggesting will never work. So I say that means my suggestion will get us a failing test, proving him right. And then maybe his suggestion makes the test pass, or maybe it doesn't, and we figure out together how to get it to pass.
Hopefully getting the new test to pass doesn't cause any of the previous tests to fail. But if one does break, we know right away. By the time we turn things over to QA, the routine behavior is already covered, so they can concentrate on the more interesting edge cases.