While there is value in doing manual testing in the REPL or by executing your application by hand, you really also want a suite of **automated tests**. Automated means, in practice, that you have written code that tests your application. You can run these tests, and the result tells you which tests passed and which failed. This makes your tests reproducible with minimal effort. You want to develop this test suite as you develop your application. Whether you write tests before or after your actual code is really up to you. There is one thing, though, that I want to point out. There are a few general problems with tests, but one is particularly important now: **How do you make sure that your test code is correct?** It doesn't make much sense to put trust in your code because of your shiny test suite when the test suite itself is incorrect. Possibly all tests pass where they shouldn't, or they fail where they really should pass. While you could write tests for your tests, you may immediately see that this is a recursive problem and might lead to endless tests testing tests testing tests ....
This is one reason why writing tests **before** code might be helpful. This discipline is called [TDD](https://en.wikipedia.org/wiki/Test-driven_development). It suggests a workflow that we refer to as **"Red-Green-Refactor"**. **Red** means that we start with a failing test. **Green** means that we implement just as much application code as is needed to make this test pass. **Refactor** means changing details of your code without affecting the overall functionality. I don't want to go into details, but there is one aspect that is particularly useful.
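The Red and Green steps of that cycle can be sketched as follows, using a hypothetical `fizzbuzz` function as the example (the function name and rules are assumptions for illustration, not part of the text above):

```python
# Step 1 (Red): write the test first. At this point fizzbuzz does not
# exist yet, so running the test fails -- good evidence that the test
# really exercises behavior that is still missing.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (Green): implement just enough code to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The test that was red before now passes.
test_fizzbuzz()

# Step 3 (Refactor): reshape implementation details as you like;
# the passing test guards the overall functionality while you do.
```

Note that the order matters: only because the test failed first do we know it is connected to the code we then wrote.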
If we start with a **red test**, we at least have some good evidence that our test exercises portions of our code that don't yet work as expected, because otherwise the test would succeed. Also