
Commits

certainty  committed 902fb27

fixed some typos

  • Parent commits 70d063c
  • Branches default

Files changed (1)

File _posts/2014-04-05-tesing_your_chicken_code.md

 
 Hello everybody and welcome back. In this article I attempt to introduce you to the very excellent [test egg](http://wiki.call-cc.org/eggref/4/test), which is a great way to do your unit-testing in CHICKEN.
 
-It will start with a gentle introduction to unit testing in general, before we dive into the **test egg** itself. I'll show you a few **best practices** that will help you to benifit most from your test code. After I've outlined the bells and whistles that come with **test**, I'll introduce you to **random testing** on top of **test**. Finally I'll give you some hints on how a useful Emacs setup to do CHICKEN unit tests may look like.
+It will start with a gentle introduction to unit testing in general, before we dive into the **test egg** itself. I'll show you a few **best practices** that will help you to benefit most from your test code. After I've outlined the bells and whistles that come with **test**, I'll introduce you to **random testing** on top of **test**. Finally I'll give you some hints on what a useful Emacs setup for CHICKEN unit testing might look like.
 
 You can either read this article as a whole, which is what I recommend, or cherry-pick the parts that you're interested in. Both will hopefully work out.
 
 Closely related to this are regression tests, which are used to detect bugs that have been fixed but now pop up again after you have changed some portion of your code. Regression tests are an important part of a test suite. Once you discover a bug you generally write a test that reproduces it. This test will naturally fail, as the system/code under test doesn't behave as expected. The next step is to fix this bug and make the test pass. This way you have made sure that this particular bug has been fixed.
 This must be contrasted with the sort of tests you use to verify your features. While those are
 estimates, testing for bugs and fixing them can act as a proof. Of course there is no rule without an exception. [Bugs tend to come in clusters](http://testingreflections.com/node/7584) and can be grouped into categories or families. This means in practice that you may have fixed this particular bug but you're advised to look
-for a generalisation of that bug that might accure elsewhere. Also you likely want to check
+for a generalization of that bug that might occur elsewhere. Also you likely want to check
 the code that surrounds the part that caused the bug. It has been shown empirically that it is
 likely to contain bugs as well. For a critical view on this theory you might want to have
 a look at [this](http://www.developsense.com/blog/2009/01/ideas-around-bug-clusters).
 
-Also tests often are a form of documentation. They describe the expected bevaviour of your code and thus give strong hints about how it shall be used. Often you find that the documentation
+Tests are also often a form of documentation. They describe the expected behavior of your code and thus give strong hints about how it shall be used. Often you find that the documentation
 of a project isn't very good. If it at least has a thorough and well-written test-suite you can quickly learn the most important aspects of the library.
 
 There are many more testing categories that all have their particular value. The literature is
 the quality of your tests. You can easily see that you can have tests that execute all of your code paths but simply do not verify their outputs. In this case you have 100% coverage, but actually
 0 confidence that the code is correct.
 
-While code coverage gives you a qualitive measure of your test code there is also a quantitive measure. That is the code to test ratio. It's a simple as it can be, it just tells you the proportion of your code and tests. Most people tend to agree that a ratio of 1:2 is about good. That means you have twice as much tests as you've got actual code. In my oppinion that very much depends on the kind of project. If you happen to have many internal helper procedures and very few procedures that belong to the public API, then you most likely won't reach that ratio. If your code is mostly public API though that it may be actually close to the truth. Each procedure is likely to have at least two tests. Again my advice is not to use that as an absolute measure but only as a guideline on to verify that you're on the right track.
+While code coverage gives you a qualitative measure of your test code there is also a quantitative measure: the code to test ratio. It's as simple as it can be, it just tells you the proportion of your code to your tests. Most people tend to agree that a ratio of 1:2 is about right. That means you have twice as many tests as you've got actual code. In my opinion that very much depends on the kind of project. If you happen to have many internal helper procedures and very few procedures that belong to the public API, then you most likely won't reach that ratio. If your code is mostly public API though, it may actually be close to the truth. Each procedure is likely to have at least two tests. Again my advice is not to use that as an absolute measure but only as a guideline to verify that you're on the right track.
 
 Let's resume after that little detour.
 
-Another aspect that must be emphasized is that tests can never prove the absence of bugs, possibly with the exception of regression tests. If tests have been written **after** a certain bug accured you have a high probability that this specific bug has been fixed. Apart from these though, there is by no means a proof that tests can give you of the correctness of your code.
+Another aspect that must be emphasized is that tests can never prove the absence of bugs, possibly with the exception of regression tests. If tests have been written **after** a certain bug occurred, you have a high probability that this specific bug has been fixed. Apart from that though, tests are by no means a proof of the correctness of your code.
 
 Tests are not a silver bullet and are not a replacement for good design and solid software engineering skills. Having a great many tests that verify features of your application is comforting and all, but be assured that there will be a time when a bug pops up in your application. This means that all your tests didn't do anything to prevent this bug. You're on your own now.
 Now you actually have to understand your system, reason about it and figure out what went wrong. This is another crucial part of developing
 
 * **It is more work than just writing your application code**
 
-  This one is true. Writing tests is an investment. It does cost more time, more money, more ene  rgy etc. But as with all good investments, they better pay off in the end. It turns out that
+  This one is true. Writing tests is an investment. It does cost more time, more money, more energy etc. But as with all good investments, it had better pay off in the end. It turns out that
   most of the time this is indeed the case. The longer a project exists the more often you or someone else comes back to your code and changes it. This involves fixing bugs, adding new features, improving performance, you name it. For all those cases, you will spend significantly less time
   if you have a test-suite that helps you to ensure that all those changes didn't break anything.
 
 * **It's hard to break the thing that you just carefully built**
 
  It's just not fun to try to destroy what you just built. Suppose you've built a procedure that has
-  been really hard to accomplish. Now you're supposed to find a possible invokation in which it
-  misbehaves. If you succeed you will have to get back at it and fix it. Which will again be very hard eventually. There is an inner barrier, that subconciously holds you back. I think we all   agree that having found this misbehavior is better than keeping it burried, but the back of your
-  mind, might see this slightly different, especially when it's friday afternoon at 6pm.
+  been really hard to accomplish. Now you're supposed to find a possible invocation in which it
+  misbehaves. If you succeed you will have to get back at it and fix it, which will eventually again be very hard. There is an inner barrier that subconsciously holds you back. I think we all agree that having found this misbehavior is better than keeping it buried, but the back of your
+  mind might see this slightly differently, especially when it's Friday afternoon at 6pm.
 
 * **It's not fun**
 
 Of course there may be many more reasons; just take these as a small selection.
 
 
-#### Ok, I want to test. How do I do it?
+#### OK, I want to test. How do I do it?
 
 While there is value in doing manual testing in the REPL or by executing your application by hand, you really also want a suite of **automated tests**. Automated means in practice that you have written
-code that tests your application. You can run these tests and the result will tell you if and which tests have failed or passed. This makes your tests reproducable with minimum effort. You want to develop this test suite as you develop your application. If you test before your actual code or after is really up to you. There is one thing though that I want to point out. There is a general problem with tests, well a few of those but one is particulary important now: How do you make sure that your test code is correct? It doesn't make much sense to put trust in your code because
-of your shiny test-suite when the tests in there are incorrect. This means they pass but shouldn't or they don't pass but really should. While you could write tests for your tests, you may immediatly see that this is a recursive problem and might lead to endless tests testing tests testing tests ....
+code that tests your application. You can run these tests and the result will tell you which tests have passed and which have failed. This makes your tests reproducible with minimum effort. You want to develop this test suite as you develop your application. Whether you test before your actual code or after it is really up to you. There is one thing though that I want to point out. There is a general problem with tests, well a few of them, but one is particularly important now: How do you make sure that your test code is correct? It doesn't make much sense to put trust in your code because
+of your shiny test-suite when the tests in there are incorrect. This means they pass but shouldn't or they don't pass but really should. While you could write tests for your tests, you may immediately see that this is a recursive problem and might lead to endless tests testing tests testing tests ....
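 
 Just to make the idea concrete before we get to the test egg, an automated check can be as simple as a program that states an expectation and reports whether it held. A tiny sketch (the `plus` procedure is only a stand-in, not from this article):
 
 ~~~ clojure
 ;; a stand-in procedure we want to check
 (define (plus a b) (+ a b))
 
 ;; the "test": compare the actual result with the expected one and report it
 (if (equal? 4 (plus 2 2))
     (print "PASS: plus adds two numbers")
     (print "FAIL: plus adds two numbers"))
 ~~~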
 
-This is one reason why doing tests before code might be helpful. This discipline is called [TDD](https://en.wikipedia.org/wiki/Test-driven_development). It suggests a workflow that we refer to as **"Red-Green-Refactor"**. **Red** means that we start with a failing test. **Green** means that we implement as much of the application code, that is needed to make this test pass. **Refactor** is changing details of your code without effecting the overall functionality. I don't want to go into details but there is one aspect that is particulary useful. When we start with a **red test**, we at least
+This is one reason why doing tests before code might be helpful. This discipline is called [TDD](https://en.wikipedia.org/wiki/Test-driven_development). It suggests a work-flow that we refer to as **"Red-Green-Refactor"**. **Red** means that we start with a failing test. **Green** means that we implement just as much of the application code as is needed to make this test pass. **Refactor** means changing details of your code without affecting the overall functionality. I don't want to go into details but there is one aspect that is particularly useful. When we start with a **red test**, we at least
 have some good evidence that our tests test portions of our code that don't yet work as expected.
 We have some confidence that we're testing the right thing before we make the test pass.
 Contrast this with tests that you do after your code. You don't ever know if the tests would
 to make sure that the tests work correctly, so that they don't need a test-suite for a test-suite for a test-suite ....
 There are other aspects of TDD that I don't cover here, like responding to difficult tests by changing your application code instead of
 the tests. There is much more to it and I invite you to have a look at this methodology even if you don't apply it.
-Personally I do test before and I do test after and also while I develop application code. I try though to test first, if it's feasable.
+Personally I test before, I test after, and I also test while I develop application code. I try to test first though, if it's feasible.
 
 There are many best practices when it comes to testing. I cannot name and explain all of them here. One reason is that I certainly don't know them all and the other is that there are too many that are very well explained elsewhere.
 A few of them are essential though, and I have often seen people violating them, which made their tests brittle.
 you have three invariants that you can test for a given function, then you likely want three tests for them. The reason may not be
 obvious but it should become clear in a moment. There should be only one reason for your tests to fail. The next step after you notice a failing test is to find out
 what went wrong. If there are multiple possible reasons why the test has failed, because you verified three invariants in one test, you
-have to investigate all three paths. Having one test for each of the invariants makes this task trivial, you immediatly see what the
+have to investigate all three paths. Having one test for each of the invariants makes this task trivial, you immediately see what the
 culprit is. The other aspect is that it tends to keep your test code small, which means that you have less code to maintain and
-fewer places you can be wrong in one test. The attentive reade might have noticed that a consequence from this guideline is, that you
+fewer places you can be wrong in one test. The attentive reader might have noticed that a consequence of this guideline is that you
 have more tests. This is totally true so you want to make sure that they execute fast. A typical test suite of unit-tests often contains
 a rather large number of small tests.
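 
 To make this concrete, here is a small sketch (not from the original article) using the **test egg** that is introduced below: three invariants of the built-in `reverse`, each verified in its own test, so a failure points directly at the invariant that broke.
 
 ~~~ clojure
 (use test)
 
 ;; one test per invariant -- a failure immediately names the culprit
 (test "reversing the empty list yields the empty list"
       '()
       (reverse '()))
 
 (test "reversing a one-element list leaves it unchanged"
       '(one)
       (reverse '(one)))
 
 (test "reversing twice gives back the original list"
       '(1 2 3)
       (reverse (reverse (list 1 2 3))))
 ~~~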
 
 **«Keep your tests independent»**
 
 This just means that tests should be implemented in such a way that only the code inside the test you're looking at can make
-the test fail or pass. It must not depend on other tests. This is likely to accure when your code involves mutation of shared state.
+the test fail or pass. It must not depend on other tests. This is likely to occur when your code involves mutation of shared state.
 Suddenly you may find that your test only passes if you run the entire suite but fails if you run it in isolation. This is obviously a
 bad thing as it makes your tests unpredictable. One way to automatically detect these kinds of dependencies is to randomize the
 order in which tests are executed. This is useful as sometimes you're simply not aware of one test depending on another.
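 
 As an illustration, here is a deliberately bad sketch (not from the article): two tests coupled through a shared, mutated counter. The second test only passes when the first one has run before it, which is exactly the kind of hidden dependency a randomized order would expose.
 
 ~~~ clojure
 (use test)
 
 ;; shared mutable state couples the two tests below
 (define counter 0)
 (define (bump!)
   (set! counter (+ counter 1))
   counter)
 
 (test "bumping once yields 1" 1 (bump!))
 
 ;; this passes only because the previous test already bumped the counter;
 ;; run in isolation it would see 1, not 2
 (test "bumping again yields 2" 2 (bump!))
 ~~~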
 
 **«Keep your tests simple»**
-Naturally tests are a critical part of your system. They are the safety net. You don't want them to contain bugs. Keeping them simple also means that it is easier to make them correct. Secondly they are easier to comprehend. Testcode should state as clear as possible what it is supposed to do.
+Naturally tests are a critical part of your system. They are the safety net. You don't want them to contain bugs. Keeping them simple also means that it is easier to make them correct. Secondly they are easier to comprehend. Test code should state as clearly as possible what it is supposed to do.
 
 
 **«Keep your tests fast»**
-This turns out to be a crucial feature of your test suite as well. If your tests are slow they will disrupt your workflow. Ideally testing and writing code is smothely intertwined. You test a little, then you code a little, then you repeat. If you have to wait for a long time for your tests to finish there will be some point where you don't run them regulary anymore. Of course you can trim down your test-suite to just the tests that are currently important, but after you've finished the implementation of a particular procedure you will likely want to run the entire suite.
+This turns out to be a crucial feature of your test suite as well. If your tests are slow they will disrupt your work-flow. Ideally testing and writing code is smoothly intertwined. You test a little, then you code a little, then you repeat. If you have to wait for a long time for your tests to finish there will be some point where you don't run them regularly anymore. Of course you can trim down your test-suite to just the tests that are currently important, but after you've finished the implementation of a particular procedure you will likely want to run the entire suite.
 
 These are all just general guidelines that apply to unit-tests. There are specific dos and don'ts that apply to other kinds
 of tests that I don't want to cover here. I hope this little introduction gave you enough information to go on with the rest of the article and you now have a firm grasp of what I'm going to be talking about.
 
 ### Putting the test egg to work
 
-You're still here and not bored away by the little introduction. Very good since this is finally where the fun starts and we will be seeing actual code. CHICKEN is actually a good environment to do testing. Almost every egg is covered by unit tests and within the community there seems to be a general agreement that tests are useful. Additionally tests for CHICKEN extensions are encouraged particulary. We have a great continous integration (CI) setup, that will automatically run the unit tests of your eggs, even on different platforms and CHICKENS. You can find more information on [tests.call-cc.org](http://tests.call-cc.org/). I'll tell you a little more about this later. For now just be assured that you're in good company.
+You're still here and weren't bored away by the little introduction? Very good, since this is finally where the fun starts and we will be seeing actual code. CHICKEN is actually a good environment to do testing. Almost every egg is covered by unit tests and within the community there seems to be a general agreement that tests are useful. Tests for CHICKEN extensions are particularly encouraged. We have a great continuous integration (CI) setup that will automatically run the unit tests of your eggs, even on different platforms and CHICKENs. You can find more information on [tests.call-cc.org](http://tests.call-cc.org/). I'll tell you a little more about this later. For now just be assured that you're in good company.
 
-Let's continue our little journey now. We'll be implementing the well known [stack](https://en.wikipedia.org/wiki/Stack_(abstract_data_type) and build a suite of unit tests for it. This is a fairly simple task and allows us to concentrate on the tests. You can find all the code that is sused here at:
+Let's continue our little journey now. We'll be implementing the well-known [stack](https://en.wikipedia.org/wiki/Stack_(abstract_data_type)) and building a suite of unit tests for it. This is a fairly simple task and allows us to concentrate on the tests. You can find all the code that is used here at:
 
 #### Prerequisites
 
 
 This is the standard layout of a scheme project for CHICKEN. There are projects that have additional folders
 and structure their files differently but the majority of projects look like this, so it is a good practice
-to follow it. You may noticed that this is also the standard layout of CHICKEN eggs. They contain egg specific files like *.release-info, *.meta and *.setup but appart from that, they look very much like this. Another reason to arrange your tests the way I showed you is that CHICKEN's CI at [salmonella](https://tests.call-cc.org) expects this layout. You can benefit from this service once you follow this convention. It's time to give **mario** and **peter** a big thank you, as they made it possible.
+to follow it. You may have noticed that this is also the standard layout of CHICKEN eggs. They contain egg specific files like *.release-info, *.meta and *.setup but apart from that, they look very much like this. Another reason to arrange your tests the way I showed you is that CHICKEN's CI at [salmonella](https://tests.call-cc.org) expects this layout. You can benefit from this service once you follow this convention. It's time to give **mario** and **peter** a big thank you, as they made it possible.
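 
 For reference, the layout for the stack example would look roughly like this (the egg-specific files are only needed for published eggs):
 
 <pre>
 stack/
 |-- stack.scm            ; the implementation
 |-- stack.meta           ; egg metadata (eggs only)
 |-- stack.setup          ; build/install instructions (eggs only)
 |-- stack.release-info   ; release information (eggs only)
 `-- tests/
     `-- run.scm          ; the test suite, run with csi -s run.scm
 </pre>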
 
 #### Basic layout of the test file
 
  **(test description expected expression)**
 
 It takes multiple arguments. The first argument is a description string that gives a hint about
-what this particalar test attempts to verify. The next argument is the **expected value**. It can be any scheme value. The last argument is the scheme expression that shall be tested. It will be evaluated and compared with the **expected value**.
+what this particular test attempts to verify. The next argument is the **expected value**. It can be any scheme value. The last argument is the scheme expression that shall be tested. It will be evaluated and compared with the **expected value**.
 This is actually the long form. You can get by with the shorter form that omits the description string like so:
 
 ~~~ clojure
 </pre>
 
 
-Ok, going back to the example above. I've added a little test that attempts to verify
+OK, going back to the example above. I've added a little test that attempts to verify
 that a stack that has been created with make-stack is initially empty.
 Let's run the tests now. You can do this by changing into the tests directory and running
 the file with csi.
 -- done testing stack --------------------------------------------------------
 </pre>
 
-This looks better. You can see that all tests we've written are now passing, as indicated by the green PASS on the right side. We've written anough code to make the tests pass, but it's easy to
-see that these tests are lieing. stack-empty? always returns #t regardless of the content of a given stack. So le's add a test that verifies that a non-empty stack is indeed not empty. Our make-stack procedure allows us to specify initial elements of the stack so we have all we need to create our tests.
+This looks better. You can see that all tests we've written are now passing, as indicated by the green PASS on the right side. We've written enough code to make the tests pass, but it's easy to
+see that these tests are lying. stack-empty? always returns #t regardless of the content of a given stack. So let's add a test that verifies that a non-empty stack is indeed not empty. Our make-stack procedure allows us to specify initial elements of the stack so we have all we need to create our tests.
 
 ~~~ clojure
 (use test)
 -- done testing stack --------------------------------------------------------
 </pre>
 
-The output tells us that one of our tests has passed and one has failed. The red FAIL indicates that an assertion didn't hold. I this case stack-empty? returned #t for the non-empty stack. This is expected as stack-empty? doesn't do anything useful yet. This is the last possible result-type of a test. Contrast a FAIL with ERROR please. ERROR indicates that a condition has been signaled wheras FAIL indicates that an assertion did not hold.
+The output tells us that one of our tests has passed and one has failed. The red FAIL indicates that an assertion didn't hold. In this case stack-empty? returned #t for the non-empty stack. This is expected as stack-empty? doesn't do anything useful yet. This is the last possible result-type of a test. Please contrast FAIL with ERROR: ERROR indicates that a condition has been signaled whereas FAIL indicates that an assertion did not hold.
 Let's quickly fix this and make all tests pass. stack.scm now looks like this:
 
 ~~~clojure
 -- done testing stack --------------------------------------------------------
 </pre>
 
-Look how groups are nicely formatted and seperate your test output into focused chunks that
+Look how groups are nicely formatted and separate your test output into focused chunks that
 deal with one aspect of your API. Of course we see an ERROR indicating a condition as we didn't
 yet implement the **stack-push!** procedure. Let's fix this now.
 
       'one
       (let ((stack (make-stack 'one)))
         (stack-top stack)))
-   (test "returns thet top-most element"
+   (test "returns the top-most element"
       'two
       (let ((stack (make-stack 'one 'two)))
         (stack-top stack))))
   (car (stack-elements stack)))
 ~~~
 
-These tests all pass sofar and we've added a few more tests for the stack-top API.
+These tests all pass so far and we've added a few more tests for the stack-top API.
 Let's take a closer look at that procedure. It behaves well when the stack is non-empty, but what should happen if the stack is empty? Let's just signal a condition that indicates that taking
 the top item of an empty stack is an error. The test egg gives us another form that allows
 us to assert that a condition has been signaled. Let's see what this looks like.
      'one
      (let ((stack (make-stack 'one)))
        (stack-top stack)))
-  (test "returns thet top-most element"
+  (test "returns the top-most element"
      'two
      (let ((stack (make-stack 'one 'two)))
        (stack-top stack)))
 TEST_FILTER="empty stack is an error" csi -s run.scm
 </pre>
 
-This will only run the tests which include the given text their description. This can actually be a regular expression, so it is much more versatile than it appears now. There is also the variable TEST_GROUP_FILTER which allows you to run only test-groups that match the filter. Howver in the current implementation of tests it seems not to be possible to filter groups within other groups. So setting TEST_GROUP_FILTER="stack-top" doesn't currently work. It will not run any tests since the filter doesn't match the surrounding group "stack". It would be a nice addition though.
+This will only run the tests which include the given text in their description. This can actually be a regular expression, so it is much more versatile than it appears now. There is also the variable TEST_GROUP_FILTER which allows you to run only test-groups that match the filter. However in the current implementation of the test egg it seems not to be possible to filter groups within other groups. So setting TEST_GROUP_FILTER="stack-top" doesn't currently work. It will not run any tests since the filter doesn't match the surrounding group "stack". It would be a nice addition though.
 
 The output with the filter expression looks like this:
 
 </pre>
 
 **Please pay close attention to the output.** The test passes!
-How can that be? We didn't even implement the part of the code yet that signals an error in the case of an empty stack. This is a good example why it is good to write your tests first. If we had written the code after we would've never noticed that the tests succeed even without the proper implementation, which pretty much renders this particular test useless. It does more harm than good because it lies to you. This test passes because it is an error to take the **car** of the empty list. Obviously just checking that an error accured is not enough. We should verify that a particular error has accured. The test library doesn't provide a procdure or macro that does this so we have to come up with our own. We need a way to tell if and which condition has been signaled in a given expression. For this purpose I'll add a little helper to the very top
+How can that be? We didn't even implement the part of the code yet that signals an error in the case of an empty stack. This is a good example of why it is good to write your tests first. If we had written the test after the code, we would never have noticed that it succeeds even without the proper implementation, which pretty much renders this particular test useless. It does more harm than good because it lies to you. This test passes because it is an error to take the **car** of the empty list. Obviously just checking that an error occurred is not enough. We should verify that a particular error has occurred. The test library doesn't provide a procedure or macro that does this so we have to come up with our own. We need a way to tell if and which condition has been signaled in a given expression. For this purpose I'll add a little helper to the very top
 of the test file and update the tests to use that little helper.
 
 ~~~ clojure
 
 
 #### current-test-applier
-This is a parameter that allows you to hook into the testing machinery. The test applier is a procedure that receives the expected value and the code that produces the actual value as arguments (along with some other arguments) and is expected to run the verification and return a result that is understood by the **current-test-handler**. The cases in which you need this are possibley rare but be assured that they exist. For the details of that api please have a look at test's code.
+This is a parameter that allows you to hook into the testing machinery. The test applier is a procedure that receives the expected value and the code that produces the actual value as arguments (along with some other arguments) and is expected to run the verification and return a result that is understood by the **current-test-handler**. The cases in which you need this are possibly rare but be assured that they exist. For the details of that API please have a look at test's code.
 
 #### current-test-handler
 This procedure receives the result of the application of **current-test-applier** to its arguments. It is responsible for the reporting in the default implementation of test. This is the place where the result is written to the standard output. This is actually quite a useful thing.
-You might consider a hypthetical case where you want to inform some GUI, for example Emacs,
+You might consider a hypothetical case where you want to inform some GUI, for example Emacs,
 about the results of your tests. You can easily do this with this hook; just add a custom handler that does it. One thing that I was thinking about adding was a little extension that
-would allow to plug in multiple listeners. It would still call the original test-handler but also add a little API to add listeners to the PASS,FAIL and ERROR event. All registered listeneres would be invoked in order. I did not yet implement it but it's pretty straight forward.
+would allow you to plug in multiple listeners. It would still call the original test-handler but also add a little API for registering listeners for the PASS, FAIL and ERROR events. All registered listeners would be invoked in order. I did not yet implement it but it's pretty straightforward.
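 
 A rough sketch of that idea follows. It assumes only that **current-test-handler** is a parameter holding a procedure; since its exact argument list isn't spelled out here, the wrapper simply forwards whatever it receives. The listener registry is hypothetical, not part of the test egg:
 
 ~~~ clojure
 (use test)
 
 ;; keep the original handler around so the default reporting still happens
 (define original-handler (current-test-handler))
 
 ;; hypothetical listener registry -- not part of the test egg
 (define listeners '())
 (define (add-test-listener! proc)
   (set! listeners (cons proc listeners)))
 
 ;; install a wrapper that notifies all listeners and then delegates
 (current-test-handler
  (lambda args
    (for-each (lambda (listener) (apply listener args)) listeners)
    (apply original-handler args)))
 ~~~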
 
 #### current-test-filter
 We've seen a version of this already. It's a list of predicates that are invoked with each test
 is used in the default implementation is TEST_GROUP_FILTER.
 
 #### current-test-skipper
-This is related to filtering. It specifies a list of predicates that determine whiche tests
+This is related to filtering. It specifies a list of predicates that determine which tests
 should **not** be run. The default implementation of these read the TEST_REMOVE environment variable.
 
 #### current-test-verbosity
 This controls the verbosity of the tests. If it is set to #t it prints full diagnostic output.
 If it is set to #f however it will only print a "." for passed and an "x" for failed tests.
-This is useful when you have many tests and are only interested in the overall outcome and not the details. This parameter can also be controled with the TEST_QUIET environment variable.
+This is useful when you have many tests and are only interested in the overall outcome and not the details. This parameter can also be controlled with the TEST_QUIET environment variable.
 
 #### colorful output
-By default the test egg will try to determine if your terminal supports colors and will use them in case it does. You can however explicitely turn colors on and off with the TEST_USE_ANSI environment variable. Set it to 0 to disable colors and use 1 in order to enable colors.
+By default the test egg will try to determine if your terminal supports colors and will use them in case it does. You can however explicitly turn colors on and off with the TEST_USE_ANSI environment variable. Set it to 0 to disable colors and use 1 in order to enable colors.
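 
 For example, to force plain output when redirecting the results into a file, you could run the suite like this:
 
 <pre>
 TEST_USE_ANSI=0 csi -s run.scm
 </pre>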
 
 
 ### Best practices when using the test egg
 
 #### enclose your tests in test-begin and test-end
 
-This is especially useful for bigger test-suites that easily fill more than one page on your screen. If you don't enclose your tests this way you risk to miss failing tests as they flit accross the screen unnoticed. You could also use TEST_QUIET=1 as you know by now, but that won't give you nice statistics
+This is especially useful for bigger test-suites that easily fill more than one page on your screen. If you don't enclose your tests this way you risk missing failing tests as they flit across the screen unnoticed. You could also use TEST_QUIET=1 as you know by now, but that won't give you nice statistics.
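 
 A minimal sketch of what that looks like, using the stack suite from this article as the group name:
 
 ~~~ clojure
 (use test)
 
 (test-begin "stack")
 
 ;; ... all the test-groups and tests shown earlier go here ...
 
 (test-end "stack")
 ~~~
 
 With the suite enclosed like this you get the summary statistics when the block ends, so a failure can't scroll by unnoticed.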
 
 #### use test-exit
 
-Salmonella (CHICKEN's CI worker bee) will run your tests and check the exit code to determine whether they passed or failed. If you don't add this line you will leave no clue and the poor salmonell may report passing tests when there really something is badly broken. Also some other tools like the chicken-test-mode, which I will introduce later, determines the status of your tests this way. Apart from that it's good practice in a unixy environment.
+Salmonella (CHICKEN's CI worker bee) will run your tests and check the exit code to determine whether they passed or failed. If you don't add this line you will leave no clue and the poor salmonella may report passing tests when something is really badly broken. Also some other tools, like the chicken-test-mode which I will introduce later, determine the status of your tests this way. Apart from that it's good practice in a UNIXy environment.
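 
 A one-line sketch; it goes at the very bottom of tests/run.scm, after the final test-end:
 
 ~~~ clojure
 ;; make the process exit code reflect the overall test result,
 ;; so salmonella and other tools can pick it up
 (test-exit)
 ~~~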
 
 #### use (use) to load code of your egg
 
 
 #### use test filters and skippers to focus
 
-This one is just one thing I do regulary. If you don't want to follow it it's perfectly fine.
+This one is just one thing I do regularly. If you don't want to follow it, that's perfectly fine.
 In order to run the tests that I'm currently working on and nothing else I put the string WIP into their description.
 
 ~~~clojure
-(test "WIP: this is the test i'm working on" #t #t)
+(test "WIP: this is the test I'm working on" #t #t)
 ~~~
 
 Then I run the tests like so:
 
 ### Random testing with test-generative
 
-What we have done sofar was thinking about which properties of our code we want to test and then
-creating inputs and validitions than encode these properties. This is the somewhat classic approach that works really well and should be the foundation of your test suite. However there is another way to do your testing. It involves thinking about invariants of your procedures. Invariants are properties of your code that are always true. For example we can assert that for every non-empty list, taking the cdr of that list produces a list that is smaller than the original list.
+What we have done so far was thinking about which properties of our code we want to test and then
+creating inputs and validations that encode these properties. This is the somewhat classic approach that works really well and should be the foundation of your test suite. However there is another way to do your testing. It involves thinking about invariants of your procedures. Invariants are properties of your code that are always true. For example we can assert that for every non-empty list, taking the cdr of that list produces a list that is smaller than the original list.
 
 ~~~clojure
 (let ((ls (list 1 2 3)))
 -- done testing fast-mul -----------------------------------------------------
 </pre>
 
-Ohoh, as you can see our optimization isn't actually valid for flonums. The flonum generator
+Uh oh, as you can see our optimization isn't actually valid for flonums. The flonum generator
 also generated +nan.0 which is a special flonum that doesn't produce 0 when it is multiplied with 0. IEEE requires NaN to be propagated. In fact this optimization is only valid for fixnums. Thanks to our automated tests we found out about that case and will refuse to try to be smarter than core.
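 
 You can see the propagation directly in the REPL with a single expression:
 
 ~~~ clojure
 (* 0.0 +nan.0)   ; => +nan.0, not 0.0 -- NaN contaminates the product
 ~~~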
 
 There are more applications to these kinds of tests. They often serve as a good basis for a thorough test-suite. They're easy to build and quite reliable.
 
-### Integrating tests into your Emacs workflow
+### Integrating tests into your Emacs work-flow
 
-You're still here? That's good. We're half way through already! I'm just kidding. This is the last section, in which I want to tell you about some ideas that allow you to integrate testing into your development workflow using our great Emacs editor.
+You're still here? That's good. We're halfway through already! I'm just kidding. This is the last section, in which I want to tell you about some ideas that allow you to integrate testing into your development work-flow using our great Emacs editor.
 If you don't use Emacs, you won't gain much from this paragraph. In that case I'd like you
 to go on with the [Wrap up](#wrap-up).
 
 Having all the great tools to do your testing is valuable but you also want to have a way
-to integrate testing into your workflow. In particular you might want to be able to run
+to integrate testing into your work-flow. In particular you might want to be able to run
 your test-suite from within Emacs and work on the test results. I created a little extension
 for Emacs that aims to provide such an integration. It is currently work in progress but I use
-it regularily already. You can find **chicken-test-mode** [here](https://bitbucket.org/certainty/chicken-test-mode/overview).
+it regularly already. You can find **chicken-test-mode** [here](https://bitbucket.org/certainty/chicken-test-mode/overview).
 
 #### What does it give you?
 
 This mode is roughly divided into two parts. One part gives you functions that allow you to run your test-suite. The other part deals with navigation within your test-output. Let us dive in and put the mode into practice. Suppose you have installed the mode according to the little help text that is in the header of the mode's source file.
-We further assume that we're working on the stack example from the beginning of this article. I have opened up a buffer that holds the scheme implemenantion file of the stack. We need to adjust the test file to load the implementation differently.
+We further assume that we're working on the stack example from the beginning of this article. I have opened up a buffer that holds the scheme implementation file of the stack. We need to adjust the test file to load the implementation differently.
 
 ~~~clojure
  (use test)
  ; .... tests follow
 ~~~
 
-This enables us to run the tests from within emacs without problems.
+This enables us to run the tests from within Emacs without problems.
 
 #### Running tests within Emacs
 
-With these definitions in place I can now issue the command **C-c t t** which will run the tests, open up the CHICKE-test buffer and put the output of your tests there. In my setup it looks like this:
+With these definitions in place I can now issue the command **C-c t t** which will run the tests, open up the CHICKEN-test buffer and put the output of your tests there. In my setup it looks like this:
 
 <a href="/assets/images/posts/testing_chicken/run-tests.png">
   <img src="/assets/images/posts/testing_chicken/run-tests_thumb.png">
 </a>
 
-You can click on the image to load it fullsize. You see two buffers opened now. The buffer
+You can click on the image to load it full-size. You see two buffers opened now. The buffer
 on the left side holds the application code and the buffer on the right side holds the output
-of the tests. What you can not see here is that there will be a minibuffer message telling you
+of the tests. What you can not see here is that there will be a mini-buffer message telling you
 whether the tests have all passed or if there were failures.
 
 #### Navigating the test output
   <img src="/assets/images/posts/testing_chicken/run-tests-w-failures_thumb.png">
 </a>
 
-The first failing tests has been selected and the line it accures in has been highlighted.
+The first failing test has been selected and the line it occurs in has been highlighted.
 You can jump straight to the next failed test by hitting **n** in the test buffer.
 Likewise you can hit **p** to jump to the **p**revious failing test. Lastly you can hit
 **l** to jump to the last failing test.