
### Bells and whistles of the test egg

The test egg is very configurable. It gives you a knob for almost every aspect of its behavior. I often found myself wanting a feature from test, only to realize that it was already there. Test's author **Alex Shinn** did a very good job.

There are a few parameters that you want to be aware of.
 
#### current-test-epsilon

This is used for comparison of flonums. As you may know, it's not a good idea to use exact comparison on inexact numbers. The test egg uses a sensible default for this parameter, but you may want to set your own if you really need it.
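
For instance, you could loosen the tolerance for a group of flonum tests like this (a small sketch):

~~~clojure
(use test)

;; allow compared flonums to differ by up to 1e-3
(parameterize ((current-test-epsilon 1e-3))
  (test "close enough" 3.14159 3.1416))
~~~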
 
#### current-test-comparator

This allows you to specify the procedure that is used to compare the expected value to the actual value. It defaults to **equal?**.
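
For example, you might compare strings case-insensitively (again, just a sketch):

~~~clojure
(use test)

;; compare expected and actual values with string-ci=? instead of equal?
(parameterize ((current-test-comparator string-ci=?))
  (test "case does not matter" "HELLO" "hello"))
~~~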
 
 
#### current-test-applier

This is a parameter that allows you to hook into the testing machinery. The test applier is a procedure that receives the expected value and the code that produces the actual value as arguments (along with some other data) and is expected to run the verification and return a result that is understood by the **current-test-handler**. The cases in which you need to use this parameter are probably rare, but be assured that they exist. For the details of that API please have a look at test's code.
 
#### current-test-handler

This procedure receives the result of the application of **current-test-applier** to its arguments. In the default implementation of test it is responsible for the reporting; this is the place where the test results are written to standard output. That makes it quite a useful hook.
Consider a hypothetical case where you want to inform some GUI, for example Emacs, about the results of your tests. You can easily do this: just install a custom handler that does the notification.
One thing I was thinking about adding is a little extension that would allow you to plug in multiple listeners. It would still call the original test-handler but would also notify listeners when tests PASS, FAIL or ERROR. All registered listeners would be invoked in order. I have not implemented it yet, but it's pretty straightforward.
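
A minimal sketch of that idea could look as follows. The exact arguments the handler receives are part of test's internals, so they are simply forwarded opaquely here:

~~~clojure
(use test)

(define listeners '())

(define (add-test-listener! proc)
  (set! listeners (cons proc listeners)))

;; wrap the original handler: keep the default reporting, then
;; fan the result out to all registered listeners in order
(let ((original (current-test-handler)))
  (current-test-handler
   (lambda args
     (apply original args)
     (for-each (lambda (listener) (apply listener args))
               (reverse listeners)))))
~~~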
 
#### current-test-filter

We've seen a version of this already. It's a list of predicates that are invoked for each test; only the tests for which they produce #t will be run. This defaults to an implementation that retrieves filters from the **TEST_FILTER** environment variable.
 
#### current-test-group-filter

This is the same as above but does the filtering on test-groups. The environment variable that is used in the default implementation is **TEST_GROUP_FILTER**.
 
#### current-test-skipper

This is related to filtering. It specifies a list of predicates that determine which tests should **not** be run. The default implementation reads the **TEST_REMOVE** environment variable.
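
Since all three of these parameters have environment-variable counterparts, you can combine them on the command line, for example:

<pre>
TEST_FILTER="WIP" TEST_GROUP_FILTER="stack" TEST_REMOVE="SLOW" csi -s run.scm
</pre>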
 
#### current-test-verbosity

This controls the verbosity of the tests. If it is set to #t, full diagnostic output is printed. If it is set to #f, however, only a "." is printed for each passed test and an "x" for each failed one. This is useful when you have many tests and are only interested in the overall outcome rather than the details. This parameter can also be controlled with the **TEST_QUIET** environment variable.
 
#### colorful output

By default the test egg will try to determine whether your terminal supports colors and will use them if it does. You can, however, explicitly turn colors on and off with the **TEST_USE_ANSI** environment variable. Set it to 0 to disable colors and to 1 to enable them.
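
For example, to get the quiet dot-output without colors for a single run:

<pre>
TEST_QUIET=1 TEST_USE_ANSI=0 csi -s run.scm
</pre>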
 
### Best practices when using the test egg

#### put your tests into tests/run.scm

It is common practice to put the tests into "tests/run.scm" relative to your project's root. Especially for eggs this is a good convention to follow, since CHICKEN's CI will expect your tests to be exactly there. It will run your tests automatically and report the status of your egg back at [tests.call-cc.org](https://tests.call-cc.org).
 
#### enclose your tests in test-begin and test-end

This is especially useful for bigger test-suites that easily fill more than one page of your screen. If you don't enclose your tests this way, you risk missing failing tests as they flit across the screen unnoticed. You could also use TEST_QUIET=1, as you know by now, but that won't give you nice statistics.
 
#### use test-exit

Salmonella (CHICKEN's CI worker bee) will run your tests and check the exit code to determine whether they passed or failed. If you don't add this line you leave no clue, and the poor salmonella may report passing tests when something is really badly broken. Some other tools, like the chicken-test-mode that I will introduce later, also determine the status of your tests this way. Apart from that, it's good practice in a UNIXy environment.
 
#### use (use) to load the code of your egg

If you're testing an egg you should use **(use)** to load your code. Again this is due to the way salmonella works. It will install your egg before it executes your test code, so you're safe to just **(use)** it. As an aside: salmonella will also change into your tests directory before it runs your tests.
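
Putting these practices together, a minimal tests/run.scm could look like this. **my-egg** is a placeholder for the name of your egg:

~~~clojure
(use test)
(use my-egg)   ; my-egg stands in for the egg under test

(test-begin "my-egg")

(test "something trivial" 4 (+ 2 2))

(test-end "my-egg")

(test-exit)
~~~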
 
 
#### use test filters and skippers to focus

This is just one thing I do regularly; if you don't want to follow it, that's perfectly fine.
In order to run only the tests that I'm currently working on, I put the string WIP into their description.

~~~clojure
(test "WIP: this is the test I'm working on" #t #t)
~~~

Then I run the tests like so:

<pre>
TEST_FILTER="WIP" csi -s run.scm
</pre>

This is an easy way to do it and it has worked out pretty well.
You could use other indicators that allow filtering. For example you could mark slow tests with SLOW, or tests that use an external API with NEEDS_API_FOO.
 
 
So far we have tested by thinking of the properties of our code, then creating inputs and validations that encode these properties. This is the somewhat classic approach that works really well and should be the foundation of your test suite. However, there is another way to do your testing. It involves thinking about invariants of your procedures. Invariants are properties of your code that are always true. For example, we can assert that for every non-empty list, taking the cdr of that list produces a list that is smaller than the original list.
 
~~~clojure
(let ((ls (list 1 2 3)))
  (test-assert "taking the cdr produces a smaller list"
    (< (length (cdr ls)) (length ls))))
~~~
 
The **test-assert** form makes invariants explicit. Once you have your invariants you can feed data to your procedures and run them to see if the invariants hold. Thinking of data that can be fed into procedures can be a tedious task. Wouldn't it be nice to have a way to generate the data and just concentrate on your invariants? There is a little library, [test-generative](https://wiki.call-cc.org/eggref/4/test-generative), that allows you to do exactly this. It extends the test egg so that you can use generated data in order to find an invocation that violates some invariant. This style of testing is quite common in the Haskell world. The most famous implementation of this approach is the [QuickCheck library](http://hackage.haskell.org/package/QuickCheck).
 
#### Eliminating the programmer

It is sometimes good to let computers generate the data for our tests. This is simply because we, as the designers of our API, are much more likely to think within the constraints of the library. It's harder for us to come up with cases where it would break. I imagine you have experienced this many times with your own code: you seem to have thought of every possible input that could break your code, but as soon as someone else uses your procedure he or she finds a way to pass data that reveals a misbehavior.
 
 
#### Random testing in practice

Let me show you what testing with test-generative looks like. Suppose you have the following test file.
 
~~~clojure
(use test test-generative)

(test-begin "random-testing")

(test-generative ((number (lambda () (random 10000))))
  (test-assert (negative? (* -1 number))))

(test-end "random-testing")

(test-exit)
~~~
 
You know the basic skeleton of a test file and the test-assert form by now, so let's concentrate on the new part. There is a **test-generative** form that binds a random number between 0 and 9999 to the variable **number** and runs one assertion with it.
 
The general definition of test-generative is:

**(test-generative (bindings ...) test-code ...)**

It looks very much like a **let** and in fact that's on purpose. Bindings declare variable names that should be bound to the generated values. The right-hand side of a binding expression must be a thunk. The value of this thunk is bound to the variable for exactly one iteration. What is an iteration? Well, test-generative will run the tests it encloses not only once but many times. Each run is called an iteration. The actual number of iterations can be configured using the **current-test-generative-iterations** parameter. It defaults to 100, which means that your test code will be exercised 100 times with 100 possibly different values for the given variables.
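
For example, you could crank up the number of iterations for a particularly important invariant; a small sketch using parameterize:

~~~clojure
(use test test-generative)

;; run the enclosed tests 1000 times instead of the default 100
(parameterize ((current-test-generative-iterations 1000))
  (test-generative ((number (lambda () (random 10000))))
    (test-assert (< number 10000))))
~~~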
 
Our example test verifies one invariant. It states that for every number in the given range, multiplying it by -1 results in a negative number. Let's see what happens:
 
<pre>
-- testing random-testing ----------------------------------------------------
-- done testing random-testing -----------------------------------------------
</pre>

It seems as if test-generative has proven us wrong. Indeed, not every number multiplied by -1 results in a negative number. The additional data that is printed for every failing test now contains two more keys.
 
* **iteration:**
This is the iteration in which the test failed. In the example above it took 43 tries to find a falsification.

* **seeds:**
These are the variables and the values they were bound to when the test failed. In our example this is the variable **number** and it was bound to **0**.
 
Zero is a number that is not negative when it is multiplied by -1. Let's fix the assertion to match reality.
 
~~~clojure
(use test test-generative)

(test-begin "random-testing")

(test-generative ((number (lambda () (random 10000))))
  (let ((number* (* -1 number)))
    (test-assert (or (zero? number*) (negative? number*)))))

(test-end "random-testing")

(test-exit)
~~~
 
Now we're asserting that every number within the range multiplied by -1 is either 0 or negative. Let's see the output:
 
<pre>
-- testing random-testing ----------------------------------------------------
-- done testing random-testing -----------------------------------------------
</pre>

That looks very good. All tests are green. You may notice that you only get one output per test and not 100. The tests are invoked multiple times but you will only ever see a report once.
 
You may notice that having to come up with generator procedures for every kind of data you need can quickly become messy, and you will probably repeat yourself a lot across your test files.
As it turns out, there already is a library that gives you generators for various Scheme types.
It's called [data-generators](https://wiki.call-cc.org/eggref/4/data-generators) and the generators it provides are compatible with the test-generative interface. The tests above could be rewritten using data-generators as follows:
 
~~~clojure
(use test test-generative data-generators)

(test-begin "random-testing")

(test-generative ((number (gen-uint32)))
  (let ((new-number (* -1 number)))
    (test-assert (or (zero? new-number) (negative? new-number)))))

(test-end "random-testing")

(test-exit)
~~~
 
This simply generates a random non-negative 32-bit integer in each iteration, using the **gen-uint32** generator.
 
#### purity

You may notice that this kind of testing imposes some restrictions on your code. As the tests are executed multiple times, you want to avoid testing procedures with side effects in this style. As a rule of thumb, you should only ever test pure code with test-generative.
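
To see why, consider this little sketch: the side effect accumulates across iterations, so the assertion only holds the first time around.

~~~clojure
(use test test-generative data-generators)

(define counter 0)

;; BAD: each iteration increments the shared counter, so every
;; iteration after the first one sees a different precondition
(test-generative ((n (gen-uint32)))
  (set! counter (add1 counter))
  (test-assert (= counter 1)))
~~~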
 
#### Model based testing

Model based testing is a very nice approach. The idea is simple: a procedure is validated against a model, meaning that for every input the results of the procedure under test and the model are expected to be equal. Often you have a procedure that is correct but slow. Then you can use the slow model to verify the behavior of your faster versions. Let's take the following example:
 
~~~clojure
(use srfi-13)

(define (palindrome? input)
  (string=? input (string-reverse input)))
~~~
 
This is the definition of a procedure that checks whether a given string is a palindrome. It simply verifies that the reverse of the string equals the string itself. That is an almost literal translation of the definition of a palindrome. It's easy to see that it is "obviously" correct, so it's a good candidate for a model procedure. Let's first test it against some palindromes. The data-generators egg doesn't give us a palindrome generator, but it provides all the primitives needed to build one.
 
~~~clojure
(use test test-generative data-generators srfi-13)

(define (palindrome? input)
  (string=? input (string-reverse input)))

(define (gen-palindrome)
  (gen-transform
   (lambda (str) (string-append str (string-reverse str)))
   (gen-string-of (gen-char (range #\a #\z)))))

(test-begin "palindrome")

(test-generative ((str (gen-palindrome)))
  (test-assert (palindrome? str)))

(test-end "palindrome")

(test-exit)
~~~
 
This shows how to build a custom generator that produces palindromes for us. It does so by simply generating a string and appending the reverse of that string to it.
With these definitions in place, we can codify the invariant that our faster algorithm should behave like our model.
 
~~~clojure
(define (fast-palindrome? input)
  (cond
   ((string-null? input) #t)
   (else
    (do ((i 0 (add1 i))
         (j (sub1 (string-length input)) (sub1 j)))
        ((or (not (char=? (string-ref input i) (string-ref input j)))
             (>= i j))
         (<= j i))))))

(test-generative ((str (gen-sample-of (gen-string-of (gen-char)) (gen-palindrome))))
  (test-assert (eq? (palindrome? str) (fast-palindrome? str))))
~~~
 
We've added a faster version of palindrome? along with a test that expresses the invariant. Note that the generator now produces not only palindromes but also random strings that are most likely not palindromes. For all these inputs we want fast-palindrome? to deliver the same result as palindrome?, and the output indeed shows that it does. I'll leave it out though, as it shows nothing new.
 
Often you will test against a procedure that has already been defined by someone else and that you put great trust in. For example, let's suppose we want to write a faster multiplication procedure that attempts to optimize by adding a fast path in case one of the arguments is 0.
 
~~~clojure
(define (fast-* x y)
  ;; hypothetical reconstruction of the elided definition: short-circuit
  ;; to 0 when either argument is 0, otherwise defer to the built-in *
  (if (or (zero? x) (zero? y)) 0 (* x y)))
~~~

<pre>
-- testing fast-mul ----------------------------------------------------------
-- done testing fast-mul -----------------------------------------------------
</pre>
 
Uh oh, as you can see our optimization isn't actually valid for flonums. The flonum generator also generated +nan.0, which is a special flonum that doesn't produce 0 when it is multiplied by 0. IEEE requires NaN to be propagated. In fact this optimization is only valid for fixnums. Thanks to our automated tests we found out about that case and will refuse to try to be smarter than core.
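
The driving test might have looked roughly like the following sketch; **gen-flonum** is assumed here as the name of data-generators' flonum generator:

~~~clojure
(use test test-generative data-generators)

(test-begin "fast-mul")

;; the invariant: fast-* must agree with the trusted core *
(test-generative ((x (gen-flonum)) (y (gen-flonum)))
  (test (* x y) (fast-* x y)))

(test-end "fast-mul")

(test-exit)
~~~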
 
There are more applications for these kinds of tests. They often serve as a good basis for a thorough test-suite, since they're easy to build and quite reliable.
 
### Integrating tests into your Emacs work-flow

You're still here? That's good. We're half way through already! I'm just kidding. This is the last section, in which I want to tell you about some ideas that allow you to integrate testing into your development work-flow using our great Emacs editor.
If you don't use Emacs, you won't gain much from this section. In that case I'd like you to go on with the [Wrap up](#wrap-up).
 
Having all these great tools for testing is valuable, but you also want a way to integrate testing into your development work-flow. In particular you might want to be able to run your test-suite from within Emacs and work on the test results. I created a little extension for Emacs that aims to provide such an integration. It is currently a work in progress, but I already use it regularly. You can find **chicken-test-mode** [here](https://bitbucket.org/certainty/chicken-test-mode/overview).
 
#### What does it give you?

This mode is roughly divided into two parts. One part gives you functions that allow you to run your test-suite. The other part deals with navigation within the test output. Let's dive in and put the mode into practice. Suppose you have installed the mode according to the little help text in the header of the mode's source file.
I further assume that we're working on the stack example from the beginning of this article. I have opened up a buffer that holds the Scheme implementation file of the stack. We need to adjust the test file to load the implementation differently.
 
~~~clojure
(use test)
(load-relative "../stack.scm")
~~~

#### Navigating the test output

You can now switch to the test buffer (C-x o). Inside that buffer you have various possibilities to navigate. Hitting **P** will allow you to step through each test backwards. Hitting **N** will do the same thing, but forward. Things get more interesting when there are failures, so let's quickly introduce some and see what we can do.
 
~~~clojure
(use test)
(load-relative "../stack.scm")

(test-begin "stack")

(test-group "make-stack"
 ; ... tests
 )

(test-group "stack-push!"
 ; ... tests
 )

(test "this test shall fail" #t #f)

(test-group "stack-top"
 ; ... tests
 )

(test "this test shall fail too" #t #f)

(test-end "stack")

(test-exit)
~~~
 
As you can see, I added two failing tests. When I run the tests again, the buffer opens up and shows the output, which now contains failures.
I can switch to the buffer **(C-x o)** and hit **f**, which will bring me to the **f**irst failing test. In my case it looks like this:
 
 <a href="/assets/images/posts/testing_chicken/run-tests-w-failures.png">
   <img src="/assets/images/posts/testing_chicken/run-tests-w-failures_thumb.png">
 </a>
 
The first failing test has been selected and the line it occurs on has been highlighted.
You can jump straight to the **n**ext failing test by hitting **n**.
Likewise you can hit **p** to jump to the **p**revious failing test. Lastly you can hit **l** to jump to the **l**ast failing test.
If you're done you can just hit **q** to close the buffer.
 
#### More possibilities to run tests

Besides running the full test-suite, you can also apply a filter and run only those tests that match it. Let's suppose that we only want to run the tests that contain the text "top-most". In reality you might want to mark your tests specially, as I have described in the best practices section. To run tests filtered you can type **C-t f**, which will ask you for the filter to apply. It looks like this:
 
 <a href="/assets/images/posts/testing_chicken/run-tests-w-filter.png">
   <img src="/assets/images/posts/testing_chicken/run-tests-w-filter_thumb.png">
 </a>
 
Mind the mini-buffer. It asks for the filter to use. Once you hit enter you get the filtered results, which look like this:
 
 
 <a href="/assets/images/posts/testing_chicken/run-tests-w-filter-apply.png">
   <img src="/assets/images/posts/testing_chicken/run-tests-w-filter-apply_thumb.png">
 </a>
 
There is a little bit more, like removing tests and running filters on test-groups. Check out the project to learn about all its features. I'm currently thinking about how to implement a function that allows you to test the procedure at point. That would be relatively easy for tests that don't use description strings but use the short test form, which pretty-prints the expression. With that in place you could run a filtered test run that only includes tests that have the name of the procedure in their description. It's not exactly elegant, but it may work. In the meantime, the features that are already there are hopefully helpful.
 
### Wrap up

Wow, you've made it through the article. It has been a long one, I know, and I hope I did not bore you to death. You've learned a lot about CHICKEN's test culture and the tools you have at your disposal to address your very own testing needs. I hope that the information provided here serves as a good introduction. Please feel free to contact me if that's not the case or if things are just plain wrong.
 
 # References