a way to temporarily change the value of that variable during the execution of a block and then have the original value restored.
Parameters have other cool properties and features. For example, they interact nicely with threads and support guard procedures.
But let's get back to Ruby now. The following piece of code illustrates what a Ruby version of this could look like:
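Here is one way to emulate Scheme's parameters in plain Ruby — a sketch under my own names (`Parameter`, `with_value` are not from any library): a parameter object holds a value, and a block-taking method rebinds it temporarily, restoring the original even if the block raises.

```ruby
# A sketch of Scheme-style parameters in Ruby. The names are
# mine, not from a library. with_value rebinds the value for
# the duration of the block and restores it afterwards.
class Parameter
  def initialize(value)
    @value = value
  end

  def call
    @value
  end

  def with_value(temporary)
    original = @value
    @value = temporary
    yield
  ensure
    # runs even if the block raises, so the old value always comes back
    @value = original
  end
end

log_level = Parameter.new(:info)
log_level.call                                  # => :info
log_level.with_value(:debug) { log_level.call } # => :debug
log_level.call                                  # => :info
```

The `ensure` clause plays the role of Scheme's `dynamic-wind` here: restoration is guaranteed no matter how the block exits.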
I guess it is really hard to admit that a project has failed. That's probably why many projects are taken
One of my failed projects is a little Scheme library called [missbehave](http://wiki.call-cc.org/eggref/4/missbehave).
I intended to provide a testing framework that could be used for [TDD](http://en.wikipedia.org/wiki/Test-driven_development) and especially for [BDD](http://en.wikipedia.org/wiki/Behavior-driven_development). It was inspired by and largely modeled after the really neat [rspec library](http://rspec.info). If you're a Ruby programmer,
Well, the most obvious thing I realized was that even I, as the developer of the library, didn't use it much.
I used it to some extent, but whenever I wanted to make sure things worked and I had to get things done, I switched to the de-facto standard [test egg](http://wiki.call-cc.org/eggref/4/test).
Let me walk you through the parts of the library that are the reasons for its failure. There are things that I really like about
describing the services a particular **object** provides. It is common to test behavior with objects that have not been implemented at that
point. This is done by using test doubles, or mocks, which serve as replacements for the actual thing that will be implemented later.
Often these mocks represent [depended-on components (DOCs)](http://xunitpatterns.com/DOC.html) that the [system under test (SUT)](http://xunitpatterns.com/SUT.html) interacts with. So if we want to make sure that the SUT behaves as expected, we cannot do that by just looking
at its direct output; instead we have to verify it through the indirect output performed on the DOC. A method call does not just return
a value (if it does), but also invokes methods on DOCs, which have often been injected. See also [dependency injection (DI)](http://xunitpatterns.com/Dependency%20Injection.html).
This type of testing is called [behavior verification](http://xunitpatterns.com/Behavior%20Verification.html).
kinds of functions. We are in the fortunate position of being able to determine the correctness of a function just by looking
at its return value. Indirect outputs would normally be side effects in this context.
That doesn't mean that Scheme programs don't have side effects, but they are rare and generally discouraged.
That in turn means that I had provided a library that eases the testing and development of only a small fraction of the code you typically produce in Scheme.
Functional programs aren't about behavior, but rather about values and computation. That doesn't mean that functional systems don't have behavior, but it doesn't interest us much when we apply tests to the system.
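To make the contrast concrete: for a pure function, the direct output is all there is to check, and the test egg expresses exactly that — compare an expected value with a return value. A minimal sketch (the function is my own example, CHICKEN 4 style):

```scheme
;; Testing a pure function: compare return values, nothing more.
(use test)

(define (fahrenheit->celsius f)
  (/ (* (- f 32) 5) 9))

(test "freezing point" 0 (fahrenheit->celsius 32))
(test "boiling point" 100 (fahrenheit->celsius 212))
```

No doubles, no behavior verification — just values in, values out.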
This lowered the overall trust in the library, and trust is an essential property of a tool that you use to make sure your software works.
Since Scheme programs are usually not built in an OO fashion with compound objects and all that stuff, I provided a way
to verify that a certain function has been called. Additionally, you could verify that it has been called a certain number of times.
A point that is irrelevant to the user of the library, but worth mentioning, is that the implementation of procedure expectations is somewhat hacky and brittle.
As I explained under behavior verification, it is common to introduce test doubles, so the library also provided a way to mock procedures.
I essentially redefined the procedures to have the desired behavior, again making heavy use of the [advice egg](http://wiki.call-cc.org/eggref/4/advice). See the following example, which stubs the result of (car).
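I can no longer reproduce the exact missbehave syntax here, but the underlying mechanism looked roughly like this — temporarily swap the binding and restore it afterwards (a sketch of the idea, not the library's real API):

```scheme
;; Sketch of the mechanism behind procedure stubs (not the real
;; missbehave API): rebind car for the duration of a thunk and
;; restore the original binding afterwards.
(define (with-stubbed-car result thunk)
  (let ((original car))
    (dynamic-wind
      (lambda () (set! car (lambda (pair) result)))
      thunk
      (lambda () (set! car original)))))

(with-stubbed-car 'stubbed
  (lambda () (car '(1 2 3))))  ; => stubbed
```

Note that redefining core procedures like `car` only behaves like this in the interpreter; compiled code may have inlined the original binding, which is part of why the approach was brittle.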
Procedure stubs aren't that useful, since in functional languages we are more concerned with the outcome than with whether a procedure
was invoked. Most likely we will have an interface that accepts a procedure or uses a parameter. For both cases we can
provide implementations that fit our tests, without resorting to replacing a function's implementation. That's a natural fit for the functional paradigm.
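Standard Scheme already covers both cases. A short sketch of the parameter variant (the clock example is my own, using CHICKEN's `current-seconds`): the dependency is a parameter object, so a test can rebind it locally instead of redefining anything.

```scheme
;; Instead of stubbing a procedure, inject it: the clock is a
;; parameter, so tests can rebind it locally via parameterize.
(define current-clock (make-parameter current-seconds))

(define (expired? deadline)
  (> ((current-clock)) deadline))

;; In a test, freeze time without touching any global binding:
(parameterize ((current-clock (lambda () 100)))
  (expired? 99))   ; => #t
```

The rebinding is scoped and automatically undone — exactly the dynamic binding behavior described at the beginning of this post.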
A key part of the library is contexts. A context is a snapshot of the world in a given state. Contexts supported hooks that could
be used to set up a certain state of the world at a given point in time, or rather at a given point inside the test cycle.
In traditional test frameworks this is where your setup and teardown code resides. The following example illustrates this:
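From memory, and with hypothetical names where I could not verify the real ones (`login!`, `logout!`, `dashboard-visible?` are invented for illustration, and the exact missbehave hook syntax may have differed), a context with hooks looked something like this:

```scheme
;; Hypothetical sketch of a missbehave context with hooks --
;; not verified against the actual library syntax.
(context "a logged-in user"
  (before each: (set! user (login! "kate")))
  (after  each: (logout! user))

  (it "can see her dashboard"
    (expect (dashboard-visible? user) (be true))))
```

The hooks run around every example, mutating the shared `user` binding behind the scenes.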
As it turns out, this feature is really bad, since it embraces mutable state and, what's even worse, it hides when the mutation happens.
It's way clearer to just use let-bindings to share values across examples and use an explicit set! if you must.
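With the test egg, the same sharing is just a let-binding, and it is plain to see when and where the shared value comes into existence (the fixture here is my own toy example):

```scheme
(use test)

;; Shared fixture as a plain let-binding: no hidden hook decides
;; when this value is created or torn down.
(let ((stack '(1 2 3)))
  (test "has three items" 3 (length stack))
  (test "top item" 1 (car stack)))
```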
This is something that turned out to complicate things. The library comes with a binary that is used to run missbehave tests. This means that
you cannot just run the test file itself using csi or something. It also means that you can't compile your test file. This is really unfortunate,
as the CHICKEN CI expects the tests to work in a certain way, and without going through some hoops it was not possible to run missbehave in the
context of [salmonella](http://tests.call-cc.org/). I added a way to do that later, as the following example shows:
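The following is a hypothetical reconstruction of such a `tests/run.scm` shim — the real code differed, but the idea was to shell out to the missbehave runner so that salmonella could treat the suite like any other test script (the file names are placeholders):

```scheme
;; Hypothetical reconstruction, not the actual workaround:
;; invoke the external missbehave runner from a plain test
;; script and propagate its exit status.
(use posix data-structures)

(define (run-specs . files)
  (let ((status (system (string-intersperse (cons "behave" files) " "))))
    (unless (zero? status)
      (error "missbehave suite failed with status" status))))

(run-specs "missbehave-spec.scm" "matchers-spec.scm")
```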
Not exactly short, but it did work to some degree. The more problematic part was, again, an implementation detail. I had to go through some hoops
to make the runner work. It used some hacks in conjunction with eval that I'm not very proud of. You can check the [source code](https://bitbucket.org/certainty/missbehave/src/578b051764092dab0c5bd9c7d66640f44d281c25/behave.scm?at=default#cl-231) if you want to see it.
The last problem is that, the way it was designed, it didn't work well (read: "didn't work at all") in the REPL, and thus you could not use it for interactive development.
Now that I've shown you the bad parts, it's time to look at the things that I didn't mess up totally. There are some things that are valuable and
nice to have. Indeed, some of these things will make it into a new library that intends to honor the language more. It's a work in progress, but more on that later.
1. They are a means to extend the test library. That's a very lispy approach, as Lisp itself is intended to be extended.
The following code snippet shows these matchers and compares them to the equivalent tests using the [test egg](http://wiki.call-cc.org/eggref/4/test).
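Reconstructed from memory — the matcher names on the missbehave side are a best guess, not verified against the library — with the test-egg equivalents below:

```scheme
;; missbehave-style matchers (names are a best guess):
(expect (+ 1 1) (be 2))
(expect (list 1 2 3) (have 3 items))

;; The equivalent checks with the test egg:
(use test)
(test "addition" 2 (+ 1 1))
(test "length" 3 (length (list 1 2 3)))
```

The matcher forms read more like a specification; the test-egg forms are plain value comparisons.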
The library provided a way to attach metadata to examples and contexts. The user could then use filters to run only examples that
have corresponding metadata. This is a valuable feature, as it gives you fine-grained control over which tests are run.
For example, you might have platform-dependent tests that you only want to run on the matching platform. You could tag your tests
with the OS they support and run them filtered. Another example would be fast and slow tests, where you generally want to run the slow tests
during CI but not so much during development. I think this is really useful, but it should be opt-in. And it should be orthogonal to the
other features. In missbehave the syntax for examples and contexts supported a variation that was used to declare metadata.
In that regard this feature was bound to the syntax of these things. What I want instead is to make this composable and usable "à la carte".
That means you want to be able to mix and match contexts and metadata, and examples and metadata, without requiring them to know about each other.
So it's completely orthogonal to the notion and syntax of contexts and examples. Also, I want metadata to compose in such a way that
nested metadata "adds up", so that the innermost expression holds the union of all the metadata surrounding it.
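How that composition might look — hypothetical syntax, since this part of the new library is not settled (`with-meta` and the keyword tags are assumptions of mine):

```scheme
;; Nested metadata adds up: the innermost example effectively
;; carries the union (os: linux) + (speed: slow).
;; Hypothetical syntax, not a published API.
(with-meta ((os: 'linux))
  (with-meta ((speed: 'slow))
    (verify (expensive-computation) (is 42))))
```

A filter such as "only slow tests on linux" would then match this example without the example or context syntax knowing anything about metadata.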
Pending tests are extremely valuable, and I don't quite understand why they are not supported by the test egg, or at least not directly.
As the name suggests, you can temporarily disable the execution of tests by marking them pending. The point is that these tests aren't run,
but they are reported as being pending, so that you know they are actually there. This means that you can't accidentally forget them.
In missbehave you can define a pending test in two ways. The first is to mark it explicitly as pending, as the following example shows:
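A sketch from memory — the exact syntax is not verified, and `checksum`/`input` are invented names:

```scheme
;; Hypothetical sketch: a call to pending makes the example
;; exit early; it is reported as pending, not passed or failed.
(it "computes the checksum"
  (pending "blocked on the hashing rewrite")
  (expect (checksum input) (be 42)))   ; never reached
```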
As you can see, you could add a call to pending at any point in the expectation, which would make the expectation exit early and skip the
verification machinery. The second way is to make an example implicitly pending by omitting its body.
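Again a sketch from memory (syntax not verified) — an example given without a body is simply reported as pending:

```scheme
;; Bodiless examples act as a to-do list in the report:
(it "rejects malformed input")
(it "accepts unicode identifiers")
```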
This is especially nice if you start by outlining the things you intend to test and then fill in the actual code.
So this is really something valuable that will be added to veritas as well, but in a slightly different way.
Again, I want it to be usable à la carte and compose well. This is what it will probably look like in veritas:
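An assumption about the eventual veritas syntax — the library is unreleased, so this may well change:

```scheme
;; pending wraps any verification a la carte, independent of
;; contexts or examples. Hypothetical, unreleased syntax.
(pending
  (verify (+ 1 1) (is 2)))
```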
As I wrote before, I have learned from my failures and am working on a testing library that incorporates the good parts and throws away the bad parts.
This library will be called veritas and is a work in progress. It will furthermore encourage the use of QuickCheck-like
automated value generators, as well as using the REPL as a host to run tests interactively. I'll post about it once it's ready.
I hope you enjoyed this little journey through all my failures. It has certainly been a pleasure for me, and a healthy way to look at the "monster" I've made.
I'm sure there is still a lot for me to learn, and I'm open to it. I want to thank all the helpful people who provided valuable feedback for this post.