David's Blog

Integrated Tests - The Other Side Of The Scam

At GeeCon Prague 2015, I saw a great talk by J.B. Rainsberger about integrated tests. It was basically a live version of his essay Integrated Tests Are a Scam, which I had read earlier. But it was great to hear his reasoning live, in a narrated way.

I use the term integrated test to mean any test whose result (pass or fail) depends on the correctness of the implementation of more than one piece of non-trivial behavior.

J.B. Rainsberger
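To make that definition concrete, here is a tiny, hypothetical example (the function names are invented for illustration): a round-trip test whose result depends on two pieces of behavior at once.

```python
def parse_price(text):
    """Parse a price string like "$42" into an integer amount."""
    return int(text.replace("$", ""))

def format_price(amount):
    """Format an integer amount as a price string."""
    return "$" + str(amount)

# An integrated test in Rainsberger's sense: its result depends on the
# correctness of BOTH parse_price and format_price. If it fails, we don't
# immediately know which of the two is broken.
def test_price_roundtrip():
    assert parse_price(format_price(42)) == 42

test_price_roundtrip()
```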

The Scam: Original Version

The gist of his talk (and his essay, which you should read) is: If you start to depend on integrated tests, you need to write more and more of them over time. Since they don't put as much positive pressure on your design as unit tests do, your code quality will suffer, more defects will escape your net of tests, and you'll need more integrated tests to reproduce them.

The driving force here is: Positive design pressure caused by unit tests. We are missing this positive pressure with integrated tests, and this brings us into a tailspin.

The Problem with that Version

Now, some people seem to have a hard time understanding this narrative. After the talk, I heard things like this from several people:

"Integrated tests might not be the best possible solution, but they are not so bad. Better have some tests than none."

"This sounds nice in theory, but I don't think his solution (unit tests with mocks) will work in the real world. It is not any better than having the integrated tests."

"But we have to test if all components are working together, don't we?"

Some Don't Feel the Pressure

I almost dismissed what they said as "some people always complain". But then I started thinking. For me, the reasoning in the talk was absolutely plausible and the conclusion to use unit tests and mocks seemed just right. Why was this not the case for other people?

And I think I found a possible explanation...

Some people don't feel the positive design pressure from their tests. They write tests, but don't take much care to write good unit tests. Often, they don't refactor enough. Then, after a very small change, 20 tests break, and they complain that testing is waste. I have seen this scenario, and I have done those things myself.

In other words: TDD is hard. You have to learn it and train it and take the time to do it right. But when you do it right, you can get great benefits from it.

Alternative Version of the Scam

Even if you do TDD like that (without really feeling the design pressure), integrated tests are still a scam: They still lead you down this vicious circle where the more of them you have, the more you need.

  1. We already have some integrated tests
  2. They don't catch many errors (because you never have enough integrated tests)
  3. When an error occurs, we try to write a test
  4. Since we already have some integrated tests, and we do not refactor enough, some parts of our system are really hard to test
  5. So, let's just add some code to an existing test or write another integrated test, and then fix the bug
  6. We have less time to write unit tests or improve our design
  7. Go to 1

I know that one can explain this cycle with "the positive design pressure is missing". But it feels different to the people in the cycle, who are not "listening to their tests" anyway. Because if they were, they would not be in this cycle.

Second Vicious Circle

There is a second vicious circle that often seems to happen at the same time:

  1. We already have some integrated tests
  2. We don't want to cover a line of code with different tests, because we think this would be waste
  3. So we don't write unit tests, since we already covered the code with integrated tests ("There is already a test for this class, why do you write another one?")
  4. Oh shit, writing those integrated tests is hard
  5. Also, they never catch regressions (see above)
  6. So we write less of them, and make them bigger
  7. Bigger tests cover more lines -> Go to 1

Trust in Automated Testing Suffers

So, writing integrated tests is hard. And those tests seldom catch real regressions. I mean, they do, but they'll also often fail when we make a perfectly legitimate change. So, over time, people will start to consider automated testing a waste.

Also, integrated tests often become flaky. There are simply too many reasons why they might fail, so they'll fail "for no reason" from time to time. You'll hear your team members say: "Yeah, the nightly build is red, but whatever. This test simply fails once or twice a month."

Over time, the team trusts the test suite less and less. You can easily get into a situation where a large percentage of the builds fail because of flaky tests, and nobody fixes them, because fixing those tests is hard, and they never catch any regressions anyway.

Manual Testing

When trust in automated testing decreases, teams often rely more and more on manual testing. "Red-Green-Refactor" becomes (if we ever had it) "Change-Test-Debug". They'll have testers (on the team or in an external testing department) who will "validate" the results once the programmers are "finished". At best, they will automate some test cases "through the GUI", but often they will just click through the program.

This manual testing and automating through the UI slows down all feedback cycles considerably. Later feedback means that fixing the problems we find becomes more expensive, because someone has to go back and change something that was supposed to be "finished".


You cannot write all the integrated tests you need to verify that your software works correctly under all circumstances. Even covering all the code paths is incredibly hard. So, if you rely too much on them, you will increase the likelihood that some defects will slip all your safety nets and "escape" to production.
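A quick back-of-the-envelope calculation illustrates why covering all code paths is so hard (the numbers here are made up for illustration): if a system consists of several components with several independent branches each, integrated tests that cover every combination grow exponentially, while unit tests only grow linearly.

```python
components = 5   # hypothetical system size
branches = 4     # independent branches per component

# To cover every path through the integrated system, you need every
# combination of branches across all components:
integrated_paths = branches ** components

# Unit tests only need to cover each branch of each component once:
unit_cases = branches * components

print(integrated_paths)  # 1024
print(unit_cases)        # 20
```

Even for this toy system, exhaustive integrated coverage needs over a thousand cases where unit tests need twenty.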

Integrated tests are self-replicating: When you have more, you'll need them even more. Then they often become flaky and trust in your test suite starts to suffer. But when you rely too much on manual testing as a result, you slow down your feedback cycles, making your development unresponsive and expensive.

So: Be wary of integrated tests!

Now subscribe to my newsletter so you don't miss Part 2: "The Mock Objects Trap"!



My New Playground Project: "exampilistic"

Never miss one of my articles: Readers of my newsletter get my articles before anyone else. Subscribe here!

So I am working on "exampilistic" again... Some time ago (actually a looooong time), I wrote about some ideas on how one could improve "Specification by Example" tools (especially FitNesse) and how I would implement such a tool. Even though I had working code back then, my approach didn't really work the way I wanted it to. So I put my code on GitHub and stopped working on it.

Now I am playing around with those ideas again, but I want to do some things quite differently. In this post, I want to tell you about the design goals for the new tool and what I want to do differently this time.

Side Note: I think that "Specification by Example" (or Behavior Driven Development, or Agile Acceptance Testing) is a really useful and important technique. If you want to know more, try one of the existing tools, like FitNesse, or Contact Me.

Major Design Goals

  • Wiki. I like the "Wiki" part of FitNesse - This works very well for me. I can click through the documentation and move stuff around. I can create living documentation that everyone can read, browse and edit - Even non-technical users. I think the reading and browsing part is harder with tools like SpecFlow or Cucumber, where your specifications are in source files...
  • Separation of tests and pages. I don't like the fact that in FitNesse, a page is a test. I want a page to include one or more tests. This will enable me to move tests around and link them from multiple pages. First, the test is only linked from a user story. Then, when the functionality is complete, we write the documentation and link the test there too...
  • Generators for the glue code. With FitNesse (and some other BDD / Specification by Example tools), the tests and the glue code can easily get out of sync. So I want to generate all the glue code. This will be quite hard, because the generators need some refactoring capability.
  • No test runner. FitNesse is a test runner. You can click a button, and it will run all your tests. In exampilistic, you will not be able to do this. It will generate language-specific test code for you: For Java, it will generate JUnit tests, for Clojure it will generate clojure.test tests, and so on.
  • Specialized editors for the tests. The table syntax in FitNesse is a bit hard to get right. As is the format for Cucumber. SpecFlow and Jnario, OTOH, have great editors for the tests. So, exampilistic will also have specialized editors for different kinds of tests that help you get the syntax right.
  • Test refactoring support. This is related to the generators: When someone makes a minor change to a test (like renaming a column), it should be possible to just generate the glue code and run the test again, without intervention from a developer. Many current tools don't support this.

Minor Design Goals

  • Clojure / ClojureScript. Actually, when I started again, I didn't want to create a new BDD tool at first. I only wanted to get better with Clojure and ClojureScript. And I wanted to try Re-Frame. But I don't really like Code Katas or example projects - I always have the feeling that I don't learn much from them. So I decided to work on a larger project for some time, and "exampilistic" came to my mind.
  • Markdown. Well, everyone knows it. And it's easy to learn. And I found a markdown parser library that's easy to use from Clojure and ClojureScript.
  • Interactivity. For the specialized editors, I need a lot of interactivity. During my first try (some years ago), I was using AJAX for this. That didn't work too well. Now I want to try with a "Single Page Application" using Re-Frame and React.
  • Search engine optimization. Despite the interactivity, I want to be able to render static pages on the server using the same code I use on the client. For search engine optimization, and because it is a nice challenge (that I have already solved).

Differences to Last Time

  • Clear separation of responsibilities. In my last attempt, I tried to do too much in the Wiki itself. It would be a test runner (like FitNesse). And I would store all content as Java Classes (to ensure the test code is always in sync with the wiki). This time I want to do it differently: The wiki is just there for the test definition. The glue code will be generated by a separate code generator. And the test runner is whatever test runner you use in your language (e.g. JUnit when you use Java).
  • Probably no automatic feedback from code to wiki. In my last attempt, when you refactored the code (e.g. renamed a class), you would also see that change in the Wiki (because the wiki stored all pages as Java classes). I don't think this is really needed. And if I need it, I will create a separate tool that feeds information from the code back to the Wiki. Because: Clear separation of responsibilities.


"exampilistic" is, so far, only a toy project. But so far it also looks very promising. My progress is better than I would have expected. I also think I learned a lot from my first attempt, and this time, the design goals are mostly sound (but I'm sure some of them will change as I learn more while developing functionality).

It is still too early to show you some code or screenshots, but if I manage to get to a point where some core functionality (like editing a single wiki page) is fully usable, I will make the project open source. And also show you some screenshots.

Are you interested in managing IT and software projects or teams? Are you in Europe in Autumn 2016? We are preparing a great conference for you: Advance IT Conference


Clojure "CompilerException: java.lang.Exception: Nothing specified to load, compiling: [file name]"


Today I had a strange exception when compiling some crossover code for Clojure and ClojureScript:

Caused by: java.lang.Exception: Nothing specified to load
Reloading Clojure file "/[path]/[file].cljc" failed.
clojure.lang.Compiler$CompilerException: java.lang.Exception: Nothing specified to load, compiling:([other path]/[other file].cljc:1:1)

There was no further information in the error message, and I didn't find out anything useful by googling. During my trial and error (with previous versions of the code), I found out that the problem was in:

(ns reframe.serverside
    (:require #?(:cljs [re-frame.core :refer [dispatch-sync]])))

When compiling this code to ClojureScript, everything works fine, because the "require" contains an entry: "re-frame.core". But when this file is compiled as Clojure, the "require" is empty, and the compiler does not seem to like this. Here is the fixed code:

(ns reframe.serverside
    #?(:cljs (:require [re-frame.core :refer [dispatch-sync]])))

I just remove the whole ":require" when compiling to Clojure. (Anyway, a better error message would have been really useful here...)



Server-Side and Client-Side Rendering Using the Same Code With Re-Frame


In the last few days, I was playing around with Re-Frame. Check it out, it is awesome. Even if you're not planning to ever use Clojure - even if you're not even doing web development - at least read the Readme.md. It's the best Readme.md I've ever read!

Anyway, I was playing around with Re-Frame. And after I got the simple example running, I tried to make it do something it's not supposed to do: Render a page on the server side, using the same source code as on the client side. If that works, it would be great: You could create a single page application that renders everything on the client, but still provide basic functionality when JavaScript is switched off. And it would be great for SEO: Search engines get plain HTML and don't have to interpret JavaScript to see the content.

The things I tried basically work now, but it's not 100% usable yet. Here's my current solution, and what already works...

I will not explain how re-frame or ClojureScript work in this post, so you probably should read the re-frame Readme.md before continuing...

Simple Example Rendered on the Server

So, I started with the simple Re-Frame example and tried to render the page on the server.

With Clojure 1.7's Reader Conditionals I can now write code that can run on the server (as Clojure code on the JVM) and also on the client (ClojureScript compiled to JavaScript). Basically, everything that's in a ".cljc" file is available on the server and on the client, while ".clj" is only available on the server and ".cljs" is only available on the client.

I created the following project structure:

    +---clojure       <-- Clojure code to run on the server (.clj)
    +---clojurescript <-- ClojureScript code to run on the client (.cljs)
    \---crossover     <-- Crossover code to run on server and client (.cljc)

Then I simply copied the ClojureScript code from the simple example to src/main/clojurescript and created a hiccup view in src/main/clojure that created a page that looks exactly like the HTML page from the simple example. After everything was running (which required a tweak or two), I started to move stuff around.

Crossover Code

I moved all the "view" functions from the ClojureScript file to a new crossover file called "renderer.cljc". The ClojureScript code then only calls one function from the renderer. Most of the code is unchanged; I only changed the require and the render call and removed all the "View Components":


(ns simpleexample.core
  (:require-macros [reagent.ratom :refer [reaction]])
  (:require [reagent.core :as reagent]
            [re-frame.core :refer [register-handler path register-sub
                                   dispatch dispatch-sync subscribe]]
            [simpleexample.page.renderer :as renderer]))

(def initial-state
  {:timer (js/Date.)
   :time-color "#f34"})

(defonce time-updater
  (js/setInterval #(dispatch [:timer (js/Date.)]) 1000))

;; -- Event Handlers ----------------------------------------------------------

(register-handler                 ;; setup initial state
  :initialize                     ;; usage: (submit [:initialize])
  (fn [db _]
    (merge db initial-state)))    ;; what it returns becomes the new state

(register-handler
  :time-color                     ;; usage: (submit [:time-color 34562])
  (path [:time-color])            ;; this is middleware
  (fn [time-color [_ value]]      ;; path middleware adjusts the first parameter
    value))

(register-handler
  :timer
  (fn
    ;; the first item in the second argument is :timer; the second is the
    ;; new value
    [db [_ value]]
    (assoc db :timer value)))     ;; return the new version of db

;; -- Subscription Handlers ---------------------------------------------------

(register-sub
  :timer
  (fn [db _]                      ;; db is the app-db atom
    (reaction (:timer @db))))     ;; wrap the computation in a reaction

(register-sub
  :time-color
  (fn [db _]
    (reaction (:time-color @db))))

;; -- Entry Point -------------------------------------------------------------

(defn ^:export run
  []
  (dispatch-sync [:initialize])
  (reagent/render [renderer/simple-example]
                  (js/document.getElementById "app")))

Now, obviously, this didn't work out of the box. The crossover code (in the .cljc files) was missing all of the Re-Frame dependencies. Also, the hiccup collection used by Re-Frame is slightly different from what hiccup expects on the server (since Re-Frame has to process the template to handle subscriptions and other things). And, third, the ClojureScript event handler for the input field obviously doesn't work when rendering on the server.

I could provide the missing dependencies for ClojureScript with a reader conditional, but I'd also have to provide an alternative implementation when compiling as Clojure code. Fortunately, this is quite easy. Then I created a function that translates a Re-Frame hiccup collection into a regular hiccup collection, which is only used on the server (in the server-side hiccup view - see later). And third, I decided to use an empty event handler function when rendering on the server.


(ns simpleexample.page.renderer
;; Import Re-Frame dependencies when compiling as ClojureScript
  #?(:cljs (:require [re-frame.core :refer [dispatch-sync subscribe]])))

;; Use this app-db instead of the Re-Frame app-db when compiling as Clojure
#?(:clj (def app-db
          (atom {:time-color (atom "#77f")
                 :timer (atom "--.--.----")})))

;; Use this instead of Re-Frame's "subscribe" when compiling as Clojure
#?(:clj (defn- subscribe [v]
          (get @app-db (first v))))

;; -- View Components ---------------------------------------------------------
(defn greeting
  [message]
  [:h1 message])

(defn clock
  []
  (let [time-color (subscribe [:time-color])
        timer (subscribe [:timer])]

    (fn clock-render
      []
      (let [time-str (str @timer)
            style {:style {:color @time-color}}]
        [:div.example-clock style time-str]))))

(defn color-input
  []
  (let [time-color (subscribe [:time-color])]

    (fn color-input-render
      []
      [:div.color-input
       "Time color: "
       [:input {:type "text"
                :value @time-color
                ;; Use empty on-change handler when compiling to Clojure
                :on-change #?(:clj  ""
                              :cljs #(dispatch-sync
                                       [:time-color (-> % .-target .-value)]))}]])))

(defn simple-example
  []
  [:div
   [greeting "Hello world, it is now"]
   [clock]
   [color-input]])

And here's how I translate (simple-example) on the server to create the static content:


(ns exampilistic.wiki.page.views
  (:require
    [hiccup
     [page :refer [html5 include-js]]
     [element :refer [javascript-tag]]]
    [exampilistic.wiki.page.renderer :as renderer]))

;; Create a regular hiccup collection from a Re-Frame hiccup collection
(declare reframe>hiccup)

(defn- map-reframe>hiccup
  [reframe-hiccup-element]
  (cond
    (keyword? reframe-hiccup-element) reframe-hiccup-element
    (string? reframe-hiccup-element) reframe-hiccup-element
    (map? reframe-hiccup-element) reframe-hiccup-element
    (vector? reframe-hiccup-element)
    (if (fn? (first reframe-hiccup-element))
      (let [result (apply (first reframe-hiccup-element)
                          (rest reframe-hiccup-element))]
        ;; a form-2 component returns a render function - call it
        (reframe>hiccup (if (fn? result) (apply result []) result)))
      (reframe>hiccup reframe-hiccup-element))
    :else (throw (IllegalStateException.
                   (str "Illegal element " reframe-hiccup-element)))))

(defn- reframe>hiccup
  [reframe-elements]
  (into []
        (map map-reframe>hiccup reframe-elements)))

;; create the hiccup collections that represent the static page
(defn show-page [page-name]
  (html5
    [:head
     [:title "Replace me"]]
    [:body
     [:div {:id :app}
      (reframe>hiccup (renderer/simple-example))]
     (include-js "/js/goog/base.js")
     (include-js "/js/main/main.js")
     (javascript-tag "goog.require(\"simpleexample.core\"); window.onload = function () { simpleexample.core.run(); }")]))


Creating the static HTML on the server is already working. Just comment out "window.onload = ..." in show-page and you'll still see the correct content on the client - but without the dynamic behavior. And the server-side rendering uses (almost) exactly the same code as the client-side rendering - so we can be sure to see (almost) the same page as if the client had rendered it - except for the date, which will only be rendered by the client.

What's still missing is that the server should make sure that the client uses the same values in its app-db as the server used for rendering the static content. I think this is not that hard, I just haven't implemented it yet.

Another inconvenience is that the client will render the whole page again in window.onload, throwing away everything the server rendered. I was thinking, maybe I could only re-render everything on the first user interaction. I.e., only when the user clicks a link, start Re-Frame and render the new state...

Maybe I'll package up everything as a small library when I get it working. I could also create a pull request for Re-Frame, but right now I think a standalone library (e.g. Re-Frame-Server) would be better...

Feedback Welcome

What do you think about this solution? Do you think I can solve the remaining problems? Will it be usable in bigger applications, or will I hit a wall somewhere? Do you have any questions? Please Contact Me!



Mocks or Intermediate Results: What I Would Do


Today I read a very interesting article by Kent Beck and Martin Fowler: "Half-done Versus Fake: The Intermediate Result/Mock Tradeoff". It shows some problems with mocking a class. Then the authors conclude that it's better to split the functionality and test intermediate results instead of using mocks.

The technique they introduce is very interesting. Still, I was immediately thinking: "I wouldn’t write the code that way in the first place". Don't get me wrong, I don't think their code is bad. And I think their technique is useful. But in this particular case, I would re-structure the code differently, so that testing it with mocks becomes easier and makes more sense.

What's Wrong With Their Approach?

The authors use a very simple example: Copy a set of source directories to a destination directory. Here's their code:

import subprocess
class Backup1:
    def __init__(self, sources, destination):
        self.sources = sources
        self.destination = destination
    def store(self):
        for source in self.sources:
            command = ['cp', '-r', source, self.destination]
            subprocess.call(command)

They would test it with mocks by mocking subprocess.call and making sure that all the right copy commands are passed to that function. Then they argue that this causes several different problems: for example, you tie your functionality to a particular implementation (you couldn't use Python's file system utilities instead of "cp" without breaking the test). And they are right!
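For illustration, such a mock-based test might look roughly like this (a sketch using Python's unittest.mock, not code from the article; Backup1 is repeated so the example is self-contained):

```python
import subprocess
from unittest import mock

class Backup1:
    def __init__(self, sources, destination):
        self.sources = sources
        self.destination = destination
    def store(self):
        for source in self.sources:
            subprocess.call(['cp', '-r', source, self.destination])

# The test mocks subprocess.call and checks the exact "cp" commands.
# This is what ties the test to the implementation detail:
with mock.patch('subprocess.call') as call:
    Backup1(['photos', 'documents'], '/backup').store()
    call.assert_has_calls([
        mock.call(['cp', '-r', 'photos', '/backup']),
        mock.call(['cp', '-r', 'documents', '/backup']),
    ])
```

Switching store() to shutil.copytree would break this test even though the behavior stays correct.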

A class or a method is never too small to split.

The suggested approach from the article was to "extract the scary bits" and then test the intermediate results:

class Backup2:
    def __init__(self, sources, destination):
        self.sources = sources
        self.destination = destination
    def store(self):
        [subprocess.call(command) for command in self.commands()]
    def commands(self):
        return [['cp', '-r', source, self.destination] for source in self.sources]

You can then test whether Backup2.commands produces the correct set of "cp" commands. Unfortunately, this approach has the same problems as the first one, with mocks. You still cannot replace "cp" with Python code without breaking the test.

And there is another problem: Backup2.commands clearly should be private. It is just a simple utility function, and I see no reason why it should be a part of the interface of the backup class. So you either have to make design compromises to test the functionality, or you cannot test it at all!

Anti Corruption Layer

I see two problems in Backup1 that are somewhat related: It does two different things (iterate over a list of directories to copy and do the actual, low-level copying), and its code is on different levels of abstraction (decide what to process and implement how the processing is done).

A class or a method is never too small to split.

So I would split the code too, but along a different line. I would wrap how the copying is actually done in a class that makes sense for the caller (Please forgive me if I didn't get the Python code completely right. I don't really know Python that well...):

import subprocess
class FileSystemManipulation:
    def copyDirectory(self, source, destination):
        command = ['cp', '-r', source, destination]
        subprocess.call(command)

This class serves as an "Anti Corruption Layer" to the messy details about how to copy directories (and potentially other file system manipulation).

Aside: I am not sure if "FileSystemManipulation" is a good name here. It probably is not, because it is too generic. But we can always try to find a better name later, when we know more about the application and what we want to achieve.

I can now write the Backup class in a way that uses the new abstraction:

class Backup3:
    def __init__(self, sources, destination, fileSystemManipulation):
        self.sources = sources
        self.destination = destination
        self.fileSystemManipulation = fileSystemManipulation
    def store(self):
        for source in self.sources:
            self.fileSystemManipulation.copyDirectory(source, self.destination)

How Can You Test This

The responsibility of Backup3 is now much clearer: It processes a list of source directories, and makes sure each gets copied to a single destination directory. I can test this with mocks in a way that makes sense within the domain of Backup3. And I won't have to change my tests when I decide to implement the actual copying in a different way.

Aside: I would probably create two tests here: One that makes sure that a list of source directories is processed correctly (i.e. all are passed to fileSystemManipulation.copyDirectory), and one that makes sure that all calls to fileSystemManipulation.copyDirectory use the same destination directory.
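Those two tests could be sketched like this (using Python's unittest.mock; Backup3 is repeated so the example is self-contained):

```python
from unittest import mock

class Backup3:
    def __init__(self, sources, destination, fileSystemManipulation):
        self.sources = sources
        self.destination = destination
        self.fileSystemManipulation = fileSystemManipulation
    def store(self):
        for source in self.sources:
            self.fileSystemManipulation.copyDirectory(source, self.destination)

# Test 1: every source directory is passed to copyDirectory
fs = mock.Mock()
Backup3(['a', 'b', 'c'], '/backup', fs).store()
copied_sources = [call.args[0] for call in fs.copyDirectory.call_args_list]
assert copied_sources == ['a', 'b', 'c']

# Test 2: every call uses the same destination directory
destinations = {call.args[1] for call in fs.copyDirectory.call_args_list}
assert destinations == {'/backup'}
```

Note that both assertions are phrased in the domain of Backup3 (which directories go where), not in terms of "cp" commands.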

I would then test FileSystemManipulation with an integration test - i.e. a test that copies a directory and then checks the file system for the correct result. This test can operate on minimal data, so it will be reasonably fast. And it should check some preconditions (like, is there enough space left on the device) and skip the test when they are not satisfied. In JUnit, I would do this with Assume.assumeThat(...); I have no idea how to do it in Python.
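In Python's standard library, unittest's skipTest plays roughly the role of JUnit's Assume: the precondition check skips the test instead of failing it. A sketch, assuming a POSIX system where "cp" is available (the 10 MB threshold is an arbitrary example):

```python
import shutil
import subprocess
import tempfile
import unittest
from pathlib import Path

class FileSystemManipulation:
    def copyDirectory(self, source, destination):
        subprocess.call(['cp', '-r', source, destination])

class FileSystemManipulationTest(unittest.TestCase):
    def test_copies_a_directory(self):
        # Precondition: skip (rather than fail) when the environment
        # cannot support the test - roughly JUnit's Assume.assumeThat
        if shutil.disk_usage(tempfile.gettempdir()).free < 10 * 2**20:
            self.skipTest("not enough free disk space")
        source = Path(tempfile.mkdtemp())
        (source / "data.txt").write_text("hello")
        destination = Path(tempfile.mkdtemp())

        FileSystemManipulation().copyDirectory(str(source), str(destination))

        # "cp -r src dst" copies src INTO dst when dst already exists
        copied = destination / source.name / "data.txt"
        self.assertEqual(copied.read_text(), "hello")
```

This test stays valid no matter how copyDirectory is implemented internally.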

Note that I wouldn't even have to change the test for FileSystemManipulation if I change the way the copying is done. Since it is a real integration test (i.e. it tests how ONE of my classes interacts with the outside world), the test will still be valid if we decide to use Python's file system utilities instead of "cp".

Turtles All The Way Down

One potential problem here is that we create abstractions that depend on abstractions that depend on... It's like "Turtles all the way down". But we can decide to stop at any point. Ultimately, this is a cost-benefit trade-off: What is the cost of more abstraction compared to the benefit of better separation of concerns.

Anyway, I think it (almost) always makes sense to protect our domain classes / functions from the messy details of the outside world. And that's exactly what FileSystemManipulation does.


When a class is hard to test, consider splitting it. I absolutely agree on that with Kent Beck and Martin Fowler. But I would do the splitting differently:

First, look at all the different things the code does. Then think about whether the code works on different levels of abstraction. In our example, this showed us a line along which to split the code: between the different levels of abstraction. This split also solved the problem of the function having two different responsibilities.

Also, always protect your domain code from the messy details of the outside world by creating an anti-corruption layer. Don't use "new Date()", use "wallclock.now()". Don't call a REST service to authenticate users directly from your domain code; create a "UserAuthenticator" that encapsulates the call...
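The "wallclock" idea can be sketched in a few lines (hypothetical names, chosen for illustration; the point is that domain code depends on the small abstraction, never on the system clock directly):

```python
import datetime

class WallClock:
    """Production implementation: delegates to the system clock."""
    def now(self):
        return datetime.datetime.now()

class FixedClock:
    """Test double: always returns the same instant."""
    def __init__(self, instant):
        self.instant = instant
    def now(self):
        return self.instant

def is_weekend(clock):
    # Domain code asks the clock abstraction, not datetime directly
    return clock.now().weekday() >= 5

# In a test, time is now completely under our control:
saturday = FixedClock(datetime.datetime(2016, 4, 2))
assert is_weekend(saturday)
```

The same pattern works for the UserAuthenticator: the domain sees a small interface, and the REST call hides behind it.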




Money is Time and Other Tips for Freelancers


Today I want to write about expenses and investments and a good way to think about them. This article is about a situation I encountered as a freelancer, so it might not be 100% relevant to you. But I think the lessons might be interesting for everybody. They are:

  1. Learn to think about all your expenses as costs or investments
  2. Exchange money for time (if you can)
  3. Convert monetary amounts to time or goods in your head

We'll come to the lessons soon, but first, a story...

But, but... I Only Wanted to Help...

About a year ago, some people asked me at a conference whether I could join a new mailing list they created for freelancers. The idea was to have a discussion forum where we could help each other.

I joined, because I really liked the idea. And I like to discuss and to help where I can. I ended up answering questions and helping others, not really discussing, because at that time I had already been a small business owner and then a freelancer for 8 years, and everyone else was just starting.

After a few weeks, someone new joined the mailing list and asked, like five or six people before, for some hints and resources to get started as a freelancer. So I answered, again. I gave him some generic tips (like, get an accountant; your city or government probably offers free services for founders; ...). And I wrote something like:

Buy Double Your Freelancing Rate by Brennan Dunn. Yes, the title sounds a little bit like click-bait. And yes, it's very expensive. But it's really worth the price. And if you don't like it, you can get a full refund from the author. BTW: The author interviewed me for one of the case studies ;)

Basically the same mail I wrote to everyone else who asked before. This time, the answer was like:

$130 for an ebook? You must be kidding me! Are you completely crazy? I bet the author pays you for advertising his book. And you only want to brag with your case study!

Well... I immediately unsubscribed from that mailing list. I don't have time for people like that. Even when I think about it now, it makes me a little bit angry. But now I have the distance to write about it.

So, besides being rude to someone who just wanted to help, what mistakes was this person making? How should you change your thinking when you want to work on your own?

Basic Considerations

Say your rate is 50€ per hour. This is quite low, but easy to calculate with.

And say you can comfortably work 1500 billable hours per year. This might be a little bit too high or too low (depending on how much time you have to spend on sales, marketing, administration, how long your projects are, how many projects you do in parallel, ...), but again, it's easy to calculate with.

So, in an average month, you will work 125 hours and get 6250€ for it. But that's only the average. You might work 170 hours for two months, then have three weeks of idle time because a client canceled their contract, then work for two months, then you're sick or on vacation, and so on. You get the idea.

Also, let's assume a year has 200 working days, i.e. days where you can work billable hours. Again, this might be more or less than you're willing or able to work. But it's a nice, round number.

And, say, 1$ = 1€. Easy to calculate with.
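The back-of-the-envelope arithmetic from the assumptions above can be sketched in a few lines of code. The class and method names are just illustrations for this article, and the constants are the example numbers from the text, not universal values:

```java
// Rough freelancer math, using the example numbers from the text above.
public class FreelancerMath {
    static final int RATE_EUR_PER_HOUR = 50;
    static final int BILLABLE_HOURS_PER_YEAR = 1500;
    static final int WORKING_DAYS_PER_YEAR = 200;

    // 1500 / 12 = 125 billable hours in an average month
    static int hoursPerMonth() {
        return BILLABLE_HOURS_PER_YEAR / 12;
    }

    // 125 hours * 50€ = 6250€ revenue in an average month
    static int revenuePerMonth() {
        return hoursPerMonth() * RATE_EUR_PER_HOUR;
    }

    // How many minutes per working day is a yearly saving of
    // 'hoursPerYear'? (4 hours per year -> 1.2 minutes per day)
    static double minutesPerDay(int hoursPerYear) {
        return hoursPerYear * 60.0 / WORKING_DAYS_PER_YEAR;
    }

    public static void main(String[] args) {
        System.out.println(hoursPerMonth());    // 125
        System.out.println(revenuePerMonth());  // 6250
        System.out.println(minutesPerDay(4));   // 1.2
    }
}
```

Plug in your own rate and hours; the ratios, not the exact numbers, are what matter for the arguments below.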


So, an "ebook" for 130€ is expensive, right? (DYFR is actually more than a simple ebook, but that's what I was accused of selling...) You could get two fancy dinners with your spouse at a really nice restaurant for the same amount. And two dinners with a loved one are clearly more valuable than an ebook, right? Basically, yes. But...

A fancy dinner is an Operating Expense (OPEX). It is clearly necessary (if you don't go out for dinner with your loved one from time to time, it might be bad for your business and for your relationship), and you get something valuable for your money. But then, the money is gone. There is no chance that you'll get it back or make a profit on it, by just eating the dinner.

The "ebook" is (potentially) an investment (Capital Expenditure, CAPEX). It will potentially alter the future of your business. Say you learn something from the book that allows you to raise your hourly rate by 1€. At 125 billable hours per month, that is 125€ extra every month - so the book pays for itself in roughly a month.

And from then on, you can buy two fancy dinners for you and your spouse every month, for the rest of your life.

So, stop thinking like "ebooks are worth less than 10€" or "a 250€ android tablet is cheap". Think about costs and investments. Think about OPEX and CAPEX. A 130€ ebook that teaches you something important is damn cheap. And a 250€ android tablet you don't need is really, really expensive.

Time is Money is Time

As a freelancer, you can exchange time for money and money for time. You can choose to take on more work to earn more, or you can work fewer billable hours when you have enough money for now. And you can work on things that might earn money later (e.g. passive income from books, ...).

This, for me, is the single most important benefit of working for myself. Yes, this only works to a certain degree, but you have way more freedom here than an employee, who has to work at least 40 hours/week but also has to ask when they want to do overtime.

But there is more to that than just freedom.

Say you are developing applications for iOS using Xcode. I do not (I mostly work with Java these days), but I read rants about Xcode on Twitter every day, so there seem to be people who do. So you're working with Xcode, and it crashes (it seems to do that sometimes, according to Twitter). So you get angry, get a coffee, write a rant on Twitter, restart Xcode, and continue working.

You've heard that there is AppCode and that it might be better, but it costs 200€. Should you buy it?

If it saves you the equivalent of 4 hours per year, it's a no-brainer: Buy it! (BTW, 4 hours per year is just 1.2 minutes per working day...)

You don't want to sort your invoices, but your accountant would charge you 50€ per month for doing it. Should you outsource it? Yes, if it would take you 1 hour per month to do it yourself (including the time you spend thinking "OMG, this is so boring. I don't even want to start. I'll just read a little bit of Twitter before starting...")!

You don't want to clean your house, but a cleaning service charges 20€/hour. Should you hire them? Yes, you can buy 2 1/2 hours of cleaning by a professional for one hour of your work! (Yes, I didn't consider taxes here, but the ratio is probably still favourable).

Money is iPhone

Don't think "money" when you think about money. Think about money in terms of what it could buy you (an iPhone) or what it would cost you (time). That's basically the trick from above, but a little bit more generic.

  • 50€ is one hour of your time, or a little bit less than a fancy dinner.
  • 1500€ is just about one week of your time, two iPhones, a cheap laptop, or a nice holiday with your family.
  • 5000€ is a little less than one month of your time, more than your credit card limit, and almost enough for a long overseas vacation.

I think you get the idea...


You might be also interested in:

  • Cheap plastic drills: Most people think construction workers should have great tools. A lot of people think paying more than 1000 Euros for an office chair is a waste of money, even for a software developer who uses it 8 hours a day. Good tools are expensive.
  • A Spectrum Of Effort Estimates: An introductory post about estimating development effort.
  • Improve your Agile Practices: A FREE course that teaches you how you can improve as a software development team

Advance IT Conference

Next year, I will co-host a conference about IT management. You should really check it out, it will be fun:



Running Multiple Spring Boot Apps in the Same JVM


In the last week or so, I was playing a little bit with microservices (hey! a buzzword!), and I used Spring Boot to create those services. One of my first questions was: How can I test a set of services from a business point of view with a single click in my IDE - i.e. how can I ensure that the complete application has the right features? I wanted a way to start multiple Spring Boot web applications in the same JVM. Here is how I did it.

All of this is a work in progress, so I don't have any complete code for you on github. Maybe I will later write a more in-depth, step-by-step guide on quickglance.at about how I created this application. And that guide would then come with a complete example application. If I ever write it ;)

The Problem

I wanted to create an application that handles location data. It would consist of three services: One to write locations ("locations-command"), one to read them ("locations-query") and a web application. You see, we are doing CQRS here (another buzzword!).

The web application contains only Spring WebMVC controllers and ViewModels (add MVVM to the list of buzzwords...). It calls the locations-query or locations-command service to do the real work. Those services would then use some storage backend to store and retrieve locations - Probably couchbase, but I have not decided yet.

I want to test this application from a business point of view using "executable specifications" written in FitNesse. I want to run those tests either in FitNesse or with JUnit from within my IDE. But I do not want to build and run a set of docker containers every time - I want to run the tests all in the same JVM, so I can start them with a single click and debug them if necessary.

I also don't want those tests to use the real storage backend. I want to be able to mock backend calls, and only test against the real database in a separate set of tests (which would then really spin up all those docker containers). I have not completely solved this part yet, so I'll not cover it here. Maybe in a later blog post...

Third, I want all the services to use port 8080 when they run in their own docker container - I don't want to customize ports within the application. I can do this later with docker. But when I run the services within the same JVM, they have to use different ports.

And fourth, all Spring Boot applications have to run completely independently of each other - they cannot share the classpath or anything else.

The Setup

My project structure roughly looks like this:

    |-webapp/
    |-locations-command/
    |-locations-query/
    |-specifications/
    \-servicerunners/
         |-backend-runner/
         |-locations-command/
         \-locations-query/

All the subprojects contain their own build.gradle, source folders, and other stuff.

Here is the global settings.gradle and the global build.gradle:


include 'webapp', ':locations-query', ':locations-command',
    ':servicerunners:locations-query', ':servicerunners:locations-command'


buildscript {
	ext {
		springBootVersion = '1.2.2.RELEASE'
	}
	repositories {
		mavenCentral()
	}
	dependencies {
		classpath "org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}"
	}
}

apply plugin: 'java'
apply plugin: 'spring-boot'

allprojects {
	sourceCompatibility = 1.8
	targetCompatibility = 1.8

	repositories {
		mavenCentral()
	}
}

subprojects {
	buildscript {
		ext {
			springBootVersion = '1.2.2.RELEASE'
		}
		repositories {
			mavenCentral()
		}
		dependencies {
			classpath "org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}"
			classpath 'se.transmode.gradle:gradle-docker:1.2'
		}
	}
	apply plugin: 'java'
	apply plugin: 'spring-boot'

	dependencies {
		testCompile 'junit:junit:4.12'
	}

	task wrapper(type: Wrapper) {
		gradleVersion = '2.3'
	}
}

"webapp", "locations-command" and "locations-query" in the project root are the services. Those are standard Spring Boot web applications - you can create them, for example, at start.spring.io.

"specifications" contains the test fixtures and support code for the FitNesse tests. The three subprojects under "servicerunners" contain the code to start the web applications we need for testing.

For the tests, we will only start the two locations services as Spring Boot applications. We will drive the tests through the ViewModels of the web application, so we don't need to start the Jetty of "webapp".

Test Suite Setup and Teardown

The specifications project must have the "webapp" in its classpath (so we can drive the tests through the ViewModels), but it cannot reference any other projects directly. Otherwise the two services would share the same classpath, and we don't want that. It also has to reference the required libraries from Spring Boot, so they are available for the service runners (see later).


dependencies {
	compile project(':webapp')

	testCompile 'org.fitnesse:fitnesse:20150226'
}

The FitnesseSuiteHelper is where the magic happens. This class starts each service using a dedicated "BackendRunner" - see below what it does and why we need it.


public class FitnesseSuiteHelper {
    private static final List<Backend> activeBackends = new ArrayList<>();

    public FitnesseSuiteHelper() {
    }

    public static void startBackends() throws Exception {
        startBackend("locations-query", "com.example.LocationsQueryBackendRunner");
        startBackend("locations-command", "com.example.LocationsCommandBackendRunner");
    }

    private static void startBackend(final String backendProjectName, 
            final String backendClassName) throws Exception {
        URL backendRunnerUrl = new File("servicerunners/backend-runner/build/classes/main")
            .toURI().toURL();
        URL runnerUrl = new File("servicerunners/" + backendProjectName 
            + "/build/classes/main").toURI().toURL();
        URL backendUrl = new File(backendProjectName 
            + "/build/classes/main").toURI().toURL();
        URL[] urls = new URL[] { backendUrl, backendRunnerUrl, runnerUrl };
        // the parent classloader provides the Spring Boot libraries (a
        // dependency of the "specifications" project), but none of the
        // service classes
        URLClassLoader cl = new URLClassLoader(urls, 
            FitnesseSuiteHelper.class.getClassLoader());
        Class<?> runnerClass = cl.loadClass(backendClassName);
        Object runnerInstance = runnerClass.newInstance();

        // the runner was loaded by a different classloader, so we can
        // only talk to it via reflection
        runnerClass.getMethod("run").invoke(runnerInstance);

        final Backend backend = new Backend(runnerClass, runnerInstance);
        activeBackends.add(backend);
    }

    public static void stopAllBackends() 
            throws IllegalAccessException, InvocationTargetException, 
            NoSuchMethodException {
        for (Backend b : activeBackends) {
            b.runnerClass.getMethod("stop").invoke(b.runnerInstance);
        }
        activeBackends.clear();
    }

    private static class Backend {
        private Class<?> runnerClass;
        private Object runnerInstance;

        public Backend(final Class<?> runnerClass, 
                       final Object runnerInstance) {
            this.runnerClass = runnerClass;
            this.runnerInstance = runnerInstance;
        }
    }
}
The "startBackend" method starts each service in its own classloader using a "BackendRunner". To do this, it has to configure the correct classpath: We need the service itself ("backendUrl"), the backend runner for this specific service ("runnerUrl") and the generic backend runner ("backendRunnerUrl"). We also have to keep a reference to all backends, so we can stop them after the test suite has finished ("stopAllBackends()").
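Because each runner class lives in its own classloader, the helper cannot simply cast the loaded object to BackendRunner - that type would come from a different loader. It has to invoke "run" and "stop" reflectively. Here is a minimal, self-contained sketch of that reflective call; FakeRunner is a stand-in for illustration only, not part of the project:

```java
import java.lang.reflect.Method;

public class ReflectiveRunnerDemo {

    // stand-in for a real BackendRunner subclass (illustrative only)
    public static class FakeRunner {
        public boolean started;
        public void run() { started = true; }
        public void stop() { started = false; }
    }

    // start a runner we only know as Class<?>/Object, the way
    // FitnesseSuiteHelper has to, because the real runner class was
    // loaded by a different classloader
    static Object startViaReflection(Class<?> runnerClass) throws Exception {
        Object instance = runnerClass.newInstance();
        Method run = runnerClass.getMethod("run");
        run.invoke(instance);
        return instance;
    }

    public static void main(String[] args) throws Exception {
        FakeRunner runner = (FakeRunner) startViaReflection(FakeRunner.class);
        System.out.println(runner.started); // prints "true"
    }
}
```

In the real helper, the cast in main is not possible - the instance stays an Object, which is exactly why the Backend holder keeps both the Class and the instance around.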

The "BackendRunner" for each service is very simple: It only contains a constructor that configures the generic BackendRunner.


public class LocationsQueryBackendRunner extends BackendRunner {
    public LocationsQueryBackendRunner() {
        super(LocationsQueryBackendApplication.class, CustomizationBean.class);
    }
}

The CustomizationBean makes sure that each web application runs on a different port. We could add other beans in the constructor to further customize the service for testing.


public class CustomizationBean implements EmbeddedServletContainerCustomizer {
    @Override
    public void customize(ConfigurableEmbeddedServletContainer container) {
        // each servicerunner project has its own CustomizationBean,
        // with its own port (8081 here is just an example value)
        container.setPort(8081);
    }
}
And the generic BackendRunner does the real work:


public abstract class BackendRunner {
    private ConfigurableApplicationContext appContext;
    private final Class<?>[] backendClasses;

    private Object monitor = new Object();
    private boolean shouldWait;

    protected BackendRunner(final Class<?>... backendClasses) {
        this.backendClasses = backendClasses;
    }

    public void run() {
        if (appContext != null) {
            throw new IllegalStateException("AppContext must be null to run this backend");
        }
        runBackendInThread();
        waitUntilBackendIsStarted();
    }

    private void waitUntilBackendIsStarted() {
        try {
            synchronized (monitor) {
                if (shouldWait) {
                    monitor.wait();
                }
            }
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    private void runBackendInThread() {
        final Thread runnerThread = new BackendRunnerThread();
        shouldWait = true;
        // make sure Spring sees the classloader this runner was loaded
        // with, not the one inherited from the test's thread
        runnerThread.setContextClassLoader(getClass().getClassLoader());
        runnerThread.start();
    }

    public void stop() {
        appContext.close();
        appContext = null;
    }

    private class BackendRunnerThread extends Thread {
        public void run() {
            appContext = SpringApplication.run(backendClasses, new String[]{});
            synchronized (monitor) {
                shouldWait = false;
                monitor.notify();
            }
        }
    }
}
Actually this class only has to call appContext = SpringApplication.run(backendClasses, new String[]{}). But it has to do so on a new thread with the correct contextClassLoader, otherwise Spring would not pick up the correct classloader.

So we have to run the Spring Boot application in its own thread ("BackendRunnerThread") and also wait until it has finished starting up. We do this by waiting on a monitor in "waitUntilBackendIsStarted()" - the runner thread will call "monitor.notify()" when the application has started.
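The context classloader detail can be demonstrated in isolation: a new thread inherits its creator's context classloader unless we set one explicitly, which is why the runner thread has to set it before Spring starts. This demo class is illustrative only, not part of the project:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ContextClassLoaderDemo {

    // returns true if the started thread saw the loader we set
    // explicitly, instead of the one inherited from its creator
    static boolean threadSeesCustomLoader() throws InterruptedException {
        ClassLoader custom = new URLClassLoader(new URL[0]);
        boolean[] sawCustom = new boolean[1];
        Thread t = new Thread(() ->
            sawCustom[0] =
                Thread.currentThread().getContextClassLoader() == custom);
        // without this line, the thread would inherit the creating
        // thread's context classloader
        t.setContextClassLoader(custom);
        t.start();
        t.join();
        return sawCustom[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(threadSeesCustomLoader()); // prints "true"
    }
}
```

Remove the setContextClassLoader call and the thread sees the inherited loader instead - in our setup, that would make Spring resolve classes from the test's classpath rather than the service's.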

To Recap

We can start multiple Spring Boot web applications when

  • They all run in their own classloader.
  • They run in their own thread, so Spring can use the contextClassLoader of the thread.
  • They use different ports when running in the same VM.
  • The application under test (which calls the services) and the test fixtures do not have a direct reference to them.

The code to support this is actually surprisingly simple (at least IMHO), but you'll need some extra subprojects to configure the classpath correctly.

Do you have any questions or comments about this article? Please tell me!

I wrote a book about agile anti-patterns, and how you must improve your technical skills as well as your organization and management to get better - and thought you might be interested in it ;)

Do you want to get articles like this on a regular basis, in your mail? Subscribe here!

You might be also interested in:

  • Simple Design passes its Tests: How software design and testing go hand in hand.
  • Cheap plastic drills: Most people think construction workers should have great tools. A lot of people think paying more than 1000 Euros for an office chair is a waste of money, even for a software developer who uses it 8 hours a day. Good tools are expensive.
  • Mocks or Intermediate Results: What I Would Do: An answer to Kent Beck's article, where he wrote about how he uses intermediate results instead of mocks. I show an alternative approach.

Getting Rid of //FIXME


Do you write //FIXME or //TODO comments? I surely do. At least I did - right now I am trying to get rid of them, by replacing each one with a failing unit test - a test that is ignored. Here is how and why I do it.

Why Do We Write Those Comments?

When you work on a feature, you should concentrate on the happy path first. You should try to get some running code before thinking about all the gory little details. So you write:

//TODO handle invalid date range before marking the feature as complete

Sometimes you are gradually automating some manual process. In your MVP, you'll have to do some manual steps when there is a credit card charge-back. You leave this as a manual step, because it is a rare event, but you'll add a comment:

//TODO here we could extend the code to process charge-backs automatically

Sometimes some error handling is not even part of the current release. Yes, the app crashes when Google is down. No, we don't want to fix this right now - Google being down is a very rare event, we can live with this right now. Just add a FIXME comment:

//FIXME the app crashes when Google is down. We can live with that for 
//      now, but should fix it in a subsequent release.

One Step Back

Those comments are a TODO-List, scattered across the code. There are tools available to collect them, but you'll have to actively use those tools, so those comments are often forgotten.

We want those comments to be short-lived (for some definition of short). We should come back to the code in the future, fix it, and remove the comments. The second part is crucial - I have seen several examples where somebody fixed the code, but forgot to delete the comment. This is really confusing.

A Better Way

So, instead of writing a //TODO or //FIXME, try to formulate the TODO as a failing test next time. Also, tell the test system to ignore the test for now, because you are not planning to fix it right now - And you don't want to have failing tests in your test suite!

//TODO handle invalid date range before marking the feature as complete


@Test @Ignore("TODO - Before marking feature 1234 as complete")
public void doesNotAllowTheUserToProceedWhenInvalidDateRangeWasEntered() {
    fail("Implement me!");
}
//FIXME the app crashes when Google is down. We can live with that for
//      now, but should fix it in a subsequent release.


@Test @Ignore("FIXME")
public void degradesGracefullyWhenGoogleIsDown() {
    fail("Implement me - And also add more tests to describe exactly " +
         "how we should degrade gracefully");
}

Activate the test before you start to fix the problem. Then you can work in the normal TDD cycle while fixing it - Adding more tests as you need them. Add anything that has to be deleted when the problem was fixed to the @Ignore annotation - This way you'll have to delete it once you activate the test.

Sometimes It's Hard To Come Up With A Test

What about this comment:

//TODO This code is a mess. Clean up later.

You cannot really come up with a good test name from this description. But you could either refactor the code right away, or think about what exactly is wrong with the code, and derive a test name from that:

@Test @Ignore("TODO - Class has too many responsibilities, "+
              "should delegate some stuff to collaborators")
public void delegatesCalculatingCurrentInterestRateToTheGivenInterestRateCalculator() {
    fail("Implement me");
}


Most of the time, you can replace your //FIXME and //TODO comments with failing tests that are ignored (for now). The two main advantages of the latter approach are:

The test system will remind you about them. You don't have to actively extract the TODOs from your source code. The test system will report which tests were ignored.

You cannot forget to delete the comment. The tests remain in the system. First, they were a reminder of some work to do. And after you fixed the problem, they are a documentation of how the system works now.

And when you can't think of a way to replace a //FIXME or //TODO with a failing test, fix the problem right away.

Do you have any questions? Or Comments? Please contact me


You might be also interested in:

  • Cheap plastic drills: Most people think construction workers should have great tools. A lot of people think paying more than 1000 Euros for an office chair is a waste of money, even for a software developer who uses it 8 hours a day. Good tools are expensive.
  • Mocks or Intermediate Results: What I Would Do: An answer to Kent Beck's article, where he wrote about how he uses intermediate results instead of mocks. I show an alternative approach.
  • Simple Design passes its Tests: How software design and testing go hand in hand.

Back to Linux

A couple of weeks ago, I bought a new laptop. And with it, I switched completely back to Linux (Fedora Workstation 21). OK, I have Windows installed in VMware so I can run the two or three programs that absolutely need Windows. But I do all of my normal work in Linux now.

I knew from the start that this might cause some problems, compared to just using Windows or a Mac. And I was right: It took me more than a week to set up the system the way I want it. I wasted three or four days because of a stupid bug in the nouveau graphics driver that caused the laptop to crash every 10 minutes. But now everything is set up the way I want it, and I really enjoy working with it.

I know I would have loved to use a Mac. But I didn't buy one because the walls around Apple's walled garden seem to be getting higher. And I don't want to be inside. Also, the Lenovo was cheaper with better hardware. But that's not a major point against a Mac - Their hardware/software combo works really well.

In retrospect, I am glad I didn't stick with Windows, because of the whole "Superfish" problems and the Adware that comes pre-installed.

As I said, I really enjoy working with Linux again. I was especially surprised by how well Gnome - and the Gnome Shell - works now. 1Password also works reasonably well - the program runs in Wine and the Chrome extension just works. And tarsnap is a great offsite backup tool.

The only minor annoyance so far is that some programs don't work well on a high-resolution display. This happens mostly with Java programs (but some others too) - for example, in IntelliJ IDEA, the fonts are sometimes a little bit misaligned.

But overall, my experience is really positive so far!

This blog post was not sent to my newsletter. But, readers of my newsletter get most of my posts before anyone else - Especially posts I write for devteams.at and quickglance.at. Interested? Subscribe here!



My name is David Tanzer and I have been working as an independent software consultant since 2006. I help my clients to develop software right and to develop the right software by providing training, coaching and consulting for teams and individuals.

Learn more...
