Tuesday, December 31, 2019

Books I read 2019

January
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein

February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor

March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy

April
- Cat on a Hot Tin Roof, Tennessee Williams
- Closely Watched Trains (Ostře sledované vlaky), Bohumil Hrabal
- The Chalk Giants, Keith Roberts
- Behold the Man, Michael Moorcock
- Cutting It Short (Postřižiny), Bohumil Hrabal
- La bicicleta de Sumji (סומכי : סיפור לבני הנעורים על אהבה והרפתקאות), Amos Oz
- The Sheltering Sky, Paul Bowles

May
- Elsewhere, Perhaps (מקום אחר), Amos Oz
- Liminal Thinking: Create the Change You Want by Changing the Way You Think, Dave Gray

June
- The Turn of the Screw, Henry James
- A Tale of Love and Darkness (סיפור על אהבה וחושך), Amos Oz
- Touch the Water, Touch the Wind (לגעת במים, לגעת ברוח), Amos Oz
- The Third Man, The Fallen Idol, Graham Greene
- The Little Book of Stupidity: How We Lie to Ourselves and Don't Believe Others, Sia Mohajer
- Slaughterhouse-Five, or The Children's Crusade: A Duty-Dance with Death, Kurt Vonnegut
- The Little Town Where Time Stood Still (Městečko, kde se zastavil čas), Bohumil Hrabal

July
- Offshore, Petros Márkaris
- Martian Time-Slip, Philip K. Dick
- Give and Take: A Revolutionary Approach to Success, Adam Grant
- Object-Oriented Reengineering Patterns, Serge Demeyer, Stéphane Ducasse and Oscar Nierstrasz

August
- The Loneliness of the Long-Distance Runner (The Loneliness of the Long-Distance Runner, Uncle Ernest, Mr Raynor the Schoolteacher, The Fishing-boat Picture, Noah's Ark, On Saturday Afternoon, The Match, The Disgrace of Jim Scarfedale, The Decline and Fall of Frankie Buller), Alan Sillitoe
- My Michael (מיכאל שלי), Amos Oz
- The Sirens of Titan, Kurt Vonnegut
- Up the Junction, Nell Dunn
- A Taste of Honey, Shelagh Delaney
- Stumbling on Happiness, Daniel Todd Gilbert
- Surfacing, Margaret Atwood
- Soldados de Salamina, Javier Cercas

September
- Monday Begins on Saturday (Понедельник начинается в субботу), Arkady Strugatsky and Boris Strugatsky
- Warlight, Michael Ondaatje
- Agile Technical Practices Distilled: A Journey Toward Mastering Software Design, Pedro Moreira Santos, Marco Consolaro, and Alessandro Di Gioia
- Capitalist Realism, Mark Fisher
- Black Box Thinking: The Surprising Truth About Success, Matthew Syed
- Otra vida por vivir (Μια ζωή ακόμα), Theodor Kallifatides
- The Fine Art of Small Talk, Debra Fine

October
- Sally Heathcote. Sufragista (Sally Heathcote: Suffragette), Mary M. Talbot, Kate Charlesworth and Bryan Talbot
- Little Man, What Now? (Kleiner Mann – was nun?), Hans Fallada
- Ser Mujer Negra en España, Desirée Bela-Lobedde
- How to Fight, Thich Nhat Hanh
- Capitalism: A Ghost Story, Arundhati Roy
- The Mask of Dimitrios, Eric Ambler
- On the Shortness of Life (De brevitate vitae), Seneca
- Recordarán tu nombre, Lorenzo Silva

November
- Star Maker, Olaf Stapledon
- The Antidote: Happiness for People Who Can't Stand Positive Thinking, Oliver Burkeman

December
- Mr. Kafka: And Other Tales from the Time of the Cult (Inzerát na dům, ve kterém už nechci bydlet), Bohumil Hrabal
- A Kind of Loving, Stan Barstow
- If Beale Street Could Talk, James Baldwin
- Atomic Habits, James Clear
- El día que Nietzsche lloró (When Nietzsche Wept), Irvin D. Yalom
- Próxima estación, Atenas (Η Αθήνα της μιας διαδρομής), Petros Márkaris
- La noche de la iguana (The Night of the Iguana), Tennessee Williams
- The Bed of Procrustes: Philosophical and Practical Aphorisms, Nassim Nicholas Taleb
- Berlín. Ciudad de piedras (Berlin: City of Stones), Jason Lutes
- Animal Farm, George Orwell
- Meditaciones (Ta eis heauton), Marco Aurelio
- The Art of Controversy (Eristische Dialektik: Die Kunst, Recht zu behalten), Arthur Schopenhauer
- Malaherba, Manuel Jabois
- El llano en llamas, Juan Rulfo

Sunday, September 29, 2019

Books I read (January - September 2019)

January
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein

February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor

March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy

April
- Cat on a Hot Tin Roof, Tennessee Williams
- Closely Watched Trains (Ostře sledované vlaky), Bohumil Hrabal
- The Chalk Giants, Keith Roberts
- Behold the Man, Michael Moorcock
- Cutting It Short (Postřižiny), Bohumil Hrabal
- La bicicleta de Sumji (סומכי : סיפור לבני הנעורים על אהבה והרפתקאות), Amos Oz
- The Sheltering Sky, Paul Bowles

May
- Elsewhere, Perhaps (מקום אחר), Amos Oz
- Liminal Thinking: Create the Change You Want by Changing the Way You Think, Dave Gray

June
- The Turn of the Screw, Henry James
- A Tale of Love and Darkness (סיפור על אהבה וחושך), Amos Oz
- Touch the Water, Touch the Wind (לגעת במים, לגעת ברוח), Amos Oz
- The Third Man, The Fallen Idol, Graham Greene
- The Little Book of Stupidity: How We Lie to Ourselves and Don't Believe Others, Sia Mohajer
- Slaughterhouse-Five, or The Children's Crusade: A Duty-Dance with Death, Kurt Vonnegut
- The Little Town Where Time Stood Still (Městečko, kde se zastavil čas), Bohumil Hrabal

July
- Offshore, Petros Márkaris
- Martian Time-Slip, Philip K. Dick
- Give and Take: A Revolutionary Approach to Success, Adam Grant
- Object-Oriented Reengineering Patterns, Serge Demeyer, Stéphane Ducasse and Oscar Nierstrasz

August
- The Loneliness of the Long-Distance Runner (The Loneliness of the Long-Distance Runner, Uncle Ernest, Mr Raynor the Schoolteacher, The Fishing-boat Picture, Noah's Ark, On Saturday Afternoon, The Match, The Disgrace of Jim Scarfedale, The Decline and Fall of Frankie Buller), Alan Sillitoe
- My Michael (מיכאל שלי), Amos Oz
- The Sirens of Titan, Kurt Vonnegut
- Up the Junction, Nell Dunn
- A Taste of Honey, Shelagh Delaney
- Stumbling on Happiness, Daniel Todd Gilbert
- Surfacing, Margaret Atwood
- Soldados de Salamina, Javier Cercas

September
- Monday Begins on Saturday (Понедельник начинается в субботу), Arkady Strugatsky and Boris Strugatsky
- Warlight, Michael Ondaatje
- Agile Technical Practices Distilled: A Journey Toward Mastering Software Design, Pedro Moreira Santos, Marco Consolaro, and Alessandro Di Gioia
- Capitalist Realism, Mark Fisher
- Black Box Thinking: The Surprising Truth About Success, Matthew Syed
- Otra vida por vivir (Μια ζωή ακόμα), Theodor Kallifatides
- The Fine Art of Small Talk, Debra Fine

Monday, July 29, 2019

Playing Fizzbuzz with property-based testing

Introduction.

Lately, I’ve been playing a bit with property-based testing.

I practised doing the FizzBuzz kata in Clojure and used the following constraints for fun[1]:

  1. Add one property at a time before writing the code to make the property hold.
  2. Make the failing test pass before writing a new property.

The kata step by step.

To create the properties, I partitioned the first 100 integers according to how they are transformed by the code. This was very easy using two of the operations on sets that Clojure provides (difference and intersection).

The first property I wrote checks that the multiples of 3 but not 5 are Fizz:
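A sketch of how that first property might look, assuming test.check and a fizzbuzz function in a fizzbuzz.core namespace (these names are illustrative, not necessarily the repository's):

  (ns fizzbuzz.core-test
    (:require [clojure.set :as set]
              [clojure.test.check.clojure-test :refer [defspec]]
              [clojure.test.check.generators :as gen]
              [clojure.test.check.properties :as prop]
              [fizzbuzz.core :refer [fizzbuzz]]))

  ;; the first 100 integers partitioned according to how they are transformed
  (def first-100-integers (set (range 1 101)))
  (def multiples-of-3 (set (filter #(zero? (mod % 3)) first-100-integers)))
  (def multiples-of-5 (set (filter #(zero? (mod % 5)) first-100-integers)))

  (defspec multiples-of-only-3-are-fizz
    (prop/for-all
     [n (gen/elements (vec (set/difference multiples-of-3 multiples-of-5)))]
     (= "Fizz" (fizzbuzz n))))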

and this is the code that makes that test pass:
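A sketch of the simplest implementation that might make that property pass:

  (ns fizzbuzz.core)

  ;; the simplest thing that could possibly work for the first property
  (defn fizzbuzz [n]
    "Fizz")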

Next, I wrote a property to check that the multiples of 5 but not 3 are Buzz (I show only the new property for brevity):
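A sketch of that property, reusing the namespace and helper sets from the first snippet:

  (defspec multiples-of-only-5-are-buzz
    (prop/for-all
     [n (gen/elements (vec (set/difference multiples-of-5 multiples-of-3)))]
     (= "Buzz" (fizzbuzz n))))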

and this is the code that makes the new test pass:
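A sketch of an implementation that might make it pass; note that concatenating the two whens also happens to handle multiples of both 3 and 5, which explains the next step:

  (ns fizzbuzz.core)

  (defn fizzbuzz [n]
    (str (when (zero? (mod n 3)) "Fizz")
         (when (zero? (mod n 5)) "Buzz")))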

Then, I added a property to check that the multiples of 3 and 5 are FizzBuzz:
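A sketch of that property:

  (defspec multiples-of-3-and-5-are-fizzbuzz
    (prop/for-all
     [n (gen/elements (vec (set/intersection multiples-of-3 multiples-of-5)))]
     (= "FizzBuzz" (fizzbuzz n))))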

which was already passing with the existing production code.

Finally, I added a property to check that the rest of the numbers are just cast to a string:
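A sketch of that last property, using set/difference over several sets to get the numbers that are multiples of neither 3 nor 5:

  (defspec the-rest-of-numbers-are-cast-to-a-string
    (prop/for-all
     [n (gen/elements (vec (set/difference first-100-integers
                                           multiples-of-3
                                           multiples-of-5)))]
     (= (str n) (fizzbuzz n))))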

which I made pass with this version of the code:
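A sketch of a final implementation along those lines:

  (ns fizzbuzz.core)

  (defn fizzbuzz [n]
    (let [fizz-buzz (str (when (zero? (mod n 3)) "Fizz")
                         (when (zero? (mod n 5)) "Buzz"))]
      (if (seq fizz-buzz)
        fizz-buzz
        (str n))))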

The final result.

These are the resulting tests where you can see all the properties together:
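Assembled from the sketches above (an approximation of the real tests, not a copy):

  (ns fizzbuzz.core-test
    (:require [clojure.set :as set]
              [clojure.test.check.clojure-test :refer [defspec]]
              [clojure.test.check.generators :as gen]
              [clojure.test.check.properties :as prop]
              [fizzbuzz.core :refer [fizzbuzz]]))

  (def first-100-integers (set (range 1 101)))
  (def multiples-of-3 (set (filter #(zero? (mod % 3)) first-100-integers)))
  (def multiples-of-5 (set (filter #(zero? (mod % 5)) first-100-integers)))

  (defspec multiples-of-only-3-are-fizz
    (prop/for-all
     [n (gen/elements (vec (set/difference multiples-of-3 multiples-of-5)))]
     (= "Fizz" (fizzbuzz n))))

  (defspec multiples-of-only-5-are-buzz
    (prop/for-all
     [n (gen/elements (vec (set/difference multiples-of-5 multiples-of-3)))]
     (= "Buzz" (fizzbuzz n))))

  (defspec multiples-of-3-and-5-are-fizzbuzz
    (prop/for-all
     [n (gen/elements (vec (set/intersection multiples-of-3 multiples-of-5)))]
     (= "FizzBuzz" (fizzbuzz n))))

  (defspec the-rest-of-numbers-are-cast-to-a-string
    (prop/for-all
     [n (gen/elements (vec (set/difference first-100-integers
                                           multiples-of-3
                                           multiples-of-5)))]
     (= (str n) (fizzbuzz n))))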

You can find all the code in this repository.

Conclusions.

It was a lot of fun doing this kata. It is a toy example that didn’t require me to dive deeply into test.check’s generators documentation, because I could take advantage of Clojure’s set functions to write the properties.

I think the resulting properties are quite readable even if you don’t know Clojure. On the other hand, the resulting implementation is probably not similar to the ones you’re used to seeing, and it shows Clojure’s conciseness and expressiveness.

Footnotes:

[1] I'm not saying that you should do property-based testing with these constraints. They probably make no sense in real cases. The constraints were only meant to make the kata fun.

Saturday, June 29, 2019

Books I read (January - June 2019)

January
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein

February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor

March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy

April
- Cat on a Hot Tin Roof, Tennessee Williams
- Closely Watched Trains (Ostře sledované vlaky), Bohumil Hrabal
- The Chalk Giants, Keith Roberts
- Behold the Man, Michael Moorcock
- Cutting It Short (Postřižiny), Bohumil Hrabal
- La bicicleta de Sumji (סומכי : סיפור לבני הנעורים על אהבה והרפתקאות), Amos Oz
- The Sheltering Sky, Paul Bowles

May
- Elsewhere, Perhaps (מקום אחר), Amos Oz
- Liminal Thinking: Create the Change You Want by Changing the Way You Think, Dave Gray

June
- The Turn of the Screw, Henry James
- A Tale of Love and Darkness (סיפור על אהבה וחושך), Amos Oz
- Touch the Water, Touch the Wind (לגעת במים, לגעת ברוח), Amos Oz
- The Third Man, The Fallen Idol, Graham Greene
- The Little Book of Stupidity: How We Lie to Ourselves and Don't Believe Others, Sia Mohajer
- Slaughterhouse-Five, or The Children's Crusade: A Duty-Dance with Death, Kurt Vonnegut
- The Little Town Where Time Stood Still (Městečko, kde se zastavil čas), Bohumil Hrabal

Thursday, June 27, 2019

An example of listening to the tests to improve a design

Introduction.

Recently in the B2B team at LIFULL Connect, we improved the validation of the clicks our API receives using a service that detects whether a click was made by a bot or a human being.

So we used TDD to add this new validation to the previously existing validation that checked if the click contained all mandatory information. This was the resulting code:
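A hypothetical Java sketch of that kind of code (only the ClickValidation, ClickParamsValidator and BotClickDetector names come from the text; everything else is invented):

  import org.slf4j.Logger;

  // Procedural style: this class knows every concrete validation, the order
  // in which they run, and what gets logged when each of them fails.
  public class ClickValidation {
      private final ClickParamsValidator clickParamsValidator;
      private final BotClickDetector botClickDetector;
      private final Logger logger;

      public ClickValidation(ClickParamsValidator clickParamsValidator,
                             BotClickDetector botClickDetector,
                             Logger logger) {
          this.clickParamsValidator = clickParamsValidator;
          this.botClickDetector = botClickDetector;
          this.logger = logger;
      }

      public boolean isValid(Click click) {
          if (!clickParamsValidator.isValid(click)) {
              logger.warn("Click without mandatory parameters: " + click);
              return false;
          }
          if (botClickDetector.isBot(click)) {
              logger.warn("Click made by a bot: " + click);
              return false;
          }
          return true;
      }
  }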

and these were its tests:
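A Mockito-style sketch of the shape of one of those tests shows the problem: it knows the concrete collaborators, the order in which they are queried, and even the exact message that gets logged (names are invented, and the mocks are assumed to be created in the test's setup):

  @Test
  public void rejects_clicks_made_by_bots_and_logs_the_failure() {
      // clickParamsValidator, botClickDetector and logger are mocks
      when(clickParamsValidator.isValid(click)).thenReturn(true);
      when(botClickDetector.isBot(click)).thenReturn(true);

      assertFalse(clickValidation.isValid(click));

      // coupled to the exact logging side effect
      verify(logger).warn("Click made by a bot: " + click);
  }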

The problem with these tests is that they know too much. They are coupled to many implementation details: they not only know the concrete validations we apply to a click and the order in which they are applied, but also what gets logged when a concrete validation fails. There are multiple axes of change that will make these tests break. The tests are fragile against changes along those axes and, as such, they might become a future maintenance burden if changes along those axes are required.

So what might we do about that fragility when any of those changes comes along?

Improving the design to have less fragile tests.

As we said before, the test fragility was hinting at a design problem in the ClickValidation code. The problem is that it concentrates too much knowledge because it’s written in a procedural style: it queries every concrete validation to know whether the click is ok, combines the results of all those validations, and knows when to log validation failures. Those are too many responsibilities for ClickValidation, and they are the cause of the fragility in the tests.

We can revert this situation by changing to a more object-oriented implementation in which responsibilities are better distributed. Let’s see how that design might look:

1. Removing knowledge about logging.

After this change, ClickValidation will know nothing about logging. We can use the same technique to avoid knowing about any similar side effects which concrete validations might produce.

First we create an interface, ClickValidator, that any object that validates clicks should implement:
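A sketch of what that interface might look like (the method name is an assumption):

  public interface ClickValidator {
      boolean isValid(Click click);
  }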

Next we create a new class, NoBotClickValidator, that wraps the BotClickDetector and adapts[1] it to implement the ClickValidator interface. This wrapper also enriches BotClickDetector’s behavior by taking charge of logging in case the click is not valid.
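A hypothetical sketch of that wrapper:

  public class NoBotClickValidator implements ClickValidator {
      private final BotClickDetector botClickDetector;
      private final Logger logger;

      public NoBotClickValidator(BotClickDetector botClickDetector, Logger logger) {
          this.botClickDetector = botClickDetector;
          this.logger = logger;
      }

      @Override
      public boolean isValid(Click click) {
          boolean valid = !botClickDetector.isBot(click);
          if (!valid) {
              // the wrapper, not ClickValidation, now owns this side effect
              logger.warn("Click made by a bot: " + click);
          }
          return valid;
      }
  }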

These are the tests of NoBotClickValidator that take care of the delegation to BotClickDetector and the logging:

If we used NoBotClickValidator in ClickValidation, we’d remove all knowledge about logging from ClickValidation.

Of course, that knowledge would also disappear from its tests. By using the ClickValidator interface for all concrete validations and wrapping validations with side effects like logging, we’d make ClickValidation tests robust to changes involving some of the possible axes of change that were making them fragile:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.

2. Another improvement: don't use test doubles when it's not worth it[2].

There’s another way to make ClickValidation tests less fragile.

If we have a look at ClickParamsValidator and BotClickDetector (I can’t show their code here for security reasons), they have very different natures. ClickParamsValidator has no collaborators, no state and a very simple logic, whereas BotClickDetector has several collaborators, state and a complicated validation logic.

Stubbing ClickParamsValidator in ClickValidation tests is not giving us any benefit over directly using it, and it’s producing coupling between the tests and the code.

On the contrary, stubbing NoBotClickValidator (which wraps BotClickDetector) is really worth it, because, even though it also produces coupling, it makes ClickValidation tests much simpler.

Using a test double when you’d be better off using the real collaborator is a weakness in the design of the test, rather than in the code to be tested.

These would be the tests for the ClickValidation code with no logging knowledge, after applying this idea of not using test doubles for everything:

Notice how the tests now use the real ClickParamsValidator and how that reduces the coupling with the production code and makes the set up simpler.

3. Removing knowledge about the concrete sequence of validations.

After this change, the new design will compose validations in a way that will result in ClickValidation being only in charge of combining the result of a given sequence of validations.

First we refactor the click validation so that the validation is now done by composing several validations:
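A sketch of what that composition might look like:

  import java.util.List;

  public class ClickValidation {
      private final List<ClickValidator> validators;

      public ClickValidation(List<ClickValidator> validators) {
          this.validators = validators;
      }

      public boolean isValid(Click click) {
          // allMatch short-circuits on the first failing validation,
          // so this behaves like an && over the sequence of validators
          return validators.stream().allMatch(validator -> validator.isValid(click));
      }
  }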

The new validation code has several advantages over the previous one:

  • It does not depend on concrete validations anymore.
  • It does not depend on the order in which the validations are made.

It has only one responsibility: it applies several validations in sequence. If all of them are valid, it accepts the click; if any given validation fails, it rejects the click and stops applying the rest of the validations. If you think about it, it behaves like an and operator.

We may write these tests for this new version of the click validation:

These tests are robust to the changes that were making the initial version of the tests fragile, which we described in the introduction:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.
  3. Adding more validations.
  4. Changing the order of the validations.

However, this version of ClickValidationTest is so general and flexible that, using it, our tests would stop knowing which validations, and in which order, are applied to the clicks[3]. That sequence of validations is a business rule and, as such, we should protect it. We might keep this version of ClickValidationTest only if we had some outer test protecting the desired sequence of validations.

This other version of the tests, on the other hand, keeps protecting the business rule:

Notice how this version of the tests keeps in its setup the knowledge of which sequence of validations should be used, and how it only uses test doubles for NoBotClickValidator.

4. Avoid exposing internals.

The fact that we’re injecting into ClickValidation an object, ClickParamsValidator, that we realized we didn’t need to double, it’s a smell which points to the possibility that ClickParamsValidator is an internal detail of ClickValidation instead of its peer. So by injecting it, we’re coupling ClickValidation users, or at least the code that creates it, to an internal detail of ClickValidation: ClickParamsValidator.

A better version of this code would hide ClickParamsValidator by instantiating it inside ClickValidation’s constructor:
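A sketch of that version (assuming ClickParamsValidator’s constructor needs no arguments):

  import java.util.List;

  public class ClickValidation {
      private final List<ClickValidator> validators;

      public ClickValidation(NoBotClickValidator noBotClickValidator) {
          // ClickParamsValidator is an internal detail again: clients no longer
          // see it, and the sequence of validations lives inside ClickValidation
          this.validators = List.of(new ClickParamsValidator(), noBotClickValidator);
      }

      public boolean isValid(Click click) {
          return validators.stream().allMatch(validator -> validator.isValid(click));
      }
  }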

With this change, ClickValidation recovers the knowledge of the sequence of validations, which in the previous section was located in the code that created ClickValidation.

There are some stereotypes that can help us identify real collaborators (peers)[4]:

  1. Dependencies: services that the object needs from its environment so that it can fulfill its responsibilities.
  2. Notifications: other parts of the system that need to know when the object changes state or performs an action.
  3. Adjustments or Policies: objects that tweak or adapt the object’s behaviour to the needs of the system.

Following these stereotypes, we could argue that NoBotClickValidator is also an internal detail of ClickValidation and shouldn’t be exposed to the tests by injecting it. Hiding it, we’d arrive at this other version of ClickValidation:

in which we have to inject the real dependencies of the validation, and no internal details are exposed to the client code. This version is very similar to the one we’d have got using test doubles only for infrastructure.

The advantage of this version would be that its tests would know the least possible about ClickValidation. They’d know only ClickValidation’s boundaries, marked by the ports injected through its constructor, and ClickValidation’s public API. That would reduce the coupling between tests and production code, and facilitate refactorings of the validation logic.

The drawback is that the combinations of test cases in ClickValidationTest would grow, and many of those test cases would talk about situations happening in the validation boundaries that might be far apart from ClickValidation’s callers. This might make the tests hard to understand, especially if some of the validations have complex logic. When this problem gets severe, we can reduce it by injecting, and using test doubles for, very complex validators. This is a trade-off in which we accept some coupling with the internals of ClickValidation in order to improve the understandability of its tests. In our case, the bot detection was one of those complex components, so we decided to test it separately and inject it into ClickValidation so we could double it in ClickValidation’s tests, which is why we kept the penultimate version of ClickValidation, in which we were injecting the click-not-made-by-a-bot validation.

Conclusion.

In this post, we used an example to show how, by listening to the tests[5], we can detect possible design problems, and how we can use that feedback to improve both the design of our code and its tests when changes that expose those design problems are required.

In this case, the initial tests were fragile because the production code was procedural and had too many responsibilities. The tests were also fragile because they were using test doubles for some collaborators when it wasn’t worth it.

Then we showed how refactoring the original code to be more object-oriented, separating its responsibilities better, could remove some of the fragility of the tests. We also showed how reducing the use of test doubles to only those collaborators that really need to be substituted can improve the tests and reduce their fragility. Finally, we showed how we can go too far in trying to make the tests flexible and robust, accidentally stop protecting a business rule, and how a less flexible version of the tests can fix that.

When faced with fragility due to coupling between tests and the code being tested caused by using test doubles, it’s easy and very usual to “blame the mocks”, but, we believe, it’s more productive to listen to the tests and notice which improvements in our design they are suggesting. If we act on the feedback that test doubles give us about our design, we can use test doubles to our advantage, as powerful feedback tools[6] that help us improve our designs, instead of just suffering and blaming them.

Acknowledgements.

Many thanks to my Codesai colleagues Alfredo Casado, Fran Reyes, Antonio de la Torre and Manuel Tordesillas, and to my Aprendices colleagues Paulo Clavijo, Álvaro García and Fermin Saez for their feedback on the post, and to my colleagues at LIFULL Connect for all the mobs we enjoy together.

Footnotes:

[2] See Test Smell: Everything is mocked by Steve Freeman, where he talks about things you shouldn't be substituting with test doubles.
[3] Thanks Alfredo Casado for detecting that problem in the first version of the post.
[4] From Growing Object-Oriented Software, Guided by Tests > Chapter 6, Object-Oriented Style > Object Peer Stereotypes, page 52. You can also read about these stereotypes in a post by Steve Freeman: Object Collaboration Stereotypes.
[5] Difficulties in testing might be a hint of design problems. Have a look at this interesting series of posts about listening to the tests by Steve Freeman.
[6] According to Nat Pryce mocks were designed as a feedback tool for designing OO code following the 'Tell, Don't Ask' principle: "In my opinion it's better to focus on the benefits of different design styles in different contexts (there are usually many in the same system) and what that implies for modularisation and inter-module interfaces. Different design styles have different techniques that are most applicable for test-driving code written in those styles, and there are different tools that help you with those techniques. Those tools should give useful feedback about the external and *internal* quality of the system so that programmers can 'listen to the tests'. That's what we -- with the help of many vocal users over many years -- designed jMock to do for 'Tell, Don't Ask' object-oriented design." (from a conversation in Growing Object-Oriented Software Google Group).

I think that if your design follows a different OO style, it might be preferable to stick to a classical TDD style which nearly limits the use of test doubles only to infrastructure and undesirable side-effects.

Saturday, May 25, 2019

The curious case of the negative builder

Recently, one of the teams I’m coaching at my current client asked me to help them with a problem they were experiencing while using TDD to add and validate new mandatory query string parameters[1]. This is a shortened version (validating fewer parameters than the original code) of the tests they were having problems with:
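A hypothetical JUnit sketch of the shape of those tests (the parameter names are invented, and the original validated more parameters):

  @Test
  public void query_string_with_all_mandatory_parameters_is_valid() {
      String queryString = new QueryStringBuilder()
          .withApiKey("anApiKey")
          .withTimestamp("aTimestamp")
          .build();

      assertTrue(validator.isValid(queryString));
  }

  @Test
  public void query_string_without_api_key_is_not_valid() {
      String queryString = new QueryStringBuilder()
          .withTimestamp("aTimestamp")
          .build();

      assertFalse(validator.isValid(queryString));
  }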

and this is the implementation of the QueryStringBuilder used in this test:
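A sketch of such a typical fluent builder, keeping the invented parameter names from above:

  import java.util.LinkedHashMap;
  import java.util.Map;
  import java.util.stream.Collectors;

  public class QueryStringBuilder {
      private final Map<String, String> params = new LinkedHashMap<>();

      public QueryStringBuilder withApiKey(String apiKey) {
          params.put("apikey", apiKey);
          return this;
      }

      public QueryStringBuilder withTimestamp(String timestamp) {
          params.put("timestamp", timestamp);
          return this;
      }

      public String build() {
          return params.entrySet().stream()
              .map(entry -> entry.getKey() + "=" + entry.getValue())
              .collect(Collectors.joining("&"));
      }
  }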

which is a builder with a fluent interface that follows a typical implementation of the pattern to the letter. There are even libraries that help you automatically create this kind of builder[2].

However, in this particular case, implementing the QueryStringBuilder following this typical recipe causes a lot of problems. Looking at the test code, you can see why.

To add a new mandatory parameter, for example sourceId, following the TDD cycle, you would first write a new test asserting that a query string lacking the parameter should not be valid.

So far so good. The problem comes when you change the production code to make this test pass: at that moment you’ll see how the first test, which was asserting that a query string with all the parameters was valid, starts to fail (if you compare the query string of that test with the one in the new test, you’ll see that they are the same). Not only that: all the previous tests that were asserting that a query string was invalid because a given parameter was lacking won’t be “true” anymore, because after this change they could fail for more than one reason.

So to carry on, you’d need to fix the first test and also change all the previous ones so that they fail again only for the reason described in the test name:

That’s a lot of rework on the tests only for adding a new parameter, and the team had to add many more. The typical implementation of a builder was not helping them.

The problem we’ve just explained can be avoided by choosing a default value that creates a valid query string and using what I call “a negative builder”, a builder with methods that remove parts instead of adding them. So we refactored the initial version of the tests and the builder together, until we got to this new version of the tests:

which used a “negative” QueryStringBuilder:
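A sketch of the negative builder, again with invented parameter names — the default is a valid query string, and the building methods remove parts instead of adding them:

  import java.util.LinkedHashMap;
  import java.util.Map;
  import java.util.stream.Collectors;

  public class QueryStringBuilder {
      private final Map<String, String> params = new LinkedHashMap<>();

      // the default value is a query string with all the mandatory parameters
      public static QueryStringBuilder valid() {
          QueryStringBuilder builder = new QueryStringBuilder();
          builder.params.put("apikey", "anApiKey");
          builder.params.put("timestamp", "aTimestamp");
          return builder;
      }

      public QueryStringBuilder withoutApiKey() {
          params.remove("apikey");
          return this;
      }

      public QueryStringBuilder withoutTimestamp() {
          params.remove("timestamp");
          return this;
      }

      public String build() {
          return params.entrySet().stream()
              .map(entry -> entry.getKey() + "=" + entry.getValue())
              .collect(Collectors.joining("&"));
      }
  }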

After this refactoring, to add the sourceId we wrote this test instead:
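Sketched in the same style, with withoutSourceId as the new removal method:

  @Test
  public void query_string_without_source_id_is_not_valid() {
      String queryString = QueryStringBuilder.valid()
          .withoutSourceId()
          .build();

      assertFalse(validator.isValid(queryString));
  }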

which only requires updating the valid method in QueryStringBuilder and adding a method that removes the sourceId parameter from a valid query string.

Now when we changed the code to make this last test pass, no other test failed or started to have descriptions that were not true anymore.

Leaving behind the typical recipe and adapting the idea of the builder pattern to the context of the problem at hand led us to a curious implementation, a “negative builder”, that made the tests easier to maintain and improved our TDD flow.

Acknowledgements.

Many thanks to my Codesai colleagues Antonio de la Torre and Fran Reyes, and to all the colleagues of the Prime Services Team at LIFULL Connect for all the mobs we enjoy together.

Footnotes:

[1] Currently, this validation is not done in the controller anymore. The code shown above belongs to a very early stage of an API we're developing.
[2] Have a look, for instance, at lombok's @Builder annotation for Java.

Tuesday, May 14, 2019

Killing mutants to improve your tests

At my current client we’re working on having a frontend architecture for writing SPAs in JavaScript similar to re-frame’s one: an event-driven bus with effects and coeffects for state management[1] (commands) and subscriptions using reselect’s selectors (queries).

One of the pieces we have developed to achieve that goal is reffects-store. Using this store, React components can be subscribed to given reselect’s selectors, so that they only render when the values in the application state tracked by the selectors change.

After we finished writing the code for the store, we decided to use mutation testing to evaluate the quality of our tests. Mutation testing is a technique in which you introduce bugs (mutations) into your production code and then run your tests for each mutation. If your tests fail, that’s good: the mutation was “killed”, which means that the tests were able to defend you against the regression caused by the mutation. If they don’t, it means your tests are not defending you against that regression. The higher the percentage of mutations killed, the more effective your tests are.

There are tools that do this automatically; stryker[2] is one of them. When you run stryker, it will create many mutant versions of your production code and run your tests against each mutant (that’s what mutations are called in stryker’s documentation) version of the code. If your tests fail, the mutant is killed. If your tests pass, the mutant survived. Let’s have a look at the result of running stryker against reffects-store’s code:

Notice how stryker shows the details of every mutation that survived our tests, and look at the summary it produces at the end of the process.

All the surviving mutants were produced by mutations to the store.js file. Having a closer look at the mutations in stryker’s output, we found that the functions with mutant code were unsubscribeAllListeners and unsubscribeListener. After a quick check of their tests, it was easy to find out why unsubscribeAllListeners was having surviving mutants. Since it was a function we used only in tests, for cleaning the state after each test case was run, we had forgotten to test it.

However, finding out why unsubscribeListener mutants were surviving took us a bit more time and thinking. Let’s have a look at the tests that were exercising the code used to subscribe and unsubscribe listeners of state changes:

If we examine the mutations and the tests, we can see that the tests for unsubscribeListener are not good enough. They throw an exception from the subscribed function we unsubscribe, so that if the unsubscribeListener function doesn’t work and that function is called, the test fails. Unfortunately, the test also passes if that function is never called for any reason. In fact, most of the surviving mutants that stryker found above are variations on that idea.

A better way to test unsubscribeListener is using spies to verify that subscribed functions are called and unsubscribed functions are not (this version of the tests also includes a test for unsubscribeAllListeners):
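A hypothetical Jest-style sketch of such tests (the store API names — subscribeListener, unsubscribeListener, unsubscribeAllListeners, setState — are illustrative and may not match reffects-store’s real API):

  // store is assumed to be imported from the library under test
  it("does not call a listener after it has been unsubscribed", () => {
    const listener = jest.fn(); // a spy

    store.subscribeListener(listener);
    store.unsubscribeListener(listener);
    store.setState({ key: "newValue" });

    expect(listener).not.toHaveBeenCalled();
  });

  it("does not call any listener after unsubscribing all of them", () => {
    const firstListener = jest.fn();
    const secondListener = jest.fn();

    store.subscribeListener(firstListener);
    store.subscribeListener(secondListener);
    store.unsubscribeAllListeners();
    store.setState({ key: "newValue" });

    expect(firstListener).not.toHaveBeenCalled();
    expect(secondListener).not.toHaveBeenCalled();
  });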

After this change, when we ran stryker we got the following output:

No mutants survived!! This means this new version of the tests is more reliable and will protect us better from regressions than the initial version.

Mutation testing is a great tool to know if you can trust your tests. This is even more true when working with legacy code.

Acknowledgements.

Many thanks to Mario Sánchez and Alex Casajuana Martín for all the great time coding together, and thanks to Porapak Apichodilok for the photo used in this post and to Pexels.

Footnotes:

[1] See also reffects which is the synchronous event bus with effects and coeffects we wrote to manage the application state.
[2] The name of this tool comes from William Stryker, a fictional Marvel Comics supervillain obsessed with the eradication of all mutants.

Sunday, April 7, 2019

Interesting Talk: "In Search of Doors"

I've just watched this wonderful and inspiring talk by V.E. Schwab

The Beverages Prices Refactoring kata: a kata to practice refactoring away from an awful application of inheritance.

I created the Beverages Prices Refactoring kata for the Deliberate Practice Program I’m running at Lifull Connect offices in Barcelona (previously Trovit). Its goal is to practice refactoring away from a bad usage of inheritance.

The code computes the price of the different beverages that are sold in a coffee house. There are some supplements that can be added to those beverages, and each supplement increases the price a bit. Not all combinations of drinks and supplements are possible.

Just having a quick look at the tests of the initial code would give you an idea of the kind of problems it might have:

If that’s not enough, have a look at its inheritance hierarchy:
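An illustrative Java-style sketch of the shape of that hierarchy (the class names follow the kata’s domain; the prices are made up):

  // every drink/supplement combination gets its own class
  abstract class Beverage {
      abstract double price();
  }

  class Coffee extends Beverage {
      double price() { return 1.20; }
  }

  class CoffeeWithMilk extends Coffee {
      @Override
      double price() { return super.price() + 0.10; }
  }

  class CoffeeWithMilkAndCream extends CoffeeWithMilk {
      @Override
      double price() { return super.price() + 0.15; }
  }

  class Tea extends Beverage {
      double price() { return 1.50; }
  }

  // ... plus TeaWithMilk, TeaWithMilkAndCream, HotChocolate, etc. Adding a new
  // supplement forces yet another subclass for every existing combination.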

To make things worse, we are asked to add an optional cinnamon supplement that costs 0.05€ to all our existing catalog of beverages. We think we should refactor this code a bit before introducing the new feature.

We hope you have fun practicing refactoring with this kata.

Thursday, April 4, 2019

Books I read (January - March 2019)

January
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein

February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor

March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy

Monday, March 25, 2019

Our experience at ClojureBridge Bilbao 2018

ClojureBridge is an all-volunteer organization inspired by RailsBridge that provides free, introductory workshops to increase diversity in the Clojure community. I first learned about ClojureBridge thanks to a talk by Ali King at EuroClojure 2015 in Barcelona: ClojureBridge, Building a more diverse Clojure community. I really liked the idea and we (the Clojure Developers Barcelona) tried to organize one in Barcelona, but failed to actually do it because we lacked the numbers, money and time. So the moment I knew that my colleague at GreenPowerMonitor, Estíbaliz Rodríguez, and the company where she works, Magnet, were planning to organize a ClojureBridge edition in Bilbao for December 1st 2018, I decided to do my best to help.

ClojureBridge 2018 Bilbao introduction

I met Estíbaliz and José Ayudarte at the beginning of last summer, when they started working as ClojureScript developers for GreenPowerMonitor, and it’s been a great experience to work with them. They are part of Magnet which is a cooperative of developers, designers and consultants that work with Clojure and ClojureScript. Magnet’s team is distributed across Europe but most of them live in the Basque country.

I told them I’d like to participate as a volunteer. Actually I had already bought the tickets and booked an accommodation for that weekend before asking them. In the worst-case scenario, if I couldn’t participate in ClojureBridge, I’d at least spend a weekend in Bilbao which is a wonderful city. In the end, everything went well and they told me they were delighted to have me there. I participated in a couple of meetings to know the other volunteers and talk about how the event would be structured, the mentoring and the exercises. Magnet was sponsoring the event and most of its team worked very hard to make it possible.

So, on December 1st I was there trying to help people learn a bit of Clojure. There were many women and girls with very diverse backgrounds: professional developers that were using other languages, teenagers with a bit of programming experience from school or without, little girls and women of all ages that had no programming experience. At the beginning, Usoa Sasigain gave a talk to introduce ClojureBridge's goal of increasing diversity in technology, the important role women had played in the history of computer science and technology, and Clojure. She also talked about her personal history with technology and Clojure, and explained how technology might be a nice career option for women.

ClojureBridge 2018 Bilbao Grace Hopper slide

After the introduction, the participants were split into groups according to their programming experience. I was assigned to help the group of experienced software developers. They worked through the exercises in Maria Cloud, which is a beginner-friendly coding environment for Clojure, and I answered questions about Clojure and helped when they got stuck. I think they got to appreciate Clojure and the possibilities it offers. We had several coffee breaks during the morning and a lunch break in which we could talk about many things, and I had the opportunity to meet Asier Galdós, Amaia Guridi, Iván Perdomo and other members of Magnet. The lunch was very nice and sponsored by Magnet.

At the end of the first half of the event, some attendees had to go to have lunch with their families. A funny and lovely anecdote for me happened when two girls of about six or eight years old, who had been enthusiastically programming all morning, didn’t want to stop and leave with their parents for lunch. I remember with a lot of tenderness seeing them totally engrossed in programming during the morning and celebrating with raised arms every time they succeeded in changing the color or any other feature of the shapes they were working with in Maria Cloud’s exercises.

During the afternoon, we continued working on more advanced Maria Cloud exercises. There were fewer people, so I changed to work with some women that had no previous experience with programming. It was a very nice experience and we had a good time going through some more Maria Cloud exercises, playing with shapes.

ClojureBridge 2018 Bilbao John McCarthy slide

All in all, volunteering in ClojureBridge was a very beautiful experience for me. I enjoyed helping people learn Clojure and programming, and met many nice people with very different backgrounds. I learned several words in Basque. I also had some very interesting conversations with Asier, Usoa, José and Iván about Clojure, Magnet’s experience as a Clojure/ClojureScript cooperative, and the interesting platform, Hydrogen, that they are building. We also talked about my doing some desk surfing with them, an idea I was really excited about but, unfortunately, haven’t been able to do yet because I recently started working for a new client in Barcelona.

Before finishing I’d like to thank Magnet for making the first ClojureBridge in Spain possible and for letting me help. I’d also like to send love and good energy to Estíbaliz. I hope you recover soon and we can meet each other in Euskadi or somewhere else.

Thursday, January 3, 2019

Avoid using subscriptions only as app-state getters

Introduction.

Subscriptions in re-frame or re-om are query functions that extract data from the app-state and provide it to view functions in the right format. When we use subscriptions well, they provide a lot of value[1], because they avoid having to keep derived state in the app-state and they dumb down the views, which end up being simple “data in, screen out” functions.

However, things are not that easy. When you start working with subscriptions, it might happen that you end up using them as mere getters of app-state. This is a missed opportunity because using subscriptions in this way, we won’t take advantage of all the value they can provide, and we run the risk of leaking pieces of untested logic into our views.

An example.

We’ll illustrate this problem with a small real example written with re-om subscriptions (in an app using re-frame subscriptions the problem would look similar). Have a look at this piece of view code, in which some details have been elided for brevity’s sake:

this code is using subscriptions written in the horizon.domain.reports.dialogs.edit namespace.

The misuse of subscriptions we’d like to show appears in the following piece of the view:
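A hypothetical sketch in re-frame style (which, as said above, looks similar to the re-om original) gives the idea; the subscription and function names come from the post, the wiring is invented, and (:require [re-frame.core :as rf]) is assumed:

  (defn report-dialog-view []
    (let [delay-unit      @(rf/subscribe [:delay-unit])   ;; mere app-state getters
          start-at        @(rf/subscribe [:start-at])
          delay-number    @(rf/subscribe [:delay-number])
          date-modes      @(rf/subscribe [:date-modes])
          ;; untested logic leaked into the view:
          next-generation (get-next-generation (delay->interval delay-number delay-unit)
                                               start-at
                                               date-modes)]
      [:label (str "Next generation: " next-generation)]))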

Notice how the only thing we need to render this piece of view is next-generation. To compute its value, the code is using several subscriptions to get some values from the app-state and binding them to local vars (delay-unit, start-at, delay-number and date-modes). Those values are then fed to a couple of private functions also defined in the view (get-next-generation and delay->interval) to obtain the value of next-generation.

This is a bad use of subscriptions. Remember, subscriptions are query functions on the app-state that, used well, help to make views as dumb (with no logic) as possible. If you push as much logic as possible into subscriptions, you might achieve views that are so dumb you nearly don’t need to test them, and decide to limit your unit tests to subcutaneous testing of your SPA.

Refactoring: placing the logic in the right place.

We can refactor the code shown above to remove all the leaked logic from the view by writing only one subscription, called next-generation, which will produce the only information that the view needs. As a result, both the get-next-generation and delay->interval functions will get pushed into the logic behind the new next-generation subscription and disappear from the view.

This is the resulting view code after this refactoring:
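Sketched in the same re-frame style:

  (defn report-dialog-view []
    (let [next-generation @(rf/subscribe [:next-generation])]
      [:label (str "Next generation: " next-generation)]))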

and this is the resulting pure logic of the new subscription. Notice that, since the get-next-generation function wasn’t pure, we had to change it a bit to make it pure:
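A sketch of how that subscription might be registered (the app-state keys are invented):

  (rf/reg-sub
   :next-generation
   (fn [db _]
     (let [{:keys [delay-number delay-unit start-at date-modes]} db]
       ;; get-next-generation is now pure: it receives plain values and
       ;; returns a value, instead of reading from the app-state itself
       (get-next-generation (delay->interval delay-number delay-unit)
                            start-at
                            date-modes))))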

After this refactoring, the view is much dumber. The previously leaked (and untested) logic in the view (the get-next-generation and delay->interval functions) has been removed from it. Now that logic can be easily tested through the new next-generation subscription. This design is also much better than the previous one because it hides how we obtain the data that the view needs: now both the view and the tests ignore, and so are not coupled to, how that data is obtained. We might refactor both the app-state and the logic now in the get-next-generation and delay->interval functions without affecting the view. This is another example of how what is more stable than how.

Summary

The idea to remember is that subscriptions by themselves don’t make code more testable and maintainable. It’s the way we use subscriptions that produces better code. For that, the logic must be in the right place, which is not inside the view but behind the subscriptions that provide the data the view needs. If we keep writing “getter” subscriptions and placing logic in views, we won’t gain all the advantages the subscription concept provides, and we’ll write poorly designed views coupled to leaked bunches of (very likely untested) logic.

Acknowledgements.

Many thanks to André Stylianos Ramos and Fran Reyes for giving us great feedback to improve this post and for all the interesting conversations.

Footnotes:

[1] Subscriptions also make it easier to share code between different views and, in the case of re-frame (and soon re-om as well), they are optimized to minimize unnecessary re-renderings of the views and de-duplicate computations.

Interesting Talk: "Improving as developers"

I've just watched this wonderful talk by Belén Albeza