Record of experiments, readings, links, videos and other things that I find on the long road.
Tuesday, December 31, 2019
Books I read 2019
January
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein
February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor
March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy
April
- Cat on a Hot Tin Roof, Tennessee Williams
- Closely Watched Trains (Ostře sledované vlaky), Bohumil Hrabal
- The Chalk Giants, Keith Roberts
- Behold the Man, Michael Moorcock
- Cutting It Short (Postřižiny), Bohumil Hrabal
- La bicicleta de Sumji (סומכי : סיפור לבני הנעורים על אהבה והרפתקאות), Amos Oz
- The Sheltering Sky, Paul Bowles
May
- Elsewhere, Perhaps (מקום אחר), Amos Oz
- Liminal Thinking: Create the Change You Want by Changing the Way You Think, Dave Gray
June
- The Turn of the Screw, Henry James
- A Tale of Love and Darkness (סיפור על אהבה וחושך), Amos Oz
- Touch the Water, Touch the Wind (לגעת במים, לגעת ברוח), Amos Oz
- The Third Man, The Fallen Idol, Graham Greene
- The Little Book of Stupidity: How We Lie to Ourselves and Don't Believe Others, Sia Mohajer
- Slaughterhouse-Five, or The Children's Crusade: A Duty-Dance with Death, Kurt Vonnegut
- The Little Town Where Time Stood Still (Městečko, kde se zastavil čas), Bohumil Hrabal
July
- Offshore, Petros Márkaris
- Martian Time-Slip, Philip K. Dick
- Give and Take: A Revolutionary Approach to Success, Adam Grant
- Object-Oriented Reengineering Patterns, Serge Demeyer, Stéphane Ducasse and Oscar Nierstrasz
August
- The Loneliness of the Long-Distance Runner (The Loneliness of the Long-Distance Runner, Uncle Ernest, Mr Raynor the Schoolteacher, The Fishing-boat Picture, Noah's Ark, On Saturday Afternoon, The Match, The Disgrace of Jim Scarfedale, The Decline and Fall of Frankie Buller), Alan Sillitoe
- My Michael (מיכאל שלי), Amos Oz
- The Sirens of Titan, Kurt Vonnegut
- Up the Junction, Nell Dunn
- A Taste of Honey, Shelagh Delaney
- Stumbling on Happiness, Daniel Todd Gilbert
- Surfacing, Margaret Atwood
- Soldados de Salamina, Javier Cercas
September
- Monday Begins on Saturday (Понедельник начинается в субботу), Arkady Strugatsky and Boris Strugatsky
- Warlight, Michael Ondaatje
- Agile Technical Practices Distilled: A Journey Toward Mastering Software Design, Pedro Moreira Santos, Marco Consolaro, and Alessandro Di Gioia
- Capitalist Realism, Mark Fisher
- Black Box Thinking: The Surprising Truth About Success, Matthew Syed
- Otra vida por vivir (Μια ζωή ακόμα), Theodor Kallifatides
- The Fine Art of Small Talk, Debra Fine
October
- Sally Heathcote. Sufragista (Sally Heathcote: Suffragette), Mary M. Talbot, Kate Charlesworth and Bryan Talbot
- Little Man, What Now? (Kleiner Mann – was nun?), Hans Fallada
- Ser Mujer Negra en España, Desirée Bela-Lobedde
- How to Fight, Thich Nhat Hanh
- Capitalism: A Ghost Story, Arundhati Roy
- The Mask of Dimitrios, Eric Ambler
- On the Shortness of Life (De brevitate vitae), Seneca
- Recordarán tu nombre, Lorenzo Silva
November
- Star Maker, Olaf Stapledon
- The Antidote: Happiness for People Who Can't Stand Positive Thinking, Oliver Burkeman
December
- Mr. Kafka: And Other Tales from the Time of the Cult (Inzerát na dům, ve kterém už nechci bydlet), Bohumil Hrabal
- A Kind of Loving, Stan Barstow
- If Beale Street Could Talk, James Baldwin
- Atomic Habits, James Clear
- El día que Nietzsche lloró (When Nietzsche Wept), Irvin D. Yalom
- Próxima estación, Atenas (Η Αθήνα της μιας διαδρομής), Petros Márkaris
- La noche de la iguana (The Night of the Iguana), Tennessee Williams
- The Bed of Procrustes: Philosophical and Practical Aphorisms, Nassim Nicholas Taleb
- Berlín. Ciudad de piedras (Berlin: City of Stones), Jason Lutes
- Animal Farm, George Orwell
- Meditaciones (Ta eis heauton), Marco Aurelio
- The Art of Controversy (Eristische Dialektik: Die Kunst, Recht zu behalten), Arthur Schopenhauer
- Malaherba, Manuel Jabois
- El llano en llamas, Juan Rulfo
Sunday, December 1, 2019
Interesting Podcast: "POODR And Beyond - Part I"
Sunday, September 29, 2019
Books I read (January - September 2019)
January
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein
February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor
March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy
April
- Cat on a Hot Tin Roof, Tennessee Williams
- Closely Watched Trains (Ostře sledované vlaky), Bohumil Hrabal
- The Chalk Giants, Keith Roberts
- Behold the Man, Michael Moorcock
- Cutting It Short (Postřižiny), Bohumil Hrabal
- La bicicleta de Sumji (סומכי : סיפור לבני הנעורים על אהבה והרפתקאות), Amos Oz
- The Sheltering Sky, Paul Bowles
May
- Elsewhere, Perhaps (מקום אחר), Amos Oz
- Liminal Thinking: Create the Change You Want by Changing the Way You Think, Dave Gray
June
- The Turn of the Screw, Henry James
- A Tale of Love and Darkness (סיפור על אהבה וחושך), Amos Oz
- Touch the Water, Touch the Wind (לגעת במים, לגעת ברוח), Amos Oz
- The Third Man, The Fallen Idol, Graham Greene
- The Little Book of Stupidity: How We Lie to Ourselves and Don't Believe Others, Sia Mohajer
- Slaughterhouse-Five, or The Children's Crusade: A Duty-Dance with Death, Kurt Vonnegut
- The Little Town Where Time Stood Still (Městečko, kde se zastavil čas), Bohumil Hrabal
July
- Offshore, Petros Márkaris
- Martian Time-Slip, Philip K. Dick
- Give and Take: A Revolutionary Approach to Success, Adam Grant
- Object-Oriented Reengineering Patterns, Serge Demeyer, Stéphane Ducasse and Oscar Nierstrasz
August
- The Loneliness of the Long-Distance Runner (The Loneliness of the Long-Distance Runner, Uncle Ernest, Mr Raynor the Schoolteacher, The Fishing-boat Picture, Noah's Ark, On Saturday Afternoon, The Match, The Disgrace of Jim Scarfedale, The Decline and Fall of Frankie Buller), Alan Sillitoe
- My Michael (מיכאל שלי), Amos Oz
- The Sirens of Titan, Kurt Vonnegut
- Up the Junction, Nell Dunn
- A Taste of Honey, Shelagh Delaney
- Stumbling on Happiness, Daniel Todd Gilbert
- Surfacing, Margaret Atwood
- Soldados de Salamina, Javier Cercas
September
- Monday Begins on Saturday (Понедельник начинается в субботу), Arkady Strugatsky and Boris Strugatsky
- Warlight, Michael Ondaatje
- Agile Technical Practices Distilled: A Journey Toward Mastering Software Design, Pedro Moreira Santos, Marco Consolaro, and Alessandro Di Gioia
- Capitalist Realism, Mark Fisher
- Black Box Thinking: The Surprising Truth About Success, Matthew Syed
- Otra vida por vivir (Μια ζωή ακόμα), Theodor Kallifatides
- The Fine Art of Small Talk, Debra Fine
Sunday, August 4, 2019
Interesting Talk: "What we talk about when we talk about software"
Interesting Talk: "Property-Based Testing The Ugly Parts: Case Studies from Komposition"
Monday, July 29, 2019
Playing Fizzbuzz with property-based testing
Introduction.
Lately, I’ve been playing a bit with property-based testing.
I practised doing the FizzBuzz kata in Clojure and used the following constraints for fun[1]:
- Add one property at a time before writing the code to make the property hold.
- Make the failing test pass before writing a new property.
The kata step by step.
To create the properties, I partitioned the first 100 integers according to how they are transformed by the code. This was very easy using two of the operations on sets that Clojure provides (difference and intersection).
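The same partition can be sketched in Python (shown here purely as an illustration for readers who don't know Clojure) using the built-in set operations:

```python
# Partition 1..100 by how FizzBuzz transforms each number, using set
# operations (the Python analogue of clojure.set/difference and intersection).
multiples_of_3 = set(range(3, 101, 3))
multiples_of_5 = set(range(5, 101, 5))

fizz_only = multiples_of_3 - multiples_of_5   # numbers that become "Fizz"
buzz_only = multiples_of_5 - multiples_of_3   # numbers that become "Buzz"
fizzbuzz = multiples_of_3 & multiples_of_5    # numbers that become "FizzBuzz"
plain = set(range(1, 101)) - multiples_of_3 - multiples_of_5  # become str(n)

# The four classes cover all of 1..100 without overlapping.
assert fizz_only | buzz_only | fizzbuzz | plain == set(range(1, 101))
assert len(fizz_only) + len(buzz_only) + len(fizzbuzz) + len(plain) == 100
```

Each of the four sets then feeds one property: generate any element of the set and check its expected transformation.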
The first property I wrote checks that the multiples of 3 but not 5 are Fizz:
(ns fizzbuzz-pbt.core-test
  (:require
   [midje.sweet :refer :all]
   [clojure.set :as set]
   [clojure.test.check.generators :as gen]
   [midje.experimental :refer [for-all]]
   [fizzbuzz-pbt.core :as sut]))

(defn- multiples-of [n]
  (iterate #(+ % n) n))

(defn- fizzbuzz-for [n]
  (nth (sut/fizzbuzz) (dec n)))

(def multiples-of-3 (set (take-while #(< % 101) (multiples-of 3))))
(def multiples-of-5 (set (take-while #(< % 101) (multiples-of 5))))

(facts
 "about fizzbuzz"

 (fact
  "multiples of 3 but not 5 are Fizz"
  (for-all
   [n (gen/elements
       (set/difference
        multiples-of-3
        multiples-of-5))]
   {:num-tests 100}
   (fizzbuzz-for n) => "Fizz")))
and this is the code that makes that test pass:
(ns fizzbuzz-pbt.core)

(defn fizzbuzz []
  (take 100 (cycle ["" "" "Fizz"])))
Next, I wrote a property to check that the multiples of 5 but not 3 are Buzz (I show only the new property for brevity):
(ns fizzbuzz-pbt.core-test
  (:require
   ;; other requires
   [fizzbuzz-pbt.core :as sut]))

;; ...
;; some helpers
;; ...

(facts
 "about fizzbuzz"

 ;; ...
 ;; previous properties
 ;; ...

 (fact
  "multiples of 5 but not 3 are Buzz"
  (for-all
   [n (gen/elements
       (set/difference
        multiples-of-5
        multiples-of-3))]
   {:num-tests 100}
   (fizzbuzz-for n) => "Buzz")))
and this is the code that makes the new test pass:
(ns fizzbuzz-pbt.core)

(defn fizzbuzz []
  (take 100 (map #(str %1 %2)
                 (cycle ["" "" "Fizz"])
                 (cycle ["" "" "" "" "Buzz"]))))
Then, I added a property to check that the multiples of 3 and 5 are FizzBuzz:
(ns fizzbuzz-pbt.core-test
  (:require
   ;; other requires
   [fizzbuzz-pbt.core :as sut]))

;; ...
;; some helpers
;; ...

(facts
 "about fizzbuzz"

 ;; ...
 ;; previous properties
 ;; ...

 (fact
  "multiples of 3 and 5 are FizzBuzz"
  (for-all
   [n (gen/elements
       (set/intersection
        multiples-of-5
        multiples-of-3))]
   {:num-tests 100}
   (fizzbuzz-for n) => "FizzBuzz")))
which was already passing with the existing production code.
Finally, I added a property to check that the rest of the numbers are just cast to strings:
(ns fizzbuzz-pbt.core-test
  (:require
   ;; other requires
   [fizzbuzz-pbt.core :as sut]))

;; ...
;; some helpers
;; ...

(facts
 "about fizzbuzz"

 ;; ...
 ;; previous properties
 ;; ...

 (fact
  "the rest of the numbers are cast to string"
  (for-all
   [n (gen/elements
       (set/difference
        (set (range 1 101))
        multiples-of-3
        multiples-of-5))]
   {:num-tests 100}
   (fizzbuzz-for n) => (str n))))
which I'd made pass with this version of the code:
(ns fizzbuzz-pbt.core
  (:require
   [clojure.string :as string]))

(defn fizzbuzz []
  (map #(string/replace (str %1 %2) #"^$" (str %3))
       (cycle ["" "" "Fizz"])
       (cycle ["" "" "" "" "Buzz"])
       (range 1 101)))
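For readers who don't know Clojure, the same cycle-and-replace idea can be sketched in Python (an illustrative translation, not part of the kata):

```python
from itertools import cycle

def fizzbuzz():
    # Concatenate two cycling word lists; an empty result falls back to the
    # number itself, mirroring the Clojure version's string/replace of "^$".
    fizzes = cycle(["", "", "Fizz"])
    buzzes = cycle(["", "", "", "", "Buzz"])
    return [f + b or str(n)
            for n, f, b in zip(range(1, 101), fizzes, buzzes)]

print(fizzbuzz()[:5])  # ['1', '2', 'Fizz', '4', 'Buzz']
```

The trick is the same in both languages: the Fizz and Buzz periods line up by cycling, and the string concatenation produces "FizzBuzz" for free wherever both cycles hit.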
The final result.
These are the resulting tests where you can see all the properties together:
(ns fizzbuzz-pbt.core-test
  (:require
   [midje.sweet :refer :all]
   [clojure.set :as set]
   [clojure.test.check.generators :as gen]
   [midje.experimental :refer [for-all]]
   [fizzbuzz-pbt.core :as sut]))

(defn- multiples-of [n]
  (iterate #(+ % n) n))

(defn- fizzbuzz-for [n]
  (nth (sut/fizzbuzz) (dec n)))

(def multiples-of-3 (set (take-while #(< % 101) (multiples-of 3))))
(def multiples-of-5 (set (take-while #(< % 101) (multiples-of 5))))

(facts
 "about fizzbuzz"

 (fact
  "multiples of 3 but not 5 are Fizz"
  (for-all
   [n (gen/elements
       (set/difference
        multiples-of-3
        multiples-of-5))]
   {:num-tests 100}
   (fizzbuzz-for n) => "Fizz"))

 (fact
  "multiples of 5 but not 3 are Buzz"
  (for-all
   [n (gen/elements
       (set/difference
        multiples-of-5
        multiples-of-3))]
   {:num-tests 100}
   (fizzbuzz-for n) => "Buzz"))

 (fact
  "multiples of 3 and 5 are FizzBuzz"
  (for-all
   [n (gen/elements
       (set/intersection
        multiples-of-5
        multiples-of-3))]
   {:num-tests 100}
   (fizzbuzz-for n) => "FizzBuzz"))

 (fact
  "the rest of the numbers are cast to string"
  (for-all
   [n (gen/elements
       (set/difference
        (set (range 1 101))
        multiples-of-3
        multiples-of-5))]
   {:num-tests 100}
   (fizzbuzz-for n) => (str n))))
You can find all the code in this repository.
Conclusions.
It was a lot of fun doing this kata. It's a toy example that didn't make me dive much into test.check's generators documentation, because I could take advantage of Clojure's set functions to write the properties.
I think the resulting properties are quite readable even if you don't know Clojure. The resulting implementation, on the other hand, is probably not similar to the ones you're used to seeing, and it shows Clojure's conciseness and expressiveness.
Footnotes:
Saturday, June 29, 2019
Books I read (January - June 2019)
January
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein
February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor
March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy
April
- Cat on a Hot Tin Roof, Tennessee Williams
- Closely Watched Trains (Ostře sledované vlaky), Bohumil Hrabal
- The Chalk Giants, Keith Roberts
- Behold the Man, Michael Moorcock
- Cutting It Short (Postřižiny), Bohumil Hrabal
- La bicicleta de Sumji (סומכי : סיפור לבני הנעורים על אהבה והרפתקאות), Amos Oz
- The Sheltering Sky, Paul Bowles
May
- Elsewhere, Perhaps (מקום אחר), Amos Oz
- Liminal Thinking: Create the Change You Want by Changing the Way You Think, Dave Gray
June
- The Turn of the Screw, Henry James
- A Tale of Love and Darkness (סיפור על אהבה וחושך), Amos Oz
- Touch the Water, Touch the Wind (לגעת במים, לגעת ברוח), Amos Oz
- The Third Man, The Fallen Idol, Graham Greene
- The Little Book of Stupidity: How We Lie to Ourselves and Don't Believe Others, Sia Mohajer
- Slaughterhouse-Five, or The Children's Crusade: A Duty-Dance with Death, Kurt Vonnegut
- The Little Town Where Time Stood Still (Městečko, kde se zastavil čas), Bohumil Hrabal
Thursday, June 27, 2019
An example of listening to the tests to improve a design
Introduction.
Recently, in the B2B team at LIFULL Connect, we improved the validation of the clicks our API receives by using a service that detects whether a click was made by a bot or a human being.
We used TDD to add this new validation to the existing validation that checked whether the click contained all the mandatory information. This was the resulting code:
<?php

namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
    private $botClickDetector;
    private $domainLogger;
    private $paramsValidator;

    public function __construct(
        BotClickDetector $botClickDetector,
        DomainLogger $domainLogger,
        ClickParamsValidator $paramsValidator
    )
    {
        $this->botClickDetector = $botClickDetector;
        $this->domainLogger = $domainLogger;
        $this->paramsValidator = $paramsValidator;
    }

    public function isValid(array $click)
    {
        if (!$this->paramsValidator->isValid($click)) {
            return false;
        }

        $isBotClick = $this->botClickDetector->isBot($click['userIp']);
        if ($isBotClick) {
            $this->domainLogger->logBotClick($click);
            return false;
        }

        return true;
    }
}
and these were its tests:
<?php

namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Prophecy\Argument;
use Trovit\B2B\AdClick\Domain\BotClickDetector;
use Trovit\B2B\AdClick\Domain\ClickParamsValidator;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Domain\DomainLogger;
use Trovit\B2B\AdClick\Tests\helper\ClickRawDataBuilder;

class ClickValidationTest extends TestCase
{
    private $click;
    private $botDetector;
    private $domainLogger;
    private $clickValidation;
    private $paramsValidator;

    protected function setUp()
    {
        $this->click = ClickRawDataBuilder::clickMandatoryRawData()->build();
        $this->domainLogger = $this->prophesize(DomainLogger::class);
        $this->botDetector = $this->prophesize(BotClickDetector::class);
        $this->paramsValidator = $this->prophesize(ClickParamsValidator::class);
        $this->clickValidation = new ClickValidation(
            $this->botDetector->reveal(),
            $this->domainLogger->reveal(),
            $this->paramsValidator->reveal()
        );
    }

    /** @test */
    public function a_click_by_a_bot_is_not_valid_and_is_logged()
    {
        $this->paramsValidator->isValid(Argument::any())->willReturn(true);
        $this->botDetector->isBot($this->click['userIp'])->willReturn(true);

        $isValid = $this->clickValidation->isValid($this->click);

        $this->assertFalse($isValid);
        $this->domainLogger->logBotClick($this->click)->shouldHaveBeenCalledOnce();
    }

    /** @test */
    public function a_click_by_human_is_valid()
    {
        $this->paramsValidator->isValid(Argument::any())->willReturn(true);
        $this->botDetector->isBot($this->click['userIp'])->willReturn(false);

        $isValid = $this->clickValidation->isValid($this->click);

        $this->assertTrue($isValid);
        $this->domainLogger->logBotClick($this->click)->shouldNotHaveBeenCalled();
    }

    /** @test */
    public function a_click_missing_a_mandatory_param_is_not_valid_but_is_not_logged()
    {
        $this->paramsValidator->isValid(Argument::any())->willReturn(false);

        $isValid = $this->clickValidation->isValid($this->click);

        $this->assertFalse($isValid);
        $this->domainLogger->logBotClick($this->click)->shouldNotHaveBeenCalled();
    }
}
The problem with these tests is that they know too much. They are coupled to many implementation details: they know not only the concrete validations we apply to a click and the order in which they are applied, but also what gets logged when a concrete validation fails. There are multiple axes of change that will make these tests break. The tests are fragile along those axes and, as such, they might become a maintenance burden whenever changes along them are required.
So what might we do about that fragility when any of those changes come?
Improving the design to have less fragile tests.
As we said before, the test fragility was hinting at a design problem in the ClickValidation code. The problem is that it concentrates too much knowledge: it is written in a procedural style in which it queries every concrete validation to know whether the click is ok, combines the results of all those validations, and knows when to log validation failures. Those are too many responsibilities for ClickValidation, and they are the cause of the fragility in the tests.
We can revert this situation by changing to a more object-oriented implementation in which responsibilities are better distributed. Let’s see how that design might look:
1. Removing knowledge about logging.
After this change, ClickValidation will know nothing about logging. We can use the same technique to avoid knowing about any similar side-effects that concrete validations might produce.
First we create an interface, ClickValidator, that any object that validates clicks should implement:
<?php

namespace Trovit\B2B\AdClick\Domain;

interface ClickValidator {
    function isValid(array $click);
}
Next we create a new class, NoBotClickValidator, that wraps the BotClickDetector and adapts[1] it to implement the ClickValidator interface. This wrapper also enriches BotClickDetector's behavior by taking charge of logging when the click is not valid.
<?php

namespace Trovit\B2B\AdClick\Domain;

class NoBotClickValidator implements ClickValidator
{
    private $botClickDetector;
    private $domainLogger;

    public function __construct(
        BotClickDetector $botClickDetector,
        DomainLogger $domainLogger
    )
    {
        $this->botClickDetector = $botClickDetector;
        $this->domainLogger = $domainLogger;
    }

    public function isValid(array $click)
    {
        $clickMadeByBot = $this->botClickDetector->isBot($click['userIp']);
        if ($clickMadeByBot) {
            $this->domainLogger->logBotClick($click);
            return false;
        }
        return true;
    }
}
These are the tests of NoBotClickValidator, which take care of the delegation to BotClickDetector and the logging:
<?php

namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Trovit\B2B\AdClick\Domain\BotClickDetector;
use Trovit\B2B\AdClick\Domain\NoBotClickValidator;
use Trovit\B2B\AdClick\Domain\DomainLogger;

class NoBotClickValidatorTest extends TestCase
{
    private $click;
    private $botDetector;
    private $domainLogger;
    private $noBotClickValidator;

    protected function setUp()
    {
        $this->click = ['userIp' => 'someUserIp'];
        $this->domainLogger = $this->prophesize(DomainLogger::class);
        $this->botDetector = $this->prophesize(BotClickDetector::class);
        $this->noBotClickValidator = new NoBotClickValidator(
            $this->botDetector->reveal(),
            $this->domainLogger->reveal()
        );
    }

    /** @test */
    public function when_bot_is_detected_click_is_invalid_and_problem_is_logged()
    {
        $this->botDetector->isBot($this->click['userIp'])->willReturn(true);

        $isValid = $this->noBotClickValidator->isValid($this->click);

        $this->assertFalse($isValid);
        $this->domainLogger->logBotClick($this->click)->shouldHaveBeenCalledOnce();
    }

    /** @test */
    public function when_no_bot_is_detected_click_is_valid_and_nothing_is_logged()
    {
        $this->botDetector->isBot($this->click['userIp'])->willReturn(false);

        $isValid = $this->noBotClickValidator->isValid($this->click);

        $this->assertTrue($isValid);
        $this->domainLogger->logBotClick($this->click)->shouldNotHaveBeenCalled();
    }
}
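The shape of this wrapper (an adapter that implements the validator interface and keeps the logging side-effect inside) can be sketched in Python; the detector and logger below are hypothetical in-memory stand-ins for illustration, not the real classes:

```python
class NoBotClickValidator:
    """Adapts a bot detector to the validator interface and logs bot clicks."""

    def __init__(self, bot_click_detector, domain_logger):
        self._bot_click_detector = bot_click_detector
        self._domain_logger = domain_logger

    def is_valid(self, click):
        if self._bot_click_detector.is_bot(click["userIp"]):
            # The side-effect lives here, hidden from the composing code.
            self._domain_logger.log_bot_click(click)
            return False
        return True


# Hypothetical stand-ins, just to exercise the wrapper.
class StubDetector:
    def __init__(self, is_bot_result):
        self._result = is_bot_result

    def is_bot(self, user_ip):
        return self._result


class SpyLogger:
    def __init__(self):
        self.logged_clicks = []

    def log_bot_click(self, click):
        self.logged_clicks.append(click)


logger = SpyLogger()
validator = NoBotClickValidator(StubDetector(True), logger)
print(validator.is_valid({"userIp": "someUserIp"}))  # False
print(len(logger.logged_clicks))  # 1
```

Whatever the language, the point is the same: code that composes validators now only sees is_valid, and the logging stays an internal detail of the wrapper.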
If we used NoBotClickValidator in ClickValidation, we'd remove all knowledge about logging from ClickValidation.
<?php

namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
    private $noBotClickValidator;
    private $paramsValidator;

    public function __construct(
        ClickValidator $noBotClickValidator,
        ClickValidator $paramsValidator
    )
    {
        $this->noBotClickValidator = $noBotClickValidator;
        $this->paramsValidator = $paramsValidator;
    }

    public function isValid(array $click)
    {
        return $this->paramsValidator->isValid($click) &&
            $this->noBotClickValidator->isValid($click);
    }
}
Of course, that knowledge would also disappear from its tests. By using the ClickValidator interface for all concrete validations and wrapping validations with side-effects like logging, we'd make the ClickValidation tests robust to changes along some of the axes of change that were making them fragile:
- Changing the interface of any of the individual validations.
- Adding side-effects to any of the validations.
2. Another improvement: don't use test doubles when it's not worth it[2].
There's another way to make the ClickValidation tests less fragile.
If we have a look at ClickParamsValidator and BotClickDetector (I can't show their code here for security reasons), they have very different natures. ClickParamsValidator has no collaborators, no state and very simple logic, whereas BotClickDetector has several collaborators, state and complicated validation logic.
Stubbing ClickParamsValidator in the ClickValidation tests gives us no benefit over using it directly, and it produces coupling between the tests and the code.
On the contrary, stubbing NoBotClickValidator (which wraps BotClickDetector) is really worth it because, even though it also produces coupling, it makes the ClickValidation tests much simpler.
Using a test double when you'd be better off using the real collaborator is a weakness in the design of the test, rather than in the code being tested.
These would be the tests for the ClickValidation code with no logging knowledge, after applying this idea of not using test doubles for everything:
<?php

namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Trovit\B2B\AdClick\Domain\ClickParamsValidator;
use Trovit\B2B\AdClick\Domain\NoBotClickValidator;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Tests\helper\ClickRawDataBuilder;

class ClickValidationTest extends TestCase
{
    private $noBotClickValidator;
    private $clickValidation;

    protected function setUp()
    {
        $this->noBotClickValidator = $this->prophesize(NoBotClickValidator::class);
        $this->clickValidation = new ClickValidation(
            $this->noBotClickValidator->reveal(),
            new ClickParamsValidator()
        );
    }

    /** @test */
    public function a_click_lacking_any_mandatory_parameter_is_not_valid()
    {
        $clickLackingParameters = $this->clickWithAllParameters()
            ->withoutCountry()->build();
        $this->noBotClickValidator->isValid($clickLackingParameters)
            ->willReturn(true);

        $isValid = $this->clickValidation->isValid($clickLackingParameters);

        $this->assertFalse($isValid);
    }

    /** @test */
    public function a_click_made_by_a_bot_is_not_valid()
    {
        $clickWithAllParameters = $this->clickWithAllParameters()->build();
        $this->noBotClickValidator->isValid($clickWithAllParameters)
            ->willReturn(false);

        $isValid = $this->clickValidation->isValid($clickWithAllParameters);

        $this->assertFalse($isValid);
    }

    /** @test */
    public function a_click_with_all_parameters_and_made_by_a_human_is_valid()
    {
        $clickWithAllParameters = $this->clickWithAllParameters()->build();
        $this->noBotClickValidator->isValid($clickWithAllParameters)
            ->willReturn(true);

        $isValid = $this->clickValidation->isValid($clickWithAllParameters);

        $this->assertTrue($isValid);
    }

    private function clickWithAllParameters() {
        return ClickRawDataBuilder::clickMandatoryRawData();
    }
}
Notice how the tests now use the real ClickParamsValidator, which reduces the coupling with the production code and makes the setup simpler.
3. Removing knowledge about the concrete sequence of validations.
After this change, the new design will compose validations so that ClickValidation is only in charge of combining the results of a given sequence of validations.
First, we refactor the click validation so that it is done by composing several validations:
<?php

namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
    private $validators;

    public function __construct(array $validators)
    {
        $this->validators = $validators;
    }

    public function isValid(array $click)
    {
        foreach ($this->validators as $validator) {
            if (!$validator->isValid($click)) {
                return false;
            }
        }
        return true;
    }
}
The new validation code has several advantages over the previous one:
- It no longer depends on concrete validations.
- It does not depend on the order in which the validations are made.
It has only one responsibility: it applies several validations in sequence; if all of them pass, it accepts the click, but if any validation fails, it rejects the click and stops applying the rest of the validations. If you think about it, it's behaving like an and operator.
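That and-operator behavior can be sketched in Python (an illustrative translation, not the project's code), where the whole loop collapses to a short-circuiting all():

```python
class ClickValidation:
    """Combines a sequence of validators like a chain of `and`s."""

    def __init__(self, validators):
        self._validators = validators

    def is_valid(self, click):
        # all() over a generator short-circuits: the first failing validator
        # rejects the click and none of the later validators run.
        return all(v.is_valid(click) for v in self._validators)


# Hypothetical stub validator, just to show the short-circuiting.
class StubValidator:
    def __init__(self, result):
        self.result = result
        self.was_called = False

    def is_valid(self, click):
        self.was_called = True
        return self.result


first, second = StubValidator(False), StubValidator(True)
validation = ClickValidation([first, second])
print(validation.is_valid({}))  # False
print(second.was_called)        # False: the second validator never ran
```

The short-circuiting matters: later validators (which may be expensive, like a bot-detection service call) are skipped as soon as one validation fails.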
We may write these tests for this new version of the click validation:
<?php

namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Prophecy\Argument;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Domain\ClickValidator;

class ClickValidationTest extends TestCase
{
    private $click;
    private $validator1;
    private $validator2;
    private $clickValidation;

    protected function setUp()
    {
        $this->click = [];
        $this->validator1 = $this->prophesize(ClickValidator::class);
        $this->validator2 = $this->prophesize(ClickValidator::class);
        $this->clickValidation = new ClickValidation(
            [$this->validator1->reveal(), $this->validator2->reveal()]
        );
    }

    /** @test */
    public function when_any_validator_returns_false_the_click_is_not_valid()
    {
        $this->validator1->isValid($this->click)->willReturn(false);
        $this->validator2->isValid($this->click)->willReturn(true);

        $isValid = $this->clickValidation->isValid($this->click);

        $this->assertFalse($isValid);
    }

    /** @test */
    public function when_all_validators_return_true_the_click_is_valid()
    {
        $this->validator1->isValid($this->click)->willReturn(true);
        $this->validator2->isValid($this->click)->willReturn(true);

        $isValid = $this->clickValidation->isValid($this->click);

        $this->assertTrue($isValid);
    }
}
These tests are robust to the changes, described in the introduction, that made the initial version of the tests fragile:
- Changing the interface of any of the individual validations.
- Adding side-effects to any of the validations.
- Adding more validations.
- Changing the order of the validations.
However, this version of ClickValidationTest is so general and flexible that, using it, our tests would stop knowing which validations are applied to the clicks, and in which order[3]. That sequence of validations is a business rule and, as such, we should protect it. We might keep this version of ClickValidationTest only if we had some outer test protecting the desired sequence of validations.
This other version of the tests, on the other hand, keeps protecting the business rule:
<?php

namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Domain\ClickParamsValidator;
use Trovit\B2B\AdClick\Domain\NoBotClickValidator;
use Trovit\B2B\AdClick\Tests\helper\ClickRawDataBuilder;

class ClickValidationTest extends TestCase
{
    private $noBotClickValidator;
    private $clickValidation;

    protected function setUp()
    {
        $this->noBotClickValidator = $this->prophesize(NoBotClickValidator::class);
        $this->clickValidation = new ClickValidation(
            [new ClickParamsValidator(), $this->noBotClickValidator->reveal()]
        );
    }

    /** @test */
    public function a_click_lacking_any_mandatory_parameter_is_not_valid()
    {
        $clickLackingParameters = $this->clickWithAllParameters()
            ->withoutCountry()->build();
        $this->noBotClickValidator->isValid($clickLackingParameters)
            ->willReturn(true);

        $isValid = $this->clickValidation->isValid($clickLackingParameters);

        $this->assertFalse($isValid);
    }

    /** @test */
    public function a_click_by_a_bot_is_not_valid()
    {
        $clickWithAllParameters = $this->clickWithAllParameters()->build();
        $this->noBotClickValidator->isValid($clickWithAllParameters)
            ->willReturn(false);

        $isValid = $this->clickValidation->isValid($clickWithAllParameters);

        $this->assertFalse($isValid);
    }

    /** @test */
    public function a_click_by_a_human_with_all_parameters_is_valid()
    {
        $clickWithAllParameters = $this->clickWithAllParameters()->build();
        $this->noBotClickValidator->isValid($clickWithAllParameters)
            ->willReturn(true);

        $isValid = $this->clickValidation->isValid($clickWithAllParameters);

        $this->assertTrue($isValid);
    }

    private function clickWithAllParameters() {
        return ClickRawDataBuilder::clickMandatoryRawData();
    }
}
Notice how this version of the tests keeps in its setup the knowledge of which sequence of validations should be used, and how it only uses a test double for NoBotClickValidator.
4. Avoid exposing internals.
The fact that we’re injecting into ClickValidation an object, ClickParamsValidator, that we realized we didn’t need to double is a smell which points to the possibility that ClickParamsValidator is an internal detail of ClickValidation instead of one of its peers. So by injecting it, we’re coupling ClickValidation’s users, or at least the code that creates it, to an internal detail of ClickValidation: ClickParamsValidator.
A better version of this code would hide ClickParamsValidator by instantiating it inside ClickValidation’s constructor:
<?php

namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
    private $validators;

    public function __construct(ClickValidator $noBotClickValidator)
    {
        $this->validators = [new ClickParamsValidator(), $noBotClickValidator];
    }

    public function isValid(array $click)
    {
        foreach ($this->validators as $validator) {
            if (!$validator->isValid($click)) {
                return false;
            }
        }
        return true;
    }
}
With this change, ClickValidation recovers the knowledge of the sequence of validations which in the previous section was located in the code that created ClickValidation.
There are some stereotypes that can help us identify real collaborators (peers)[4]:
- Dependencies: services that the object needs from its environment so that it can fulfill its responsibilities.
- Notifications: other parts of the system that need to know when the object changes state or performs an action.
- Adjustments or Policies: objects that tweak or adapt the object’s behaviour to the needs of the system.
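For instance, these stereotypes could look like this in a small invented example (not taken from the codebase in this post; all the names are hypothetical):

```javascript
// Invented example to illustrate the three collaborator stereotypes.
class OrderProcessor {
  // paymentGateway is a dependency: a service the object needs to fulfill its job.
  // auditTrail is a notification: it is told when something relevant happens.
  // pricingPolicy is a policy: it tweaks the object's behaviour for the system.
  constructor(paymentGateway, auditTrail, pricingPolicy) {
    this.paymentGateway = paymentGateway;
    this.auditTrail = auditTrail;
    this.pricingPolicy = pricingPolicy;
  }

  process(order) {
    const price = this.pricingPolicy.finalPrice(order); // policy adjusts behaviour
    this.paymentGateway.charge(order.id, price);        // dependency does the work
    this.auditTrail.orderProcessed(order.id);           // notification is informed
  }
}
```

Collaborators that fit one of these stereotypes are worth injecting and, in tests, doubling; helpers that fit none of them are probably internal details.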
Following these stereotypes, we could argue that NoBotClickValidator is also an internal detail of ClickValidation and shouldn’t be exposed to the tests by injecting it. Hiding it, we’d arrive at this other version of ClickValidation:
<?php

namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
    private $validators;

    public function __construct(
        DomainLogger $domainLogger,
        Clock $clock,
        ClickPersistence $clickPersistence)
    {
        $this->validators = [
            new ClickParamsValidator(),
            NoBotClickValidator::create($clock, $clickPersistence, $domainLogger)];
    }

    public function isValid(array $click)
    {
        foreach ($this->validators as $validator) {
            if (!$validator->isValid($click)) {
                return false;
            }
        }
        return true;
    }
}
in which we have to inject the real dependencies of the validation, and no internal details are exposed to the client code. This version is very similar to the one we’d have got using test doubles only for infrastructure.
The advantage of this version would be that its tests would know the least possible about ClickValidation. They’d know only ClickValidation’s boundaries, marked by the ports injected through its constructor, and ClickValidation’s public API. That will reduce the coupling between tests and production code, and facilitate refactorings of the validation logic.
The drawback is that the combinations of test cases in ClickValidationTest would grow, and many of those test cases would talk about situations happening in the validation boundaries that might be far apart from ClickValidation’s callers. This might make the tests hard to understand, especially if some of the validations have complex logic. When this problem gets severe, we may reduce it by injecting and using test doubles for the most complex validators. This is a trade-off in which we decide to accept some coupling with the internals of ClickValidation in order to improve the understandability of its tests. In our case, the bot detection was one of those complex components, so we decided to test it separately and inject it into ClickValidation so we could double it in ClickValidation’s tests, which is why we kept the penultimate version of ClickValidation in which we were injecting the click-not-made-by-a-bot validation.
Conclusion.
In this post, we tried to play with an example to show how, by listening to the tests[5], we can detect possible design problems, and how we can use that feedback to improve both the design of our code and its tests when changes that expose those design problems are required.
In this case, the initial tests were fragile because the production code was procedural and had too many responsibilities. The tests were also fragile because they were using test doubles for some collaborators when it wasn’t worth doing so.
Then we showed how refactoring the original code to be more object-oriented, separating its responsibilities better, removed some of the fragility of the tests. We also showed how reducing the use of test doubles to only those collaborators that really need to be substituted can improve the tests and reduce their fragility. Finally, we showed how we can go too far in trying to make the tests flexible and robust and accidentally stop protecting a business rule, and how a less flexible version of the tests can fix that.
When faced with fragility due to coupling between tests and the code being tested caused by using test doubles, it’s easy and very usual to “blame the mocks”, but, we believe, it is more productive to listen to the tests to notice which improvements in our design they are suggesting. If we act on the feedback that test doubles give us about our design, we can use test doubles to our advantage, as powerful feedback tools[6] that help us improve our designs, instead of just suffering and blaming them.
Acknowledgements.
Many thanks to my Codesai colleagues Alfredo Casado, Fran Reyes, Antonio de la Torre and Manuel Tordesillas, and to my Aprendices colleagues Paulo Clavijo, Álvaro García and Fermin Saez for their feedback on the post, and to my colleagues at LIFULL Connect for all the mobs we enjoy together.
Footnotes:
I think that if your design follows a different OO style, it might be preferable to stick to a classical TDD style, which limits the use of test doubles almost exclusively to infrastructure and undesirable side effects.
Monday, June 24, 2019
Interesting Talk: "Solving Problems the Clojure Way"
Saturday, May 25, 2019
The curious case of the negative builder
Recently, one of the teams I’m coaching at my current client asked me to help them with a problem they were experiencing while using TDD to add and validate new mandatory query string parameters[1]. This is a shortened version (validating fewer parameters than the original code) of the tests they were having problems with:
<?php

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class DefaultControllerTest extends WebTestCase
{
    /** @test */
    public function should_returns_200_when_all_mandatory_parameters_are_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withCountry("us")
                ->withVertical(1)
                ->withBrand("someBrand")->build()
        );

        $this->assertEquals(200, $client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_returns_400_when_brand_is_not_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withCountry("us")
                ->withVertical(1)->build()
        );

        $this->assertEquals(400, $client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_returns_400_when_country_is_not_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withBrand("someBrand")
                ->withVertical(2)->build()
        );

        $this->assertEquals(400, $client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_returns_400_when_vertical_is_not_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withCountry("es")
                ->withBrand("someBrand")->build()
        );

        $this->assertEquals(400, $client->getResponse()->getStatusCode());
    }

    function queryString(): QueryStringBuilder
    {
        return new QueryStringBuilder();
    }
}
and this is the implementation of the QueryStringBuilder used in this test:
<?php

class QueryStringBuilder
{
    const CONNECTOR = '&';
    const START = '/?';

    private $query;

    public function build(): string
    {
        return self::START . $this->query;
    }

    public function withBrand($brand): QueryStringBuilder
    {
        $this->query .= "brand={$brand}" . self::CONNECTOR;
        return $this;
    }

    public function withCountry($country): QueryStringBuilder
    {
        $this->query .= "country={$country}" . self::CONNECTOR;
        return $this;
    }

    public function withVertical($vertical): QueryStringBuilder
    {
        $this->query .= "vertical={$vertical}" . self::CONNECTOR;
        return $this;
    }
}
which is a builder with a fluent interface that follows a typical implementation of the pattern to the letter. There are even libraries that help you create this kind of builder automatically[2].
However, in this particular case, implementing the QueryStringBuilder following this typical recipe causes a lot of problems. Looking at the test code, you can see why.
To add a new mandatory parameter, for example sourceId, following the TDD cycle, you would first write a new test asserting that a query string lacking the parameter should not be valid.
<?php

///......

/** @test */
public function should_returns_400_when_source_id_is_not_present()
{
    $client = static::createClient();

    $client->request(
        'GET',
        $this->queryString()
            ->withCountry("us")
            ->withVertical(1)
            ->withBrand("someBrand")->build()
    );

    $this->assertEquals(400, $client->getResponse()->getStatusCode());
}

///......
So far so good. The problem comes when you change the production code to make this test pass: at that moment you’ll see how the first test, the one asserting that a query string with all the parameters was valid, starts to fail (if you compare the query string of that test with the one in the new test, you’ll see that they are the same). Not only that: all the previous tests that were asserting that a query string was invalid because a given parameter was lacking won’t be “true” anymore, because after this change they could fail for more than one reason.
So to carry on, you’d need to fix the first test and also change all the previous ones so that they fail again only for the reason described in the test name:
<?php

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class DefaultControllerTest extends WebTestCase
{
    /** @test */
    public function should_returns_200_when_all_mandatory_parameters_are_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withCountry("us")
                ->withVertical(1)
                ->withBrand("someBrand")
                ->withSourceId(5)->build()
        );

        $this->assertEquals(200, $client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_returns_400_when_brand_is_not_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withCountry("us")
                ->withVertical(1)
                ->withSourceId(5)->build()
        );

        $this->assertEquals(400, $client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_returns_400_when_country_is_not_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withBrand("someBrand")
                ->withVertical(2)
                ->withSourceId(5)->build()
        );

        $this->assertEquals(400, $client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_returns_400_when_vertical_is_not_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withCountry("es")
                ->withBrand("someBrand")
                ->withSourceId(5)->build()
        );

        $this->assertEquals(400, $client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_returns_400_when_source_id_is_not_present()
    {
        $client = static::createClient();

        $client->request(
            'GET',
            $this->queryString()
                ->withCountry("us")
                ->withVertical(1)
                ->withBrand("someBrand")->build()
        );

        $this->assertEquals(400, $client->getResponse()->getStatusCode());
    }

    function queryString(): QueryStringBuilder
    {
        return new QueryStringBuilder();
    }
}
That’s a lot of rework on the tests just to add a new parameter, and the team had to add many more. The typical implementation of a builder was not helping them.
The problem we’ve just explained can be avoided by choosing a default value that creates a valid query string and using what I call “a negative builder”: a builder with methods that remove parts instead of adding them. So we refactored the initial version of the tests and the builder together, until we got to this new version of the tests:
<?php

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class DefaultControllerTest extends WebTestCase
{
    private $client;

    protected function setUp()
    {
        $this->client = static::createClient();
        parent::setUp();
    }

    /** @test */
    public function should_succeed_when_all_mandatory_parameters_are_present()
    {
        $this->client->request('GET', queryString()->build());

        $this->assertEquals(200, $this->client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_fail_when_brand_is_not_present()
    {
        $this->client->request(
            'GET',
            queryString()->withoutBrand()->build()
        );

        $this->assertEquals(400, $this->client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_fail_when_country_is_not_present()
    {
        $this->client->request(
            'GET',
            queryString()->withoutCountry()->build()
        );

        $this->assertEquals(400, $this->client->getResponse()->getStatusCode());
    }

    /** @test */
    public function should_fail_when_vertical_is_not_present()
    {
        $this->client->request(
            'GET',
            queryString()->withoutVertical()->build()
        );

        $this->assertEquals(400, $this->client->getResponse()->getStatusCode());
    }
}

function queryString(): QueryStringBuilder
{
    return QueryStringBuilder::valid();
}
which used a “negative” QueryStringBuilder:
<?php

class QueryStringBuilder
{
    const CONNECTOR = '&';
    const START = '/?';

    private $brand;
    private $vertical;
    private $country;

    public static function valid()
    {
        $builder = new QueryStringBuilder();
        $builder->brand = "1";
        $builder->vertical = "2";
        $builder->country = "es";
        return $builder;
    }

    public function build()
    {
        return self::START .
            $this->addParameter("brand", $this->brand) .
            $this->addParameter("vertical", $this->vertical) .
            $this->addParameter("country", $this->country);
    }

    public function withoutBrand()
    {
        $this->brand = null;
        return $this;
    }

    public function withoutCountry()
    {
        $this->country = null;
        return $this;
    }

    public function withoutVertical()
    {
        $this->vertical = null;
        return $this;
    }

    private function addParameter($name, $value): string
    {
        if (is_null($value)) {
            return "";
        }
        return "{$name}={$value}" . self::CONNECTOR;
    }
}
After this refactoring, to add the sourceId parameter we wrote this test instead:
<?php

///......

/** @test */
public function should_returns_400_when_source_id_is_not_present()
{
    $this->client->request(
        'GET',
        queryString()->withoutSourceId()->build()
    );

    $this->assertEquals(400, $this->client->getResponse()->getStatusCode());
}

///......
which only involves updating the valid method in QueryStringBuilder and adding a method that removes the sourceId parameter from a valid query string.
Now when we changed the code to make this last test pass, no other test failed or started to have descriptions that were not true anymore.
Leaving behind the typical recipe and adapting the idea of the builder pattern to the context of the problem at hand, led us to a curious implementation, a “negative builder”, that made the tests easier to maintain and improved our TDD flow.
Acknowledgements.
Many thanks to my Codesai colleagues Antonio de la Torre and Fran Reyes, and to all the colleagues of the Prime Services Team at LIFULL Connect for all the mobs we enjoy together.
Footnotes:
Tuesday, May 14, 2019
Killing mutants to improve your tests
At my current client we’re working on having a frontend architecture for writing SPAs in JavaScript similar to re-frame’s one: an event-driven bus with effects and coeffects for state management[1] (commands) and subscriptions using reselect’s selectors (queries).
One of the pieces we have developed to achieve that goal is reffects-store. Using this store, React components can subscribe to given reselect selectors, so that they only render when the values in the application state tracked by the selectors change.
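To give an idea of the store’s shape, here is a minimal sketch of a store with listener subscriptions. This is illustrative only: it is not reffects-store’s actual implementation (which also tracks paths and integrates with reselect), and getState is added here just to make the sketch self-contained.

```javascript
// Minimal sketch of a store with listener subscriptions (illustrative only,
// NOT reffects-store's actual code).
let state = {};
let listeners = [];

function initialize(initialState) {
  state = initialState;
}

function getState() {
  return state;
}

function subscribeListener(listener) {
  listeners.push(listener);
}

function unsubscribeListener(listener) {
  listeners = listeners.filter(l => l !== listener);
}

function unsubscribeAllListeners() {
  listeners = [];
}

function setState({ path, newValue }) {
  // Simplified: sets the value at a one-level path and notifies every listener.
  state = { ...state, [path[0]]: newValue };
  listeners.forEach(listener => listener(state));
}
```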
After we finished writing the code for the store, we decided to use mutation testing to evaluate the quality of our tests. Mutation testing is a technique in which you introduce bugs (mutations) into your production code and then run your tests for each mutation. If your tests fail, that’s good: the mutation was “killed”, which means the tests were able to defend you against the regression caused by the mutation. If they don’t fail, your tests are not defending you against that regression. The higher the percentage of mutations killed, the more effective your tests are.
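As a tiny hand-made illustration of the idea (an invented example, unrelated to reffects-store):

```javascript
// Original production code:
function isAdult(age) {
  return age >= 18;
}

// A typical mutation flips the comparison operator (>= becomes >):
function isAdultMutant(age) {
  return age > 18;
}

// A test suite that only checks isAdult(30) cannot tell the two versions
// apart, so this mutant would survive. A boundary-value test kills it,
// because isAdult(18) is true while isAdultMutant(18) is false.
```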
There are tools that do this automatically; stryker[2] is one of them. When you run stryker, it will create many mutant versions of your production code and run your tests against each mutant version of the code (“mutant” is what a mutation is called in stryker’s documentation). If your tests fail, the mutant is killed. If your tests pass, the mutant survives. Let’s have a look at the result of running stryker against reffects-store’s code:
$ npm run test:mutants

> reffects-store@0.1.0 test:mutants /home/trikitrok/todos-spike/reffects-store
> stryker run

21:38:12 (4489) INFO ConfigReader Using stryker.conf.js in the current working directory.
21:38:12 (4489) INFO InputFileResolver Found 3 of 13 file(s) to be mutated.
21:38:12 (4489) INFO InitialTestExecutor Starting initial test run. This may take a while.
21:38:15 (4489) INFO InitialTestExecutor Initial test run succeeded. Ran 23 tests in 2 seconds (net 125 ms, overhead 2331 ms).
21:38:15 (4489) INFO MutatorFacade 41 Mutant(s) generated
21:38:15 (4489) INFO SandboxPool Creating 8 test runners (based on CPU count)
Mutation testing  [==================================================] 100% (ETC n/a) 41/41 tested (6 survived)

15. [Survived] Block
/home/trikitrok/todos-spike/reffects-store/src/store.js:38:63
- listeners.forEach(function iterateListeners(currentListener) {
-   if (currentListener === listener) {
-     listener = null;
-   } else {
-     out.push(currentListener);
-   }
- });
+ listeners.forEach(function iterateListeners(currentListener) {});
Ran all tests for this mutant.

18. [Survived] IfStatement
/home/trikitrok/todos-spike/reffects-store/src/store.js:39:8
- if (currentListener === listener) {
+ if (true) {
Ran all tests for this mutant.

19. [Survived] BinaryExpression
/home/trikitrok/todos-spike/reffects-store/src/store.js:39:8
- if (currentListener === listener) {
+ if (currentListener !== listener) {
Ran all tests for this mutant.

20. [Survived] Block
/home/trikitrok/todos-spike/reffects-store/src/store.js:39:38
- if (currentListener === listener) {
-   listener = null;
- } else {
+ if (currentListener === listener) {} else {
Ran all tests for this mutant.

21. [Survived] Block
/home/trikitrok/todos-spike/reffects-store/src/store.js:41:11
- } else {
-   out.push(currentListener);
- }
+ } else {}
Ran all tests for this mutant.

22. [Survived] Block
/home/trikitrok/todos-spike/reffects-store/src/store.js:53:33
- export function unsubscribeAll() {
-   listeners = [];
- }
+ export function unsubscribeAll() {}
Ran all tests for this mutant.

Ran 15.27 tests per mutant on average.

------------------|---------|----------|-----------|------------|----------|---------|
File              | % score | # killed | # timeout | # survived | # no cov | # error |
------------------|---------|----------|-----------|------------|----------|---------|
All files         |   85.37 |       35 |         0 |          6 |        0 |       0 |
store.js          |   75.00 |       18 |         0 |          6 |        0 |       0 |
storeUtils.js     |  100.00 |        4 |         0 |          0 |        0 |       0 |
subscriptions.js  |  100.00 |       13 |         0 |          0 |        0 |       0 |
------------------|---------|----------|-----------|------------|----------|---------|

21:38:27 (4489) INFO HtmlReporter Your report can be found at: file:///home/trikitrok/todos-spike/reffects-store/reports/mutation/html/index.html
21:38:27 (4489) INFO Stryker Done in 14 seconds.
Notice how stryker shows the details of every mutation that survived our tests, and look at the summary it produces at the end of the process.
All the surviving mutants were produced by mutations to the store.js file. Having a closer look at the mutations in stryker’s output, we found that the functions with mutant code were unsubscribeAllListeners and unsubscribeListener.
After a quick check of their tests, it was easy to find out why unsubscribeAllListeners was having surviving mutants: since it was a function we used only in tests, for cleaning the state after each test case was run, we had forgotten to test it.
However, finding out why unsubscribeListener mutants were surviving took us a bit more time and thinking.
Let’s have a look at the tests that were exercising the code used to subscribe and unsubscribe listeners of state changes:
describe('subscriptions to store changes', () => {
  test('subscribing to store changes', () => {
    const newValue = 'lolo';
    const path = ['koko'];
    store.initialize({ koko: 'loko' });

    store.subscribeListener(function(newState) {
      expect(newState).toEqual({ koko: newValue });
    });

    store.setState({ path, newValue });
  });

  test('unsubscribing from store changes', () => {
    const newValue = 'lolo';
    const path = ['koko'];
    store.initialize({ koko: 'loko' });

    function functionToUnsubscribe() {
      throw new Error('check is still suscribed!!');
    }

    store.subscribeListener(function(newState) {
      expect(newState).toEqual({ koko: newValue });
    });
    store.subscribeListener(functionToUnsubscribe);
    store.unsubscribeListener(functionToUnsubscribe);

    store.setState({ path, newValue });
  });
});
If we examine the mutations and the tests, we can see that the tests for unsubscribeListener are not good enough. They throw an exception from the subscribed function we unsubscribe, so that the test fails if the unsubscribeListener function doesn’t work and that function is called. Unfortunately, the test also passes if that function is never called for any reason. In fact, most of the surviving mutants that stryker found above are variations on that idea.
A better way to test unsubscribeListener is using spies to verify that subscribed functions are called and unsubscribed functions are not (this version of the tests also includes a test for unsubscribeAllListeners):
describe('subscriptions to store changes', () => {
  test('subscribing a listener to store changes', () => {
    const newValue = 'lolo';
    const path = ['koko'];
    store.initialize({ koko: 'loko' });

    const fn = jest.fn(newState =>
      expect(newState).toEqual({ koko: newValue })
    );
    store.subscribeListener(fn);

    store.setState({ path, newValue });

    expect(fn).toHaveBeenCalledTimes(1);
  });

  test('unsubscribing a listener from store changes', () => {
    const newValue = 'lolo';
    const path = ['koko'];
    store.initialize({ koko: 'loko' });

    const spyToBeUnsubscribed = jest.fn();
    const spyNotUnsubscribed = jest.fn();
    store.subscribeListener(spyNotUnsubscribed);
    store.subscribeListener(spyToBeUnsubscribed);
    store.unsubscribeListener(spyToBeUnsubscribed);

    store.setState({ path, newValue });

    expect(spyToBeUnsubscribed).toHaveBeenCalledTimes(0);
    expect(spyNotUnsubscribed).toHaveBeenCalledTimes(1);
  });

  test('unsubscribing all listeners from store changes', () => {
    const newValue = 'lolo';
    const path = ['koko'];
    store.initialize({ koko: 'loko' });

    const spyToBeUnsubscribed1 = jest.fn();
    const spyToBeUnsubscribed2 = jest.fn();
    store.subscribeListener(spyToBeUnsubscribed1);
    store.subscribeListener(spyToBeUnsubscribed2);
    store.unsubscribeAllListeners();

    store.setState({ path, newValue });

    expect(spyToBeUnsubscribed1).toHaveBeenCalledTimes(0);
    expect(spyToBeUnsubscribed2).toHaveBeenCalledTimes(0);
  });
});
After this change, when we run stryker we got the following output:
$ npm run test:mutants

> reffects-store@0.1.4 test:mutants /home/trikitrok/todos-spike/reffects-store
> stryker run

21:49:18 (4981) INFO ConfigReader Using stryker.conf.js in the current working directory.
21:49:18 (4981) INFO InputFileResolver Found 4 of 14 file(s) to be mutated.
21:49:18 (4981) INFO InitialTestExecutor Starting initial test run. This may take a while.
21:49:21 (4981) INFO InitialTestExecutor Initial test run succeeded. Ran 24 tests in 2 seconds (net 122 ms, overhead 2313 ms).
21:49:21 (4981) INFO MutatorFacade 39 Mutant(s) generated
21:49:21 (4981) INFO SandboxPool Creating 8 test runners (based on CPU count)
Mutation testing  [==================================================] 100% (ETC n/a) 39/39 tested (0 survived)

Ran 15.74 tests per mutant on average.

--------------|---------|----------|-----------|------------|----------|---------|
File          | % score | # killed | # timeout | # survived | # no cov | # error |
--------------|---------|----------|-----------|------------|----------|---------|
All files     |  100.00 |       39 |         0 |          0 |        0 |       0 |
subscription  |  100.00 |       13 |         0 |          0 |        0 |       0 |
index.js      |  100.00 |       13 |         0 |          0 |        0 |       0 |
store.js      |  100.00 |       22 |         0 |          0 |        0 |       0 |
utils.js      |  100.00 |        4 |         0 |          0 |        0 |       0 |
--------------|---------|----------|-----------|------------|----------|---------|

21:49:32 (4981) INFO HtmlReporter Your report can be found at: file:///home/trikitrok/todos-spike/reffects-store/reports/mutation/html/index.html
21:49:32 (4981) INFO Stryker Done in 13 seconds.
No mutants survived!! This means this new version of the tests is more reliable and will protect us better from regressions than the initial version.
Mutation testing is a great tool to know if you can trust your tests. This is even more true when working with legacy code.
Acknowledgements.
Many thanks to Mario Sánchez and Alex Casajuana Martín for all the great time coding together, and thanks to Porapak Apichodilok for the photo used in this post and to Pexels.
Footnotes:
Sunday, April 7, 2019
Interesting Talk: "In Search of Doors"
The Beverages Prices Refactoring kata: a kata to practice refactoring away from an awful application of inheritance.
I created the Beverages Prices Refactoring kata for the Deliberate Practice Program I’m running at Lifull Connect offices in Barcelona (previously Trovit). Its goal is to practice refactoring away from a bad usage of inheritance.
The code computes the price of the different beverages that are sold in a coffee house. There are some supplements that can be added to those beverages; each supplement increases the price a bit. Not all combinations of drinks and supplements are possible.
Just having a quick look at the tests of the initial code would give you an idea of the kind of problems it might have:
package unit_tests;

import beverages.*;
import org.junit.Test;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.closeTo;

public class BeveragesPricingTest {

    private static final double PRECISION = 0.001;

    @Test
    public void computes_coffee_price() {
        Beverage coffee = new Coffee();
        assertThat(coffee.price(), is(closeTo(1.20, PRECISION)));
    }

    @Test
    public void computes_tea_price() {
        Beverage tea = new Tea();
        assertThat(tea.price(), is(closeTo(1.50, PRECISION)));
    }

    @Test
    public void computes_hot_chocolate_price() {
        Beverage hotChocolate = new HotChocolate();
        assertThat(hotChocolate.price(), is(closeTo(1.45, PRECISION)));
    }

    @Test
    public void computes_tea_with_milk_price() {
        Tea teaWithMilk = new TeaWithMilk();
        assertThat(teaWithMilk.price(), is(closeTo(1.60, PRECISION)));
    }

    @Test
    public void computes_coffee_with_milk_price() {
        Coffee coffeeWithMilk = new CoffeeWithMilk();
        assertThat(coffeeWithMilk.price(), is(closeTo(1.30, PRECISION)));
    }

    @Test
    public void computes_coffee_with_milk_and_cream_price() {
        Coffee coffeeWithMilkAndCream = new CoffeeWithMilkAndCream();
        assertThat(coffeeWithMilkAndCream.price(), is(closeTo(1.45, PRECISION)));
    }

    @Test
    public void computes_hot_chocolate_with_cream_price() {
        HotChocolateWithCream hotChocolateWithCream = new HotChocolateWithCream();
        assertThat(hotChocolateWithCream.price(), is(closeTo(1.60, PRECISION)));
    }
}
If that’s not enough have a look at its inheritance hierarchy:
[Image: the inheritance hierarchy of the initial beverages code]
To make things worse, we are asked to add an optional cinnamon supplement that costs 0.05€ to our entire existing catalog of beverages. We think we should refactor this code a bit before introducing the new feature.
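One possible direction such a refactoring could take (this is an invented sketch, written in JavaScript for brevity, and not the kata’s official solution) is to replace the subclass-per-combination hierarchy with composition, modelling each supplement as a decorator. The supplement prices used here are inferred from the tests above (milk 0.10€, cream 0.15€):

```javascript
// Invented sketch: supplements as decorators instead of one subclass per combination.
class Coffee {
  price() { return 1.20; }
}

class Tea {
  price() { return 1.50; }
}

class HotChocolate {
  price() { return 1.45; }
}

// Each supplement wraps a beverage and adds its own price.
function withMilk(beverage) {
  return { price: () => beverage.price() + 0.10 };
}

function withCream(beverage) {
  return { price: () => beverage.price() + 0.15 };
}

// With this design, the new cinnamon supplement is one small function,
// instead of a new subclass for every beverage combination:
function withCinnamon(beverage) {
  return { price: () => beverage.price() + 0.05 };
}
```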
We hope you have fun practicing refactoring with this kata.
Thursday, April 4, 2019
Books I read (January - March 2019)
- Timeless Laws of Software Development, Jerry Fitzpatrick
- Writing to Learn, William Zinsser
- The End of the Affair, Graham Greene
- Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, David Scott Bernstein
February
- Refactoring Workbook, William C. Wake
- Binti, Nnedi Okorafor
March
- Home, Nnedi Okorafor
- The Night Masquerade, Nnedi Okorafor
- Developer Hegemony, Erik Dietrich
- The Ministry of Utmost Happiness, Arundhati Roy
Tuesday, March 26, 2019
Interesting Talk: "Mastering Spark Unit Testing"
Monday, March 25, 2019
Our experience at ClojureBridge Bilbao 2018
ClojureBridge is an all-volunteer organization inspired by RailsBridge that provides free, introductory workshops to increase diversity in the Clojure community. I first learned about ClojureBridge thanks to a talk by Ali King at EuroClojure 2015 in Barcelona: ClojureBridge, Building a more diverse Clojure community. I really liked the idea and we (the Clojure Developers Barcelona) tried to organize one in Barcelona, but failed to actually do it because we lacked the numbers, money and time. So the moment I knew that my colleague at GreenPowerMonitor, Estíbaliz Rodríguez, and the company where she works, Magnet, were planning to organize a ClojureBridge edition in Bilbao for December 1st 2018, I decided to do my best to help.

I met Estíbaliz and José Ayudarte at the beginning of last summer, when they started working as ClojureScript developers for GreenPowerMonitor, and it’s been a great experience to work with them. They are part of Magnet which is a cooperative of developers, designers and consultants that work with Clojure and ClojureScript. Magnet’s team is distributed across Europe but most of them live in the Basque country.
I told them I’d like to participate as a volunteer. Actually I had already bought the tickets and booked an accommodation for that weekend before asking them. In the worst-case scenario, if I couldn’t participate in ClojureBridge, I’d at least spend a weekend in Bilbao which is a wonderful city. In the end, everything went well and they told me they were delighted to have me there. I participated in a couple of meetings to know the other volunteers and talk about how the event would be structured, the mentoring and the exercises. Magnet was sponsoring the event and most of its team worked very hard to make it possible.
So, on December 1st I was there trying to help people learn a bit of Clojure. There were many women and girls with very diverse backgrounds: professional developers who were using other languages, teenagers with or without a bit of programming experience from school, and little girls and women of all ages who had no programming experience. At the beginning, Usoa Sasigain gave a talk introducing ClojureBridge's goal of increasing diversity in technology, the important role women have played in the history of computer science and technology, and Clojure. She also talked about her personal history with technology and Clojure, and explained how technology might be a nice career option for women.

After the introduction, the participants were split into groups according to their programming experience. I was assigned to help the group of experienced software developers. They worked through the exercises in Maria Cloud, which is a beginner-friendly coding environment for Clojure, while I answered questions about Clojure and helped when they got stuck. I think they got to appreciate Clojure and the possibilities it offers. We took several coffee breaks during the morning, and over them and lunch we could talk about many things; I had the opportunity to meet Asier Galdós, Amaia Guridi, Iván Perdomo and other members of Magnet. The lunch was very nice and sponsored by Magnet.
At the end of the first half of the event, some attendees had to leave to have lunch with their families. A funny and lovely anecdote for me: two girls of about six or eight years old who had been enthusiastically programming all morning didn’t want to stop and leave with their parents for lunch. I remember with a lot of tenderness seeing them totally engrossed in programming during the morning, celebrating with raised arms every time they succeeded in changing the color or any other feature of the shapes they were working with in Maria Cloud’s exercises.
During the afternoon, we continued working on more advanced Maria Cloud exercises. There were fewer people, so I switched to working with some women who had no previous programming experience. It was a very nice experience, and we had a good time going through some more of Maria Cloud’s exercises playing with shapes.

All in all, volunteering at ClojureBridge was a very beautiful experience for me. I enjoyed helping people learn Clojure and programming, and I met many nice people with many different backgrounds. I even learned several words in Basque. I also had some very interesting conversations with Asier, Usoa, José and Iván about Clojure, Magnet’s experience as a Clojure/ClojureScript cooperative, and the interesting platform, Hydrogen, that they are building. We also talked about my doing some desk surfing with them, an idea I was really excited about but, unfortunately, haven’t been able to do yet because I recently started working for a new client in Barcelona.
Before finishing, I’d like to thank Magnet for making the first ClojureBridge in Spain possible and for letting me help. I’d also like to send love and good energy to Estíbaliz. I hope you recover soon and we can meet again in Euskadi or somewhere else.

Wednesday, March 20, 2019
Interesting Talk: "Acercándonos a la programación funcional a través de la arquitectura hexagonal"
Saturday, March 16, 2019
Interesting Talk: "Building on developers' intuitions to create effective property-based tests"
Sunday, March 10, 2019
Tuesday, March 5, 2019
Interesting Talk: "Testing & validating Apache Spark jobs"
Interesting Talk: "The Art of Refactoring"
Wednesday, January 30, 2019
Interesting Talk: "Una vida descubriendo Agile"
Thursday, January 3, 2019
Avoid using subscriptions only as app-state getters
Introduction.
Subscriptions in re-frame or re-om are query functions that extract data from the app-state and provide it to view functions in the right format. When we use subscriptions well, they provide a lot of value[1], because they save us from keeping derived state in the app-state and they dumb down the views, which end up being simple “data in, screen out” functions.
However, things are not that easy. When you start working with subscriptions, you might end up using them as mere getters of the app-state. This is a missed opportunity: used this way, subscriptions won’t provide all the value they can, and we run the risk of leaking pieces of untested logic into our views.
An example.
We’ll illustrate this problem with a small real example written with re-om subscriptions (in an app using re-frame subscriptions the problem would look similar). Have a look at this piece of view code, in which some details have been elided for brevity’s sake:
(ns horizon.desktop.reports.dialogs.edit.settings.scheduling-section
  (:require
   [greenpowermonitor.re-om :as re-om]
   [horizon.common.i18n.core :refer [i18n]]
   [horizon.common.state.lens :as l]
   [horizon.common.utils.collections :as utils.collections]
   [horizon.common.utils.time.core :as utils.time]
   [horizon.controls.utils.css-transitions-group :as css-transitions-group]
   ;; ...
   [horizon.domain.reports.core :as domain.reports.core]
   [horizon.domain.reports.dialogs.edit :as domain.reports.dialogs.edit]
   [om.core :as om :include-macros true]
   [sablono.core :refer-macros [html]]))

(defn- delay->interval
  [{:keys [number unit]}]
  (let [date-modes (l/view domain.reports.core/reports-options-date-selection-modes-data-lens)
        {kw :Keyword} (utils.collections/find-by-kv-pair date-modes :Value unit)]
    (get {"hour" {:hour number}
          "day" {:day number}}
         kw)))

(defn- get-next-generation [start-at delay-unit delay-number]
  (when (and start-at delay-number delay-unit)
    (let [interval (delay->interval {:number delay-number
                                     :unit delay-unit})
          next-date (utils.time/offset start-at interval)]
      (utils.time/strftime next-date (i18n ::next-generation-date-format :format-vals [])))))

(defn main-view [_ owner]
  (om/component
   (html
    (let [scheduling? (re-om/subscribe
                       [::domain.reports.dialogs.edit/field
                        ::scheduling?]
                       owner)]
      [:div.form-dialog__section.form-dialog__section--centered
       [:div.form-dialog__switch
        ;; ...
        ]
       [:div.form-dialog__title
        [:strong (i18n ::scheduling-title)]]
       (css-transitions-group/with-transition
         :collapsible-card
         (when scheduling?
           (html
            [:div
             [:div.form-dialog__subtitle
              ;; ...
              ]
             [:table.form-dialog__field-group
              ;; ...
              ]
             (let [delay-unit (re-om/subscribe
                               [::domain.reports.dialogs.edit/field
                                ::delay-generation-unit]
                               owner)
                   start-at (re-om/subscribe
                             [::domain.reports.dialogs.edit/field
                              ::start-at]
                             owner)
                   delay-number (re-om/subscribe
                                 [::domain.reports.dialogs.edit/field
                                  ::delay-generation-number]
                                 owner)]
               (when-let [next-generation (get-next-generation
                                           start-at
                                           delay-unit
                                           delay-number)]
                 [:div.reports-edition-dialog__next-generation
                  [:p
                   (i18n ::next-generation) " "
                   [:strong next-generation] " (UTC)"]]))])))]))))
This code uses subscriptions written in the horizon.domain.reports.dialogs.edit namespace.
The misuse of subscriptions we’d like to show appears in the following piece of the view:
(let [delay-unit (re-om/subscribe [::domain.reports.dialogs.edit/field ::delay-generation-unit] owner)
      start-at (re-om/subscribe [::domain.reports.dialogs.edit/field ::start-at] owner)
      delay-number (re-om/subscribe [::domain.reports.dialogs.edit/field ::delay-generation-number] owner)
      date-modes (re-om/subscribe [::domain.reports.core/date-selection-modes] owner)]
  (when-let [next-generation (get-next-generation start-at
                                                  delay-unit
                                                  delay-number
                                                  date-modes)]
    [:div.reports-edition-dialog__next-generation
     [:p (i18n ::next-generation) " " [:strong next-generation] " (UTC)"]]))
Notice how the only thing we need to render this piece of the view is next-generation. To compute its value, the code uses several subscriptions to get values from the app-state and binds them to local vars (delay-unit, start-at, delay-number and date-modes). Those values are then fed to a couple of private functions also defined in the view (get-next-generation and delay->interval) to obtain the value of next-generation.
This is a bad use of subscriptions. Remember that subscriptions are query functions on the app-state that, used well, help to make views as dumb (with as little logic) as possible. If you push as much logic as possible into subscriptions, you might achieve views that are so dumb you hardly need to test them, and you can decide to limit your unit tests to subcutaneous testing of your SPA.
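To make the contrast concrete, here is a minimal sketch in the same re-om style. The ::items and ::items-total subscriptions, and the :items and :price keys, are invented for illustration; they are not part of the application discussed in this post:

```clojure
;; A "getter" subscription: it only reads a value from the app-state,
;; so any computation over :items ends up leaking into the views that use it.
(re-om/register-sub!
 ::items
 (fn [db _]
   (:items db)))

;; A subscription that owns the derivation: views just render the total,
;; and the computation can be unit tested behind the subscription.
(re-om/register-sub!
 ::items-total
 (fn [db _]
   (reduce + (map :price (:items db)))))
```

With the second style, a view that displays the total subscribes to ::items-total and contains no arithmetic at all.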
Refactoring: placing the logic in the right place.
We can refactor the code shown above to remove all the leaked logic from the view by writing a single subscription, called next-generation, which produces the only piece of information the view needs. As a result, both the get-next-generation and delay->interval functions get pushed behind the new next-generation subscription and disappear from the view.
This is the resulting view code after this refactoring:
(ns horizon.desktop.reports.dialogs.edit.settings.scheduling-section
  (:require
   [greenpowermonitor.re-om :as re-om]
   [horizon.common.i18n.core :refer [i18n]]
   [horizon.controls.utils.css-transitions-group :as css-transitions-group]
   ;; ...
   [horizon.domain.reports.dialogs.edit :as domain.reports.dialogs.edit]
   [om.core :as om :include-macros true]
   [sablono.core :refer-macros [html]]))

(defn main-view [_ owner]
  (om/component
   (html
    (let [scheduling? (re-om/subscribe [::domain.reports.dialogs.edit/field ::scheduling?] owner)]
      [:div.form-dialog__section.form-dialog__section--centered
       [:div.form-dialog__switch
        ;; ...
        ]
       [:div.form-dialog__title
        [:strong (i18n ::scheduling-title)]]
       (css-transitions-group/with-transition
         :collapsible-card
         (when scheduling?
           (html
            [:div
             [:div.form-dialog__subtitle
              ;; ...
              ]
             [:table.form-dialog__field-group
              ;; ...
              ]
             (when-let [next-generation (re-om/subscribe
                                         [::domain.reports.dialogs.edit/next-generation-date
                                          (i18n ::next-generation-date-format :format-vals [])
                                          {:delay-generation-number ::delay-generation-number
                                           :delay-generation-unit ::delay-generation-unit
                                           :start-at ::start-at}]
                                         owner)]
               [:div.reports-edition-dialog__next-generation
                [:p
                 (i18n ::next-generation) " "
                 [:strong next-generation] " (UTC)"]])])))]))))
and this is the resulting pure logic of the new subscription. Notice that, since the get-next-generation function wasn’t pure, we had to change it a bit to make it pure:
(ns horizon.domain.reports.dialogs.edit
  (:require
   [com.rpl.specter :as specter]
   [greenpowermonitor.re-om :as re-om]
   [horizon.common.macros :refer [defstate]]
   [horizon.common.state.lens :as l]
   [horizon.common.utils.collections :as utils.collections]
   [horizon.domain.reports.core :as domain.reports.core]
   ;; ...
   [horizon.common.utils.time.core :as utils.time]))

(defstate reports-dialogs-edit
  {:field-mappings {}
   :options-mappings {}
   :validators {}
   :active-tab nil
   :tabs []
   :fields {}
   :infractions {}
   :configuration-fields {}
   :original-fields {}
   :loaded? false
   :editing? false
   :options {}})

;; ...

(def ^:private field-mappings-lens reports-dialogs-edit-field-mappings-lens)
(def ^:private options-lens reports-dialogs-edit-options-lens)
(def ^:private loaded?-lens reports-dialogs-edit-loaded?-lens)
(def ^:private fields-lens reports-dialogs-edit-fields-lens)
(def active-tab-lens reports-dialogs-edit-active-tab-lens)
(def ^:private tabs-lens reports-dialogs-edit-tabs-lens)
(def ^:private delay-generation-options-lens domain.reports.core/reports-options-date-selection-modes-data-lens)

;; ...

(defn- maybe-result [res]
  (if (= res :form/empty)
    nil
    res))

(re-om/register-sub!
 ::field-mapping
 (fn [db [args]]
   (apply (l/pure-view db field-mappings-lens) args)))

;; ...

(re-om/register-sub!
 ::field
 (fn [db args]
   (let [res (if-let [path (re-om/get [::field-mapping args] db)]
               (specter/select-one path (l/pure-view db fields-lens))
               (get (l/pure-view db fields-lens) (first args) :form/empty))]
     (maybe-result res))))

;; ...

(defn- delay->interval
  [{:keys [number unit date-modes]}]
  (let [{kw :Keyword} (utils.collections/find-by-kv-pair date-modes :Value unit)]
    (get {"hour" {:hour (int number)}
          "day" {:day (int number)}}
         kw)))

(defn- get-next-generation [start-at delay-unit delay-number date-modes date-format]
  (when (and start-at delay-number delay-unit)
    (let [interval (delay->interval {:number delay-number
                                     :unit delay-unit
                                     :date-modes date-modes})
          next-date (utils.time/offset start-at interval)]
      (utils.time/strftime next-date date-format))))

(re-om/register-sub!
 ::next-generation-date
 (fn [db [date-format {:keys [delay-generation-unit start-at delay-generation-number]}]]
   (let [delay-unit (re-om/get [::field delay-generation-unit] db)
         start-at (re-om/get [::field start-at] db)
         delay-number (re-om/get [::field delay-generation-number] db)
         date-modes (l/pure-view db delay-generation-options-lens)]
     (get-next-generation start-at
                          delay-unit
                          delay-number
                          date-modes
                          date-format))))

;; ...
After this refactoring the view is much dumber. The previously leaked (and untested) logic in the view (the get-next-generation and delay->interval functions) has been removed from it. That logic can now be easily tested through the new next-generation subscription. This design is also much better than the previous one because it hides how we obtain the data that the view needs: now both the view and the tests ignore, and so are not coupled to, how that data is obtained. We might refactor both the app-state and the logic now behind get-next-generation and delay->interval without affecting the view. This is another example of how what is more stable than how.
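Once the logic lives behind the subscription, it consists of pure functions of plain data that can be tested with no DOM involved. As a sketch of what such a test might look like, here is a pure re-implementation of the interval computation tested with clojure.test; the fixture date-modes vector mirrors the :Keyword/:Value map shape used in this post, but its concrete values are invented:

```clojure
(ns example.delay-interval-test
  (:require [clojure.test :refer [deftest is]]))

;; Pure re-implementation of delay->interval for illustration: it receives
;; date-modes as an argument instead of reading it from the app-state.
(defn delay->interval [{:keys [number unit date-modes]}]
  (let [{kw :Keyword} (first (filter #(= unit (:Value %)) date-modes))]
    (get {"hour" {:hour (int number)}
          "day"  {:day (int number)}}
         kw)))

(def date-modes
  ;; Invented fixture values; in the real app they come from the app-state.
  [{:Value 1 :Keyword "hour"}
   {:Value 2 :Keyword "day"}])

(deftest delay->interval-maps-unit-ids-to-intervals
  (is (= {:hour 6} (delay->interval {:number 6 :unit 1 :date-modes date-modes})))
  (is (= {:day 2}  (delay->interval {:number 2 :unit 2 :date-modes date-modes})))
  (is (nil? (delay->interval {:number 3 :unit 99 :date-modes date-modes}))))
```

Because the function is pure, no view rendering, subscription machinery or mocking is needed to exercise it.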
Summary
The idea to remember is that subscriptions by themselves don’t make code more testable and maintainable. It’s the way we use subscriptions that produces better code. For that, the logic must be in the right place, which is not inside the view but behind the subscriptions that provide the data the view needs. If we keep writing “getter” subscriptions and placing logic in views, we won’t gain all the advantages the subscriptions concept provides, and we’ll write poorly designed views coupled to leaked bunches of (very likely untested) logic.
Acknowledgements.
Many thanks to André Stylianos Ramos and Fran Reyes for giving us great feedback to improve this post and for all the interesting conversations.
Footnotes: