
Tuesday, August 23, 2022

Example of role tests in Java with Junit

I’d like to continue with the topic of role tests that we wrote about in a previous post by showing an example of how the technique can be applied in Java to reduce duplication in your tests.

This example comes from a deliberate practice session I did recently with some people from Women Tech Makers Barcelona, with whom I’m doing Codesai’s Practice Program in Java twice a month.

Making additional changes to the code that resulted from solving the Bank Kata, we wrote the following tests to develop two different implementations of the TransactionsRepository port: the InMemoryTransactionsRepository and the FileTransactionsRepository.

These are their tests, respectively:
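
The code isn’t embedded in this archive; what follows is a minimal sketch of the shape those two tests might have, assuming JUnit 5 and a simple Transaction value object with a date and an amount (every name not mentioned in the post is an assumption):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
import java.util.List;
import org.junit.jupiter.api.Test;

class InMemoryTransactionsRepositoryTest {
    @Test
    void a_transaction_can_be_saved() {
        InMemoryTransactionsRepository repository = new InMemoryTransactionsRepository();

        repository.save(new Transaction(LocalDate.of(2022, 1, 14), 100));

        assertEquals(1, repository.allTransactions().size());
    }

    @Test
    void transactions_can_be_retrieved() {
        InMemoryTransactionsRepository repository = new InMemoryTransactionsRepository();
        Transaction transaction = new Transaction(LocalDate.of(2022, 1, 14), 100);
        repository.save(transaction);

        List<Transaction> transactions = repository.allTransactions();

        assertEquals(List.of(transaction), transactions);
    }
}

class FileTransactionsRepositoryTest {
    @Test
    void a_transaction_can_be_saved() throws Exception {
        Path file = Files.createTempFile("transactions", ".csv");
        FileTransactionsRepository repository = new FileTransactionsRepository(file);

        repository.save(new Transaction(LocalDate.of(2022, 1, 14), 100));

        // the file-based test inspects the underlying file instead of memory
        assertEquals(1, Files.readAllLines(file).size());
    }

    // transactions_can_be_retrieved: prepares lines in the file and
    // checks that allTransactions() parses them back
}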

As you can see, both tests contain the same test cases, a_transaction_can_be_saved and transactions_can_be_retrieved, but their implementations are different for each class. This makes sense because both classes play the same role (see our previous post to learn how this relates to the Liskov Substitution Principle).

We can make this fact more explicit by using role tests. In this case, JUnit does not have anything equivalent to RSpec’s shared examples functionality that we used in our previous example in Ruby. Nonetheless, we can apply the Template Method pattern to write the role test, so that we remove the duplication and, more importantly, make the contract we are implementing more explicit.

To do that we created an abstract class, TransactionsRepositoryRoleTest. This class contains the test cases that document the role and protect its contract (a_transaction_can_be_saved and transactions_can_be_retrieved), and defines hooks for the operations that will vary in the different implementations of this integration test (prepareData, readAllTransactions and createRepository):
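
Again, the original class isn’t shown here; a sketch of its shape, under the same assumptions as before, could be:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.LocalDate;
import java.util.List;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

abstract class TransactionsRepositoryRoleTest {
    private TransactionsRepository repository;

    @BeforeEach
    void setUp() {
        repository = createRepository();
    }

    @Test
    void a_transaction_can_be_saved() {
        Transaction transaction = new Transaction(LocalDate.of(2022, 1, 14), 100);

        repository.save(transaction);

        assertEquals(List.of(transaction), readAllTransactions());
    }

    @Test
    void transactions_can_be_retrieved() {
        Transaction transaction = new Transaction(LocalDate.of(2022, 1, 14), 100);
        prepareData(transaction);

        assertEquals(List.of(transaction), repository.allTransactions());
    }

    // hooks for the operations that vary in each implementation
    protected abstract TransactionsRepository createRepository();

    protected abstract void prepareData(Transaction... transactions);

    protected abstract List<Transaction> readAllTransactions();
}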

Then we made the previous tests extend TransactionsRepositoryRoleTest and implemented the hooks.

This is the new code of InMemoryTransactionsRepositoryTest:
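
A sketch of how it might implement the hooks:

class InMemoryTransactionsRepositoryTest extends TransactionsRepositoryRoleTest {
    private InMemoryTransactionsRepository repository;

    @Override
    protected TransactionsRepository createRepository() {
        repository = new InMemoryTransactionsRepository();
        return repository;
    }

    @Override
    protected void prepareData(Transaction... transactions) {
        // for the in-memory implementation there's no storage to prepare
        // other than the repository itself
        for (Transaction transaction : transactions) {
            repository.save(transaction);
        }
    }

    @Override
    protected List<Transaction> readAllTransactions() {
        return repository.allTransactions();
    }
}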

And this is the new code of FileTransactionsRepositoryTest after the refactoring:
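
A sketch, assuming Transaction is a record with date and amount accessors and a simple one-line-per-transaction file format:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

class FileTransactionsRepositoryTest extends TransactionsRepositoryRoleTest {
    private Path file;

    @Override
    protected TransactionsRepository createRepository() {
        try {
            file = Files.createTempFile("transactions", ".csv");
            return new FileTransactionsRepository(file);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    protected void prepareData(Transaction... transactions) {
        // writes the transactions directly to the file, bypassing the repository
        try {
            List<String> lines = new ArrayList<>();
            for (Transaction transaction : transactions) {
                lines.add(transaction.date() + "," + transaction.amount());
            }
            Files.write(file, lines);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    protected List<Transaction> readAllTransactions() {
        // reads and parses the lines of the file
        try {
            List<Transaction> transactions = new ArrayList<>();
            for (String line : Files.readAllLines(file)) {
                String[] fields = line.split(",");
                transactions.add(new Transaction(LocalDate.parse(fields[0]),
                                                 Integer.parseInt(fields[1])));
            }
            return transactions;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}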

This new version of the tests not only reduces duplication, but also makes explicit and protects the behaviour of the TransactionsRepository role. It also makes the process of adding a new implementation of TransactionsRepository less error prone, because just by extending TransactionsRepositoryRoleTest you get a checklist of the behaviour you need to implement to ensure substitutability, i.e., to ensure the Liskov Substitution Principle is not violated.

Have a look at this repository by Jason Gorman to see another example that applies the same technique.

In a future post we’ll show how we can do the same in JavaScript using Jest.

Acknowledgements.

I’d like to thank the WTM study group, and especially Inma Navas and Laura del Toro, for practising this kata together.

Thanks to my Codesai colleagues, Inma Navas and Laura del Toro for reading the initial drafts and giving me feedback.


Thursday, June 27, 2019

An example of listening to the tests to improve a design

Introduction.

Recently, in the B2B team at LIFULL Connect, we improved the validation of the clicks our API receives, using a service that detects whether a click was made by a bot or a human being.

So we used TDD to add this new validation to the previously existing validation that checked if the click contained all mandatory information. This was the resulting code:
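
The snippet isn’t embedded in this archive; the sketch below captures the procedural shape the post describes, assuming an SLF4J Logger and a Click type (all method names are assumptions):

import org.slf4j.Logger;

public class ClickValidation {
    private final ClickParamsValidator clickParamsValidator;
    private final BotClickDetector botClickDetector;
    private final Logger logger;

    public ClickValidation(ClickParamsValidator clickParamsValidator,
                           BotClickDetector botClickDetector,
                           Logger logger) {
        this.clickParamsValidator = clickParamsValidator;
        this.botClickDetector = botClickDetector;
        this.logger = logger;
    }

    public boolean isValid(Click click) {
        // queries every concrete validation, combines their results
        // and decides what to log when each one fails
        if (!clickParamsValidator.isValid(click)) {
            logger.warn("Mandatory click params are missing: {}", click);
            return false;
        }
        if (botClickDetector.isBot(click)) {
            logger.warn("Click made by a bot: {}", click);
            return false;
        }
        return true;
    }
}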

and these were its tests:
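
A sketch of their shape, assuming JUnit 5 and Mockito; note how every collaborator is stubbed and the logging is verified, coupling the tests to many implementation details:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;
import org.slf4j.Logger;

class ClickValidationTest {
    private final ClickParamsValidator clickParamsValidator = mock(ClickParamsValidator.class);
    private final BotClickDetector botClickDetector = mock(BotClickDetector.class);
    private final Logger logger = mock(Logger.class);
    private final ClickValidation clickValidation =
        new ClickValidation(clickParamsValidator, botClickDetector, logger);

    @Test
    void accepts_a_click_that_passes_all_validations() {
        Click click = new Click();
        when(clickParamsValidator.isValid(click)).thenReturn(true);
        when(botClickDetector.isBot(click)).thenReturn(false);

        assertTrue(clickValidation.isValid(click));
    }

    @Test
    void rejects_and_logs_a_click_lacking_mandatory_params() {
        Click click = new Click();
        when(clickParamsValidator.isValid(click)).thenReturn(false);

        assertFalse(clickValidation.isValid(click));
        verify(logger).warn(startsWith("Mandatory click params are missing"), eq(click));
    }
}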

The problem with these tests is that they know too much: they are coupled to many implementation details. They not only know the concrete validations we apply to a click and the order in which they are applied, but also the details of what gets logged when a concrete validation fails. There are multiple axes of change that will make these tests break. The tests are fragile along those axes and, as such, they might become a future maintenance burden if changes along them are required.

So what might we do about that fragility when any of those changes comes?

Improving the design to have less fragile tests.

As we said before, the fragility of the tests was hinting at a design problem in the ClickValidation code. The problem is that it concentrates too much knowledge because it’s written in a procedural style: it queries every concrete validation to know if the click is ok, combines the results of all those validations, and knows when to log validation failures. Those are too many responsibilities for ClickValidation, and they are the cause of the fragility in the tests.

We can revert this situation by changing to a more object-oriented implementation in which responsibilities are better distributed. Let’s see how that design might look:

1. Removing knowledge about logging.

After this change, ClickValidation will know nothing about logging. We can use the same technique to avoid knowing about any similar side effects which concrete validations might produce.

First we create an interface, ClickValidator, that any object that validates clicks should implement:
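
The interface isn’t shown in this archive, but it would be as small as something like this (isValid is an assumed method name):

public interface ClickValidator {
    boolean isValid(Click click);
}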

Next we create a new class, NoBotClickValidator, that wraps BotClickDetector and adapts[1] it to implement the ClickValidator interface. This wrapper also enriches BotClickDetector’s behavior by taking charge of logging in case the click is not valid.
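
A sketch of what the wrapper might look like, under the same assumptions as the previous sketches:

import org.slf4j.Logger;

public class NoBotClickValidator implements ClickValidator {
    private final BotClickDetector botClickDetector;
    private final Logger logger;

    public NoBotClickValidator(BotClickDetector botClickDetector, Logger logger) {
        this.botClickDetector = botClickDetector;
        this.logger = logger;
    }

    @Override
    public boolean isValid(Click click) {
        // adapts BotClickDetector to the ClickValidator interface
        // and takes charge of the logging in one place
        boolean clickIsValid = !botClickDetector.isBot(click);
        if (!clickIsValid) {
            logger.warn("Click made by a bot: {}", click);
        }
        return clickIsValid;
    }
}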

These are the tests of NoBotClickValidator that take care of the delegation to BotClickDetector and the logging:
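
A sketch of their shape, with Mockito again as an assumption:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;
import org.slf4j.Logger;

class NoBotClickValidatorTest {
    private final BotClickDetector botClickDetector = mock(BotClickDetector.class);
    private final Logger logger = mock(Logger.class);
    private final NoBotClickValidator validator =
        new NoBotClickValidator(botClickDetector, logger);

    @Test
    void a_click_not_made_by_a_bot_is_valid() {
        Click click = new Click();
        when(botClickDetector.isBot(click)).thenReturn(false);

        assertTrue(validator.isValid(click));
        verifyNoInteractions(logger);
    }

    @Test
    void a_click_made_by_a_bot_is_invalid_and_gets_logged() {
        Click click = new Click();
        when(botClickDetector.isBot(click)).thenReturn(true);

        assertFalse(validator.isValid(click));
        verify(logger).warn(startsWith("Click made by a bot"), eq(click));
    }
}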

If we used NoBotClickValidator in ClickValidation, we’d remove all knowledge about logging from ClickValidation.

Of course, that knowledge would also disappear from its tests. By using the ClickValidator interface for all concrete validations, and wrapping validations with side effects like logging, we’d make the ClickValidation tests robust to changes involving some of the axes of change that were making them fragile:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.

2. Another improvement: don't use test doubles when it's not worth it[2].

There’s another way to make ClickValidation tests less fragile.

If we have a look at ClickParamsValidator and BotClickDetector (I can’t show their code here for security reasons), they have very different natures. ClickParamsValidator has no collaborators, no state and very simple logic, whereas BotClickDetector has several collaborators, state and complicated validation logic.

Stubbing ClickParamsValidator in the ClickValidation tests gives us no benefit over using it directly, and it produces coupling between the tests and the code.

On the contrary, stubbing NoBotClickValidator (which wraps BotClickDetector) is really worth it, because, even though it also produces coupling, it makes ClickValidation tests much simpler.

Using a test double when you’d be better off using the real collaborator is a weakness in the design of the test, rather than in the code to be tested.

These would be the tests for the ClickValidation code with no logging knowledge, after applying this idea of not using test doubles for everything:
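
A sketch of how those tests might look (the Click factory methods are hypothetical):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class ClickValidationTest {
    private final NoBotClickValidator noBotClickValidator = mock(NoBotClickValidator.class);
    private final ClickValidation clickValidation =
        new ClickValidation(new ClickParamsValidator(), noBotClickValidator);

    @Test
    void accepts_a_click_with_all_mandatory_params_not_made_by_a_bot() {
        Click click = Click.withAllMandatoryParams(); // hypothetical factory
        when(noBotClickValidator.isValid(click)).thenReturn(true);

        assertTrue(clickValidation.isValid(click));
    }

    @Test
    void rejects_a_click_lacking_mandatory_params() {
        // no stubbing needed: the real ClickParamsValidator rejects it
        assertFalse(clickValidation.isValid(Click.lackingSomeMandatoryParam()));
    }
}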

Notice how the tests now use the real ClickParamsValidator, and how that reduces the coupling with the production code and makes the setup simpler.

3. Removing knowledge about the concrete sequence of validations.

After this change, the new design will compose validations in a way that will result in ClickValidation being only in charge of combining the result of a given sequence of validations.

First we refactor the click validation so that the validation is now done by composing several validations:
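
A sketch of the composed version (using Java streams is my assumption, not necessarily the original implementation):

import java.util.List;

public class ClickValidation {
    private final List<ClickValidator> validators;

    public ClickValidation(List<ClickValidator> validators) {
        this.validators = validators;
    }

    public boolean isValid(Click click) {
        // allMatch short-circuits: it stops at the first failing validation
        return validators.stream().allMatch(validator -> validator.isValid(click));
    }
}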

The new validation code has several advantages over the previous one:

  • It does not depend on concrete validations any more.
  • It does not depend on the order in which the validations are made.

It has only one responsibility: it applies several validations in sequence. If all of them pass, it accepts the click; if any validation fails, it rejects the click and stops applying the rest of the validations. If you think about it, it behaves like an and operator.

We may write these tests for this new version of the click validation:
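
A sketch of such very generic tests; since ClickValidator has a single method, plain lambdas can play the role of validations:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

class ClickValidationTest {
    @Test
    void accepts_a_click_when_all_validations_pass() {
        List<ClickValidator> alwaysValid = List.of(click -> true, click -> true);

        assertTrue(new ClickValidation(alwaysValid).isValid(new Click()));
    }

    @Test
    void rejects_a_click_when_any_validation_fails() {
        List<ClickValidator> oneFails = List.of(click -> true, click -> false);

        assertFalse(new ClickValidation(oneFails).isValid(new Click()));
    }
}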

These tests are robust to the changes that made the initial version of the tests fragile, which we described in the introduction:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.
  3. Adding more validations.
  4. Changing the order of the validation.

However, this version of ClickValidationTest is so general and flexible that, using it, our tests would stop knowing which validations are applied to the clicks, and in which order[3]. That sequence of validations is a business rule and, as such, we should protect it. We might keep this version of ClickValidationTest only if we had some outer test protecting the desired sequence of validations.

This other version of the tests, on the other hand, keeps protecting the business rule:
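
A sketch of this less flexible version; the setup fixes the sequence of validations and only NoBotClickValidator is replaced by a test double:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class ClickValidationTest {
    private final NoBotClickValidator noBotClickValidator = mock(NoBotClickValidator.class);
    // the setup fixes the sequence of validations, protecting the business rule
    private final ClickValidation clickValidation = new ClickValidation(
        List.of(new ClickParamsValidator(), noBotClickValidator));

    @Test
    void accepts_a_click_with_all_mandatory_params_not_made_by_a_bot() {
        Click click = Click.withAllMandatoryParams(); // hypothetical factory
        when(noBotClickValidator.isValid(click)).thenReturn(true);

        assertTrue(clickValidation.isValid(click));
    }

    @Test
    void rejects_a_click_made_by_a_bot() {
        Click click = Click.withAllMandatoryParams();
        when(noBotClickValidator.isValid(click)).thenReturn(false);

        assertFalse(clickValidation.isValid(click));
    }
}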

Notice how this version of the tests keeps the knowledge of which sequence of validations should be used in its setup, and how it only uses a test double for NoBotClickValidator.

4. Avoid exposing internals.

The fact that we’re injecting into ClickValidation an object, ClickParamsValidator, that we realized we didn’t need to double is a smell which points to the possibility that ClickParamsValidator is an internal detail of ClickValidation instead of its peer. So by injecting it, we’re coupling the users of ClickValidation, or at least the code that creates it, to an internal detail of ClickValidation: ClickParamsValidator.

A better version of this code would hide ClickParamsValidator by instantiating it inside ClickValidation’s constructor:
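
A sketch of that version:

import java.util.List;

public class ClickValidation {
    private final List<ClickValidator> validators;

    public ClickValidation(NoBotClickValidator noBotClickValidator) {
        // ClickParamsValidator is now an internal detail, not a peer
        this.validators = List.of(new ClickParamsValidator(), noBotClickValidator);
    }

    public boolean isValid(Click click) {
        return validators.stream().allMatch(validator -> validator.isValid(click));
    }
}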

With this change, ClickValidation recovers the knowledge of the sequence of validations, which in the previous section was located in the code that created ClickValidation.

There are some stereotypes that can help us identify real collaborators (peers)[4]:

  1. Dependencies: services that the object needs from its environment so that it can fulfill its responsibilities.
  2. Notifications: other parts of the system that need to know when the object changes state or performs an action.
  3. Adjustments or Policies: objects that tweak or adapt the object’s behaviour to the needs of the system.

Following these stereotypes, we could argue that NoBotClickValidator is also an internal detail of ClickValidation and shouldn’t be exposed to the tests by injecting it. Hiding it, we’d arrive at this other version of ClickValidation:
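
A sketch of this final version:

import java.util.List;
import org.slf4j.Logger;

public class ClickValidation {
    private final List<ClickValidator> validators;

    // only real dependencies cross the boundary; every validator is created inside
    public ClickValidation(BotClickDetector botClickDetector, Logger logger) {
        this.validators = List.of(
            new ClickParamsValidator(),
            new NoBotClickValidator(botClickDetector, logger));
    }

    public boolean isValid(Click click) {
        return validators.stream().allMatch(validator -> validator.isValid(click));
    }
}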

in which we have to inject the real dependencies of the validation, and no internal details are exposed to the client code. This version is very similar to the one we’d have got using test doubles only for infrastructure.

The advantage of this version would be that its tests would know the least possible about ClickValidation. They’d know only ClickValidation’s boundaries, marked by the ports injected through its constructor, and ClickValidation’s public API. That reduces the coupling between tests and production code, and facilitates refactorings of the validation logic.

The drawback is that the combinations of test cases in ClickValidationTest would grow, and many of those test cases would talk about situations happening in the validation boundaries that might be far apart from ClickValidation’s callers. This might make the tests hard to understand, especially if some of the validations have complex logic. When this problem gets severe, we can reduce it by injecting the very complex validators and using test doubles for them. This is a trade-off in which we accept some coupling with the internals of ClickValidation in order to improve the understandability of its tests. In our case, the bot detection was one of those complex components, so we decided to test it separately and inject it in ClickValidation so we could double it in ClickValidation’s tests. That is why we kept the penultimate version of ClickValidation, in which we were injecting the click-not-made-by-a-bot validation.

Conclusion.

In this post, we played with an example to show how, by listening to the tests[5], we can detect possible design problems, and how we can use that feedback to improve both the design of our code and its tests when changes that expose those design problems are required.

In this case, the initial tests were fragile because the production code was procedural and had too many responsibilities. The tests were also fragile because they were using test doubles for some collaborators when it wasn’t worth doing so.

Then we showed how refactoring the original code to be more object-oriented and distributing its responsibilities better removed some of the fragility of the tests. We also showed how limiting the use of test doubles to those collaborators that really need to be substituted can improve the tests and reduce their fragility. Finally, we showed how we can go too far in trying to make the tests flexible and robust and accidentally stop protecting a business rule, and how a less flexible version of the tests can fix that.

When faced with fragility due to coupling between tests and the code being tested caused by using test doubles, it’s easy, and very common, to “blame the mocks”. We believe it is more productive to listen to the tests and notice which improvements to our design they are suggesting. If we act on the feedback the test doubles give us about our design, we can use them to our advantage, as powerful feedback tools[6] that help us improve our designs, instead of just suffering and blaming them.

Acknowledgements.

Many thanks to my Codesai colleagues Alfredo Casado, Fran Reyes, Antonio de la Torre and Manuel Tordesillas, and to my Aprendices colleagues Paulo Clavijo, Álvaro García and Fermin Saez for their feedback on the post, and to my colleagues at LIFULL Connect for all the mobs we enjoy together.

Footnotes:

[1] See the Adapter design pattern.
[2] See Test Smell: Everything is mocked by Steve Freeman, where he talks about things you shouldn't be substituting with test doubles.
[3] Thanks Alfredo Casado for detecting that problem in the first version of the post.
[4] From Growing Object-Oriented Software, Guided by Tests > Chapter 6, Object-Oriented Style > Object Peer Stereotypes, page 52. You can also read about these stereotypes in a post by Steve Freeman: Object Collaboration Stereotypes.
[5] Difficulties in testing might be a hint of design problems. Have a look at this interesting series of posts about listening to the tests by Steve Freeman.
[6] According to Nat Pryce mocks were designed as a feedback tool for designing OO code following the 'Tell, Don't Ask' principle: "In my opinion it's better to focus on the benefits of different design styles in different contexts (there are usually many in the same system) and what that implies for modularisation and inter-module interfaces. Different design styles have different techniques that are most applicable for test-driving code written in those styles, and there are different tools that help you with those techniques. Those tools should give useful feedback about the external and *internal* quality of the system so that programmers can 'listen to the tests'. That's what we -- with the help of many vocal users over many years -- designed jMock to do for 'Tell, Don't Ask' object-oriented design." (from a conversation in Growing Object-Oriented Software Google Group).

I think that if your design follows a different OO style, it might be preferable to stick to a classical TDD style, which limits the use of test doubles nearly exclusively to infrastructure and undesirable side effects.

Saturday, May 25, 2019

The curious case of the negative builder

Recently, one of the teams I’m coaching at my current client asked me to help them with a problem they were experiencing while using TDD to add and validate new mandatory query string parameters[1]. This is a shortened version (validating fewer parameters than the original code) of the tests they were having problems with:
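
The tests aren’t embedded in this archive; a minimal sketch of their shape, assuming JUnit 5 and hypothetical parameter names (campaignId and publisherId), might be:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class QueryStringValidationTest {
    // hypothetical validator standing in for the code under test
    private final QueryStringValidator validator = new QueryStringValidator();

    @Test
    void query_string_with_all_mandatory_params_is_valid() {
        String queryString = new QueryStringBuilder()
            .withCampaignId("5")
            .withPublisherId("10")
            .build();

        assertTrue(validator.isValid(queryString));
    }

    @Test
    void query_string_lacking_campaign_id_is_not_valid() {
        String queryString = new QueryStringBuilder()
            .withPublisherId("10")
            .build();

        assertFalse(validator.isValid(queryString));
    }
}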

and this is the implementation of the QueryStringBuilder used in this test:
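
A sketch of such a typical “positive” builder (the parameters and the query string format are assumptions):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryStringBuilder {
    private final Map<String, String> params = new LinkedHashMap<>();

    public QueryStringBuilder withCampaignId(String campaignId) {
        params.put("campaignId", campaignId);
        return this;
    }

    public QueryStringBuilder withPublisherId(String publisherId) {
        params.put("publisherId", publisherId);
        return this;
    }

    public String build() {
        return params.entrySet().stream()
            .map(entry -> entry.getKey() + "=" + entry.getValue())
            .collect(Collectors.joining("&"));
    }
}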

which is a builder with a fluent interface that follows a typical implementation of the pattern to the letter. There are even libraries that help you create this kind of builder automatically[2].

However, in this particular case, implementing the QueryStringBuilder following this typical recipe causes a lot of problems. Looking at the test code, you can see why.

To add a new mandatory parameter, for example sourceId, following the TDD cycle, you would first write a new test asserting that a query string lacking the parameter should not be valid.

So far so good, but the problem comes when you change the production code to make this test pass: at that moment you’ll see how the first test, which asserted that a query string with all the parameters was valid, starts to fail (if you check the query string of that test and the one in the new test, you’ll see that they are the same). Not only that, all the previous tests that asserted that a query string was invalid because a given parameter was lacking won’t be “true” anymore, because after this change they could fail for more than one reason.

So to carry on, you’d need to fix the first test and also change all the previous ones, so that each of them fails again only for the reason described in its test name:

That’s a lot of rework on the tests only for adding a new parameter, and the team had to add many more. The typical implementation of a builder was not helping them.

The problem we’ve just explained can be avoided by choosing a default value that creates a valid query string and using what I call “a negative builder”: a builder with methods that remove parts instead of adding them. So we refactored the initial version of the tests and the builder together, until we got to this new version of the tests:
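
A sketch of the refactored tests: each invalid case starts from a valid query string and removes only the part it cares about:

class QueryStringValidationTest {
    private final QueryStringValidator validator = new QueryStringValidator();

    @Test
    void query_string_with_all_mandatory_params_is_valid() {
        String queryString = QueryStringBuilder.valid().build();

        assertTrue(validator.isValid(queryString));
    }

    @Test
    void query_string_lacking_campaign_id_is_not_valid() {
        String queryString = QueryStringBuilder.valid()
            .withoutCampaignId()
            .build();

        assertFalse(validator.isValid(queryString));
    }
}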

which used a “negative” QueryStringBuilder:
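
A sketch of what such a negative builder might look like, with valid() producing a complete, valid query string:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryStringBuilder {
    private final Map<String, String> params = new LinkedHashMap<>();

    // the default value is a complete, valid query string...
    public static QueryStringBuilder valid() {
        QueryStringBuilder builder = new QueryStringBuilder();
        builder.params.put("campaignId", "5");
        builder.params.put("publisherId", "10");
        return builder;
    }

    // ...and the builder methods remove parts instead of adding them
    public QueryStringBuilder withoutCampaignId() {
        params.remove("campaignId");
        return this;
    }

    public QueryStringBuilder withoutPublisherId() {
        params.remove("publisherId");
        return this;
    }

    public String build() {
        return params.entrySet().stream()
            .map(entry -> entry.getKey() + "=" + entry.getValue())
            .collect(Collectors.joining("&"));
    }
}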

After this refactoring, to add the sourceId we wrote this test instead:
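
A sketch of that test, as a new case in the same hypothetical test class:

@Test
void query_string_lacking_source_id_is_not_valid() {
    String queryString = QueryStringBuilder.valid()
        .withoutSourceId()
        .build();

    assertFalse(validator.isValid(queryString));
}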

which only requires updating the valid method in QueryStringBuilder and adding a method that removes the sourceId parameter from a valid query string.

Now when we changed the code to make this last test pass, no other test failed or started to have descriptions that were not true anymore.

Leaving behind the typical recipe and adapting the idea of the builder pattern to the context of the problem at hand, led us to a curious implementation, a “negative builder”, that made the tests easier to maintain and improved our TDD flow.

Acknowledgements.

Many thanks to my Codesai colleagues Antonio de la Torre and Fran Reyes, and to all the colleagues of the Prime Services Team at LIFULL Connect for all the mobs we enjoy together.

Footnotes:

[1] Currently, this validation is not done in the controller anymore. The code shown above belongs to a very early stage of an API we're developing.
[2] Have a look, for instance, at lombok's @Builder annotation for Java.

Sunday, April 7, 2019

The Beverages Prices Refactoring kata: a kata to practice refactoring away from an awful application of inheritance.

I created the Beverages Prices Refactoring kata for the Deliberate Practice Program I’m running at Lifull Connect offices in Barcelona (previously Trovit). Its goal is to practice refactoring away from a bad usage of inheritance.

The code computes the price of the different beverages that are sold in a coffee house. Some supplements can be added to those beverages, and each supplement increases the price a bit. Not all combinations of drinks and supplements are possible.

Just having a quick look at the tests of the initial code would give you an idea of the kind of problems it might have:

If that’s not enough have a look at its inheritance hierarchy:
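
The diagram isn’t embedded in this archive, but a sketch of the shape of such a hierarchy (with illustrative class names and prices) gives the idea: one subclass per combination of beverage and supplements.

abstract class Beverage {
    abstract double price();
}

class Coffee extends Beverage {
    double price() { return 1.20; }
}

class CoffeeWithMilk extends Coffee {
    double price() { return super.price() + 0.10; }
}

class CoffeeWithMilkAndCream extends CoffeeWithMilk {
    double price() { return super.price() + 0.15; }
}

class Tea extends Beverage {
    double price() { return 1.50; }
}

class TeaWithMilk extends Tea {
    double price() { return super.price() + 0.10; }
}

// ...and so on: adding an optional supplement to every beverage roughly
// doubles the number of classes in this combinatorial explosion.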

To make things worse, we are asked to add an optional cinnamon supplement, costing 0.05€, to the whole existing catalog of beverages. We think we should refactor this code a bit before introducing the new feature.

We hope you have fun practicing refactoring with this kata.

Tuesday, September 11, 2018

Open courses in the Canary Islands

In the first half of this year we hadn't been able to run any open TDD course. Our daily work, the agenda of events we took part in, and all the meetings, decisions and paperwork involved in launching our cooperative had absorbed us. It was a pity, because we love giving open courses thanks to the enthusiasm and the eagerness to work and learn that the people who attend them bring with them. These courses are also very interesting for us because they let us meet people who sometimes end up collaborating with Codesai. That was, for instance, my case, or Luis's.

That's why, right after incorporating the cooperative, Fran and I decided to take the plunge and announce that we would run an open course in Tenerife one month later, in mid-July, just before the start of the holidays for many people. We had very little time to announce it, but it was a very good opportunity to start moving as a cooperative and to keep refining the new material we had already used in the TDD courses we gave this year at Merkle and Gradiant. Once we started organising it, I thought I might give another course in Gran Canaria with Ronny to make the most of the trip. So in the end we gave two open courses in one week: July 9 and 10 in Las Palmas de Gran Canaria, and July 12 and 13 in Santa Cruz de Tenerife.

Six people came to the Gran Canaria course. Most of them worked at AIDA, a company we had worked with for many years and with which we have a very close relationship. The course was intense because the attendees' levels of knowledge were very uneven. We worked a lot with the pairs during the practical exercises, and we even did an extra session one week later at AIDA's offices, in which we finished the last exercise of the course. Ronny and I ended up very satisfied, and we received very good feedback from the attendees, both in person and through the anonymous feedback forms I usually send a few weeks later. The course also allowed us to experiment with some new sections about how to use test doubles in a sustainable way, and we gathered quite a lot of information to apply in future courses.

The next day I flew to Tenerife, where Fran was waiting for me at the airport. We went to his home, where we made the final preparations and rested until the next day, when the course started. Seven people from different companies attended the Tenerife course. Most of them were former colleagues of Fran's, among them some of his old mentors, so it was a very special course for him. I enjoyed it a lot too. In this course the attendees' level was quite high and very interesting discussions came up. Again, we received very good feedback at the end.

A novelty in these courses was that, for the first time, we offered a 50% discount for groups that are underrepresented in technology. This is a discount we have decided to offer in our open courses from now on.

Looking at the courses in perspective a few months later, I think we learned several things:
  • One month is very little time to promote a course (no surprise there ㋡), especially in a market as small as the Canary Islands. Only thanks to our network of contacts and to companies we had already worked with in the past, individually or as Codesai, could we get enough people to make the course profitable.
  • The new material is working quite well. We are managing to keep the level of satisfaction the previous course had, while being able to go deeper into topics that the previous material didn't touch or only covered superficially.
  • The communication channels we usually use to promote our courses, Twitter and LinkedIn, were not enough to attract people from groups underrepresented in technology. Only two attendees took advantage of the 50% discount. We have to find other ways to reach these people. In fact, if you are reading this and know people who might be interested, please spread the word.
Personally, I'm very glad we decided to push for these courses: I've met very interesting and very pleasant people I hope to keep in touch with (thank you all very much!), I enjoyed the satisfaction of the work itself, and I had great times and conversations with my colleagues Ronny and Fran. The latter is very valuable and important because, being distributed across different regions (and even countries) and working for different clients, we miss the human contact with our colleagues. That's why every course and/or on-site consulting session we do in pairs is a great opportunity to strengthen the bond between us. Ronny, Fran, thank you so much for your hospitality and all the good times.

Before finishing, I wanted to thank Manuel Tordesillas for the great job he did preparing all the katas of the course in C#.
PS: Our next open TDD course will be on October 18 and 19 in Barcelona, and again we'll offer a 50% discount for groups underrepresented in technology. Spread the word ㋡

Originally published in Codesai's blog.

Thursday, June 28, 2018

Back at Merkle in 2018

TDD training

In the last quarter of 2017 we delivered several TDD trainings at Merkle’s offices in Barcelona and did several consulting sessions with their JavaScript and Java teams. Merkle is a company with high quality standards for the software they develop, so we were very happy when they contacted us to collaborate again this year on several TDD trainings and consulting sessions.
So far we have done the first round of consulting sessions and the first of the TDD trainings. It was a very special one because it was the first test drive of our revamped training material. We used the feedback from former attendees to refine it: we removed some parts that we thought were less valuable, and added new modules that better reflect the way we understand TDD at Codesai, so that the course is now more dynamic and easier to follow. We are looking forward to the next consulting sessions and TDD trainings.

TDD training

We know that a two-day introductory TDD training is not enough for a team to start doing TDD in a real environment. After the training, once the team returns to work, a lot of questions arise and the way to apply TDD is not always clear to them. In order to help Merkle’s teams in this process, we are supplementing the TDD training with, on the one hand, several rounds of consulting sessions in which we address together the difficulties that arise in the teams’ daily job, and, on the other hand, one of the main novelties of this year’s collaboration with Merkle: a 4-month deliberate practice program. This program is aimed at improving the skills of a group of about 22 Merkle developers in object-oriented design, refactoring and TDD. We will give more details about it in a future post.

Deliberate Practice Program

Before finishing this post, we would love to thank Merkle (and especially Nelson Cardenas) for trusting us again to train their developers and help them grow their skills.

Originally published in Codesai's blog.

Tuesday, June 26, 2018

Improving your reds

Improving your reds is a simple tip described by Steve Freeman and Nat Pryce in their wonderful book Growing Object-Oriented Software, Guided by Tests. It consists of a small variation to the TDD cycle in which you watch the error message of your failing test and ask yourself whether the information it gives you would make it easier to diagnose the origin of the problem in case this error appears as a regression in the future. If the answer is no, you improve the error message before going on to make the test pass.


From Growing Object-Oriented Software by Nat Pryce and Steve Freeman.

Investing time in improving your reds will prove very useful for your colleagues and yourself because the clearer the error message, the better the context to fix a regression effectively. Most of the time, applying this small variation to the TDD cycle requires only a small effort. As a simple example, have a look at the following assertion and how it fails:
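
The original snippet isn’t embedded here; a hypothetical reconstruction of the idea, using JUnit 5 and a Coordinates class that lacks toString(), would be:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// a hypothetical Coordinates class with equals but no toString()
class Coordinates {
    private final int x;
    private final int y;

    Coordinates(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Coordinates)) return false;
        Coordinates other = (Coordinates) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}

class CoordinatesTest {
    @Test
    void fails_with_an_unhelpful_message() {
        assertEquals(new Coordinates(3, 2), new Coordinates(2, 2));
        // fails with something like:
        // expected: <Coordinates@6d06d69c> but was: <Coordinates@7852e922>
    }
}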

Why is it failing? Will this error message help us know what’s happening if we see it in the future?
The answer is clearly no, but with little effort we can add a bit of code to make the error message clearer (implementing the toString() method):
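
Continuing the hypothetical example above:

// adding to the hypothetical Coordinates class above:
@Override
public String toString() {
    return "Coordinates{x=" + x + ", y=" + y + "}";
}

// now the same assertion fails with:
// expected: <Coordinates{x=3, y=2}> but was: <Coordinates{x=2, y=2}>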

This error message is much clearer than the previous one and will help us be more effective both while test-driving the code and if/when a regression error happens in the future; and we got it just by adding a toString() implementation generated by our IDE.

Take that low hanging fruit, start improving your reds!

Originally published in Codesai's blog.

Friday, March 30, 2018

Kata: A small kata to explore and play with property-based testing

1. Introduction.

I've been reading Fred Hebert's wonderful PropEr Testing online book about property-based testing. So to play with it a bit, I did a small exercise. This is its description:

1.1. The kata.

We'll implement a function that can tell if two sequences are equal regardless of the order of their elements. The elements can be of any type.

We'll use property-based testing (PBT). Use the PBT library of your language (bring it already installed).

Follow these constraints:

  1. You can't use or compute frequencies of elements.
  2. Work test first: write a test, then write the code to make that test pass.
  3. If you get stuck, you can use example-based tests to drive the implementation. However, at the end of the exercise, only property-based tests can remain.

Use mutation testing to check if your tests are good enough. We'll do it manually, injecting failures in the implementation code (by commenting out or changing parts of it) and checking whether the tests are able to detect the failure, to avoid using more libraries.

2. Driving a solution using both example-based and property-based tests.

I used Clojure and its test.check library (an implementation of QuickCheck) to do the exercise. I also used my favorite Clojure test framework, Brian Marick's Midje, which has a macro, forall, that makes it very easy to integrate property-based tests with Midje.

So I started to drive a solution using an example-based test (thanks to Clojure's dynamic nature, I could use vectors of integers to write the tests):

which I made pass using the following implementation:

Then I wrote a property-based test that failed:

This is how the failure looked in Midje (test.check returns more output when a property fails, but Midje extracts and shows only the information it considers more useful):

The most useful piece of information for us in this failure message is the quick-check shrunken failing values. When a property-based testing library finds a counter-example for a property, it applies a shrinking algorithm which tries to reduce it to a minimal counter-example that produces the same test failure. In this case, the [1 0] vector is the minimal counter-example found by the shrinking algorithm that makes this test fail.

Next I made the property-based test pass by refining the implementation a bit:

I didn't know which property to write next, so I wrote a failing example-based test involving duplicate elements instead:

and refined the implementation to make it pass:

With this, the implementation was done (I chose a function that was easy to implement, so I could focus on thinking about properties).

3. Getting rid of example-based tests.

Then the next step was finding properties that could make the example-based tests redundant. I started by trying to remove the first example-based test. Since I didn't know test.check's generators and combinators library, I started exploring it on the REPL with the help of its API documentation and cheat sheet.

My sessions on the REPL to build generators bit by bit were a process of shallowly reading bits of documentation followed by trial and error. This tinkering sometimes led to quick successes, and most of the time to failures, which led to deeper and more careful reading of the documentation and more trial and error. In the end I managed to build the generators I wanted. The sample function was very useful throughout the process to check what each part of the generator would generate.

For the sake of brevity I will show only summarized versions of my REPL sessions where everything seems easy and linear...

3.1. First attempt: a partial success.

First, I wanted to create a generator that generated two different vectors of integers so that I could replace the example-based tests that were checking two different vectors. I used the list-distinct combinator to create it and the sample function to be able to see what the generator would generate:

I used this generator to write a new property which made it possible to remove the first example-based test but not the second one:

In principle, we might think that the new property should have been enough to also allow removing the last example-based test, the one involving duplicate elements. A quick manual mutation test after removing that example-based test showed that it wasn't: when I commented out the line (= (count s1) (count s2)) in the implementation, the property-based tests weren't able to detect the regression.

This was due to the low probability of generating a pair of random vectors that were different only because of duplicate elements, which was precisely what the commented-out line, (= (count s1) (count s2)), was there to check. If we ran the tests enough times, we'd eventually win the lottery of generating a counter-example that detects the regression. So we had to improve the generator to increase that probability or, even better, to make sure it would detect the regression.

In practice, we'd combine example-based and property-based tests. However, my goal was learning more about property-based testing, so I went on and tried to improve the generators (that's why this exercise has the constraint of using only property-based tests).

3.2. Second attempt: success!

So, I worked a bit more on the REPL to create a generator that would always generate vectors with duplicate elements. For that I used test.check's let macro, the tuple, such-that and not-empty combinators, and Clojure's core library repeat function to build it.

The following snippet shows a summary of the work I did on the REPL to create the generator using again the sample function at each tiny step to see what inputs the growing generator would generate:

Next I used this new generator to write properties that this time did detect the regression mentioned above. Notice how there are separate properties for sequences with and without duplicates:

After tinkering a bit more with some other generators like return and vector-distinct, I managed to remove a redundant property-based test, getting to this final version:

4. Conclusion.

All in all, this exercise was very useful to think about properties and to explore test.check's generators and combinators. Using the REPL made this exploration very interactive and a lot of fun. You can find the code of this exercise on this GitHub repository.

A couple of days later I proposed to solve this exercise at the last Clojure Developers Barcelona meetup. I received very positive feedback, so I'll probably propose it for a Barcelona Software Craftsmanship meetup event soon.

Wednesday, March 28, 2018

Examples lists in TDD

1. Introduction.

During coding dojos and some mentoring sessions I've noticed that most people just start test-driving code without having thought a bit about the problem first. Unfortunately, writing a list of examples before starting to do TDD is a practice that is most of the time neglected.

Writing a list of examples is very useful because having to find a list of concrete examples forces you to think about the problem at hand. In order to write each concrete example in the list, you need to understand what you are trying to do and how you will know when it is working. This exploration of the problem space improves your knowledge of the domain, which will later be very useful while doing TDD to design a solution. However, just generating a list of examples is not enough.

2. Orthogonal examples.

A frequent problem I've seen in beginners' lists is that many of the examples are redundant because they would drive the same piece of behavior. When two examples drive the same behavior, we say that they overlap with each other; they are overlapping examples.

To explore the problem space effectively, we need to find examples that drive different pieces of behavior, i.e. that do not overlap. From now on, I will refer to those non-overlapping examples as orthogonal examples[1].

Keeping this idea of orthogonal examples in mind while exploring a problem space, will help us prune examples that don't add value, and keep just the ones that will force us to drive new behavior.

How can we get those orthogonal examples?
  1. Start by writing all the examples that come to your mind.
  2. As you gather more examples ask yourself which behavior they would drive. Will they drive one clear behavior or will they drive several behaviors?
  3. Try to group them by the piece of behavior they'd drive and see which ones overlap so you can prune them.
  4. Identify also which behaviors of the problem are not addressed by any example yet. This will help you find a list of orthogonal examples.
With time and experience you'll start seeing these behavior partitions in your mind and spend less time finding orthogonal examples.

3. A concrete application.

Next, we'll explore a concrete application using a subset of the Mars Rover kata:
  • The rover is located on a grid at some point with coordinates (x,y) and facing a direction encoded with a character.
  • The meaning of each direction character is:
    • "N" -> North
    • "S" -> South
    • "E" -> East
    • "W" -> West
  • The rover receives a sequence of commands (a string of characters) which are codified in the following way:
    • When it receives an "f", it moves forward one position in the direction it is facing.
    • When it receives a "b", it moves backward one position in the direction it is facing.
    • When it receives a "l", it turns 90º left changing its direction.
    • When it receives a "r", it turns 90º right changing its direction.

Heuristics.

Let's start writing a list of examples that explores this problem. But how?

Since the rover is receiving a sequence of commands, we can apply a useful heuristic for sequences to get us started: J. B. Rainsberger's "0, 1, many, oops" heuristic [2].

In this case, it means generating examples for: no commands, one command, several commands and unknown commands.

I will use the following notation for examples:
(x, y, d), commands_sequence -> (x’, y’, d’)
Meaning that, given the rover is in an initial location with x and y coordinates, and facing a direction d, after receiving a given sequence of commands (which is represented by a string), the rover will be located at x’ and y’ coordinates and facing the d’ direction.

Then our first example corresponding to no commands might be any of:
(0, 0, "N"), "" -> (0, 0, "N")
(1, 4, "S"), "" -> (1, 4, "S")
(2, 5, "E"), "" -> (2, 5, "E")
(3, 2, "E"), "" -> (3, 2, "E")
...
Notice that in these examples we don't care about the specific position or direction of the rover. The only important thing here is that the position and direction of the rover do not change. These examples would all drive the same behavior, so we might express this fact using a more generic example:
(any_x, any_y, any_direction), "" -> (any_x, any_y, any_direction)
where we have used any_x, any_y and any_direction to make explicit that the specific values that any_x, any_y and any_direction take are not important for these tests. What is important for the tests is that the values of x, y and direction remain the same after applying the sequence of commands [3].

Next, we focus on receiving one command.

In this case there are a lot of possible examples, but we are only interested in those that are orthogonal. Following our recommendations to get orthogonal examples, you can get to the following set of 16 examples that can be used to drive all the one-command behavior (we're using any_x, any_y where we can):
(4, any_y, "E"), "b" -> (3, any_y, "E")
(any_x, any_y, "S"), "l" -> (any_x, any_y, "E")
(any_x, 6, "N"), "b" -> (any_x, 5, "N")
(any_x, 3, "N"), "f" -> (any_x, 4, "N")
(5, any_y, "W"), "f" -> (4, any_y, "W")
(2, any_y, "W"), "b" -> (3, any_y, "W")
(any_x, any_y, "E"), "l" -> (any_x, any_y, "N")
(any_x, any_y, "W"), "r" -> (any_x, any_y, "N")
(any_x, any_y, "N"), "l" -> (any_x, any_y, "W")
(1, any_y, "E"), "f" -> (2, any_y, "E")
(any_x, 8, "S"), "f" -> (any_x, 7, "S")
(any_x, any_y, "E"), "r" -> (any_x, any_y, "S")
(any_x, 3, "S"), "b" -> (any_x, 4, "S")
(any_x, any_y, "W"), "l" -> (any_x, any_y, "S")
(any_x, any_y, "N"), "r" -> (any_x, any_y, "E")
(any_x, any_y, "S"), "r" -> (any_x, any_y, "W")
There are already important properties of the problem that we can learn from these examples:
  1. The position of the rover is irrelevant for rotations.
  2. The direction the rover is facing is relevant for every command. It determines how each command will be applied.
Sometimes it can also be useful to think of different ways of grouping the examples to see how they may relate to each other.

For instance, we might group the examples above by the direction the rover faces initially:
Facing East
(1, any_y, "E"), "f" -> (2, any_y, "E")
(4, any_y, "E"), "b" -> (3, any_y, "E")
(any_x, any_y, "E"), "l" -> (any_x, any_y, "N")
(any_x, any_y, "E"), "r" -> (any_x, any_y, "S")
Facing West
(5, any_y, "W"), "f" -> (4, any_y, "W") ...
or, by the command the rover receives:
Move forward
(1, any_y, "E"), "f" -> (2, any_y, "E")
(5, any_y, "W"), "f" -> (4, any_y, "W")
(any_x, 3, "N"), "f" -> (any_x, 4, "N")
(any_x, 8, "S"), "f" -> (any_x, 7, "S")
Move backward
(4, any_y, "E"), "b" -> (3, any_y, "E")
(2, any_y, "W"), "b" -> (3, any_y, "W")
...
Trying to classify the examples helps us explore different ways in which we can use them to make the code grow by discovering what Mateu Adsuara calls dimensions of complexity of the problem[4]. Each dimension of complexity can be driven using a different set of orthogonal examples, so this knowledge can be useful to choose the next example when doing TDD.

Which of the two groupings shown above might be more useful to drive the problem?

In this case, I think that the by the command the rover receives grouping is more useful, because each group will help us drive a whole behavior (the command). If we were to use the by the direction the rover faces initially grouping, we'd end up with partially implemented behaviors (commands) after using each group of examples.

Once we have the one-command examples, let's continue using the "0, 1, many, oops" heuristic and find examples for the several commands category.

We can think of many different examples:
(7, 4, "E"), "rb" -> (7, 5, "S") (7, 4, "E"), "fr" -> (8, 4, "S") (7, 4, "E"), "ffl" -> (9, 4, "N")
The thing is that any of them might be thought of as a composition of several commands:
(7, 4, "E"), "r" -> (7, 4, "S"), "b" -> (7, 5, "S")
Then the only new behavior these examples would drive is composing commands.

So it turns out that there's only one orthogonal example in this category. We might choose any of them, for instance the following one:
(7, 4, "E"), "frrbbl" -> (10, 4, "S")
This doesn't mean that, when we're later doing TDD, we have to use only this example to drive the behavior. We can use more overlapping examples if we're unsure about how to implement it and need to use triangulation[5].

Finally, we can consider the "oops" category, which for us is unknown commands. In this case, we need to find out how we'll handle them, and this might involve some conversations.

Let's say that we find out that we should ignore unknown commands, then this might be an example:
(any_x, any_y, any_direction), "*" -> (any_x, any_y, any_direction)
Before finishing, I’d like to remark that it’s important to keep this technique as lightweight and informal as possible, writing the examples on a piece of paper or on a whiteboard, and never, ever, write them directly as tests (which I’ve also seen many times).

There are two important reasons for this:
  1. Avoiding implementation details to leak into a process meant for thinking about the problem space.
  2. Avoiding getting attached to the implementation of tests, which can create some inertia and push you to take implementation decisions without having explored the problem well.

4. Conclusion.

Writing a list of examples before starting to do TDD is an often overlooked technique that can be very useful for reflecting on a given problem. We also talked about how aiming to find orthogonal examples can make your list of examples much more effective, and saw some useful heuristics that might help you find them.

Then we worked through a small example in tiny steps, comparing different alternatives to make the technique more explicit, and applied one of the heuristics.

With practice, this technique becomes more and more a mental process. You'll start doing it in your mind and find orthogonal examples faster. At the same time, you’ll also start losing awareness of the process[6].

Nonetheless, writing a list of examples, or other similarly lightweight exploration techniques, can still be very helpful in more complicated cases. This technique can also be very useful to think about a problem when you're working on it with someone else (pairing, mob programming, etc.), because it enhances communication.

5. Acknowledgements.

Many thanks to Alfredo Casado, Álvaro Garcia, Abel Cuenca, Jaime Perera, Ángel Rojo, Antonio de la Torre, Fran Reyes and Manuel Tordesillas for giving me great feedback to improve this post and for all the interesting conversations.

Footnotes.
[1] This concept of orthogonal examples is directly related to Mateu Adsuara's dimensions of complexity idea, because each dimension of complexity can be driven using a different set of orthogonal examples. For a definition of dimensions of complexity, see footnote [4].
[2] Another very useful heuristic is described in James Grenning's TDD Guided by ZOMBIES post.
[3] This is somehow related to Brian Marick’s metaconstants which can be very useful to write tests in dynamic languages. They’re also hints about properties that might be helpful in property-based testing.
[4] Dimension of Complexity is a term used by Mateu Adsuara in a talk at Socrates Canarias 2016 to name an orthogonal functionality. In that talk he used dimensions of complexity to classify the examples in his tests list in different groups and help him choose the next test when doing TDD.
He talked about it in three posts.
Other names for the same concept that I've heard are axes of change, directions of change or vectors of change.
[5] Even though triangulation is probably the most popular, there are two other strategies for implementing new functionality in TDD: obvious implementation and fake it. Kent Beck in his Test-driven Development: By Example book describes the three techniques and says that he prefers to use obvious implementation or fake it most of the time, and only use triangulation as a last resort when design gets complicated.
[6] This loss of awareness of the process is the price of expertise according to the Dreyfus model of skill acquisition.

Saturday, March 10, 2018

Kata: Generating bingo cards with clojure.spec, clojure/test.check, RDD and TDD

Clojure Developers Barcelona has been running for several years now. Since we're not many yet, we usually do mob programming sessions as part of what we call "sagas". For each saga, we choose an exercise or kata and solve it during the first one or two sessions. After that, we start imagining variations on the exercise using different Clojure/ClojureScript libraries or technologies we feel like exploring and develop those variations in following sessions. Once we feel we can't imagine more interesting variations or we get tired of a given problem, we choose a different problem to start a new saga. You should try doing sagas, they are a lot of fun!

Recently we've been working on the Bingo Kata.

The initial implementation

These were the tests we wrote to check the randomly generated bingo cards:

and the code we initially wrote to generate them was something like (we didn't save the original one):

As you can see, the tests are not concerned with which specific numeric values are included on each column of the bingo card. They are just checking that the cards follow the specification of a bingo card. This makes them very suitable for property-based testing.

Introducing clojure.spec

In the following session of the Bingo saga, I suggested creating the bingo cards using clojure.spec.
spec is a Clojure library to describe the structure of data and functions. Specs can be used to validate data, conform (destructure) data, explain invalid data, generate examples that conform to the specs, and automatically use generative testing to test functions.
For a brief introduction to this wonderful library see Arne Brasseur's Introduction to clojure.spec talk.

I'd used clojure.spec at work before. At my current client, Green Power Monitor, we've been using it for a while to validate the shape (and in some cases the types) of data flowing through some important public functions of some key namespaces. We started using pre- and post-conditions for that validation (see Fogus' Clojure’s :pre and :post to know more), and from there it felt like a natural step to start using clojure.spec to write some of them.

Another common use of clojure.spec specs is to generate random data conforming to the spec to be used for property-based testing.

In the Bingo kata case, I thought that we might use this ability to randomly generate data conforming to a spec in production code. This meant that, instead of writing code to randomly generate bingo cards and then testing that the results were as expected, we might describe bingo cards using clojure.spec and then take advantage of that specification to randomly generate them using clojure.test.check's generate function.

So with this idea in our heads, we started creating a spec for bingo columns on the REPL bit by bit (for the sake of brevity what you can see here is the final form of the spec):

then we discovered clojure.spec's coll-of function which allowed us to simplify the spec a bit:

Generating bingo cards

Once we thought we had it, we tried to use the column spec to generate columns with clojure.test.check's generate function, but we got the following error:
ExceptionInfo Couldn't satisfy such-that predicate after 100 tries.
Of course we were trying to find a needle in a haystack...

After some trial and error on the REPL and reading the clojure.spec guide, we found clojure.spec's int-in function and we finally managed to generate the bingo columns:

Then we used the spec code from the REPL to write the bingo cards spec:

in which we wrote the create-column-spec factory function that creates column specs to remove duplication between the specs of different columns.

With this in place the bingo cards could be created in a line of code:

Introducing property-based testing

Property-based tests make statements about the output of your code based on the input, and these statements are verified for many different possible inputs.
Jessica Kerr (Property-based testing: what is it?)
Having the specs, it was very easy to change our bingo card test to use property-based testing instead of example-based testing, just by using the generator created by clojure.spec:

See in the code that we're reusing the check-column function we wrote for the example-based tests.

This change was so easy because:
  1. clojure.spec can produce a generator for clojure/test.check from a given spec.
  2. The initial example tests, as I mentioned before, were already checking the properties of a valid bingo card. This means that they weren't concerned with which specific numeric values were included on each column of the bingo card; instead, they were just checking that the cards followed the rules for a bingo card to be valid.

Going fast with REPL driven development (RDD)

The next user story of the kata required us to check a bingo card to see if its player has won. We thought this might be easy to implement because we only needed to check that the numbers in the card were contained in the set of called numbers, so instead of doing TDD, we did REPL-driven development (RDD), playing a bit on the REPL:

Once we had the implementation working, we copied it from the REPL into its corresponding namespace

and wrote the quicker but ephemeral REPL tests as "permanent" unit tests:

In this case RDD allowed us to go faster than TDD, because RDD's feedback cycle is much faster. Once the implementation is working on the REPL, you can choose which REPL tests you want to keep as unit tests.

Sometimes I use only RDD, like in this case; other times I use a mix of TDD and RDD following this cycle:
  1. Write a failing test (using examples that are a bit more complicated than the typical ones you use when doing only TDD).
  2. Explore and triangulate on the REPL until I make the test pass with some ugly but complete solution.
  3. Refactor the code.
Other times I just use TDD.

I think what I use depends a lot on how easy I feel the implementation might be.

Last details

The last user story required us to create a bingo caller that randomly calls out Bingo numbers. To develop this story, we used TDD and an atom to keep the not-yet-called numbers. These were our tests:

and this was the resulting code:

And it was done! See all the commits here if you want to follow the process (many intermediate steps happened on the REPL). You can find all the code on GitHub.

Summary

This experiment was a lot of fun because we got to play with both clojure.spec and clojure/test.check, and we learned a lot. While explaining what we did, I talked a bit about property-based testing and how I use REPL-driven development.

Thanks again to all my colleagues in Clojure Developers Barcelona!

Friday, December 15, 2017

Third open TDD Training in Barcelona (in spanish)

Kata at the November 2017 open TDD course in Barcelona

Given the success of the two open TDD courses we had run this year in Barcelona, we decided to run a new open course in November. Open courses are especially exciting to me because it was at one of them, given by Carlos Blé in 2011, that I had the opportunity to start learning about TDD.

Initially, Luis and I were going to give the course together, but, unfortunately, I came down with Miller Fisher syndrome, and in the end Luis had to give the course on his own.

Eight people attended, seven of whom came from the same company, Systelab. Many thanks for trusting us, and special kudos to the people who paid for the course out of their own pockets!

Group photo of the November 2017 open TDD course in Barcelona

The fact that almost everyone came from the same company meant there wasn't as much variety of languages as in other open courses: they all used Java. The katas went quite smoothly and the feedback we received was quite good.

2017 has been a very fruitful year as far as open TDD courses in Barcelona are concerned. We have been lucky enough to run three courses (see the posts about the first and the second) and to meet lots of great people. Many thanks to everyone who attended.

To wrap up, I wanted to thank Magento, Ángel Rojo, Fernando Monferrer and our colleague Dácil for all the help they gave us to make these three courses happen.

Originally published in Codesai's blog.

Saturday, December 2, 2017

TDD Trainings at Merkle Comet

Last October, Luis Rovirosa, Jordi Anguela and I had the pleasure of giving three TDD trainings at Merkle Comet’s impressive offices in Barcelona. Merkle Comet is a worldwide-distributed consulting company which is very committed to delivering not only value but also high-quality software to their clients.

We worked with three groups composed of people from different teams. This mix was challenging but also rewarding, because the members of each team brought with them different sets of problems, skills and practices, which led to interesting discussions and opportunities to learn. All in all, I think we managed to meet the expectations they had of our training.

A moment from the first TDD course at Comet

I delivered two of the trainings with Luis, and Jordi and Luis delivered the other one. Whenever possible we try to give our trainings in pairs, because we think it makes them much better. With two trainers, we can devote more time to giving feedback and helping each person with their doubts while they work through the katas. It also helps us read our audience better, so that we can adapt our message to their needs.

We loved working with such great people, and we'd like to thank them all for the great time we had. We'd also like to especially thank Nelson Cardenas for contacting us to work with them. We'd love to continue collaborating with this great team in the future.

Attendees of the first TDD course at Comet
Attendees of the second TDD course at Comet
Attendees of the third TDD course at Comet
Published originally in Codesai's blog.

Monday, June 26, 2017

Second Open TDD Training in Barcelona (in spanish)

Last month Luis and I ran an open TDD course in Barcelona. It was a very interesting edition in which we tried out some changes to the course content, and in which several familiar faces from the Barcelona community took part.

May 2017 open TDD course in Barcelona

In recent editions of the course we had noticed that the outside-in TDD exercise we did on the second day, the Bank Kata, was proving very difficult for the students. In outside-in TDD, test doubles are used as a tool for designing and exploring interfaces, which is very hard for someone who hasn't yet fully understood test doubles and learned to handle them fluently.

This difficulty was getting in the way of their understanding of test doubles. For that reason, we decided to move the outside-in TDD exercise to the beginning of a more advanced TDD course we are preparing, and to do a simpler exercise in its place that would help them assimilate the concepts better.

The new exercise we chose was the Coffee Machine Kata, a very interesting kata that I had already tried at an SCBCN dojo. We believe our experiment worked quite well: with this new kata, how and when to apply each kind of test double is understood more gradually and less traumatically. We were very satisfied with the result of our little experiment and with the feedback we received.

This edition was the second we had run so far this year, and more people attended than the previous time, largely because four people who work at Inycom came over from Zaragoza. Many thanks for trusting us.

We'd also like to thank all the attendees for their commitment and eagerness to learn. Finally, thanks to Magento, and especially to Ángel Rojo, for hosting us at their offices once again and for all the help they gave us, and to our colleague Dácil for organizing everything.

Published originally in Codesai's blog.

Sunday, June 4, 2017

Kata: Luhn Test in Clojure

We recently did the Luhn Test kata at a Barcelona Software Craftsmanship event.

This is a very interesting problem for practicing TDD because it isn't obvious how to test-drive a solution through the only function in its public API: valid?.

What we observed during the dojo is that, since the implementation of the Luhn Test is described in so much detail in terms of the s1 and s2 functions (check its description here), it was very tempting for participants to test these two private functions instead of the valid? function.

Another variant of that approach consisted in making those functions public in a different module or class, to avoid feeling "guilty" about testing private functions. Even though in this case only public functions were tested, this approach produced a solution with many more elements than needed, i.e., a poorer design according to the four rules of simple design. It also produced tests that are tightly coupled to implementation details.

In a language with a good REPL, a better and faster approach might have been to write a failing test for the valid? function, and then interactively develop the s1 and s2 functions on the REPL. Combining s1 and s2 would then have made the failing test for valid? pass. At the end, we could add some more tests for valid? to gain confidence in the solution.

This mostly REPL-driven approach is fast and produces tests that don't know anything about the implementation details of valid?. However, we need to realize that it follows the same technique of "testing" (although only interactively) private functions. The huge improvement is that we don't keep these tests and we don't create more elements than needed. Its weakness is that it leaves us with less protection against possible regressions, which is why we need to complement it with some tests written after the code to gain confidence in the solution.

If we use TDD, writing tests only for the valid? function, we can avoid creating superfluous elements and, at the same time, build good protection against regressions. We only need to choose our test examples wisely.

These are the tests I used to test drive a solution (using Midje):
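As a hypothetical reconstruction in the same spirit (the exact examples surely differed), such Midje tests could look like this:

  (ns luhn.core-test
    (:require [midje.sweet :refer :all]
              [luhn.core :refer [valid?]]))

  ;; The first three checks drive the solution:
  (fact "simple numbers drive the algorithm"
    (valid? "0") => true
    (valid? "1") => false
    (valid? "26") => true)  ; forces doubling every second digit from the right

  ;; The last four were added afterwards to gain confidence:
  (fact "further examples added to gain confidence"
    (valid? "59") => true   ; forces reducing doubled digits greater than 9
    (valid? "79927398713") => true
    (valid? "79927398710") => false
    (valid? "49927398716") => true)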

Notice that I needed seven tests in total: the first ones drove the solution, and the last four were added to gain confidence in it.

This is the resulting code:
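And, as a hedged sketch rather than the actual committed code, a solution driven by those tests might look like this:

  (ns luhn.core)

  (defn- digits [s]
    (map #(Character/digit % 10) s))

  ;; Given digits right-to-left, double every second one.
  (defn- double-every-second [ds]
    (map-indexed (fn [i d] (if (odd? i) (* 2 d) d)) ds))

  ;; Summing the digits of 10..18 is equivalent to subtracting 9.
  (defn- reduce-big-digit [d]
    (if (> d 9) (- d 9) d))

  (defn valid? [s]
    (zero? (mod (->> (digits s)
                     reverse
                     double-every-second
                     (map reduce-big-digit)
                     (reduce +))
                10)))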

See all the commits here if you want to follow the process. You can find all the code on GitHub.

We can improve this regression test suite by changing some of the tests to make them fail for different reasons:
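For instance (these concrete examples are mine, not necessarily the original ones), each check can be chosen to catch a distinct bug:

  (fact "fails if every second digit is not doubled"
    (valid? "19") => false)  ; 1 + 9 = 10 would wrongly pass without doubling

  (fact "fails if doubling starts from the wrong end"
    (valid? "18") => true)   ; 8 + (1 * 2) = 10 only when doubling starts from the right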

I think this kata is very interesting for practicing TDD, in particular, to learn how to choose good examples for your tests.

Wednesday, April 12, 2017

JavaScript Training at Velneo (in spanish)

On the 7th and 8th of March 2017, Manuel Rivero and I taught a JavaScript course for Velneo at their offices in Gijón. There were ten of us, from Velneo itself as well as from other companies in the group.

This post would be just another note about our day-to-day work, were it not for the final push to write it coming from a piece of news released this week: VisualMS (the group Velneo belongs to) was recognized as the best place to work in Spain, winning first prize in the Best Place to Work 2017 awards in the category of companies with 50 to 100 employees.

And that doesn't surprise us at all: you can tell they have a special bond with each other, and we thoroughly enjoyed those two days sharing time and space with a fantastic group of people.

Some members of the Velneo team

What is Velneo? In my own words: Velneo is an environment for building ERPs. Its customers are development companies that configure, adapt and extend the product for an end client, which may be a factory, a public administration or an insurance company. They have found that their most valued strength is communication in Spanish, and they have a strong presence both in Spain and in Latin America.

Speaking a bit about the course itself, it was interesting because we tailored the content to their needs, basing it on JavaScript ES5. The engine currently integrated into their product doesn't support ES6, but we didn't see that as a problem, since at Codesai we are big fans of the module pattern and the Function As Object pattern, for example.

Manuel Rivero at Velneo

The course was a combination of theory and practice in the form of katas using TDD. The team had never practised it before, and you could tell they had a great time: pairing, thinking in baby steps, and getting some beautiful greens.

The overall rating of the course was very high (they gave us a 9 on average, thanks!), and they also gave us very valuable feedback. In particular, they told us they would have got more out of the katas if the subject matter had been more closely aligned with their business, since it took them too long to understand what each exercise was trying to achieve. We will use this knowledge wisely.

Antonio de la Torre at Velneo

We have compiled the bibliography we used to create the course, which is also useful for learning further. It's available here: Course bibliography

We also collected all the examples we used in this GitHub repository.

Thanks to the whole Velneo team for this great experience; we hope to work together again on other projects soon.

Published originally in Codesai's blog by Antonio de la Torre.