Monday, June 26, 2017

Second Open TDD Training in Barcelona (in Spanish)

Last month Luis and I ran an open TDD course in Barcelona. It was a very interesting edition in which we tried out some changes to the course content, and in which some familiar faces from the Barcelona community took part.

Open TDD course, May 2017, Barcelona

In recent editions of the course, we had noticed that the outside-in TDD exercise we did on the second day, the Bank Kata, was proving very difficult for the students. In outside-in TDD, test doubles are used as a tool for design and for exploring interfaces, which is very hard for someone who hasn't yet fully understood test doubles or learned to handle them fluently.

This difficulty was becoming an obstacle to their fully understanding test doubles. For that reason, we decided to move the outside-in TDD exercise to the beginning of a more advanced TDD course we are preparing, and to do a simpler exercise in its place that would help them better assimilate the concepts.

The new exercise we chose was the Coffee Machine Kata. It's a very interesting kata that I had already tried at an SCBCN dojo. We think our experiment worked quite well. With this new kata, students grasp more gradually and less traumatically how and when to apply each type of test double. We ended up very satisfied with the result of our little experiment and with the feedback we received.

This edition was the second we had run so far this year, and more people attended than the previous one. This was largely because four people who work at Inycom came over from Zaragoza. Many thanks for trusting us.

We would also like to thank all the attendees for their commitment and eagerness to learn. Finally, thanks to Magento, and especially to Ángel Rojo, for hosting us again in their offices and for all the help they gave us, and to our colleague Dácil for organizing everything.

Originally published on the Codesai blog.

Friday, June 23, 2017

Testing Om components with cljs-react-test

I'm working for Green Power Monitor, a company based in Barcelona that specializes in monitoring renewable energy power plants and has clients all over the world.

We're developing a new application to monitor and manage renewable energy portfolios. I'm part of the front-end team. We're working on a challenging SPA that includes a large amount of data visualization and should present that data in a UI that is polished and easy to look at. We are using ClojureScript with Om (a ClojureScript interface to React), which are helping us be very productive.

I’d like to show an example in which we test an Om component used to select a command from several options, such as loading stored filtering and grouping criteria for alerts (views), saving the current view, deleting an already saved view, or going back to the default view.

This control sends a different message through a core.async channel depending on the selected command. This is the behavior we are going to test in this example: that the right message is sent through the channel for each selected command. We try to write all our components following this guideline of communicating with the rest of the application by sending data through core.async channels. Using channels makes testing much easier because the control doesn’t know anything about its context.

We’re using cljs-react-test to test these Om components as black boxes. cljs-react-test is a ClojureScript wrapper around React's Test Utilities which provides functions that allow us to mount and unmount components in test fixtures, and to interact with components by simulating events.

This is the code of the test:
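Since the embedded snippet isn't reproduced here, the following is a sketch of how such a test can be structured with cljs-react-test. The component, channel, state keys, and CSS selector names are illustrative, not the actual application code:

```clojure
(ns example.view-commands-test
  (:require [cljs.test :refer-macros [deftest is async use-fixtures]]
            [cljs-react-test.utils :as tu]
            [cljs-react-test.simulate :as sim]
            [cljs.core.async :refer [chan]]
            [dommy.core :refer-macros [sel1]]
            [om.core :as om]))

;; DOM node that acts as the container for the component under test
(def ^:dynamic c)

;; Async fixture: create the container before each test and
;; tear down React's rendering tree after each test
(use-fixtures :each
  {:before (fn [] (async done
                    (set! c (tu/new-container!))
                    (done)))
   :after  (fn [] (tu/unmount! c))})

(deftest selecting-save-view-sends-the-save-message
  (let [commands-channel (chan)
        ;; 1. Initial state; the combobox starts expanded to save a click
        app-state (atom {:views [] :expanded? true})]
    ;; 2. Mount the Om root on the container
    ;;    (view-commands-combobox is the component under test,
    ;;    an illustrative name)
    (om/root view-commands-combobox app-state
             {:target c
              :shared {:commands-channel commands-channel}})
    ;; 3. Declare what we expect to receive from the channel
    (expect-async-message commands-channel [:save-current-view])
    ;; 4. Select the desired option and click on it
    (sim/click (sel1 c :.save-view-option) nil)))
```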

We start by creating a var where we’ll put a DOM object that will act as container for our application, c.

We use a fixture function that creates this container before each test and tears down React's rendering tree after each test. Notice that the fixture uses the async macro so it can be used for asynchronous tests. If your tests are not asynchronous, use the simpler fixture example that appears in the cljs-react-test documentation.

All the tests follow this structure:

  1. Setting up the initial state in an atom, app-state. This atom contains the data that will be passed to the control.
  2. Mounting the Om root on the container. Notice that the combobox is already expanded to save a click.
  3. Declaring what we expect to receive from the commands-channel using expect-async-message.
  4. Finally, selecting the option we want from the combobox, and clicking on it.

expect-async-message is one of several functions we’re using to assert what to receive through core.async channels:
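One possible implementation of such a helper (a sketch; the real helper may differ, for instance in its timeout handling):

```clojure
(defn expect-async-message
  "Asserts that the next message received through ch equals expected-msg,
   failing the test if nothing arrives within 1000 ms."
  [ch expected-msg]
  (go
    (let [[msg port] (alts! [ch (timeout 1000)])]
      (if (= port ch)
        (is (= expected-msg msg))
        (is false "Timed out waiting for a message on the channel")))))
```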

The good thing about this kind of black-box test is that it interacts with components as a user would, so the tests know nearly nothing about how the component is implemented.

Interesting Webcast: "CS Education Zoo interviews David Nolen"

I've just watched this great CS Education Zoo #5 webcast with David Nolen.

Saturday, June 17, 2017

Course: Introduction to CSS3 on Coursera

My current client is Green Power Monitor (GPM), a company based in Barcelona that specializes in monitoring renewable energy power plants and has clients all over the world.

I'm part of a team that is developing a new application to monitor and manage renewable energy portfolios. We use C# and F# in the back-end and ClojureScript in the front-end.

I'm in the front-end team. We're developing a challenging SPA with lots of data visualization which has to look really good.

We're taking advantage of Om (a ClojureScript interface to React), core.async and ClojureScript to be more productive.

In other teams I've been on before, there were different people doing HTML & CSS and programming JavaScript. That's not the case at GPM. We are responsible not only for programming but also for all the styling of the application. We are using SASS.

Since I had never done CSS before, this was a big challenge for me. I dreaded every time I had to style a new Om control I had programmed. My colleagues Jordi and Andre have helped me a lot (thanks guys!). However, I wanted to become more productive and more independent, and to use less of their time, so I decided to do a CSS3 course.

I did the Introduction to CSS3 course from the University of Michigan. I learned how to use CSS3 to style pages, focusing both on proper syntax and on the importance of accessible design. I really liked Colleen van Lent's classes and how she encourages students to experiment and make messes in order to learn. Thanks Colleen!

After the course, I'm starting to be able to style my controls with less trial and error and with fewer questions for Jordi and Andre.

Learning CSS3 and doing all the styling myself is helping me to become a bit more rounded as a front-end developer.

Sunday, June 4, 2017

Kata: Luhn Test in Clojure

We recently did the Luhn Test kata at a Barcelona Software Craftsmanship event.

This is a very interesting problem to practice TDD because it isn't so obvious how to test drive a solution through the only function in its public API: valid?.

What we observed during the dojo is that, since the implementation of the Luhn Test is described in so much detail in terms of the s1 and s2 functions (check its description here), it was very tempting for participants to test these two private functions instead of the valid? function.

Another variant of that approach consisted in making those functions public in a different module or class, to avoid feeling "guilty" about testing private functions. Even though in this case only public functions were tested, this approach produced a solution with many more elements than needed, i.e., with a poorer design according to the 4 rules of simple design. It also produced tests that are very coupled to implementation details.

In a language with a good REPL, a better and faster approach might have been writing a failing test for the valid? function, and then interactively developing the s1 and s2 functions in the REPL. Combining s1 and s2 would then have made the failing test for valid? pass. At the end, we could add some other tests for valid? to gain confidence in the solution.
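For instance, assuming s1 sums the odd-position digits (counting from the right) and s2 sums the transformed even-position digits, as in the kata description, that interactive exploration could look roughly like this at the REPL:

```clojure
;; After sketching candidate definitions of s1 and s2 in the editor
;; and evaluating them, we check them against a number that is known
;; to pass the Luhn test (49927398716):
user=> (s1 "49927398716")
42
user=> (s2 "49927398716")
28
user=> (mod (+ (s1 "49927398716") (s2 "49927398716")) 10)
0
```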

This mostly REPL-driven approach is fast and produces tests that don't know anything about the implementation details of valid?. However, we need to realize that it follows the same technique of "testing" (although only interactively) private functions. The huge improvement is that we don't keep these tests and we don't create more elements than needed. The weakness of this approach, though, is that it leaves us with less protection against possible regressions. That's why we need to complement it with some tests written after the code to gain confidence in the solution.

If we use TDD writing tests only for the valid? function, we can avoid creating superfluous elements and create a good protection against regressions at the same time. We only need to choose our test examples wisely.

These are the tests I used to test drive a solution (using Midje):
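In Midje style, they look roughly like the following sketch (the concrete examples below are illustrative; the actual tests are in the repository linked at the end):

```clojure
(ns luhn.core-test
  (:require [midje.sweet :refer :all]
            [luhn.core :refer [valid?]]))

(facts "about the Luhn test"
  (fact "zero is valid"
    (valid? "0") => truthy)
  (fact "a single non-zero digit is not valid"
    (valid? "1") => falsey)
  (fact "an even-position digit is doubled"
    (valid? "18") => truthy)          ; 1*2 + 8 = 10
  (fact "a doubled two-digit result is reduced"
    (valid? "59") => truthy)          ; 5*2 = 10 -> 1; 1 + 9 = 10
  (fact "a known valid card number passes"
    (valid? "49927398716") => truthy))
```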

Notice that I actually needed 7 tests to drive the solution. The last four tests were added to gain confidence in it.

This is the resulting code:
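A solution along these lines (a sketch, not necessarily the committed version; the s1/s2 decomposition follows the kata description):

```clojure
(ns luhn.core)

(defn- digits
  "Digits of the card number, least significant first."
  [card-number]
  (map #(Character/getNumericValue %) (reverse card-number)))

(defn- s1
  "Sum of the digits in odd positions, counting from the right."
  [card-number]
  (reduce + (take-nth 2 (digits card-number))))

(defn- s2
  "Sum of the even-position digits after doubling them and
   reducing two-digit results by subtracting 9."
  [card-number]
  (->> (digits card-number)
       (drop 1)
       (take-nth 2)
       (map #(let [d (* 2 %)] (if (> d 9) (- d 9) d)))
       (reduce +)))

(defn valid? [card-number]
  (zero? (mod (+ (s1 card-number) (s2 card-number)) 10)))
```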

See all the commits here if you want to follow the process. You can find all the code on GitHub.

We can improve this regression test suite by changing some of the tests to make them fail for different reasons:
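For example, an example can be chosen so that the test fails only when one specific rule is broken (illustrative):

```clojure
;; "91" passes only if doubling 9 to 18 is correctly reduced to 9;
;; an implementation that forgets the subtract-9 rule computes
;; 1 + 18 = 19 and wrongly reports the number as invalid:
(fact "a doubling that produces a two-digit number is reduced"
  (valid? "91") => truthy)
```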

I think this kata is very interesting for practicing TDD, in particular, to learn how to choose good examples for your tests.