Record of experiments, readings, links, videos and other things that I find on the long road.
Improving your reds is a simple tip described by Steve Freeman and Nat Pryce in their wonderful book Growing Object-Oriented Software, Guided by Tests. It consists of a small variation to the TDD cycle in which you look at the error message of your failing test and ask yourself whether the information it gives would make it easy to diagnose the origin of the problem if this error appeared as a regression in the future. If the answer is no, you improve the error message before going on to make the test pass.
From Growing Object-Oriented Software by Nat Pryce and Steve Freeman.
Investing time in improving your reds will prove very useful for your colleagues and yourself because the clearer the error message, the better the context for fixing the regression error effectively. Most of the time, applying this small variation to the TDD cycle requires only a small effort. As a simple example, have a look at the following assertion and how it fails.
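The original snippet didn't survive in this copy, so here is a hypothetical stand-in: a Coordinate value class that implements equals() but not toString(). An equality assertion against two different coordinates fails with an opaque message along the lines of `expected: Coordinate@1b6d3586 but was: Coordinate@4554617c`.

```java
// Hypothetical example: a value class with equals() but no toString().
// Any assertion failure involving it prints the default Object.toString(),
// i.e. "Coordinate@<hashcode>", which tells us nothing about the values.
class Coordinate {
    private final int x;
    private final int y;

    Coordinate(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Coordinate)) return false;
        Coordinate other = (Coordinate) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}
```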
Why is it failing? Will this error message help us know what’s happening if we see it in the future?
The answer is clearly no, but with little effort, we can add a bit of code to make the error message clearer (implementing the toString() method).
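Continuing the hypothetical Coordinate example from above, an IDE-generated toString() is all it takes: with it in place, the same failing assertion reads something like `expected: Coordinate{x=2, y=3} but was: Coordinate{x=3, y=2}`.

```java
class Coordinate {
    private final int x;
    private final int y;

    Coordinate(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Coordinate)) return false;
        Coordinate other = (Coordinate) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }

    // IDE-generated toString(): turns an opaque failure message into one
    // that shows the actual values involved
    @Override
    public String toString() {
        return "Coordinate{x=" + x + ", y=" + y + "}";
    }
}
```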
This error message is much clearer than the previous one and will help us be more effective both while test-driving the code and if/when a regression error happens in the future. And we got this just by adding an implementation of toString() generated by our IDE.
Take that low hanging fruit, start improving your reds!
This is the original version of the code (ported from C# to Java):
As you can see, createUser is a really hard-to-test static function with too many responsibilities.
This is the final version of createUser after a "bit" of refactoring:
which is using the CreatingUser class:
The before and after code is not as interesting as how we got there.
Then, with the tests in place, it was a matter of identifying and separating responsibilities and introducing some value objects. This separation allowed us to remove the scaffolding produced by the extract and override technique producing much simpler and easier to understand tests.
You can follow the process by looking at all the commits (I committed the changes after every refactoring step). There you'll be able to see not only the process but also my hesitations, mistakes and changes of mind as I learned more about the code.
After reflecting on what I did, I realized that I could have done less to get the same results by avoiding some tests that I later found were redundant and deleted. I also need to improve my knowledge of IntelliJ's automatic refactorings to improve my execution (the part you can't see in the commits).
All in all, it's a great kata for practicing your refactoring skills.
8. Achieving Dependency Inversion by extracting an interface.
Even though we are already injecting into Alarm the dependency on Sensor, we haven't inverted the dependency yet. Alarm still depends on a concrete implementation.
Normally, it's better to defer this refactoring until you have more information, i.e., until the need for other sensor types arises, to avoid falling into the Speculative Generality code smell.
However, despite having only one type of sensor, we extract the interface anyway, as a demonstration of the refactoring technique.
So first, we rename the method that is being called on Sensor from Alarm, to make it less related to the concrete implementation of Sensor.
Then, following Kent Beck's guidelines for naming interfaces and classes in his book Implementation Patterns, we rename the Sensor class to TelemetryPressureSensor.
This renaming, on one hand, frees the "Sensor" name so that we can use it for the interface and, on the other hand, gives a more accurate name to the concrete implementation.
Then we extract the interface which is very easy relying on an IDE such as Eclipse or IntelliJ IDEA.
This is the generated interface:
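The generated snippet didn't survive in this copy; here is a sketch of what the IDE-extracted interface and the renamed concrete class might look like. The method name probe comes from the post's later remark about "any object responding to the probe method"; the body of TelemetryPressureSensor is an illustrative stand-in.

```java
// The extracted abstraction: Alarm now depends only on this interface
interface Sensor {
    double probe();
}

// The concrete implementation keeps the behavior under its new,
// more accurate name (the simulated reading here is a stand-in)
class TelemetryPressureSensor implements Sensor {
    @Override
    public double probe() {
        // simulates probing a pressure value from telemetry
        return 16.0 + new java.util.Random().nextDouble() * 4.0;
    }
}
```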
This is Alarm's code and tests after removing any references to the pressure sensor from the naming:
Now we have a version of Alarm that is truly context independent.
We achieved context independence by inverting the dependency which makes Alarm depend on an abstraction (Sensor) instead of a concrete type (TelemetryPressureSensor) and also by moving the specificity of the SafetyRange configuration details towards its clients.
By programming to an interface we got to a loosely coupled design which now respects both DIP and OCP from SOLID.
In a dynamic language, we wouldn't have needed to extract an interface, thanks to duck typing. In that case, Sensor would be a duck type and any object responding to the probe method would behave like a sensor. What we would still have needed to do is rename the method called on the sensor and eliminate any references to pressure from the names used inside Alarm, so that, in the end, the names make sense for any type of sensor.
9. Using a builder for the Alarm.
Finally, to make the Alarm tests a bit more readable, we create a builder for the Alarm class.
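The builder's code is not reproduced in this copy; the following is a self-contained sketch. The Sensor, SafetyRange and Alarm sketches only exist to make the builder compile, and the default values and fluent method names are assumptions.

```java
// Minimal collaborators so the builder sketch is self-contained
interface Sensor {
    double probe();
}

class SafetyRange {
    private final double lowerThreshold;
    private final double upperThreshold;

    SafetyRange(double lowerThreshold, double upperThreshold) {
        this.lowerThreshold = lowerThreshold;
        this.upperThreshold = upperThreshold;
    }

    boolean isNotWithin(double value) {
        return value < lowerThreshold || upperThreshold < value;
    }
}

class Alarm {
    private final Sensor sensor;
    private final SafetyRange safetyRange;
    private boolean on = false;

    Alarm(Sensor sensor, SafetyRange safetyRange) {
        this.sensor = sensor;
        this.safetyRange = safetyRange;
    }

    void check() {
        if (safetyRange.isNotWithin(sensor.probe())) {
            on = true;
        }
    }

    boolean isOn() { return on; }
}

// Test-data builder: sensible defaults plus fluent overrides keep
// each test focused on the one detail it cares about
class AlarmBuilder {
    private Sensor sensor = () -> 18.0;
    private SafetyRange safetyRange = new SafetyRange(17.0, 21.0);

    static AlarmBuilder anAlarm() { return new AlarmBuilder(); }

    AlarmBuilder withSensor(Sensor sensor) {
        this.sensor = sensor;
        return this;
    }

    AlarmBuilder withSafetyRange(SafetyRange safetyRange) {
        this.safetyRange = safetyRange;
        return this;
    }

    Alarm build() { return new Alarm(sensor, safetyRange); }
}
```

A test can then read as `AlarmBuilder.anAlarm().withSensor(() -> 25.0).build()`, stating only the detail that matters.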
I hope this would be useful for the people who couldn't attend to the events (the SCBCN one or the Gran Canaria Ágil one)and also as a remainder for the people who could.
6. Improving the semantics inside Alarm and adding a new concept to enrich the domain.
Now we turn our attention to the code inside the Alarm class.
We first rename a local variable inside the check method and the method we are calling on Sensor so that we have new names that have less to do with the implementation of Sensor.
Next, we extract the condition inside the check method to an explanatory helper: isNotWithinSafetyRange.
This is the resulting code:
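The resulting snippet is missing from this copy; here is a sketch of Alarm at this stage, after the renamings and with the extracted explanatory helper. The local variable name, the method name probe and the threshold values (17 and 21, as in the original exercise) are assumptions.

```java
// Concrete Sensor (the interface hasn't been extracted yet at this point);
// the random reading is a stand-in for the real implementation
class Sensor {
    double probe() {
        return 16.0 + new java.util.Random().nextDouble() * 6.0;
    }
}

class Alarm {
    private static final double LowPressureThreshold = 17.0;
    private static final double HighPressureThreshold = 21.0;
    private final Sensor sensor;
    private boolean alarmOn = false;

    Alarm(Sensor sensor) { this.sensor = sensor; }

    void check() {
        double safetyValue = sensor.probe();
        if (isNotWithinSafetyRange(safetyValue)) {
            alarmOn = true;
        }
    }

    // explanatory helper extracted from the inline condition
    private boolean isNotWithinSafetyRange(double value) {
        return value < LowPressureThreshold || HighPressureThreshold < value;
    }

    boolean isAlarmOn() { return alarmOn; }
}
```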
Notice that the Alarm class contains a data clump.
The constants LowPressureThreshold and HighPressureThreshold don't make any sense one without the other. Together they define a range, which we have already referred to in both production and test code as a safety range.
We remove the data clump by creating a new concept, the SafetyRange value object:
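The value object's code is not reproduced in this copy; the following sketch captures the idea. Only the SafetyRange concept comes from the post; the field names, the isNotWithin method name and the threshold semantics are assumptions.

```java
// SafetyRange value object: the two thresholds that formed the data clump
// now live together as a single, well-named concept
class SafetyRange {
    private final double lowerThreshold;
    private final double upperThreshold;

    SafetyRange(double lowerThreshold, double upperThreshold) {
        this.lowerThreshold = lowerThreshold;
        this.upperThreshold = upperThreshold;
    }

    // the logic of the old isNotWithinSafetyRange helper moves here,
    // next to the data it operates on
    boolean isNotWithin(double value) {
        return value < lowerThreshold || upperThreshold < value;
    }
}
```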
7. Moving Specificity Towards the Tests.
If you check the tests in AlarmShould class, you'll see that it's difficult to understand the tests at a glance.
Why is the alarm on in some cases and off in some other cases?
To understand why, we have to check Alarm's constructor in which a SafetyRange object is created. This SafetyRange is an implicit configuration of Alarm.
We can make the code clearer and more reusable by moving these configuration details towards the tests.
To be able to refactor Alarm, we first need to protect its current behavior (its check method) from regressions by writing tests for it.
The implicit dependency of Alarm on Sensor makes Alarm difficult to test. Moreover, the fact that Sensor returns random values makes Alarm impossible to test, because the measured pressure values it gets are not deterministic.
It seems we're trapped in a vicious circle: in order to refactor the code (improving its design without altering its behavior) we must test it first, but in order to test it, we must change it first.
We can get out of this vicious circle by applying a dependency-breaking technique called Extract and Override Call.
4.1. Extract and Override call.
This is a dependency-breaking technique from Michael Feathers' book Working Effectively with Legacy Code. These techniques consist of carefully making a very small modification to the production code in order to create a seam:
A seam is a place where you can alter behavior in your program without editing in that place.
The behavior we want to test is the logic in Alarm's check method. This logic is very simple, just an if condition and a mutation of a property, but, as we saw, its dependence on Sensor makes it untestable.
To test it, we need to alter the collaboration between Alarm and Sensor so that it becomes deterministic. That would make Alarm testable. For that we have to create a seam first.
4.1.1. Extract call to create a seam.
First, we create a seam by extracting the collaboration with Sensor to a protected method, probeValue. This step must be made with a lot of caution because we have no tests yet.
Thankfully in Java, we can rely on the IDE to do it automatically.
4.1.2. Override the call in order to isolate the logic we want to test.
Next, we take advantage of the new seam, to alter the behavior of Alarm without affecting the logic we want to test.
To do it, we create a FakeAlarm class inside the AlarmShould tests that inherits from Alarm and overrides the call to the protected probeValue method:
Now using FakeAlarm we are able to write all the necessary tests for Alarm's logic (its check method):
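The snippets for the extracted seam and FakeAlarm didn't survive in this copy; here is a self-contained sketch of the whole step. The method name popNextPressurePsiValue and the 17/21 psi thresholds come from Luca Minudel's original exercise, but the exact code is a reconstruction, not the post's original.

```java
import java.util.Random;

// The original collaborator: its readings are non-deterministic,
// which is what made Alarm untestable
class Sensor {
    double popNextPressurePsiValue() {
        return 16.0 + new Random().nextDouble() * 6.0;
    }
}

class Alarm {
    private static final double LowPressureThreshold = 17.0;
    private static final double HighPressureThreshold = 21.0;
    private boolean alarmOn = false;

    void check() {
        double psiPressureValue = probeValue();
        if (psiPressureValue < LowPressureThreshold
                || HighPressureThreshold < psiPressureValue) {
            alarmOn = true;
        }
    }

    // the seam created by Extract Call: a protected method that
    // a test subclass can override
    protected double probeValue() {
        return new Sensor().popNextPressurePsiValue();
    }

    boolean isAlarmOn() { return alarmOn; }
}

// Test-only subclass: overrides the seam so that the pressure value
// becomes deterministic, isolating the logic of check
class FakeAlarm extends Alarm {
    private final double fakePressureValue;

    FakeAlarm(double fakePressureValue) {
        this.fakePressureValue = fakePressureValue;
    }

    @Override
    protected double probeValue() {
        return fakePressureValue;
    }
}
```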
Now that we have a test harness for Alarm in place, we'll focus on making its dependency on Sensor explicit.
5. Making the dependency on Sensor explicit.
Now our goal is to inject the dependency on Sensor into Alarm through its constructor.
With TDD we drive a new constructor without touching the one already in place by writing a new behavior test with the help of a mocking library (mockito):
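The test and the intermediate Alarm are missing from this copy; here is a sketch of Alarm mid-refactoring, with both constructors in place. The post drove this with mockito; the usage shown in the test below uses a hand-rolled stub instead, purely to keep the sketch dependency-free. Names and thresholds are assumptions based on the exercise.

```java
import java.util.Random;

class Sensor {
    double popNextPressurePsiValue() {
        return 16.0 + new Random().nextDouble() * 6.0;
    }
}

class Alarm {
    private static final double LowPressureThreshold = 17.0;
    private static final double HighPressureThreshold = 21.0;
    private final Sensor sensor;
    private boolean alarmOn = false;

    // default constructor kept temporarily (still used by FakeAlarm);
    // Parameterize Constructor makes it delegate to the new one
    Alarm() {
        this(new Sensor());
    }

    // the new, test-driven constructor with the injected dependency
    Alarm(Sensor sensor) {
        this.sensor = sensor;
    }

    void check() {
        double psiPressureValue = probeValue();
        if (psiPressureValue < LowPressureThreshold
                || HighPressureThreshold < psiPressureValue) {
            alarmOn = true;
        }
    }

    // the previously extracted seam now reads from the injected sensor
    protected double probeValue() {
        return sensor.popNextPressurePsiValue();
    }

    boolean isAlarmOn() { return alarmOn; }
}
```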
And we make the new test pass.
In the version of Alarm above, we also used the Parameterize Constructor technique to make the default constructor (used by FakeAlarm) depend on the new one.
Then we use the new constructor in the rest of the tests one by one in order to stop using FakeAlarm.
Once there are no tests using FakeAlarm, we can delete it. This makes the default constructor obsolete, so we delete it too.
Finally, we also inline the previously extracted probeValue method.
This is the resulting test code after introducing dependency injection, in which we have also deleted the test used to drive the new constructor because we found it redundant:
and this is the production code:
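The production snippet didn't survive in this copy; here is a sketch of the final version at this stage: a single constructor, the injected Sensor, and probeValue inlined back into check. Names and thresholds are reconstructions based on the exercise, not the post's exact code.

```java
import java.util.Random;

class Sensor {
    double popNextPressurePsiValue() {
        return 16.0 + new Random().nextDouble() * 6.0;
    }
}

class Alarm {
    private static final double LowPressureThreshold = 17.0;
    private static final double HighPressureThreshold = 21.0;
    private final Sensor sensor;
    private boolean alarmOn = false;

    Alarm(Sensor sensor) {
        this.sensor = sensor;
    }

    void check() {
        // probeValue has been inlined: the explicit dependency is
        // now used directly
        double psiPressureValue = sensor.popNextPressurePsiValue();
        if (psiPressureValue < LowPressureThreshold
                || HighPressureThreshold < psiPressureValue) {
            alarmOn = true;
        }
    }

    boolean isAlarmOn() { return alarmOn; }
}
```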
By making the dependency on Sensor explicit with dependency injection and using a mocking library we have simplified Alarm tests.
This is the second post in a series of posts about solving the Tire Pressure Monitoring System exercise in Java:
I like these exercises very much because they contain clear violations of the SOLID principles but are still small enough to finish the refactoring in a short session at a slow pace. This makes it possible to explain, answer questions and debate about design principles, dependency-breaking techniques and refactoring techniques as you apply them.
I'd like to thank Luca Minudel for creating and sharing these great exercises.
We had to write and make the following acceptance test pass using Outside-In TDD:
Given a client makes a deposit of 1000.00 on 01/04/2014
And a withdrawal of 100.00 on 02/04/2014
And a deposit of 500.00 on 10/04/2014
When she prints her bank statement
Then she would see
DATE | AMOUNT | BALANCE
10/04/2014 | 500.00 | 1400.00
02/04/2014 | -100.00 | 900.00
01/04/2014 | 1000.00 | 1000.00
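As a rough illustration of what satisfying this acceptance test involves, here is a single-class sketch. The real exercise drives several collaborators outside-in (the point of the kata); this Account class, its method names and the string-based statement are all illustrative simplifications.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Illustrative, deliberately naive sketch satisfying the acceptance
// criteria above; not the outside-in design the exercise aims for
class Account {
    private final List<String> lines = new ArrayList<>();
    private double balance = 0;

    void deposit(double amount, String date) {
        record(amount, date);
    }

    void withdraw(double amount, String date) {
        record(-amount, date);
    }

    private void record(double amount, String date) {
        balance += amount;
        lines.add(String.format(Locale.US, "%s | %.2f | %.2f",
                date, amount, balance));
    }

    // statement lists transactions newest first, as in the example
    String statement() {
        StringBuilder result = new StringBuilder("DATE | AMOUNT | BALANCE");
        for (int i = lines.size() - 1; i >= 0; i--) {
            result.append("\n").append(lines.get(i));
        }
        return result.toString();
    }
}
```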
The goal of the exercise was to understand the outside-in or London-school TDD style.
After the course Álvaro and I continued working on the exercise.
We had many difficulties getting used to this style over several sessions because we weren't used to testing behavior instead of state. We struggled a lot because we were working outside our comfort zone. In one of the sessions, we rushed too much trying to move forward and make the acceptance test pass, which resulted in some responsibilities being allocated very badly.
In the end, we managed to get out of the mess we got in by slowly refactoring it to a better design.
All in all, it's been a great experience and we've learned a lot pushing our limits a bit further.
Unfortunately, we didn't commit the code after each tiny step this time, only at the end of each pairing session.
You can find the resulting code in this GitHub repository.
As usual, it's been great pairing with Álvaro, thanks mate!
At the end of the session, we left the code at a point where it was ready for substituting several conditionals with polymorphism. Then we discussed how we would do it without modifying the Item class (to avoid the goblin's rage).
We talked about wrapping Item in another object, DegradableItem, and then creating several subtypes of it.
With this design in place, the Conjured items might be implemented using a decorator over DegradableItem.
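These ideas can be sketched as follows. Only the untouched Item class and the names DegradableItem and Conjured come from the discussion; the update/degrade methods and the exact degradation rules here are illustrative assumptions, not the kata's full solution.

```java
// Item stays exactly as the goblin left it
class Item {
    public String name;
    public int sellIn;
    public int quality;

    public Item(String name, int sellIn, int quality) {
        this.name = name;
        this.sellIn = sellIn;
        this.quality = quality;
    }
}

// Wrapper around Item: subtypes can vary how an item degrades
class DegradableItem {
    protected final Item item;

    DegradableItem(Item item) {
        this.item = item;
    }

    void update() {
        item.sellIn--;
        degradeQuality();
    }

    protected void degradeQuality() {
        // past the sell-by date, quality degrades twice as fast
        decreaseQuality(item.sellIn < 0 ? 2 : 1);
    }

    protected void decreaseQuality(int amount) {
        item.quality = Math.max(0, item.quality - amount);
    }
}

// Conjured as a decorator over any DegradableItem: it applies the
// wrapped item's degradation an extra time per update
class ConjuredItem extends DegradableItem {
    private final DegradableItem wrapped;

    ConjuredItem(DegradableItem wrapped) {
        super(wrapped.item);
        this.wrapped = wrapped;
    }

    @Override
    void update() {
        wrapped.update();
        wrapped.degradeQuality(); // conjured items degrade twice as fast
    }
}
```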
When I got home I did the whole kata again in Java and coded these ideas we had talked about with some slight variations:
Before starting to refactor, I added a characterization test to describe (characterize) the actual behavior of the original code. Since the only "visible effects" of the code were the lines it was writing to the standard output, I had to use that output to create the characterization test.
This is the code of the characterization test:
Although in this final version, the expected output reflects the fixing of two bugs in the original code, at the beginning it was just what the original code was writing to the standard output given a fixed seed for the random number generator.
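The test's code is not reproduced in this copy, but the capturing mechanism it relies on can be sketched as follows. The helper name and the program under test here are stand-ins; the real test ran the kata's code with a fixed seed and compared the captured output against the recorded "golden" text.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Sketch of the core of a characterization (golden master) test:
// redirect standard output, run the legacy code, and lock down
// whatever it printed as the expected text
class GoldenMaster {
    static String runCapturingOutput(Runnable program) {
        PrintStream originalOut = System.out;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        PrintStream capturingOut = new PrintStream(captured, true);
        System.setOut(capturingOut);
        try {
            program.run();
        } finally {
            capturingOut.flush();
            System.setOut(originalOut);
        }
        return captured.toString();
    }
}
```

A characterization test then asserts that `runCapturingOutput(legacyProgram)` equals the previously recorded output, line for line.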
I prepared several slides to introduce what we were going to do. I thought that we could just start refactoring for two hours right after this explanation, but, thanks to the suggestions of some attendees, we changed the plan and did two iterations (extracting the employees repository in the first one and extracting the greetings sender in the second one) with a retrospective in the middle.
This was a really good idea because it gave us the opportunity to debate different approaches to this refactoring and to discuss ports and adapters. Some very interesting points were raised during those conversations, and we exchanged personal experiences and views about this architecture and refactoring. These kinds of conversations are a very important part of practicing together, and they enrich the kata experience.
I think I have a lot to improve as an event host. Next time I'll make it explicit that attendees need to bring their laptops, and I'll try to help the beginners more.
I had published a possible solution to the kata, but it was just the final solution, not the intermediate steps. Some attendees said that they would like to see how this refactoring could be done in small steps. For that reason, I redid the kata at home, committing after every refactoring step and recording the process:
Take these recordings with a grain of salt because I'm still working hard to improve my refactoring skills.
Well I hope this might be useful to someone.
Just to finish, I'd like to thank all the attendees for coming; I had a great time. I'll try to do better next time.
I'd also like to thank netmind for supporting our community by giving us a place and the means to hold Barcelona Software Craftsmanship events. Having a regular place for our events makes everything much easier.
So far the events have had many more attendees than we expected. In fact, this last Monday we nearly ran out of space. If this trend continues, we'll probably need to ask for a bigger room for future events.
It was a lot of fun and we thought and discussed a lot about the problem. Thanks to all the participants and the organizers for the great time.
When I came back to Barcelona I decided to go on practicing by developing a full solution to Conway's Game of Life on my own.
I wanted the solution to include several of the ideas that we had been discussing during the code retreat:
Tracking only living cells.
Open/Closed with regard to the rules based on the number of neighbors.
Open/Closed with regard to the number and location of a cell's neighbors.
Open/Closed with regard to the dimensions of the grid (2D, 3D, 4D, etc).
It took me several rewrites but finally I came up with this solution that respects all the previous ideas.
- Tracking only living cells
A Generation has a collection of cells that are alive, LivingCells, and a set of Rules.
This way, the "state of the cell" concept we tried in some iterations of the code retreat becomes unnecessary, and less space is required to store the current generation.
I also made Generation immutable.
To generate the next Generation of LivingCells (produceNextGeneration method), it first adds the surviving cells (addSurvivors method) and then the newly born cells (addNewCells method).
- Open/Closed with regard to the rules based on the number of neighbors
Rules is an interface with two methods, shouldStayAlive and shouldBeBorn, that you can implement however you wish.
The rules are still based on the number of neighbors which is the parameter passed to both methods.
In this case, I only implemented the rules of Conway's Game of Life in the ConwaysRules class, but more rules based on the number of neighbors could be added.
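The interface and its Conway implementation can be sketched as follows; the interface shape and method names come from the text, while the parameter name is an assumption.

```java
// Both methods receive the number of living neighbors, which is the
// only input the rules depend on
interface Rules {
    boolean shouldStayAlive(int numberOfNeighbors);
    boolean shouldBeBorn(int numberOfNeighbors);
}

// Conway's classic rules: survive with 2 or 3 neighbors, be born with 3
class ConwaysRules implements Rules {
    @Override
    public boolean shouldStayAlive(int numberOfNeighbors) {
        return numberOfNeighbors == 2 || numberOfNeighbors == 3;
    }

    @Override
    public boolean shouldBeBorn(int numberOfNeighbors) {
        return numberOfNeighbors == 3;
    }
}
```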
- Open/Closed with regard to the number and location of a cell's neighbors
and Open/Closed with regard to the dimensions of the grid (2D, 3D, 4D, etc).
These two are possible thanks to the Cell interface.
It has a single method, getNeighbors, which returns the neighbors of a cell. It's up to each cell to know which cells are its neighbors.
In this way each implementation of Cell can have a different dimension and a different number of neighbors which can be located following different stencils.
In this case, I implemented several types of cells: ConwaysCell, CellX2D, CellCross3D and CellCross2D. The code that is common to all 2D cells is in the Cell2D abstract class, whereas the one that is common to all 3D cells is in the Cell3D abstract class.
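The Cell abstraction described above can be sketched as follows. The interface's single method comes from the text; the ConwaysCell body shown here (a classic 8-neighbor 2D stencil) is an illustrative reconstruction, not the repository's exact code.

```java
import java.util.ArrayList;
import java.util.List;

// Each cell knows its own neighborhood, so implementations are free to
// vary the dimensions and the stencil
interface Cell {
    List<Cell> getNeighbors();
}

// Illustrative 2D cell with the classic 8-neighbor stencil
class ConwaysCell implements Cell {
    private final int x;
    private final int y;

    ConwaysCell(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public List<Cell> getNeighbors() {
        List<Cell> neighbors = new ArrayList<>();
        for (int dx = -1; dx <= 1; dx++) {
            for (int dy = -1; dy <= 1; dy++) {
                if (dx != 0 || dy != 0) {
                    neighbors.add(new ConwaysCell(x + dx, y + dy));
                }
            }
        }
        return neighbors;
    }
}
```

A 3D cell or a cross-shaped stencil just implements getNeighbors differently; nothing else in the design has to change.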