Record of experiments, readings, links, videos and other things that I find on the long road.
Monday, August 29, 2022
Books I read (January - August 2022)
January
- Elemental Design Patterns, Jason McC. Smith
- Antipatterns: Refactoring Software, Architectures, and Projects in Crisis, William J. Brown, Raphael C. Malveau, Hays W. "Skip" McCormick
- The Courage of Hopelessness: Chronicles of a Year of Acting Dangerously, Slavoj Žižek
- Get Your Hands Dirty on Clean Architecture: A hands-on guide to creating clean web applications with code examples in Java, Tom Hombergs
- Strangers on a Train, Patricia Highsmith
- Brothers in Arms, Lois McMaster Bujold
- The Great Mental Models: General Thinking Concepts, Shane Parrish, Rhiannon Beaubien
February
- Invicto: Logra más, sufre menos, Marcos Vázquez
- El rumor del oleaje (潮騒), Yukio Mishima
- Mirror Dance, Lois McMaster Bujold
- Memory, Lois McMaster Bujold
- The Pride of Chanur, C.J. Cherryh
- All Systems Red, Martha Wells
- The Culture of the New Capitalism, Richard Sennett
March
- Komarr, Lois McMaster Bujold
- Universidad para asesinos (Σεμινάρια φονικής γραφής), Petros Markaris
- Sobre héroes y tumbas, Ernesto Sabato
- Continuous Delivery, Jez Humble, David Farley
April
- Java by Comparison: Become a Java Craftsman in 70 Examples, Simon Harrer, Jörg Lenhard, Linus Dietz
- A Civil Campaign, Lois McMaster Bujold
- Indistractable: How to Control Your Attention and Choose Your Life, Nir Eyal
- The Long Way to a Small, Angry Planet, Becky Chambers
- Winterfair Gifts, Lois McMaster Bujold
- Queenie, Candice Carty-Williams
- The Extraordinary and Unusual Adventures of Horatio Lyle, Catherine Webb
- The How of Happiness: A Scientific Approach to Getting the Life You Want, Sonja Lyubomirsky
- Diplomatic Immunity, Lois McMaster Bujold
May
- A Closed and Common Orbit, Becky Chambers
- Captain Vorpatril's Alliance, Lois McMaster Bujold
- The Flowers of Vashnoi, Lois McMaster Bujold
- Design Patterns Explained: A New Perspective on Object-Oriented Design, Alan Shalloway, James R. Trott
- Writing Maintainable Unit Tests, Jan Van Ryswyck
- Amongst Our Weapons, Ben Aaronovitch
- Cryoburn, Lois McMaster Bujold
- Gentleman Jole and the Red Queen, Lois McMaster Bujold
- Batman: Year One, Frank Miller, David Mazzucchelli
June
- Modern Software Engineering: Doing What Works to Build Better Software Faster, David Farley
- Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, Gene Kim, Jez Humble, Nicole Forsgren
- Record of a Spaceborn Few, Becky Chambers
- Refactoring for Software Design Smells: Managing Technical Debt, Ganesh Samarthyam, Girish Suryanarayana, Tushar Sharma
- Falling Free, Lois McMaster Bujold
- Fundamentals of Software Architecture: An Engineering Approach, Mark Richards, Neal Ford
- The Galaxy, and the Ground Within, Becky Chambers
- Technical Agile Coaching with the Samman Method, Emily Bache
- Shards of Honor, Lois McMaster Bujold
July
- Barrayar, Lois McMaster Bujold
- La muerte de Ulises (Τριηµερία), Petros Markaris
- Building Evolutionary Architectures: Support Constant Change, Neal Ford, Rebecca Parsons and Patrick Kua
- Tinker Tailor Soldier Spy, John le Carré
- The Calculating Stars, Mary Robinette Kowal
- Design It! : Pragmatic Programmers: From Programmer to Software Architect, Michael Keeling
August
- Effective Software Testing: A developer's guide, Maurício Aniche
- Refactoring: Improving the Design of Existing Code (2nd Edition), Martin Fowler
- He, She and It, Marge Piercy
- Prefactoring, Ken Pugh
- Drive: The Surprising Truth About What Motivates Us, Daniel H. Pink
- The Fated Sky, Mary Robinette Kowal
- The Relentless Moon, Mary Robinette Kowal
- Otra vida por vivir (Μια ζωή ακόμα), Theodor Kallifatides (2nd time)
- Five Lines of Code: How and when to refactor, Christian Clausen
- The Corrosion of Character: The Personal Consequences of Work in the New Capitalism, Richard Sennett
- Refactoring Workbook, William C. Wake (2nd time)
- Verdades a la cara: Recuerdos de los años salvajes, Pablo Iglesias
Tuesday, August 23, 2022
Example of role tests in JavaScript with Jest
In this post we’ll show our last example of applying the concept of role tests, this time in JavaScript using Jest. Have a look at our previous posts on this topic.
This example comes from a deliberate practice session we did recently with some developers from Audiense with whom we’re doing Codesai’s Practice Program in JavaScript twice a month.
Similar to what we did in our previous example of role tests in Java, we wrote the following tests to develop two different implementations of the `TransactionsRepository` port while solving the Bank Kata: the `InMemoryTransactionsRepository` and the `NodePersistTransactionRepository`.
These are their tests, respectively:
As happened in our previous post, both tests contain the same test cases, since both document and protect the contract of the same role, `TransactionsRepository`, which `InMemoryTransactionsRepository` and `NodePersistTransactionRepository` implement.
Again we’ll use the concept of role tests to remove that duplication, and make the contract of the role we are implementing more explicit.
Although Jest does not have something equivalent or similar to the RSpec’s shared examples functionality we used in our previous example in Ruby, we can get a very similar result by composing functions.
First, we wrote the `behavesLikeATransactionRepository` function. This function contains all the test cases that document the role and protect its contract, and receives as a parameter a `testContext` object containing methods for all the operations that will vary across the different implementations of this integration test.
Notice that in the case of Jest we are using composition, whereas we used inheritance in the case of JUnit.
Then, we called the `behavesLikeATransactionRepository` function from the previous tests and implemented a particular version of the methods of the `testContext` object for each test.
This is the new code of `InMemoryTransactionsRepositoryTest`:
And this is the new code of `NodePersistTransactionRepository` after the refactoring:
This new version of the tests not only reduces duplication, but also makes explicit and protects the behaviour of the `TransactionsRepository` role. It also makes the process of adding a new implementation of `TransactionsRepository` less error prone, because just by using the `behavesLikeATransactionRepository` function you get a checklist of the behaviour you need to implement in order to ensure substitutability, i.e., to ensure the Liskov Substitution Principle is not violated.
These role tests using composition are also more readable than the JUnit ones, in my opinion at least :)
Acknowledgements.
I’d like to thank Audiense’s deliberate practice group for working with us on this kata, and my colleague Rubén Díaz for co-facilitating the practice sessions with me.
Thanks to my Codesai colleagues for reading the initial drafts and giving me feedback.
References.
Example of role tests in Java with JUnit
I’d like to continue with the topic of role tests that we wrote about in a previous post, by showing an example of how it can be applied in Java to reduce duplication in your tests.
This example comes from a deliberate practice session I did recently with some people from Women Tech Makers Barcelona with whom I’m doing Codesai’s Practice Program in Java twice a month.
Making additional changes to the code that resulted from solving the Bank Kata, we wrote the following tests to develop two different implementations of the `TransactionsRepository` port: the `InMemoryTransactionsRepository` and the `FileTransactionsRepository`.
These are their tests, respectively:
As you can see, both tests contain the same test cases, `a_transaction_can_be_saved` and `transactions_can_be_retrieved`, but their implementations are different for each class. This makes sense because both classes implement the same role (see our previous post to learn how this relates to the Liskov Substitution Principle).
We can make this fact more explicit by using role tests. In this case, JUnit does not have anything equivalent or similar to the RSpec shared examples functionality we used in our previous example in Ruby. Nonetheless, we can apply the Template Method pattern to write the role test, so that we remove the duplication and, more importantly, make the contract we are implementing more explicit.
To do that we created an abstract class, `TransactionsRepositoryRoleTest`. This class contains the test cases that document the role and protect its contract (`a_transaction_can_be_saved` and `transactions_can_be_retrieved`) and defines hooks for the operations that will vary in the different implementations of this integration test (`prepareData`, `readAllTransactions` and `createRepository`):
Then we made the previous tests extend `TransactionsRepositoryRoleTest` and implemented the hooks.
This is the new code of `InMemoryTransactionsRepositoryTest`:
And this is the new code of `FileTransactionsRepositoryTest` after the refactoring:
This new version of the tests not only reduces duplication, but also makes explicit and protects the behaviour of the `TransactionsRepository` role. It also makes the process of adding a new implementation of `TransactionsRepository` less error prone, because just by extending the `TransactionsRepositoryRoleTest` you get a checklist of the behaviour you need to implement to ensure substitutability, i.e., to ensure the Liskov Substitution Principle is not violated.
Have a look at this repository by Jason Gorman to see another example that applies the same technique.
In a future post we’ll show how we can do the same in JavaScript using Jest.
Acknowledgements.
I’d like to thank the WTM study group, and especially Inma Navas and Laura del Toro for practising with this kata together.
Thanks to my Codesai colleagues, Inma Navas and Laura del Toro for reading the initial drafts and giving me feedback.
References.
- Role tests for implementation of interfaces discovered through TDD, Manuel Rivero
- Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma, Ralph Johnson, John Vlissides, Richard Helm
- Liskov Substitution Principle
- 101 Uses For Polymorphic Testing (Okay… Three), Jason Gorman
- Contract Testing example repository, Jason Gorman
Simple example of property-based testing
Introduction.
We were recently writing tests to characterise legacy code at a client that was being used to encrypt and decrypt UUIDs using a cipher algorithm. We have simplified the client’s code to remove some distracting details and highlight the key ideas we’d like to convey.
In this post we’ll show how we used parameterized tests to test the encryption and decryption functions, and how we applied property-based testing to explore possible edge cases using a very useful pattern to discover properties.
Finally, we’ll discuss the resulting tests.
Testing the encryption and decryption functions.
We started by writing examples to pin down the behaviour of the encryption function. Notice how we used parameterized tests[1] to avoid the duplication that a separate test for each example would have caused:
Then we wrote more parameterized tests for the decryption function. Since the encryption and decryption functions are inverses of each other, we could use the same examples that we had used for the encryption function. Notice how the roles of input and expected output are swapped for the parameters of the new test.
Exploring edge cases.
You might wonder why we wanted to explore edge cases. Weren’t the parameterized tests enough to characterise this legacy code?
Even though the parameterized tests that we wrote for both functions produced high test coverage, coverage only “covers” code that is already there. We were not sure whether there could be edge cases, that is, inputs for which the encryption and decryption functions might not behave correctly. We’ve found edge cases in the past even in code with 100% unit test coverage.
Finding edge cases is hard work which sometimes requires exploratory testing by specialists. It would be great to explore the behaviour automatically to find possible edge cases, so that neither we nor QA specialists have to find them by hand. In some cases, we can leverage property-based testing to do that exploration for us[2].
One of the most difficult parts of using property-based testing is finding out what properties we should use. Fortunately, there are several approaches or patterns for discovering adequate properties to apply property-based testing to a given problem. Scott Wlaschin wrote a great article in which he explains several of those patterns[3].
It turned out that the problem we were facing matched one of the patterns described by Wlaschin directly, the one he calls “There and back again” (also known as the “Round-tripping” or “Symmetry” pattern).
According to Wlaschin “There and back again” properties “are based on combining an operation with its inverse, ending up with the same value you started with”. As we said before, in our case the decryption and encryption functions were inverses of each other so the “There and back again” pattern was likely to lead us to a useful property.
Once we knew which property to use, it was very straightforward to add a property-based test for it. We used the jqwik library. We like it because it has very good documentation and it is integrated with JUnit.
Using jqwik functions we wrote a generator of UUIDs (have a look at the documentation on how to generate customised parameters), and we then wrote the `decrypt_is_the_inverse_of_encrypt` property:
By default jqwik checks the property with 1000 new randomly generated UUIDs every time this test runs. This allows us to gradually explore the set of possible examples in order to find edge cases that we have not considered.
Discussion.
If we examine the resulting tests we may think that the property-based tests have made the example-based tests redundant. Should we delete the example-based tests and keep only the property-based ones?
Before answering this question, let’s think about each type of test from different points of view.
Understandability.
Despite being parameterized, it’s relatively easy to see which inputs and expected outputs are used by the example-based tests because they are literal values provided by the `generateCipheringUuidExamples` method. Besides, this kind of testing was more familiar to the team members.
In contrast, the UUID used by the property-based tests to check the property is randomly generated and the team was not familiar with property-based testing.
Granularity.
Since we are using a property that uses the “There and back again” pattern, if there were an error, we wouldn’t know whether the problem was in the encryption or the decryption function, not even after the shrinking process[4]. We’d only know the initial UUID that made the property fail.
This might not be so when using other property patterns. For instance, when using a property based on the “The test oracle” pattern, we’d know the input and the actual and expected outputs in case of an error.
In contrast, using example-based testing it would be very easy to identify the location of the problem.
Confidence, thoroughness and exploration.
The example-based tests specify behaviour using concrete examples in which we set up concrete scenarios, and then check whether the effects produced by the behaviour match what we expect. In the case of the cipher, we pass an input to the functions and assert that their output is what we expect. The testing is reduced just to the arbitrary examples we were able to come up with, but there’s a “gap between what we claim to be testing and what we’re actually testing”[5]: why those arbitrary examples? Does the cipher behave correctly for any possible example?
Property-based testing “approach the question of correctness from a different angle: under what preconditions and constraints (for example, the range of input parameters) should the functionality under test lead to particular postconditions (results of a computation), and which invariants should never be violated in the course?”[6]. With property-based testing we are not limited to the arbitrary examples we were able to come up with as in example-based testing. Instead, property-based testing gives us thoroughness and the ability to explore because it’ll try to find examples that falsify a property every time the test runs. I think this ability to explore makes them more dynamic.
Implementation independence.
The example-based tests depend on the implementation of the cipher algorithm, whereas the property-based tests can be used for any implementation of the cipher algorithm, because the `decrypt_is_the_inverse_of_encrypt` property is an invariant of any cipher implementation. This makes the property-based tests ideal for writing a role test[7] that any valid cipher implementation should pass.
Explicitness of invariants.
In the case of the cipher there’s a relationship between the encryption and decryption functions: they are inverses of each other.
This relationship might go completely untested using example-based testing if we use unrelated examples to test each of the functions. This means there could be changes to either function that violate the property while still passing the separate, independent tests of each function.
In the parameterized example-based tests we wrote, we implicitly tested this property by using the same set of examples for both functions, just swapping the roles of input and expected output in each test; but this is limited to the set of examples.
With property-based testing we are explicitly testing the relation between the two functions and exploring the space of inputs to try to find one that falsifies the property of being inverses of each other.
Protection against regressions.
Notice that, in this case, if we deleted the example-based tests and kept only the property-based test using the `decrypt_is_the_inverse_of_encrypt` property, we could introduce a simple regression by implementing both functions, encrypt and decrypt, as the identity function. That obviously wrong implementation would still fulfil the `decrypt_is_the_inverse_of_encrypt` property, which means that this property-based test is not enough on its own to characterise the desired behaviour and protect it against regressions. We also need at least example-based tests for one of the cipher functions, either encrypt or decrypt. Notice that this can happen with any property based on the “There and back again” pattern; it might not hold true for other contexts and property patterns.
What we did.
Given the previous discussion, we decided to keep both example-based and property-based tests in order to gain exploratory power while keeping familiarity, granularity and protection against regressions.
Summary.
We’ve shown a simple example of how we applied JUnit 5 parameterized tests to test the encryption and decryption functions of a cipher algorithm for UUIDs.
Then we showed a simple example of how we can use property-based testing to explore our solution and find edge cases. We also talked about how discovering properties can be the most difficult part of property-based testing, and how there are patterns that can be used to help us to discover them.
Finally, we discussed the resulting example-based and property-based tests from different points of view.
We hope this post will motivate you to start exploring property-based testing as well. If you want to learn more, follow the references we provide and start playing. Also have a look at the other posts exploring property-based testing that we have written in our blog in the past.
Acknowledgements.
I’d like to thank my Codesai colleagues for reading the initial drafts and giving me feedback.
Notes.
[1] The experience of writing parameterized tests using JUnit 5 is so much better than it used to be with JUnit 4!
[2] Have a look at this other post in which I describe how property-based tests were able to find edge cases that I had not contemplated in a code with 100% test coverage that had been written using TDD.
[3] Scott Wlaschin’s article, Choosing properties for property-based testing, is a great post in which he manages to explain the patterns that have helped him the most to discover the properties that are applicable to a given problem. Besides the “There and back again” pattern, I’ve applied “The test oracle” pattern (https://fsharpforfunandprofit.com/posts/property-based-testing-2/#the-test-oracle) on several occasions. Some time ago, I wrote a post explaining how I used it to apply property-based testing to an implementation of a binary search tree. Another interesting article about the same topic is Property-based Testing Patterns, by Sanjiv Sahayam.
[4] “Shrinking is the mechanism by which a property-based testing framework can be told how to simplify failure cases enough to let it figure out exactly what the minimal reproducible case is.” from chapter 8 of Fred Hebert’s PropEr Testing online book
[5] From David MacIver’s In praise of property-based testing post. According to David MacIver “the problem with example-based tests is that they end up making far stronger claims than they are actually able to demonstrate”.
[6] From Johannes Link’s Know for Sure with Property-Based Testing post.
[7] Have a look at our recent post about role tests.
References.
- Property-based Testing Basics, Fred Hebert
- PropEr Testing online book, Fred Hebert
- Choosing properties for property-based testing, Scott Wlaschin
- Property-based Testing Patterns, Sanjiv Sahayam
- Cipher algorithm
- In praise of property-based testing, David MacIver
- Know for Sure with Property-Based Testing, Johannes Link
Listening to test smells: detecting lack of cohesion and violations of encapsulation
Introduction.
We’d like to show another example of how difficulties found while testing can signal design problems in our code[1].
We believe that a good design is one that supports the evolution of a code base at a sustainable pace and that testability is a requirement for evolvability. This is not something new, we can find this idea in many places.
Michael Feathers says “every time you encounter a testability problem, there is an underlying design problem”[2].
Nat Pryce and Steve Freeman also think that this relationship between testability and good design exists[3]:
“[…] We’ve found that the qualities that make an object easy to test also make our code responsive to change”
and also use it to detect design problems and know what to refactor:
“[…] where we find that our tests are awkward to write, it’s usually because the design of our target code can be improved”
and to improve their TDD practice:
“[…] sensitise yourself to find the rough edges in your tests and use them for rapid feedback about what to do with the code. […] don’t stop at the immediate problem (an ugly test) but look deeper for why I’m in this situation (weakness in the design) and address that.”[4]
This is why they devoted talks, several posts and a chapter of their GOOS book (chapter 20) to listening to the tests[5] and even added it to the TDD cycle steps:
Next we’ll show you an example of how we recently applied this at a client.
The problem.
Recently I was asked to help a pair that was developing a new feature in a legacy code base of one of our clients. They were having problems with the following test[6]:
that was testing the `RealTimeGalleryAdsRepository` class:
They had managed to test-drive the functionality but they were unhappy with the results. What was bothering them was the `resetCache` method in the `RealTimeGalleryAdsRepository` class. As its name implies, its intent was to reset the cache. This would have been fine if it had been a requirement, but that was not the case. The method had been added only for testing purposes.
Looking at the code of `RealTimeGalleryAdsRepository` you can learn why.
The `cachedSearchResult` field is static, and that was breaking the isolation between tests.
Even though they were using different instances of `RealTimeGalleryAdsRepository` in each test, they were sharing the same value of the `cachedSearchResult` field, because static state is associated with the class. So a new public method, `resetCache`, was added to the class only to ensure isolation between different tests.
Adding code to your production code base just to enable unit testing is a unit testing anti-pattern[7], but they didn’t know how to get rid of the `resetCache` method, and that’s why I was called in to help.
Let’s examine the tests in `RealTimeGalleryAdsRepositoryTests` to see if they can point to more fundamental design problems.
Another thing we can notice is that the tests can be divided into two sets that are testing two very different behaviours:
- One set of tests, comprised of `maps_all_ads_with_photo_to_gallery_ads`, `ignore_ads_with_no_photos_in_gallery_ads` and `when_no_ads_are_found_there_are_no_gallery_ads`, is testing the code that obtains the list of gallery ads;
- whereas the other set, comprised of `when_cache_has_not_expired_the_cached_values_are_used` and `when_cache_expires_new_values_are_retrieved`, is testing the life and expiration of some cached values.
This lack of focus was a hint that the production class might lack cohesion, i.e., it might have several responsibilities.
It turns out that there was another code smell that confirmed our suspicion. Notice the boolean parameter `useCache` in the `RealTimeGalleryAdsRepository` constructor.
That was a clear example of a flag argument[8]. `useCache` was making the class behave differently depending on its value:
- It cached the list of gallery ads when `useCache` was true.
- It did not cache them when `useCache` was false.
After seeing all this, I told the pair that the real problem was the lack of cohesion, and that we’d have to go more object-oriented in order to avoid it. After that refactoring, the need for `resetCache` would disappear.
Going more OO to fix the lack of cohesion.
To strengthen cohesion we need to separate concerns. Let’s see the problem from the point of view of the client of the `RealTimeGalleryAdsRepository` class (this point of view is generally very useful because the test is also a client of the tested class) and think about what it would want from the `RealTimeGalleryAdsRepository`. It would be something like “obtain the gallery ads for me”; that would be the responsibility of the `RealTimeGalleryAdsRepository`, and that’s what the `GalleryAdsRepository` represents.
Notice that to satisfy that responsibility we do not need to use a cache; we only need to get some ads from the `AdsRepository` and map them (the original functionality also included some enrichments using data from other sources, but we removed them from the example for the sake of simplicity). Caching is an optimization that we might do or not; it’s a refinement or embellishment of how we satisfy the responsibility, but it’s not necessary to satisfy it. In this case, caching changes the “how” but not the “what”.
This matches very well with the Decorator design pattern, because this pattern “comes into play when there are a variety of optional functions that can precede or follow another function that is always executed”[9]. Using it would allow us to attach additional behaviour (caching) to the basic required behaviour that satisfies the role the client needs (“obtain the gallery ads for me”). This way, instead of having a flag parameter (like `useCache` in the original code) to control whether we cache or not, we might add caching by composing objects that implement the `GalleryAdsRepository` interface. One of them, `RealTimeGalleryAdsRepository`, would be in charge of getting ads from the `AdsRepository` and mapping them to gallery ads; the other one, `CachedGalleryAdsRepository`, would cache the gallery ads.
So we moved the responsibility of caching the ads to the `CachedGalleryAdsRepository` class, which decorated the `RealTimeGalleryAdsRepository` class.
This is the code of the `CachedGalleryAdsRepository` class:
and these are its tests:
Notice how we find here again the two tests that were previously testing the life and expiration of the cached values in the test of the original `RealTimeGalleryAdsRepository`: `when_cache_has_not_expired_the_cached_values_are_used` and `when_cache_expires_new_values_are_retrieved`.
Furthermore, looking at them more closely, we can see how, in this new design, those tests are also simpler, because they don’t know anything about the inner details of `RealTimeGalleryAdsRepository`. They only know about the logic related to the life and expiration of the cached values, and that when the cache is refreshed a collaborator implementing the `GalleryAdsRepository` interface is called. This means that we’re now caching gallery ads instead of an instance of `SearchResult`, and we don’t know anything about the `AdsRepository`.
On a side note, we also improved the code by using the `Duration` value object from `java.time` to remove the primitive obsession smell caused by using a `long` to represent milliseconds.
Another very important improvement is that we don’t need the static field anymore.
And what about `RealTimeGalleryAdsRepository`? If we have a look at its new code, we can see that its only concern is obtaining the list of gallery ads by mapping the result of its collaborator `AdsRepository`; it does not know anything about caching values. So the new design is more cohesive than the original one.
Notice how we removed both the `resetCache` method, which previously polluted its interface only for testing purposes, and the flag argument `useCache` in the constructor.
We also reduced its number of collaborators, because there’s no need for a `Clock` anymore. That collaborator was needed for a different concern, which is now taken care of in the decorator `CachedGalleryAdsRepository`.
These design improvements are reflected in its new tests. They are now more focused, and can only fail if obtaining the gallery ads breaks. Having only one reason to fail comes from testing a more cohesive unit with only one reason to change. Notice how these tests coincide with the subset of the original `RealTimeGalleryAdsRepository` tests concerned with the same behaviour:
Persisting the cached values between calls.
You might be asking yourselves how we are going to ensure that the cached values persist between calls now that we don’t have a static field anymore.
Well, the answer is that we don’t need a static field in our classes for that. The only thing we need is for the composition of `CachedGalleryAdsRepository` and `RealTimeGalleryAdsRepository` to be created only once, and for that single instance to be used for the lifetime of the application. That is a concern we can address using a different mechanism.
We usually find in legacy code bases that this need to create something only once and use that single instance for the lifetime of the application is met using the Singleton design pattern described in the design patterns book. The Singleton pattern’s intent is to “ensure that only one instance of the singleton class ever exists; and provide global access to that instance”. The second part of that intent, “providing global access”, is problematic because it introduces global state into the application. Global state creates high coupling (in the form of hidden dependencies and possible action at a distance) that drastically reduces testability.
Instead, we used the singleton pattern[10] (notice the lowercase letter). The lowercase ‘s’ singleton avoids those testability problems because its intent is only to “ensure that only one instance of some class ever exists because its new operator is called only once”. The problematic global access part is removed from the intent. This is achieved by not mixing object instantiation with business logic: separate factories know how to create and wire up all the dependencies using dependency injection.
We might create this singleton, for instance, by using a dependency injection framework like Guice and its @Singleton annotation. In this case we coded it ourselves:
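A minimal sketch of such a hand-coded singleton factory might look like the following. The factory class name and method name are assumptions; only the repository class names and the GalleryAdsRepository interface come from the post:

```java
import java.util.List;

// Assumed minimal versions of the interface and classes mentioned in the post.
interface GalleryAdsRepository {
    List<String> getGalleryAds();
}

final class RealTimeGalleryAdsRepository implements GalleryAdsRepository {
    @Override
    public List<String> getGalleryAds() {
        return List.of("ad-1", "ad-2"); // stand-in for the real-time fetch
    }
}

final class CachedGalleryAdsRepository implements GalleryAdsRepository {
    private final GalleryAdsRepository decorated;
    private List<String> cachedAds;

    CachedGalleryAdsRepository(GalleryAdsRepository decorated) {
        this.decorated = decorated;
    }

    @Override
    public List<String> getGalleryAds() {
        if (cachedAds == null) {
            cachedAds = decorated.getGalleryAds();
        }
        return cachedAds;
    }
}

// Lowercase 's' singleton: the new operators are called only once.
// Business logic never calls this method; only instantiation logic in
// factories that wire up the application's dependencies does.
final class GalleryAdsRepositoryFactory {
    private static GalleryAdsRepository uniqueInstance;

    static synchronized GalleryAdsRepository galleryAdsRepository() {
        if (uniqueInstance == null) {
            uniqueInstance =
                new CachedGalleryAdsRepository(new RealTimeGalleryAdsRepository());
        }
        return uniqueInstance;
    }
}
```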
Notice the factory method that returns a unique instance of the GalleryAdsRepository interface that caches values. This factory method is never used by business logic; it’s only used by instantiation logic in factories that know how to create and wire up all the dependencies using dependency injection. This doesn’t introduce testability problems because the unique instance will be injected through constructors by factories wherever it is needed.
Conclusions.
We showed a recent example, found while working for a client, that illustrates how testability problems, if we listen to them, may point to underlying design problems. In this case the problems in the test were pointing to a lack of cohesion in the production code being tested: the original class had too many responsibilities.
We refactored the production code to separate concerns by going more OO, applying the Decorator design pattern. The result was more cohesive production classes that led to more focused tests, and it removed the design problems we had detected in the original design.
Acknowledgements.
I’d like to thank my Codesai colleagues for reading the initial drafts and giving me feedback.
Notes.
[1] We showed another example of this relationship between poor testability and design problems in a previous post: An example of listening to the tests to improve a design.
[2] Listen to his great talk about this relationship: The Deep Synergy Between Testability and Good Design
[3] This is the complete paragraph from chapter 20, Listening to the tests, of the GOOS book: “Sometimes we find it difficult to write a test for some functionality we want to add to our code. In our experience, this usually means that our design can be improved — perhaps the class is too tightly coupled to its environment or does not have clear responsibilities. When this happens, we first check whether it’s an opportunity to improve our code, before working around the design by making the test more complicated or using more sophisticated tools. We’ve found that the qualities that make an object easy to test also make our code responsive to change.”
[4] This quote is from their post Synaesthesia: Listening to Test Smells.
[5] Have a look at this interesting series of posts about listening to the tests by Steve Freeman. It’s a raw version of the content that you’ll find in chapter 20, Listening to the tests, of their book.
[6] We have simplified the client’s code to remove some distracting details and try to highlight its key problems.
[7] Vladimir Khorikov calls this unit testing anti-pattern Code pollution.
[8] A flag argument is an argument that tells a function or a class to behave in a different way depending on its value. This might be a signal of poor cohesion in the function or class.
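As a hypothetical illustration (none of these names appear in the post), a flag argument and one way to remove it might look like this:

```java
// Hypothetical example: the boolean flag makes one method behave in two
// different ways, hinting that two responsibilities hide inside one function.
class BookingService {
    String book(String customer, boolean isPremium) {
        return isPremium ? bookPremium(customer) : bookRegular(customer);
    }

    // Splitting into intention-revealing methods makes call sites clearer.
    String bookPremium(String customer) {
        return "premium booking for " + customer;
    }

    String bookRegular(String customer) {
        return "regular booking for " + customer;
    }
}
```

A call like book("Ann", true) reveals nothing at the call site, whereas bookPremium("Ann") does.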
[9] Have a look at the discussion in the chapter devoted to the Decorator design pattern in the great book Design Patterns Explained: A New Perspective on Object-Oriented Design.
[10] Miško Hevery talks about the singleton pattern with lowercase ‘s’ in his talk Global State and Singletons at 10:20: “Singleton with capital ’S’. Refers to the design pattern where the Singleton has a private constructor and has a global instance variable. Lowercase ’s’ singleton means I only have a single instance of something because I only called the new operator once.”
References.
- Growing Object-Oriented Software, Guided by Tests, Steve Freeman, Nat Pryce
- The Deep Synergy Between Testability and Good Design, Michael Feathers
- Design Patterns Explained: A New Perspective on Object-Oriented Design, Alan Shalloway, James R. Trott
- Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma, Ralph Johnson, John Vlissides, Richard Helm
- Clean Code Talks - Global State and Singletons, Miško Hevery
- Why Singletons Are Controversial
- Flag Argument, Martin Fowler
- An example of listening to the tests to improve a design, Manuel Rivero
- Code pollution, Vladimir Khorikov
- Guice
- Guice Scopes