
Tuesday, August 23, 2022

Simple example of property-based testing

Introduction.

We were recently writing tests to characterise legacy code at a client that was being used to encrypt and decrypt UUIDs using a cipher algorithm. We have simplified the client’s code to remove some distracting details and to highlight the key ideas we’d like to convey.

In this post we’ll show how we used parameterized tests to test the encryption and decryption functions, and how we applied property-based testing to explore possible edge cases using a very useful pattern to discover properties.

Finally, we’ll discuss the resulting tests.

Testing the encryption and decryption functions.

We started by writing examples to pin down the behaviour of the encryption function. Notice how we used parameterized tests[1] to avoid the duplication that having a different test for each example would have caused:
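The original code snippets are not embedded in this archived version. As an illustration, a minimal JUnit 5 sketch of such a parameterized test might look like this (the UuidCipher class, its encrypt method and the example values are hypothetical; only the generateCipheringUuidExamples name comes from the post):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.UUID;
import java.util.stream.Stream;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

class UuidCipherTest {
  private final UuidCipher cipher = new UuidCipher(); // hypothetical class under test

  @ParameterizedTest
  @MethodSource("generateCipheringUuidExamples")
  void encrypts_uuids(UUID uuid, String encryptedUuid) {
    assertEquals(encryptedUuid, cipher.encrypt(uuid));
  }

  static Stream<Arguments> generateCipheringUuidExamples() {
    // literal input/expected-output pairs (the values are made up for illustration)
    return Stream.of(
        Arguments.of(UUID.fromString("01234567-89ab-cdef-0123-456789abcdef"), "c7c8ad1046d14c4c9a2f"),
        Arguments.of(UUID.fromString("fedcba98-7654-3210-fedc-ba9876543210"), "5e3b0a92f1d64b7e8c01"));
  }
}
```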

Then we wrote more parameterized tests for the decryption function. Since the encryption and decryption functions are inverses of each other, we could use the same examples that we had used for the encryption function. Notice how the roles of input and expected output are swapped for the parameters of the new test.
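Continuing the hypothetical sketch above, the decryption test could live in the same test class and reuse the same method source:

```java
@ParameterizedTest
@MethodSource("generateCipheringUuidExamples")
void decrypts_uuids(UUID uuid, String encryptedUuid) {
  // same examples as before, with the roles of input and expected output swapped
  assertEquals(uuid, cipher.decrypt(encryptedUuid));
}
```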

Exploring edge cases.

You might wonder why we wanted to explore edge cases. Weren’t the parameterized tests enough to characterise this legacy code?

Even though the parameterized tests that we wrote for both functions produced high test coverage, coverage only “covers” code that is already there. We were not sure whether there could be edge cases, that is, inputs for which the encryption and decryption functions might not behave correctly. We’ve found edge cases in the past even in code with 100% unit test coverage.

Finding edge cases is hard work which sometimes requires exploratory testing by specialists. It would be great to explore the behaviour automatically to find possible edge cases, so that neither we nor QA specialists have to find them by hand. In some cases, we can leverage property-based testing to do that exploration for us[2].

One of the most difficult parts of using property-based testing is finding out what properties we should use. Fortunately, there are several approaches or patterns for discovering adequate properties to apply property-based testing to a given problem. Scott Wlaschin wrote a great article in which he explains several of those patterns[3].

It turned out that the problem we were facing matched one of the patterns described by Wlaschin directly: the one he calls “There and back again” (also known as the “Round-tripping” or “Symmetry” pattern).

According to Wlaschin “There and back again” properties “are based on combining an operation with its inverse, ending up with the same value you started with”. As we said before, in our case the decryption and encryption functions were inverses of each other so the “There and back again” pattern was likely to lead us to a useful property.

There and back again diagram.

Once we knew which property to use, it was very straightforward to add a property-based test for it. We used the jqwik library. We like it because it has very good documentation and it is integrated with JUnit.

Using jqwik functions we wrote a generator of UUIDs (have a look at the documentation to see how to generate customised parameters), and then wrote the decrypt_is_the_inverse_of_encrypt property:
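The original snippet is not embedded here. A minimal sketch of what such a property could look like with jqwik, assuming the same hypothetical UuidCipher as above (only the property name comes from the post):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.UUID;
import net.jqwik.api.Arbitraries;
import net.jqwik.api.Arbitrary;
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;
import net.jqwik.api.Provide;

class UuidCipherProperties {
  private final UuidCipher cipher = new UuidCipher(); // hypothetical class under test

  @Property
  void decrypt_is_the_inverse_of_encrypt(@ForAll("uuids") UUID uuid) {
    // "There and back again": going through both functions must return the initial value
    assertEquals(uuid, cipher.decrypt(cipher.encrypt(uuid)));
  }

  @Provide
  Arbitrary<UUID> uuids() {
    // a simple custom generator of UUIDs
    return Arbitraries.create(UUID::randomUUID);
  }
}
```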

By default jqwik checks the property with 1000 new randomly generated UUIDs every time this test runs. This allows us to gradually explore the set of possible examples in order to find edge cases that we have not considered.

Discussion.

If we examine the resulting tests we may think that the property-based tests have made the example-based tests redundant. Should we delete the example-based tests and keep only the property-based ones?

Before answering this question, let’s think about each type of test from different points of view.

Understandability.

Despite being parameterized, it’s relatively easy to see which inputs and expected outputs are used by the example-based tests because they are literal values provided by the generateCipheringUuidExamples method. Besides, this kind of testing was more familiar to the team members.

In contrast, the UUID used by the property-based tests to check the property is randomly generated and the team was not familiar with property-based testing.

Granularity.

Since we are using a property that uses the “There and back again” pattern, if there were an error, we wouldn’t know whether the problem was in the encryption or the decryption function, not even after the shrinking process[4]. We’d only know the initial UUID that made the property fail.

This might not be so when using other property patterns. For instance, when using a property based on the “The test oracle” pattern, we’d know the input and the actual and expected outputs in case of an error.

In contrast, using example-based testing it would be very easy to identify the location of the problem.

Confidence, thoroughness and exploration.

The example-based tests specify behaviour using concrete examples in which we set up concrete scenarios, and then check whether the effects produced by the behaviour match what we expect. In the case of the cipher, we pass an input to the functions and assert that their output is what we expect. The testing is reduced just to the arbitrary examples we were able to come up with, but there’s a “gap between what we claim to be testing and what we’re actually testing”[5]: why those arbitrary examples? Does the cipher behave correctly for any possible example?

Property-based tests “approach the question of correctness from a different angle: under what preconditions and constraints (for example, the range of input parameters) should the functionality under test lead to particular postconditions (results of a computation), and which invariants should never be violated in the course?”[6]. With property-based testing we are not limited to the arbitrary examples we were able to come up with, as we are in example-based testing. Instead, property-based testing gives us thoroughness and the ability to explore, because it tries to find examples that falsify a property every time the test runs. I think this ability to explore makes these tests more dynamic.

Implementation independence.

The example-based tests depend on the implementation of the cipher algorithm, whereas the property-based tests can be used for any implementation of the cipher algorithm, because the decrypt_is_the_inverse_of_encrypt property is an invariant of any cipher algorithm implementation. This makes the property-based tests ideal for writing a role test[7] that any valid cipher implementation should pass.

Explicitness of invariants.

In the case of the cipher there’s a relationship between the encryption and decryption functions: they are inverses of each other.

This relationship might go completely untested with example-based testing if we use unrelated examples to test each of the functions. This means a change to either function could violate the property while still passing each function’s separate tests.

In the parameterized example-based tests we wrote, we implicitly tested this property by using the same set of examples for both functions, only swapping the roles of input and expected output in each test; but this check is limited to that set of examples.

With property-based testing we are explicitly testing the relation between the two functions and exploring the space of inputs to try to find one that falsifies the property of being inverses of each other.

Protection against regressions.

Notice that, in this case, if we deleted the example-based tests and kept only the property-based test using the decrypt_is_the_inverse_of_encrypt property, we could introduce a simple regression by implementing both functions, encrypt and decrypt, as the identity function. That obviously wrong implementation would still fulfil the decrypt_is_the_inverse_of_encrypt property, which means that the property-based test is not enough on its own to characterise the desired behaviour and protect it against regressions. We also need example-based tests for at least one of the cipher functions, either encrypt or decrypt. Notice that this caveat applies to any property based on the “There and back again” pattern; other contexts and property patterns may not share it.

What we did.

Given the previous discussion, we decided to keep both example-based and property-based tests in order to gain exploratory power while keeping familiarity, granularity and protection against regressions.

Summary.

We’ve shown a simple example of how we applied JUnit 5 parameterized tests to test the encryption and decryption functions of a cipher algorithm for UUIDs.

Then we showed a simple example of how we can use property-based testing to explore our solution and find edge cases. We also talked about how discovering properties can be the most difficult part of property-based testing, and how there are patterns that can be used to help us to discover them.

Finally, we discussed the resulting example-based and property-based tests from different points of view.

We hope this post will motivate you to start exploring property-based testing as well. If you want to learn more, follow the references we provide and start playing. Also have a look at the other posts exploring property-based testing that we have written in this blog.

Acknowledgements.

I’d like to thank my Codesai colleagues for reading the initial drafts and giving me feedback.

Notes.

[1] The experience of writing parameterized tests using JUnit 5 is so much better than it used to be with JUnit 4!

[2] Have a look at this other post in which I describe how property-based tests were able to find edge cases that I had not contemplated in code with 100% test coverage that had been written using TDD.

[3] Scott Wlaschin’s article, Choosing properties for property-based testing, is a great post in which he explains the patterns that have helped him the most to discover the properties applicable to a given problem. Besides the “There and back again” pattern, I’ve applied “The test oracle” pattern (https://fsharpforfunandprofit.com/posts/property-based-testing-2/#the-test-oracle) on several occasions. Some time ago, I wrote a post explaining how I used it to apply property-based testing to an implementation of a binary search tree. Another interesting article about the same topic is Property-based Testing Patterns by Sanjiv Sahayam.

[4] “Shrinking is the mechanism by which a property-based testing framework can be told how to simplify failure cases enough to let it figure out exactly what the minimal reproducible case is.” From chapter 8 of Fred Hebert’s PropEr Testing online book.

[5] From David MacIver’s In praise of property-based testing post. According to David MacIver “the problem with example-based tests is that they end up making far stronger claims than they are actually able to demonstrate”.

[6] From Johannes Link’s Know for Sure with Property-Based Testing post.

[7] Have a look at our recent post about role tests.


Listening to test smells: detecting lack of cohesion and violations of encapsulation

Introduction.

We’d like to show another example of how difficulties found while testing can signal design problems in our code[1].

We believe that a good design is one that supports the evolution of a code base at a sustainable pace and that testability is a requirement for evolvability. This is not something new, we can find this idea in many places.

Michael Feathers says “every time you encounter a testability problem, there is an underlying design problem”[2].

Nat Pryce and Steve Freeman also think that this relationship between testability and good design exists[3]:

“[…] We’ve found that the qualities that make an object easy to test also make our code responsive to change”

and also use it to detect design problems and know what to refactor:

“[…] where we find that our tests are awkward to write, it’s usually because the design of our target code can be improved”

and to improve their TDD practice:

“[…] sensitise yourself to find the rough edges in your tests and use them for rapid feedback about what to do with the code. […] don’t stop at the immediate problem (an ugly test) but look deeper for why I’m in this situation (weakness in the design) and address that.”[4]

This is why they devoted talks, several posts and a chapter of their GOOS book (chapter 20) to listening to the tests[5] and even added it to the TDD cycle steps:

TDD cycle steps including listening to the tests.

Next we’ll show you an example of how we’ve recently applied this at a client.

The problem.

Recently I was asked to help a pair that was developing a new feature in a legacy code base of one of our clients. They were having problems with the following test[6]:

that was testing the RealTimeGalleryAdsRepository class:
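The original gists are not embedded in this archived version. Based on the description in the following paragraphs, a minimal sketch of the shape the class probably had might be (all method signatures, and the Ad, GalleryAd, SearchResult and AdsRepository types, are assumptions for illustration):

```java
import java.time.Clock;
import java.util.List;
import java.util.stream.Collectors;

public class RealTimeGalleryAdsRepository {
  // static state shared by every instance: this is what broke the isolation between tests
  private static SearchResult cachedSearchResult = null;
  private static long cacheExpirationTimeInMillis = 0;

  private final AdsRepository adsRepository;
  private final Clock clock;
  private final boolean useCache; // flag argument: the class behaves differently depending on it
  private final long cacheTtlInMillis;

  public RealTimeGalleryAdsRepository(AdsRepository adsRepository, Clock clock,
                                      boolean useCache, long cacheTtlInMillis) {
    this.adsRepository = adsRepository;
    this.clock = clock;
    this.useCache = useCache;
    this.cacheTtlInMillis = cacheTtlInMillis;
  }

  public List<GalleryAd> getGalleryAds() {
    if (!useCache) {
      return toGalleryAds(adsRepository.search());
    }
    if (cachedSearchResult == null || clock.millis() > cacheExpirationTimeInMillis) {
      cachedSearchResult = adsRepository.search();
      cacheExpirationTimeInMillis = clock.millis() + cacheTtlInMillis;
    }
    return toGalleryAds(cachedSearchResult);
  }

  // added only to restore isolation between tests: a unit testing anti-pattern
  public static void resetCache() {
    cachedSearchResult = null;
  }

  private static List<GalleryAd> toGalleryAds(SearchResult searchResult) {
    return searchResult.getAds().stream()
        .filter(Ad::hasPhoto) // ads without photos are ignored
        .map(ad -> new GalleryAd(ad.getId(), ad.getPhotoUrl()))
        .collect(Collectors.toList());
  }
}
```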

They had managed to test-drive the functionality but they were unhappy with the results. The thing that was bothering them was the resetCache method in the RealTimeGalleryAdsRepository class. As its name implies, its intent was to reset the cache. This would have been fine if resetting the cache had been a requirement, but that was not the case. The method had been added only for testing purposes.

Looking at the code of RealTimeGalleryAdsRepository you can learn why. The cachedSearchResult field is static and that was breaking the isolation between tests. Even though they were using different instances of RealTimeGalleryAdsRepository in each test, they were sharing the same value of the cachedSearchResult field because static state is associated with the class. So a new public method, resetCache, was added to the class only to ensure isolation between different tests.

Adding code to your production code base just to enable unit testing is a unit testing anti-pattern[7], but they didn’t know how to get rid of the resetCache method, and that’s why I was called in to help.

Let’s examine the tests in RealTimeGalleryAdsRepositoryTests to see if they can point to more fundamental design problems.

Another thing we can notice is that the tests can be divided into two sets that are testing two very different behaviours:

  • One set of tests, comprised of maps_all_ads_with_photo_to_gallery_ads, ignore_ads_with_no_photos_in_gallery_ads and when_no_ads_are_found_there_are_no_gallery_ads, is testing the code that obtains the list of gallery ads;

  • whereas the other set, comprised of when_cache_has_not_expired_the_cached_values_are_used and when_cache_expires_new_values_are_retrieved, is testing the life and expiration of some cached values.

This lack of focus was a hint that the production class might lack cohesion, i.e., it might have several responsibilities.

It turns out that there was another code smell that confirmed our suspicion. Notice the boolean parameter useCache in the RealTimeGalleryAdsRepository constructor. That was a clear example of a flag argument[8]. useCache was making the class behave differently depending on its value:

  1. It cached the list of gallery ads when useCache was true.
  2. It did not cache them when useCache was false.

After seeing all this, I told the pair that the real problem was the lack of cohesion, and that we’d have to go more object-oriented in order to fix it. After that refactoring, the need for the resetCache method would disappear.

Going more OO to fix the lack of cohesion.

To strengthen cohesion we need to separate concerns. Let’s see the problem from the point of view of the client of the RealTimeGalleryAdsRepository class (this point of view is generally very useful because the test is also a client of the tested class) and think about what it would want from the RealTimeGalleryAdsRepository. It would be something like “obtain the gallery ads for me”; that would be the responsibility of the RealTimeGalleryAdsRepository, and that’s what the GalleryAdsRepository interface represents.

Notice that to satisfy that responsibility we do not need to use a cache; we only need to get some ads from the AdsRepository and map them (the original functionality also included some enrichments using data from other sources, but we removed them from the example for the sake of simplicity). Caching is an optimization that we may or may not do; it’s a refinement or embellishment of how we satisfy the responsibility, but it’s not necessary to satisfy it. In this case, caching changes the “how” but not the “what”.

This matches very well with the Decorator design pattern because this pattern “comes into play when there are a variety of optional functions that can precede or follow another function that is always executed”[9]. Using it would allow us to attach additional behaviour (caching) to the basic required behaviour that satisfies the role the client needs (“obtain the gallery ads for me”). This way, instead of having a flag parameter (like useCache in the original code) to control whether we cache or not, we can add caching by composing objects that implement GalleryAdsRepository. One of them, RealTimeGalleryAdsRepository, would be in charge of getting ads from the AdsRepository and mapping them to gallery ads; the other one, CachedGalleryAdsRepository, would cache the gallery ads.

So we moved the responsibility of caching the ads to the CachedGalleryAdsRepository class which decorated the RealTimeGalleryAdsRepository class.

This is the code of the CachedGalleryAdsRepository class:
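The original gist is not embedded here; a minimal sketch of what the decorator might look like, assuming a getGalleryAds method on the GalleryAdsRepository interface and the GalleryAd type used above:

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class CachedGalleryAdsRepository implements GalleryAdsRepository {
  private final GalleryAdsRepository decorated;
  private final Clock clock;
  private final Duration timeToLive; // a Duration value object instead of a primitive long of milliseconds
  private List<GalleryAd> cachedGalleryAds;
  private Instant expirationTime;

  public CachedGalleryAdsRepository(GalleryAdsRepository decorated,
                                    Clock clock, Duration timeToLive) {
    this.decorated = decorated;
    this.clock = clock;
    this.timeToLive = timeToLive;
  }

  @Override
  public List<GalleryAd> getGalleryAds() {
    if (cachedGalleryAds == null || clock.instant().isAfter(expirationTime)) {
      // cache empty or expired: refresh it by delegating to the decorated repository
      cachedGalleryAds = decorated.getGalleryAds();
      expirationTime = clock.instant().plus(timeToLive);
    }
    return cachedGalleryAds;
  }
}
```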

and these are its tests:

Notice how we find here again the two tests that were previously testing the life and expiration of the cached values in the tests of the original RealTimeGalleryAdsRepository: when_cache_has_not_expired_the_cached_values_are_used and when_cache_expires_new_values_are_retrieved.

Furthermore, looking at them more closely, we can see how, in this new design, those tests are also simpler because they don’t know anything about the inner details of RealTimeGalleryAdsRepository. They only know about the logic related to the life and expiration of the cached values, and that when the cache is refreshed a collaborator implementing the GalleryAdsRepository interface is called. This means that we’re now caching gallery ads instead of an instance of SearchResult, and that we don’t know anything about the AdsRepository.

On a side note, we also improved the code by using the Duration value object from java.time to remove the primitive obsession smell caused by using a long to represent milliseconds.

Another very important improvement is that we don’t need the static field anymore.

And what about RealTimeGalleryAdsRepository?

If we have a look at its new code, we can notice that its only concern is obtaining the list of gallery ads by mapping the results of its collaborator AdsRepository; it does not know anything about caching values. So the new design is more cohesive than the original one. Notice how we removed both the resetCache method, which previously polluted its interface for testing purposes only, and the useCache flag argument in the constructor.
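A sketch of what this new version might look like, under the same assumptions as the previous sketches:

```java
import java.util.List;
import java.util.stream.Collectors;

public class RealTimeGalleryAdsRepository implements GalleryAdsRepository {
  private final AdsRepository adsRepository;

  public RealTimeGalleryAdsRepository(AdsRepository adsRepository) {
    this.adsRepository = adsRepository;
  }

  @Override
  public List<GalleryAd> getGalleryAds() {
    // single concern: obtain ads and map them to gallery ads; no caching, no flags, no static state
    return adsRepository.search().getAds().stream()
        .filter(Ad::hasPhoto)
        .map(ad -> new GalleryAd(ad.getId(), ad.getPhotoUrl()))
        .collect(Collectors.toList());
  }
}
```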

We also reduced its number of collaborators because there’s no need for a Clock anymore. That collaborator was needed for a different concern that is now taken care of in the decorator CachedGalleryAdsRepository.

These design improvements are reflected in its new tests. They are now more focused, and can only fail if obtaining the gallery ads breaks. Having only one reason to fail comes from testing a more cohesive unit with only one reason to change. Notice how these tests coincide with the subset of the original RealTimeGalleryAdsRepository tests concerned with testing this same behaviour:

Persisting the cached values between calls.

You might be asking yourselves how we are going to ensure that the cached values persist between calls now that we don’t have a static field anymore?

Well, the answer is that we don’t need to keep a static field in our classes for that. The only thing we need is for the composition of CachedGalleryAdsRepository and RealTimeGalleryAdsRepository to be created only once, and for that single instance to be used for the lifetime of the application. That is a concern that we can address using a different mechanism.

We usually find in legacy code bases that this need to create something only once, and to use that single instance for the lifetime of the application, is met using the Singleton design pattern described in the design patterns book. The Singleton design pattern’s intent is to “ensure that only one instance of the singleton class ever exists; and provide global access to that instance”. The second part of that intent, “providing global access”, is problematic because it introduces global state into the application. Using global state creates high coupling (in the form of hidden dependencies and possible actions at a distance) that drastically reduces testability.

Instead we used the singleton pattern[10]. Notice the lowercase letter. The lowercase ’s’ singleton avoids those testability problems because its intent is only to “ensure that only one instance of some class ever exists because its new operator is called only once”. The problematic global-access part gets removed from the intent. This is achieved by not mixing object instantiation with business logic, and instead using separate factories that know how to create and wire up all the dependencies using dependency injection.

We might create this singleton, for instance, by using a dependency injection framework like Guice and its @Singleton annotation.

In this case we coded it ourselves:
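The original snippet is not embedded here; a minimal sketch of such a hand-coded singleton factory, assuming hypothetical names and an arbitrary time to live:

```java
import java.time.Clock;
import java.time.Duration;

public class GalleryAdsRepositoryFactory {
  private static GalleryAdsRepository uniqueInstance = null;

  // used only by instantiation logic in factories, never by business logic
  public static synchronized GalleryAdsRepository getCachedGalleryAdsRepository(
      AdsRepository adsRepository) {
    if (uniqueInstance == null) {
      uniqueInstance = new CachedGalleryAdsRepository(
          new RealTimeGalleryAdsRepository(adsRepository),
          Clock.systemUTC(),
          Duration.ofMinutes(5)); // hypothetical time to live
    }
    return uniqueInstance;
  }
}
```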

Notice the factory method that returns a unique instance of the GalleryAdsRepository interface that caches values. This factory method is never used by business logic; it’s only used by instantiation logic in factories that know how to create and wire up all the dependencies using dependency injection. This doesn’t introduce testability problems because the unique instance will be injected through constructors by factories wherever it is needed.

Conclusions.

We showed a recent example found while working for a client that illustrates how testability problems, if we listen to them, usually point to underlying design problems. In this case the problems in the tests were pointing to a lack of cohesion in the production code being tested. The original class had too many responsibilities.

We refactored the production code to separate concerns by going more OO and applying the Decorator design pattern. The result was more cohesive production classes, more focused tests, and the removal of the design problems we had detected in the original design.

Acknowledgements.

I’d like to thank my Codesai colleagues for reading the initial drafts and giving me feedback.

Notes.

[1] We showed another example of this relationship between poor testability and design problems in a previous post: An example of listening to the tests to improve a design.

[2] Listen to his great talk about this relationship: The Deep Synergy Between Testability and Good Design

[3] This is the complete paragraph from chapter 20, Listening to the tests, of the GOOS book: “Sometimes we find it difficult to write a test for some functionality we want to add to our code. In our experience, this usually means that our design can be improved — perhaps the class is too tightly coupled to its environment or does not have clear responsibilities. When this happens, we first check whether it’s an opportunity to improve our code, before working around the design by making the test more complicated or using more sophisticated tools. We’ve found that the qualities that make an object easy to test also make our code responsive to change.”

[4] This quote is from their post Synaesthesia: Listening to Test Smells.

[5] Have a look at this interesting series of posts about listening to the tests by Steve Freeman. It’s a raw version of the content that you’ll find in chapter 20, Listening to the tests, of their book.

[6] We have simplified the client’s code to remove some distracting details and try to highlight its key problems.

[7] Vladimir Khorikov calls this unit testing anti-pattern Code pollution.

[8] A flag Argument is a kind of argument that is telling a function or a class to behave in a different way depending on its value. This might be a signal of poor cohesion in the function or class.

[9] Have a look at the discussion in the chapter devoted to the Decorator design pattern in the great book Design Patterns Explained: A New Perspective on Object-Oriented Design.

[10] Miško Hevery talks about the singleton pattern with lowercase ‘s’ in its talk Global State and Singletons at 10:20: “Singleton with capital ’S’. Refers to the design pattern where the Singleton has a private constructor and has a global instance variable. Lowercase ’s’ singleton means I only have a single instance of something because I only called the new operator once.”


Monday, March 1, 2021

Podcasts about caring tasks on The Big Branch Theory

A while ago I was invited to The Big Branch Theory podcast to talk about the caring work narrative that we have applied over the last few years in some teams at Lifull Connect.

These are the two episodes in which I participated: If you find the idea intriguing, in the post The value of caring I go deeper into this narrative and explain how it originated and in what context.

Sunday, October 18, 2020

Sleeping is not the best option

Introduction.

Some time ago we were developing code that stored some data with a given TTL. We wanted to check not only that the data was stored correctly, but also that it expired after the given TTL. This is an example of testing asynchronous code.

When testing asynchronous code we need to carefully coordinate the test with the system it is testing to avoid running the assertion before the tested action has completed[1]. For example, the following test will always fail because the assertion in line 30 is checked before the data has expired:

In this case the test always fails but in other cases it might be worse, failing intermittently when the system is working, or passing when the system is broken. We need to make the test wait to give the action we are testing time to complete successfully and fail if this doesn’t happen within a given timeout period.

Sleeping is not the best option.

This is an improved version of the previous test in which we make the test code wait before checking that the data has expired, to give the code under test time to run:
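The original tests were written in PHP against Redis and are not embedded in this archive. A minimal Java sketch of the sleeping approach, assuming a hypothetical TtlStorage with store and retrieve methods:

```java
import static org.junit.jupiter.api.Assertions.assertNull;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class DataExpirationTest {
  // hypothetical key-value storage with TTL support, standing in for the Redis-backed code
  private final TtlStorage storage = new TtlStorage();

  @Test
  void data_expires_after_the_given_ttl() throws InterruptedException {
    storage.store("some-key", "some-value", Duration.ofSeconds(1));

    Thread.sleep(1500); // fixed timeout: we hope 1.5 seconds is always enough

    assertNull(storage.retrieve("some-key"));
  }
}
```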

The problem with the simple sleeping approach is that in some runs the timeout might be enough for the data to expire but in other runs it might not, so the test will fail intermittently; it becomes a flickering test. Flickering tests are confusing because when they fail, we don’t know whether it’s due to a real bug or just a false positive. If the failure is relatively common, the team might start ignoring those tests, which can mask real defects and completely destroy the value of having automated tests.

Since the intermittent failures happen because the timeout is too close to the time the behavior we are testing takes to run, many teams decide to reduce the frequency of those failures by increasing the time each test sleeps before checking that the action under test was successful. This is not practical because it soon leads to test suites that take too long to run.

Alternative approaches.

If we are able to detect success sooner, succeeding tests will provide rapid feedback, and we only have to wait for failing tests to time out. This is a much better approach than waiting the same amount of time for each test regardless of whether it fails or succeeds.

There are two main strategies to detect success sooner: capturing notifications[2] and polling for changes.

In the case we are using as an example, polling was the only option because Redis didn’t send any monitoring events we could listen to.

Polling for changes.

To detect success as soon as possible, we probe several times, separated by a time interval much shorter than the previous timeout. If the result of a probe is what we expect, the test passes; if the expected result is not there yet, we sleep a bit and retry. If the expected value is still not there after several retries, the test fails.

Have a look at the checkThatDataHasExpired method in the following code:
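The original method was PHP; a Java rendering of the kind of ad hoc polling it describes could be a private helper inside the test class sketched above (the numbers are illustrative, and fail is JUnit’s):

```java
private void checkThatDataHasExpired(String key) throws InterruptedException {
  int maxProbes = 10;
  long millisBetweenProbes = 200;
  for (int probe = 0; probe < maxProbes; probe++) {
    if (storage.retrieve(key) == null) {
      return; // success detected as soon as possible
    }
    Thread.sleep(millisBetweenProbes); // not there yet: sleep a bit and retry
  }
  fail("Data for key '" + key + "' did not expire after " + maxProbes + " probes");
}
```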

By polling for changes we avoid always waiting the maximum amount of time. Only in the worst-case scenario, when all the retries are consumed without detecting success, will we wait as long as in the sleeping approach that used a fixed timeout.

Extracting a helper.

Scattering ad hoc low-level code that polls and probes, like the one in checkThatDataHasExpired, throughout your tests not only makes them difficult to understand, but is also a bad case of duplication. So we extracted it into a helper that we could reuse in different situations.

What varies from one application of this approach to another are the probe, the check, the number of probes before failing and the time between probes; everything else we extracted into the following helper[3]:
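The original helper was written in PHP; this is a Java sketch of the same idea, with the signature assumed from the description above (probe, check, number of probes and sleep time between probes):

```java
import static org.junit.jupiter.api.Assertions.fail;

import java.util.function.Predicate;
import java.util.function.Supplier;

public final class AsyncTestHelpers {
  private AsyncTestHelpers() {}

  // polls the probe up to maxProbes times, sleeping between probes,
  // and fails if the check never accepts a probed value
  public static <T> void assertWithPolling(Supplier<T> probe, Predicate<T> check,
                                           int maxProbes, long millisBetweenProbes)
      throws InterruptedException {
    for (int i = 0; i < maxProbes; i++) {
      if (check.test(probe.get())) {
        return;
      }
      Thread.sleep(millisBetweenProbes);
    }
    fail("Expected condition was not met after " + maxProbes + " probes");
  }
}
```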

This is how the previous tests would look after using the helper:
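In the Java sketch, the expiration test from before might use the helper like this:

```java
@Test
void data_expires_after_the_given_ttl() throws InterruptedException {
  storage.store("some-key", "some-value", Duration.ofSeconds(1));

  AsyncTestHelpers.assertWithPolling(
      () -> storage.retrieve("some-key"), // probe
      value -> value == null,             // check
      10,                                 // number of probes
      200);                               // sleep time between probes, in milliseconds
}
```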

Notice that we’re passing the probe, the check, the number of probes and the sleep time between probes to the AsyncTestHelpers::assertWithPolling function.

Conclusions.

We showed an example in PHP of an approach to testing asynchronous code described by Steve Freeman and Nat Pryce in their Growing Object-Oriented Software, Guided by Tests book. This approach avoids flickering tests and produces much faster test suites than using a fixed timeout. We also showed how we abstracted this approach by extracting a helper function that we are reusing in our code.

We hope you’ve found this approach interesting. If you want to learn more about this and several other techniques to effectively test asynchronous code, have a look at the wonderful Growing Object-Oriented Software, Guided by Tests book[4].

Acknowledgements.

Thanks to my Codesai colleagues for reading the initial drafts and giving me feedback and to Chrisy Totty for the lovely cat picture.

Notes.

[1] This is a nice example of Connascence of Timing (CoT). CoT happens when the timing of the execution of multiple components is important. In this case the action being tested must run before the assertion that checks its observable effects. That's the coordination we talked about. Check our post about Connascence to learn more about this interesting topic.
[2] In the capturing notifications strategy the test code "observes the system by listening for events that the system sends out. An event-based assertion waits for an event by blocking on a monitor until gets notified or times out", (from Growing Object-Oriented Software, Guided by Tests book).
Some time ago we developed some helpers using the capturing notifications strategy to test asynchronous ClojureScript code that was using core.async channels. Have a look at, for instance, the expect-async-message assertion helper in which we use core.async/alts! and core.async/timeout to implement this behaviour. The core.async/alts! function selects the first channel that responds. If that channel is the one the test code was observing we assert that the received message is what we expected. If the channel that responds first is the one generated by core.async/timeout we fail the test. We mentioned these async-test-tools in previous post: Testing Om components with cljs-react-test.
[3] Have a look at the testing asynchronous systems examples in the GOOS Code examples repository for a more object-oriented implementation of helper for the polling for changes strategy, and also examples of the capturing notifications strategy.
[4] Chapter 27, Testing Asynchronous code, contains a great explanation of the two main strategies to test asynchronous code effectively: capturing notifications and polling for changes.


Thursday, June 11, 2020

The value of caring

Introduction

We’d like to tell you about a narrative that has been very useful for us in the coaching work we have been doing with several teams during the last year.

Origin

It all started during a consultancy engagement that Joan Valduvieco and I did at the beginning of 2019 at Trovit. José Carlos Gil and Edu Mateu had brought us in to help Trovit’s B2B team. We spent a week with the team asking questions, observing their work and doing some group dynamics to try to understand how we might help them.

After that week of “field work” we had gathered a ton of notes, graphs and insights that we needed to put in order and analyze. This is always a difficult task because it is about analyzing a sociotechnical system (team, knowledge, technology, existing code, organization, pressure, changes,…) which is a complex system in constant flux. In a way, you have to accept that this is something you can’t wholly understand, and even less in a short period of time.

Having that in mind, we did our best to get a first approximation by representing the different dynamics and processes we had observed in several causal loop diagrams. This work helped us clarify our minds and highlight how habits, knowledge, beliefs, culture and practices were creating vicious cycles (related to some positive feedback loops in the causal loop diagram[1]) that were amplifying and preserving effects destructive to the team and its software, which made their system less habitable and sustainable, and created inefficiencies and loss of value.

After that we started to think about strategies that might break those vicious cycles, either by reducing the effect of some node (a problematic habit, practice or belief), by removing an interaction between nodes in a positive loop, or by introducing negative feedback into the system (new practices and habits) to stabilize it.

Donella H. Meadows in her book Thinking in Systems: A Primer talks about different types of leverage points which are places within a complex system where you can try to apply what we call change levers to make the system evolve from its current dynamics into another that might be more interesting for the team.

We detected several change levers that might be applied at different leverage points to improve the system’s sustainability, such as “Improving Technical Skills”, “Knowledge Diffusion”, “Knowledge Crystallization”, “Changing Habits”, “All Team Product Owning” or “Remote First”[2].

All of them attacked vicious cycles that were entrenched in the team’s dynamics, and were valuable in themselves, but we were not sure that they would be enough to change the system’s current dynamics. Something we observed during our field work was that the team was very skeptical about the probability of success of any initiative to improve its situation. They had gone through several failed attempts at “Agile” transformation before, and this had left them in a kind of learned helplessness. Why had those previous attempts failed? Would ours succeed using our change levers?

We started to realize that there was a deeper force at play that was exacerbating the rest of the problems and would reduce the effect of any of the other change levers. We realized that they would be useless unless the team had enough slack and company support to apply them. The deeper and stronger force that was inhibiting the possibility of having that slack and support was the very conception of what value meant for the company; it was a cultural problem.

Drucker: Culture eats strategy for breakfast

We then came up with a new change lever: “Redefinition of Value”. This was the most powerful change lever of all the ones we had detected because it was a cultural change[3], and it would increase the probability of success of all the other change levers. Being a cultural change also made it the most difficult to apply.

Redefinition of Value

What was this “Redefinition of Value” change lever about?

The culture of the team understood value only as producing features for their clients as quickly as possible. This definition of value includes only certain kinds of tasks, the ones that directly produce features, and excludes many tasks that don’t directly produce features but are necessary for the sustainability of the system (the business and team) itself. The first kind of work is called productive work and the second kind is called caring work[4].

Believing that only one type of work (productive work) has value, and then focusing only on that type of work, is an extractive micro-optimization that might end up destabilizing the system[5].

Model biased to productive work

The redefinition of value we proposed was that producing features for the client as quickly as possible is not the only valuable thing; there is also value in maintaining the sustainability of the business and the team. Aside from working on productive tasks, you need to devote energy and time to caring tasks, which are concerned with keeping the health of the ecosystem composed of the code, the team and the client, so that it can continue evolving, being habitable for the team and producing value. We think that this kind of work (caring work) has value and is strategic for a company. If you think about it, at bottom, this is about seeing the team as the real product and establishing a healthier and more durable relationship with clients.

This idea of caring work comes from economic studies from a gender perspective. In feminist economics caring work is defined as, “those occupations that provide services that help people develop their capabilities, or the ability to pursue the aspects of their lives that they value” or “necessary occupations and care to sustain life and human survival”.

We thought that for this redefinition of value to be successful, it needed to be stated very clearly to the team from above. This clarity is crucial to avoid the developers having to solve the conflicts that arise when the value of caring tasks is not clear. In those cases, it’s often caring work which gets neglected.

Those conflicts are actually strategic and, as such, they should be resolved at the right level so that the team receives a clear message that gives them focus, and the peace of mind that they are always working in something that is really valued by the company.

In many companies the value of caring only appears in company statements (corporate language), but it’s not part of the real culture, the real system of values of the company. This situation creates a kind of doublespeak that might be very harmful. A way to avoid that might be putting your money where your mouth is.

So with all these ingredients we cooked a presentation for the team lead, the CTO and the CPO of the company[6], to tell them the strategy we would like to follow, the cultural change involved in the redefinition of value that we thought necessary and how we thought that the possible conflicts between the two types of work should be resolved at their level. They listened to us and decided to try. They were very brave and this decision enabled a wonderful change in the team we started to work with[7]. The success of this experiment made it possible for other teams to start experimenting with this redefinition of value as well.

Is this not the metaphor of technical debt in a different guise?[8]

We think that the narrative of caring work is not equivalent to technical debt.

The technical debt metaphor has evolved a lot from the concept that Ward Cunningham originally coined. With time the metaphor was extended to cover more than what he initially meant[9]. This extended use of the metaphor has been criticized by some software practitioners[10]. Leaving this debate aside, let’s focus on how most people currently understand the technical debt metaphor:

“Design or implementation constructs that are expedient in the short term but that set up a technical context that can make a future change more costly or impossible. Technical debt is a contingent liability whose impact is limited to internal systems qualities - primarily, but not only, maintainability and evolvability.” from Managing Technical Debt: Reducing Friction in Software Development

Technical debt describes technical problems that cause friction in software development and how to manage them.

On the other hand, the caring work narrative addresses the wider concern of sustainability in a software system, considering all its aspects (social, economic and technical), and how both productive and caring work are key to keep a sustainable system. We think that makes the caring work narrative a wider concept than technical debt.

This narrative has created a cultural shift that has allowed us not only to manage technical debt better, but also to create room for activities that prevent technical debt, empower the team, amplify its knowledge, attract talent, etc. We think that considering caring work as valuable as productive work placed us in a plane of dialogue which was more constructive than the financial metaphor behind technical debt.

Redefinition of value

How are we applying it at the moment?

Applying the caring work narrative depends highly on the context of each company. Please do not take this as “a recipe for applying caring work” or “the way to apply caring work”. What is important is to understand the concept; then you will have to adapt it to your own context.

In the teams we coach we are using something called caring tasks (descriptions of caring work) along with a variation of the concerns mechanism[11], and devoting percentages of time in every iteration to work on caring tasks. The developers are the ones who decide which caring work is needed. These decisions are highly contextual and involve trade-offs related to asymmetries initially found in each team and in their evolution. There are small variations in each team, and the way we apply them in each team has evolved over time. You can hear about some of those variations in the two episodes that The Big Branch Theory podcast will devote to this topic.

We plan to write in the near future another post about how we are using the concerns mechanism in the context of caring work.

Conclusions

We have been using the redefinition of value given by the caring work for more than a year now. Its success in the initial team made it possible for other teams to start experimenting with it as well. Using it in new teams has taught us many things and we have introduced local variations to adapt the idea to the realities of the new teams and their coaches.

So far, it’s working well for us, and we feel that it has helped us in some aspects in which using the technical debt metaphor is difficult, such as improving processes or owning the product.

Some of the teams worked only on legacy systems, and other teams worked on both greenfield and legacy systems (all the legacy systems had, and still have, a lot of technical debt).

We think it is important to consider that the teams we have been collaborating with were in a context of extraction[12] in which there is already a lot of value to protect.

3X model

Due to the coronavirus crisis some of the teams have started to work in an exploration context. This is a challenge, and we wonder how the caring work narrative will evolve in this context and in a scarcity scenario.

To finish we’d like to add a quote from Lakoff[13]:

“New metaphors are capable of creating new understandings and, therefore, new realities”

We think that the caring work narrative might help create a reality in which both productive and caring work are valued and more sustainable software systems are more likely.

Acknowledgements

Thanks to Joan Valduvieco, Beatriz Martín and Vanesa Rodríguez for all the stimulating conversations that led to the idea of applying the narrative of caring work in software development.

Thanks to José Carlos Gil and Edu Mateu for inviting us to work with Lifull Connect. It has been a great symbiosis so far.

Thanks to Marc Sturlese and Hernán Castagnola for daring to try.

Thanks to Lifull’s B2B and B2C teams and all the Codesai colleagues that have worked with them, for all the effort and great work to take advantage of the great opportunity that caring tasks created.

Finally, thanks to my Codesai colleagues and to Rachel M. Carmena for reading the initial drafts and giving me feedback.

Notes

[1] Positive in this context does not mean something good. The feedback is positive because it makes the magnitude of the perturbation increase (it positively retrofeeds it).

[2] Those were the names we used when we presented what we have learned during the consultancy.

[3] In systems thinking a leverage point is a place within a complex system (a corporation, an economy, a living body, a city, an ecosystem) where a shift can be applied to produce changes in the whole system’s behavior. It is a low leverage point if a small shift produces a small behavioral change, and a high leverage point if a small shift causes a large behavioral change. According to Donella H. Meadows the most effective place to intervene in a system is:

“The mindset or paradigm out of which the system — its goals, power structure, rules, its culture — arises”.

As you can imagine this is the most difficult thing to change as well. To know more read Leverage Points: Places to Intervene in a System.

[4] Also known as reproductive work. This idea comes from a gender perspective of economics. You can learn more about it reading the Care work and Feminist economics articles in Wikipedia.

[5] Sadly we observe this phenomenon at all scales: a business, a relationship, an ecosystem, the planet…

[6] Edu Mateu, Marc Sturlese and Hernán Castagnola were then the B2B team lead, the CTO and the CPO, respectively.

[7] B2B was the first team we worked with. Fran Reyes and I started coaching them in February 2019, and after a couple of months Antonio de la Torre and Manuel Tordesillas joined us. Thanks to the great work of this team, other teams started using the caring work narrative around 6 months later.

[8] Thanks to Fran Reyes and Alfredo Casado for the interesting discussions about technical debt that helped me write this part.

[9] You can listen to Ward Cunningham himself explaining what he actually meant by the technical debt metaphor in this video.

[10] Two examples: A Mess is not a Technical Debt by Robert C. Martin and Bad code isn’t Technical Debt, it’s an unhedged Call Option by Steve Freeman.

[11] The concerns mechanism is described by Xavi Gost in his talk CDD (Desarrollo dirigido por consenso).

[12] To know more, watch 3X Explore/Expand/Extract by Kent Beck or read Kent Beck’s 3X: Explore, Expand, Extract by Antony Marcano and Andy Palmer.

[13] From Metaphors We Live By by George Lakoff and Mark Johnson.


Thursday, June 27, 2019

An example of listening to the tests to improve a design

Introduction.

Recently in the B2B team at LIFULL Connect, we improved the validation of the clicks our API receives, using a service that detects whether the clicks were made by a bot or a human being.

So we used TDD to add this new validation to the previously existing validation that checked if the click contained all mandatory information. This was the resulting code:
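The original snippet is not embedded in this archived version. As an illustration, a procedural validation like the one described might have had roughly this shape (the Click and Logger types and all method names are assumptions; only the class names come from the post):

```java
public class ClickValidation {
  private final ClickParamsValidator clickParamsValidator;
  private final BotClickDetector botClickDetector;
  private final Logger logger;

  public ClickValidation(ClickParamsValidator clickParamsValidator,
                         BotClickDetector botClickDetector, Logger logger) {
    this.clickParamsValidator = clickParamsValidator;
    this.botClickDetector = botClickDetector;
    this.logger = logger;
  }

  // queries every concrete validation, combines their results and decides what to log
  public boolean isValid(Click click) {
    if (!clickParamsValidator.isValid(click)) {
      logger.warn("Mandatory click params are missing: " + click);
      return false;
    }
    if (botClickDetector.isBot(click)) {
      logger.warn("Click made by a bot: " + click);
      return false;
    }
    return true;
  }
}
```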

and these were its tests:

The problem with these tests is that they know too much. They are coupled to many implementation details. They not only know the concrete validations we apply to a click and the order in which they are applied, but also details about what gets logged when a concrete validation fails. There are multiple axes of change that will make these tests break. The tests are fragile against those axes of change and, as such, they might become a future maintenance burden, in case changes along those axes are required.

So what might we do about that fragility when any of those changes come?

Improving the design to have less fragile tests.

As we said before, the test fragility was hinting at a design problem in the ClickValidation code. The problem is that it concentrates too much knowledge because it’s written in a procedural style in which it queries every concrete validation to know if the click is ok, combines the result of all those validations, and knows when to log validation failures. Those are too many responsibilities for ClickValidation, and they are the cause of the fragility in the tests.

We can revert this situation by changing to a more object-oriented implementation in which responsibilities are better distributed. Let’s see how that design might look:

1. Removing knowledge about logging.

After this change, ClickValidation will know nothing about logging. We can use the same technique to avoid knowing about any similar side-effects which concrete validations might produce.

First we create an interface, ClickValidator, that any object that validates clicks should implement:
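A sketch of such an interface, assuming an isValid method:

```java
public interface ClickValidator {
  boolean isValid(Click click);
}
```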

Next we create a new class, NoBotClickValidator, that wraps the BotClickDetector and adapts[1] it to implement the ClickValidator interface. This wrapper also enriches BotClickDetector’s behavior by taking charge of logging in case the click is not valid.
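A sketch of the wrapper, under the same assumptions as before:

```java
public class NoBotClickValidator implements ClickValidator {
  private final BotClickDetector botClickDetector;
  private final Logger logger;

  public NoBotClickValidator(BotClickDetector botClickDetector, Logger logger) {
    this.botClickDetector = botClickDetector;
    this.logger = logger;
  }

  @Override
  public boolean isValid(Click click) {
    // adapts BotClickDetector to the ClickValidator interface...
    boolean isValid = !botClickDetector.isBot(click);
    if (!isValid) {
      // ...and enriches it with the logging side-effect
      logger.warn("Click made by a bot: " + click);
    }
    return isValid;
  }
}
```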

These are the tests of NoBotClickValidator, which check both the delegation to BotClickDetector and the logging:

If we used NoBotClickValidator in ClickValidation, we’d remove all knowledge about logging from ClickValidation.

Of course, that knowledge would also disappear from its tests. By using the ClickValidator interface for all concrete validations and wrapping validations with side-effects like logging, we’d make the ClickValidation tests robust to changes involving some of the axes of change that were making them fragile:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.

2. Another improvement: don't use test doubles when it's not worth it[2].

There’s another way to make ClickValidation tests less fragile.

If we have a look at ClickParamsValidator and BotClickDetector (I can’t show their code here for security reasons), they have very different natures. ClickParamsValidator has no collaborators, no state and a very simple logic, whereas BotClickDetector has several collaborators, state and a complicated validation logic.

Stubbing ClickParamsValidator in ClickValidation tests is not giving us any benefit over directly using it, and it’s producing coupling between the tests and the code.

On the contrary, stubbing NoBotClickValidator (which wraps BotClickDetector) is really worth it, because, even though it also produces coupling, it makes ClickValidation tests much simpler.

Using a test double when you’d be better off using the real collaborator is a weakness in the design of the test, rather than in the code to be tested.

These would be the tests for the ClickValidation code with no logging knowledge, after applying this idea of not using test doubles for everything:

Notice how the tests now use the real ClickParamsValidator and how that reduces the coupling with the production code and makes the set up simpler.

3. Removing knowledge about the concrete sequence of validations.

After this change, the new design will compose validations in a way that will result in ClickValidation being only in charge of combining the result of a given sequence of validations.

First we refactor the click validation so that the validation is now done by composing several validations:
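A sketch of what this composition might look like (a minimal illustration, not the client’s actual code):

```java
import java.util.List;

public class ClickValidation {
  private final List<ClickValidator> validators;

  public ClickValidation(List<ClickValidator> validators) {
    this.validators = validators;
  }

  // applies the validations in sequence and stops at the first failing one,
  // behaving like an "and" operator over the given validators
  public boolean isValid(Click click) {
    return validators.stream().allMatch(validator -> validator.isValid(click));
  }
}
```

Note that allMatch short-circuits, so the remaining validations are not applied once one of them fails.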

The new validation code has several advantages over the previous one:

  • It does not depend on concrete validations anymore.
  • It does not depend on the order in which the validations are made.

It has only one responsibility: it applies several validations in sequence; if all of them are valid, it will accept the click, but if any validation fails, it will reject the click and stop applying the rest of the validations. If you think about it, it’s behaving like an and operator.

We may write these tests for this new version of the click validation:

These tests are robust to the changes making the initial version of the tests fragile that we described in the introduction:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.
  3. Adding more validations.
  4. Changing the order of the validation.

However, this version of ClickValidationTest is so general and flexible that, using it, our tests would stop knowing which validations, and in which order, are applied to the clicks[3]. That sequence of validations is a business rule and, as such, we should protect it. We might keep this version of ClickValidationTest only if we had some outer test protecting the desired sequence of validations.

This other version of the tests, on the other hand, keeps protecting the business rule:

Notice how this version of the tests keeps in its setup the knowledge of which sequence of validations should be used, and how it only uses test doubles for NoBotClickValidator.

4. Avoid exposing internals.

The fact that we’re injecting into ClickValidation an object, ClickParamsValidator, that we realized we didn’t need to double, is a smell which points to the possibility that ClickParamsValidator is an internal detail of ClickValidation rather than its peer. So by injecting it, we’re coupling the users of ClickValidation, or at least the code that creates it, to an internal detail of ClickValidation: ClickParamsValidator.

A better version of this code would hide ClickParamsValidator by instantiating it inside ClickValidation’s constructor:
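A sketch of this version, assuming (as discussed above) that ClickParamsValidator needs no collaborators:

```java
import java.util.List;

public class ClickValidation {
  private final List<ClickValidator> validators;

  public ClickValidation(NoBotClickValidator noBotClickValidator) {
    // ClickParamsValidator is an internal detail, so it's instantiated here instead of injected;
    // the knowledge of the sequence of validations lives in this class again
    this.validators = List.of(new ClickParamsValidator(), noBotClickValidator);
  }

  public boolean isValid(Click click) {
    return validators.stream().allMatch(validator -> validator.isValid(click));
  }
}
```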

With this change ClickValidation recovers the knowledge of the sequence of validations which in the previous section was located in the code that created ClickValidation.

There are some stereotypes that can help us identify real collaborators (peers)[4]:

  1. Dependencies: services that the object needs from its environment so that it can fulfill its responsibilities.
  2. Notifications: other parts of the system that need to know when the object changes state or performs an action.
  3. Adjustments or Policies: objects that tweak or adapt the object’s behaviour to the needs of the system.

Following these stereotypes, we could argue that NoBotClickValidator is also an internal detail of ClickValidation and shouldn’t be exposed to the tests by injecting it. Hiding it, we’d arrive at this other version of ClickValidation:
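A sketch of this last version, in which only the real dependencies are injected:

```java
import java.util.List;

public class ClickValidation {
  private final List<ClickValidator> validators;

  // only real dependencies (ports) are injected; every validator is an internal detail
  public ClickValidation(BotClickDetector botClickDetector, Logger logger) {
    this.validators = List.of(
        new ClickParamsValidator(),
        new NoBotClickValidator(botClickDetector, logger));
  }

  public boolean isValid(Click click) {
    return validators.stream().allMatch(validator -> validator.isValid(click));
  }
}
```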

in which we have to inject the real dependencies of the validation, and no internal details are exposed to the client code. This version is very similar to the one we’d have got using tests doubles only for infrastructure.

The advantage of this version would be that its tests would know the least possible about ClickValidation. They’d know only ClickValidation’s boundaries, marked by the ports injected through its constructor, and ClickValidation’s public API. That would reduce the coupling between tests and production code, and facilitate refactorings of the validation logic.

The drawback is that the combination of test cases in ClickValidationTest would grow, and many of those test cases would talk about situations happening at the validation boundaries that might be far apart from ClickValidation’s callers. This might make the tests hard to understand, especially if some of the validations have complex logic. When this problem gets severe, we can reduce it by injecting, and using test doubles for, the very complex validators. This is a trade-off in which we accept some coupling with the internals of ClickValidation in order to improve the understandability of its tests. In our case, the bot detection was one of those complex components, so we decided to test it separately and inject it into ClickValidation so we could double it in ClickValidation’s tests, which is why we kept the penultimate version of ClickValidation, in which we were injecting the click-not-made-by-a-bot validation.

Conclusion.

In this post, we used an example to show how, by listening to the tests[5], we can detect possible design problems, and how we can use that feedback to improve both the design of our code and its tests, when changes that expose those design problems are required.

In this case, the initial tests were fragile because the production code was procedural and had too many responsibilities. The tests were also fragile because they were using test doubles for some collaborators when it wasn’t worth doing so.

Then we showed how refactoring the original code to be more object-oriented and separating its responsibilities better could remove some of the fragility of the tests. We also showed how reducing the use of test doubles to only those collaborators that really need to be substituted can improve the tests and reduce their fragility. Finally, we showed how we can go too far in trying to make the tests flexible and robust and accidentally stop protecting a business rule, and how a less flexible version of the tests can fix that.

When faced with fragility due to coupling between tests and the code being tested caused by using test doubles, it’s easy and very common to “blame the mocks”. We believe, though, that it is more productive to listen to the tests and notice which improvements to our design they are suggesting. If we act on the feedback that test doubles give us about our design, we can use them to our advantage, as powerful feedback tools[6] that help us improve our designs, instead of just suffering and blaming them.

Acknowledgements.

Many thanks to my Codesai colleagues Alfredo Casado, Fran Reyes, Antonio de la Torre and Manuel Tordesillas, and to my Aprendices colleagues Paulo Clavijo, Álvaro García and Fermin Saez for their feedback on the post, and to my colleagues at LIFULL Connect for all the mobs we enjoy together.

Footnotes:

[2] See Test Smell: Everything is mocked by Steve Freeman, where he talks about things you shouldn't be substituting with test doubles.
[3] Thanks Alfredo Casado for detecting that problem in the first version of the post.
[4] From Growing Object-Oriented Software, Guided by Tests > Chapter 6, Object-Oriented Style > Object Peer Stereotypes, page 52. You can also read about these stereotypes in a post by Steve Freeman: Object Collaboration Stereotypes.
[5] Difficulties in testing might be a hint of design problems. Have a look at this interesting series of posts about listening to the tests by Steve Freeman.
[6] According to Nat Pryce mocks were designed as a feedback tool for designing OO code following the 'Tell, Don't Ask' principle: "In my opinion it's better to focus on the benefits of different design styles in different contexts (there are usually many in the same system) and what that implies for modularisation and inter-module interfaces. Different design styles have different techniques that are most applicable for test-driving code written in those styles, and there are different tools that help you with those techniques. Those tools should give useful feedback about the external and *internal* quality of the system so that programmers can 'listen to the tests'. That's what we -- with the help of many vocal users over many years -- designed jMock to do for 'Tell, Don't Ask' object-oriented design." (from a conversation in Growing Object-Oriented Software Google Group).

I think that, if your design follows a different OO style, it might be preferable to stick to a classical TDD style, which limits the use of test doubles almost exclusively to infrastructure and undesirable side effects.

Saturday, May 25, 2019

The curious case of the negative builder

Recently, one of the teams I’m coaching at my current client asked me to help them with a problem they were experiencing while using TDD to add and validate new mandatory query string parameters[1]. This is a shortened version (validating fewer parameters than the original code) of the tests they were having problems with:

and this is the implementation of the QueryStringBuilder used in this test:

which is a builder with a fluent interface that follows a typical implementation of the pattern to the letter. There are even libraries that help you create this kind of builder automatically[2].
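
To make the discussion concrete, here is a minimal sketch of such a typical fluent builder; the parameter names are hypothetical, and the original code validated more of them:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class QueryStringBuilder {
        private final Map<String, String> params = new LinkedHashMap<>();

        // Each method adds a parameter; the default is an empty query string.
        public QueryStringBuilder withApiKey(String apiKey) {
            params.put("apiKey", apiKey);
            return this;
        }

        public QueryStringBuilder withTimestamp(String timestamp) {
            params.put("timestamp", timestamp);
            return this;
        }

        public String build() {
            return params.entrySet().stream()
                .map(entry -> entry.getKey() + "=" + entry.getValue())
                .collect(Collectors.joining("&"));
        }
    }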

However, in this particular case, implementing the QueryStringBuilder following this typical recipe causes a lot of problems. Looking at the test code, you can see why.

To add a new mandatory parameter, for example sourceId, following the TDD cycle, you would first write a new test asserting that a query string lacking the parameter should not be valid.
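
With a typical builder like the sketch above, that new test might look like this (the validator object and its isValid method are hypothetical names, not the team’s actual code):

    @Test
    public void query_string_without_source_id_is_not_valid() {
        // sourceId is deliberately not added to the query string.
        String queryString = new QueryStringBuilder()
            .withApiKey("anyApiKey")
            .withTimestamp("anyTimestamp")
            .build();

        assertFalse(validator.isValid(queryString));
    }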

So far so good, but the problem comes when you change the production code to make this test pass: at that moment you’ll see how the first test, which was asserting that a query string with all the parameters was valid, starts to fail (if you compare the query string in that test with the one in the new test, you’ll see that they are the same). Not only that: all the previous tests that were asserting that a query string was invalid because a given parameter was lacking won’t be “true” anymore, because after this change they could fail for more than one reason.

So to carry on, you’d need to fix the first test and also change all the previous ones so that they fail again only for the reason described in the test name:

That’s a lot of rework on the tests only for adding a new parameter, and the team had to add many more. The typical implementation of a builder was not helping them.

The problem we’ve just explained can be avoided by choosing a default value that creates a valid query string and using what I call “a negative builder”, a builder with methods that remove parts instead of adding them. So we refactored together the initial version of the tests and the builder, until we got to this new version of the tests:

which used a “negative” QueryStringBuilder:
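
A sketch of what such a negative builder might look like, under the same hypothetical parameter names as before: it starts from a complete, valid query string, and each method removes one part.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class QueryStringBuilder {
        private final Map<String, String> params;

        private QueryStringBuilder(Map<String, String> params) {
            this.params = params;
        }

        // The default value is a valid query string containing all
        // the mandatory parameters.
        public static QueryStringBuilder valid() {
            Map<String, String> defaults = new LinkedHashMap<>();
            defaults.put("apiKey", "anyApiKey");
            defaults.put("timestamp", "anyTimestamp");
            return new QueryStringBuilder(defaults);
        }

        public QueryStringBuilder withoutApiKey() {
            params.remove("apiKey");
            return this;
        }

        public QueryStringBuilder withoutTimestamp() {
            params.remove("timestamp");
            return this;
        }

        public String build() {
            return params.entrySet().stream()
                .map(entry -> entry.getKey() + "=" + entry.getValue())
                .collect(Collectors.joining("&"));
        }
    }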

After this refactoring, to add the sourceId we wrote this test instead:
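
Under the same assumptions, a sketch of that test:

    @Test
    public void query_string_without_source_id_is_not_valid() {
        String queryString = QueryStringBuilder.valid()
            .withoutSourceId()
            .build();

        assertFalse(validator.isValid(queryString));
    }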

which only requires updating the valid method in QueryStringBuilder and adding a method that removes the sourceId parameter from a valid query string.
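
Sketching that change under the same assumptions as before:

    // valid() now includes the new mandatory parameter...
    public static QueryStringBuilder valid() {
        Map<String, String> defaults = new LinkedHashMap<>();
        defaults.put("apiKey", "anyApiKey");
        defaults.put("timestamp", "anyTimestamp");
        defaults.put("sourceId", "anySourceId");
        return new QueryStringBuilder(defaults);
    }

    // ...and a new method removes it from a valid query string.
    public QueryStringBuilder withoutSourceId() {
        params.remove("sourceId");
        return this;
    }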

Now when we changed the code to make this last test pass, no other test failed or started to have descriptions that were not true anymore.

Leaving behind the typical recipe and adapting the idea of the builder pattern to the context of the problem at hand led us to a curious implementation, a “negative builder”, that made the tests easier to maintain and improved our TDD flow.

Acknowledgements.

Many thanks to my Codesai colleagues Antonio de la Torre and Fran Reyes, and to all the colleagues of the Prime Services Team at LIFULL Connect for all the mobs we enjoy together.

Footnotes:

[1] Currently, this validation is not done in the controller anymore. The code shown above belongs to a very early stage of an API we're developing.
[2] Have a look, for instance, at lombok's @Builder annotation for Java.

Tuesday, September 11, 2018

Open courses in the Canary Islands

During the first half of this year we hadn't been able to run any open TDD course. We were absorbed by our daily work, the agenda of events we took part in, and all the meetings, decisions and paperwork involved in launching our cooperative. It was a pity, because we love running open courses, thanks to the enthusiasm and the eagerness to work and learn of the people who attend them. These courses are also very interesting for us because they let us meet people who sometimes end up collaborating with Codesai. That was, for instance, my case, or Luis's.
That's why, right after incorporating the cooperative, Fran and I decided to take the plunge and announce that we would run an open course in Tenerife one month later, in mid-July, just before the start of the holidays for many people. We had very little time to announce it, but it was a very good opportunity to start moving as a cooperative and to keep refining the new material we had already used in the TDD courses we taught this year at Merkle and Gradiant. Once we started organising it, I thought I could teach another course in Gran Canaria with Ronny to make the most of the trip. So in the end we taught two open courses in one week: on July 9th and 10th in Las Palmas de Gran Canaria, and on July 12th and 13th in Santa Cruz de Tenerife.

Six people attended the Gran Canaria course. Most of them worked at AIDA, a company we had worked with for many years and with which we have a very close relationship. The course was intense because the attendees' levels of knowledge were very uneven. We worked a lot with the pairs during the practical exercises, and we even did an extra session a week later at AIDA's offices, in which we finished the last exercise of the course. Ronny and I ended up very satisfied, and we received very good feedback from the attendees, both in person and through the anonymous feedback forms I usually send a few weeks later. The course also allowed us to experiment with some new sections on how to use test doubles sustainably, and we gathered plenty of information to apply in future courses.

The following day I flew to Tenerife, where Fran was waiting for me at the airport. We went to his place, made the final preparations, and rested until the next day, when the course started. Seven people from different companies attended the Tenerife course. Most of them were former colleagues of Fran's, including some of his old mentors, so it was a very special course for him. I enjoyed it a lot too. In this course the attendees' level was quite high, and very interesting debates arose. Again, we received very good feedback at the end.

One novelty of these courses was that, for the first time, we offered a 50% discount for groups that are underrepresented in technology. This is a discount we have decided to offer from now on in our open courses.
Looking at the courses in perspective a few months later, I think we learned several things:
  • One month is very little time to promote a course (of course ㋡), especially in a market as small as the Canary Islands. Only thanks to our network of contacts and to companies we had already worked with in the past, individually or as Codesai, were we able to get enough people to make the course profitable.
  • The new material is working quite well. We're managing to keep the level of satisfaction the previous course had, while at the same time being able to go deeper into topics the previous material didn't cover, or only covered very superficially.
  • The communication channels we usually use to promote our courses, Twitter and LinkedIn, were not enough to reach people from groups underrepresented in technology. Only two attendees took advantage of the 50% discount. We need to find other ways to reach these people. In fact, if you're reading this and know people who might be interested, please spread the word.
Personally, I'm very glad we decided to push these courses forward: because I met very interesting and pleasant people I hope to keep in touch with (thank you all very much!), for the satisfaction of the work itself, and for the good times and conversations I had with my colleagues Ronny and Fran. This is very valuable and important because, being distributed across different regions (and even countries) and working for different clients, we miss the human contact with our colleagues. That's why every course and/or on-site consulting engagement we do in pairs is a great opportunity to strengthen the bond between us. Ronny, Fran, thank you so much for the welcome and all the good times.

Before finishing, I'd like to thank Manuel Tordesillas for the great job he did preparing all the course katas in C#.
PS: Our next open TDD course will take place on October 18th and 19th in Barcelona, and again we'll offer a 50% discount for groups underrepresented in technology. Spread the word ㋡

Originally published in Codesai's blog.