Record of experiments, readings, links, videos and other things that I find on the long road.
Friday, August 29, 2025
The Productivity Trap: Perils and Promises of AI Coding
Interesting Conversation: Will AI Code Create MOUNTAINS Of Technical Debt?
Monday, August 25, 2025
Interesting Talk: Closing the Knowledge Gap in Your Legacy Code with AI
Interesting Talk: Legacy Code Survival Guide: From Dread to Done Right
Interesting Talk: Exploring a complex codebase with AI
Friday, August 22, 2025
Interesting Talk: "Reading code under the influence of one’s emotions"
Interesting Talk: Tools and practices to help you deal with legacy code
Wednesday, August 6, 2025
Interesting Talk: "Moldable Development in Practice — Patterns for Legacy Modernization"
Thursday, February 2, 2023
Interesting Podcast: "Legacy JavaScript with David Neal"
Wednesday, May 4, 2022
Interesting Talk: "The deep synergy between testability and good design"
Tuesday, February 22, 2022
Interesting Podcast: "Michael Feathers: Looking Back at Working Effectively with Legacy Code"
Monday, March 1, 2021
Podcasts about caring tasks on The Big Branch Theory
These are the two episodes I took part in. If the idea sounds intriguing, the post The value of caring goes deeper into this narrative and explains how it originated and in what context.
Thursday, June 11, 2020
The value of caring
Introduction
We’d like to tell you about a narrative that has been very useful for us in the coaching work we have been doing with several teams during the last year.
Origin
It all started during a consulting engagement that Joan Valduvieco and I did at the beginning of 2019 at Trovit. José Carlos Gil and Edu Mateu had brought us in to help Trovit’s B2B team. We spent a week with the team asking questions, observing their work and running some group dynamics to try to understand how we might help them.
After that week of “field work” we had gathered a ton of notes, graphs and insights that we needed to put in order and analyze. This is always a difficult task because it means analyzing a sociotechnical system (team, knowledge, technology, existing code, organization, pressure, changes,…), which is a complex system in constant flux. In a way, you have to accept that this is something you can’t fully understand, much less in a short period of time.
Having that in mind, we tried to do our best to get a first approximation by representing the different dynamics and processes we had observed in several causal loop diagrams. This work helped us clarify our thinking and highlight how habits, knowledge, beliefs, culture and practices were creating vicious cycles (corresponding to positive feedback loops in the causal loop diagram[1]) that were amplifying and preserving destructive effects for the team and its software. These cycles made their system less habitable and sustainable, and created inefficiencies and loss of value.
After that we started to think about strategies that might break those vicious cycles, either by reducing the effect of some node (a problematic habit, practice or belief), by removing an interaction between nodes in a positive loop, or by introducing negative feedback into the system (new practices and habits) to stabilize it.
Donella H. Meadows in her book Thinking in Systems: A Primer talks about different types of leverage points which are places within a complex system where you can try to apply what we call change levers to make the system evolve from its current dynamics into another that might be more interesting for the team.
We detected several change levers that might be applied to the system at different leverage points to improve the sustainability of the system, such as, “Improving Technical Skills”, “Knowledge Diffusion”, “Knowledge Crystallization”, “Changing Habits”, “All Team Product Owning” or “Remote First”[2].
All of them attacked vicious cycles that were entrenched in the team’s dynamics, and all were valuable in themselves, but we were not sure they would be enough to change the system’s current dynamics. Something we observed during our field work was that the team was very skeptical about the chances of success of any initiative to improve its situation. They had gone through several failed attempts at “Agile” transformation before, and this had left them in a kind of state of learned helplessness. Why had those previous attempts failed? Would ours succeed using our change levers?
We started to realize that there was a deeper force at play that was exacerbating the rest of the problems and would reduce the effect of any of the other change levers. We realized that they would be useless unless the team had enough slack and company support to apply them. The deeper and stronger force that was inhibiting the possibility of having that slack and support was the very conception of what value meant for the company: it was a cultural problem.

We then came up with a new change lever: “Redefinition of Value”. This was the most powerful change lever of all the ones we had detected because it was a cultural change[3], and it would increase the probabilities of success of all other change levers. Being a cultural change also made it the most difficult to apply.
Redefinition of Value
What was this “Redefinition of Value” change lever about?
The team’s culture understood value only as producing features for their clients as quickly as possible. This definition of value includes only the tasks that directly produce features, and excludes many tasks that, while not directly producing features, are necessary for the sustainability of the system (the business and the team) itself. The first kind of work is called productive work and the second kind is called caring work[4].
Believing that only one type of work has value (productive work), and then focusing only on that type of work, is an extractive micro-optimization that might end up destabilizing the system[5].

The redefinition of value we proposed was that producing features for the client as quickly as possible is not the only valuable work: there is also value in keeping the business and the team sustainable. Aside from working on productive tasks, you need to devote energy and time to caring tasks, which are concerned with keeping the health of the ecosystem composed of the code, the team and the client, so that it can continue evolving, being habitable for the team and producing value. We think that this kind of work (caring work) has value and is strategic for a company. If you think about it, at bottom, this is about seeing the team as the real product and establishing a healthier and more durable relationship with clients.
This idea of caring work comes from economic studies from a gender perspective. In feminist economics caring work is defined as, “those occupations that provide services that help people develop their capabilities, or the ability to pursue the aspects of their lives that they value” or “necessary occupations and care to sustain life and human survival”.
We thought that for this redefinition of value to be successful, it needed to be stated very clearly to the team from above. This clarity is crucial to avoid the developers having to resolve the conflicts that arise when the value of caring tasks is not clear. In those cases, it’s often caring work that gets neglected.
Those conflicts are actually strategic and, as such, they should be resolved at the right level so that the team receives a clear message that gives them focus, and the peace of mind of knowing that they are always working on something that is really valued by the company.
In many companies the value of caring only appears in company statements (corporate language), but it’s not part of the real culture, the real system of values of the company. This situation creates a kind of doublespeak that might be very harmful. A way to avoid that might be putting your money where your mouth is.
So with all these ingredients we cooked up a presentation for the team lead, the CTO and the CPO of the company[6], to explain the strategy we would like to follow, the cultural change involved in the redefinition of value that we thought necessary, and how we thought the possible conflicts between the two types of work should be resolved at their level. They listened to us and decided to try. They were very brave, and this decision enabled a wonderful change in the team we started to work with[7]. The success of this experiment made it possible for other teams to start experimenting with this redefinition of value as well.
Is this not the metaphor of technical debt in a different guise?[8]
We think that the caring work narrative is not equivalent to technical debt.
The technical debt metaphor has evolved a lot from the concept that Ward Cunningham originally coined. With time the metaphor was extended to cover more than what he initially meant[9]. This extended use of the metaphor has been criticized by some software practitioners[10]. Leaving this debate aside, let’s focus on how most people currently understand the technical debt metaphor:
“Design or implementation constructs that are expedient in the short term but that set up a technical context that can make a future change more costly or impossible. Technical debt is a contingent liability whose impact is limited to internal systems qualities - primarily, but not only, maintainability and evolvability.” from Managing Technical Debt: Reducing Friction in Software Development
Technical debt describes technical problems that cause friction in software development and how to manage them.
On the other hand, the caring work narrative addresses the wider concern of sustainability in a software system, considering all its aspects (social, economic and technical), and how both productive and caring work are key to keeping a sustainable system. We think that makes the caring work narrative a wider concept than technical debt.
This narrative has created a cultural shift that has allowed us not only to manage technical debt better, but also to create room for activities aimed at preventing technical debt, empowering the team, amplifying its knowledge, attracting talent, etc. We think that considering caring work as valuable as productive work placed us in a plane of dialogue that was more constructive than the financial metaphor behind technical debt.

How are we applying it at the moment?
Applying the caring work narrative depends highly on the context of each company. Please do not take this as “a recipe for applying caring work” or “the way to apply caring work”. What is important is to understand the concept; then you will have to adapt it to your own context.
In the teams we coach we are using something called caring tasks (descriptions of caring work) along with a variation of the concerns mechanism[11], and devoting percentages of time in every iteration to work on caring tasks. The developers are the ones who decide which caring work is needed. These decisions are highly contextual and involve trade-offs related to asymmetries initially found in each team and in their evolution. There are small variations in each team, and the way we apply them in each team has evolved over time. You can hear about some of those variations in the two episodes that The Big Branch Theory Podcast will devote to this topic.
We plan to write in the near future another post about how we are using the concerns mechanism in the context of caring work.
Conclusions
We have been using the redefinition of value given by the caring work for more than a year now. Its success in the initial team made it possible for other teams to start experimenting with it as well. Using it in new teams has taught us many things and we have introduced local variations to adapt the idea to the realities of the new teams and their coaches.
So far, it’s working well for us, and we feel it has helped us in some aspects where using the technical debt metaphor is difficult, for example, improving processes or owning the product.
Some of the teams worked only on legacy systems, and other teams worked on both greenfield and legacy systems (all the legacy systems had, and still have, a lot of technical debt).
We think it is important to consider that the teams we have been collaborating with were in a context of extraction[12] in which there is already a lot of value to protect.

Due to the coronavirus crisis some of the teams have started to work in an exploration context. This is a challenge, and we wonder how the caring work narrative will evolve in this context and in a scarcity scenario.
To finish we’d like to add a quote from Lakoff[13]:
“New metaphors are capable of creating new understandings and, therefore, new realities”
We think that the caring work narrative might help create a reality in which both productive and caring work are valued and more sustainable software systems are more likely.
Acknowledgements
Thanks to Joan Valduvieco, Beatriz Martín and Vanesa Rodríguez for all the stimulating conversations that led to the idea of applying the narrative of caring work in software development.
Thanks to José Carlos Gil and Edu Mateu for inviting us to work with Lifull Connect. It has been a great symbiosis so far.
Thanks to Marc Sturlese and Hernán Castagnola for daring to try.
Thanks to Lifull’s B2B and B2C teams and all the Codesai colleagues that have worked with them, for all the effort and great work to take advantage of the great opportunity that caring tasks created.
Finally, thanks to my Codesai colleagues and to Rachel M. Carmena for reading the initial drafts and giving me feedback.
Notes
[1] Positive in this context does not mean something good. The feedback is positive because it makes the magnitude of the perturbation increase (it positively feeds it back).
[2] Those were the names we used when we presented what we have learned during the consultancy.
[3] In systems thinking a leverage point is a place within a complex system (a corporation, an economy, a living body, a city, an ecosystem) where a shift can be applied to produce changes in the whole system’s behavior. It’s a low leverage point if a small shift produces a small behavioral change, and a high leverage point if a small shift causes a large behavioral change. According to Donella H. Meadows the most effective place to intervene in a system is:
“The mindset or paradigm out of which the system — its goals, power structure, rules, its culture — arises”.
As you can imagine this is the most difficult thing to change as well. To know more read Leverage Points: Places to Intervene in a System.
[4] Also known as reproductive work. This idea comes from a gender perspective of economics. You can learn more about it reading the Care work and Feminist economics articles in Wikipedia.
[5] Sadly we observe this phenomenon at all scales: a business, a relationship, an ecosystem, the planet…
[6] Edu Mateu, Marc Sturlese and Hernán Castagnola were then the B2B team lead, the CTO and the CPO, respectively.
[7] B2B was the first team we worked with. Fran Reyes and I started coaching them in February 2019, and after a couple of months Antonio de la Torre and Manuel Tordesillas joined us. Thanks to the great work of this team, other teams started using the caring work narrative around six months later.
[8] Thanks to Fran Reyes and Alfredo Casado for the interesting discussions about technical debt that helped me write this part.
[9] You can listen to Ward Cunningham himself explaining what he actually meant by the technical debt metaphor in this video.
[10] Two examples: A Mess is not a Technical Debt by Robert C. Martin and Bad code isn’t Technical Debt, it’s an unhedged Call Option by Steve Freeman.
[11] The concerns mechanism is described by Xavi Gost in his talk CDD (Desarrollo dirigido por consenso).
[12] To learn more, watch 3X Explore/Expand/Extract by Kent Beck or read Kent Beck’s 3X: Explore, Expand, Extract by Antony Marcano and Andy Palmer.
[13] From Metaphors We Live By by George Lakoff and Mark Johnson.
References
Books
- Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, Tom DeMarco
- Patterns of Software: Tales from the Software Community, Richard P. Gabriel
- Filters against Folly: How to Survive despite Economists, Ecologists, and the Merely Eloquent, Garrett Hardin
- Managing Technical Debt: Reducing Friction in Software Development, Philippe Kruchten, Robert Nord, Ipek Ozkaya
Articles
- Leverage Points: Places to Intervene in a System, Donella H. Meadows
- Positive feedback (or exacerbating feedback) (Wikipedia)
- Leverage points: places to intervene in a system (Wikipedia)
- Twelve leverage points (Wikipedia)
- Care work (Wikipedia)
- Feminist economics (Wikipedia)
- System Dynamics (Wikipedia)
- Causal loop diagram (Wikipedia)
- Bad code isn’t Technical Debt, it’s an unhedged Call Option, Steve Freeman
- Kent Beck’s 3X: Explore, Expand, Extract, Antony Marcano, Andy Palmer
Talks
Monday, May 25, 2020
Interesting Talk: "Escaping the Technical Debt Cycle"
Saturday, May 16, 2020
Interesting Talk: "From Legacy Chaos to the Promised Land of DDD"
Friday, April 24, 2020
Interesting Talk: "How to rewrite, a bit at a time"
Wednesday, October 3, 2018
Giving new life to existing Om legacy SPAs with re-om
Introduction.
We’re pleased to announce that our client GreenPowerMonitor has allowed us to open-source re-om, an event-driven functional framework which is giving new life to an existing legacy SPA that uses Om.
Why re-om?
1. The problem with our SPA: imperative programming everywhere.
We are working on a legacy ClojureScript SPA, called horizon, that uses Om.
This SPA might have had some kind of architecture at some point in the past, but technical debt, lack of documentation and design flaws had blurred it. Business logic (in this case, pure logic deciding how to interact with the user, and data transformations) and effectful code were not clearly separated.
This lack of separation of concerns was making the SPA hard to evolve because its code was difficult to understand and to test. This resulted in very low test coverage, which further amplified the problems of evolving the code safely and at a sustainable pace. This created a vicious circle.
What’s more, conflating pure and effectful code destroys the advantages of functional programming. Even if you’re using a language like Clojure, without a clear isolation between pure and effectful code you’ll end up doing imperative programming.
2. A possible solution: effects and coeffects.
Using effects and coeffects is a way of getting the separation of concerns we were lacking. They help achieve a clear isolation of effectful code and business logic that makes the interesting logic pure and, as such, really easy to test and reason about. With them we can really enjoy the advantages of functional programming.
Any piece of logic using a design based on effects and coeffects is comprised of three parts:
- Extracting all the needed data from “the world” (using coeffects for getting application state, getting component state, getting DOM state, etc).
- Using pure functions to compute the description of the side effects to be performed (returning effects for updating application state, sending messages, etc) given what was extracted from “the world” in the previous step (the coeffects).
- Performing the side effects described by the effects returned by the pure functions executed in the previous step.
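The three steps above can be sketched in a few lines. This is an illustrative JavaScript sketch, not re-om's actual API; all names here are made up:

```javascript
// Hypothetical sketch of an effects & coeffects cycle (illustrative names only).

// 1. Coeffect handlers extract data from "the world".
const coeffectHandlers = {
  "app-state": (world) => world.appState,
  "now": (world) => world.now,
};

function extractCoeffects(world, wanted) {
  const cofx = {};
  for (const key of wanted) cofx[key] = coeffectHandlers[key](world);
  return cofx;
}

// 2. A pure event handler: coeffects in, effect descriptions out.
function incrementCounter(cofx) {
  const appState = cofx["app-state"];
  return {
    "update-app-state": { ...appState, counter: appState.counter + 1 },
  };
}

// 3. Effect handlers perform the described side effects.
const effectHandlers = {
  "update-app-state": (world, newState) => { world.appState = newState; },
};

function performEffects(world, effects) {
  for (const [key, value] of Object.entries(effects)) effectHandlers[key](world, value);
}

// Wiring the three steps together: only steps 1 and 3 touch "the world".
const world = { appState: { counter: 0 }, now: Date.now() };
const cofx = extractCoeffects(world, ["app-state"]);
performEffects(world, incrementCounter(cofx));
```

Notice that `incrementCounter` is trivially testable: it never touches the world, it just maps plain data to plain data.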
At the beginning, when re-om wasn’t yet accepted by everyone in the team, we used manually created coeffects and effects to improve some parts of the SPA (have a look at Improving legacy Om code (I) and Improving legacy Om code (II)), but this can quickly get cumbersome.
3. An event-driven framework with effects and coeffects: re-frame.
Some of us had worked with effects and coeffects before while developing SPAs with re-frame, and had experienced how good it is. After working with re-frame, when you come back to horizon, you realize how a good architecture can make a dramatic difference in clarity, testability, understandability and ease of change.
Having a framework like re-frame removes most of the boilerplate of working with effects and coeffects, creating clear boundaries and constraints that separate pure code from effectful code, and giving you a very clear flow for adding new functionality that’s very easy to test and protect from regressions. In that sense re-frame’s architecture can be considered what Jeff Atwood defined as a pit of success because it is:
“a design that makes it easy to do the right things and annoying (but not impossible) to do the wrong things.”
4. Why not use re-frame then?
In principle using re-frame in our SPA would have been wonderful, but in practice this was never really an option.
A very important premise for us was that a rewrite was out of the question because it would have blocked us from producing any new feature for too long. We needed to continue developing new features. So we decided to follow the strangler application pattern, an approach that would allow us to progressively evolve our SPA towards an architecture like re-frame’s while being able to keep adding new features all the time. The idea is that all new code would use the new architecture, whenever pragmatically possible, and that we would change only bit by bit those legacy parts that needed to change. This means that, during a probably long period of time, the new architecture would have to coexist inside the SPA with the old imperative way of coding.
Although following the strangler application pattern was not incompatible with introducing re-frame, there were more things to consider. Let’s examine more closely what starting to use re-frame would have meant for us:
4.1. From Om to reagent.
re-frame uses reagent as its interface to React. Although I personally consider reagent to be much nicer than Om because it feels more ‘Clojurish’, as it is less verbose and hides React’s complexity better than Om (Om is a thinner abstraction over React than reagent), the amount of view code and components developed using Om during the nearly two years of life of the project made changing to reagent too huge a change. GreenPowerMonitor had invested too heavily in Om in our SPA for this change to be advisable.
If we had chosen to start using re-frame, we would have faced a huge amount of work. Even following a strangler application pattern, it would have taken quite some time to abandon Om, and in the meantime Om and reagent would have had to coexist in our code base. This coexistence would have been problematic because we’d have had to either rewrite some components or add complicated wrapping to reuse Om components from reagent ones. It would have also forced our developers to learn and develop with both technologies.
Those reasons made us abandon the idea of using re-frame and choose a less risky and more progressive way to reach our real goal, which was having the advantages of re-frame’s architecture in our code.
5. re-om is born.
André and I decided to do a spike to write an event-driven framework using effects and coeffects. After having a look at re-frame’s code it turned out it wouldn’t be too big of an undertaking. Once we had it done, we called it re-om as a joke.
At the beginning we had only events with effects and coeffects, and started to try it in our code. From the very beginning we saw great improvements in the testability and understandability of the code. This original code, which was independent of any view technology, was improved during several months of use. Most of this code ended up being part of reffectory.
Later our colleague Joel Sánchez added subscriptions to re-om. This radically changed the way we approach the development of components. They started to become dumb view code with nearly no logic inside, which made cumbersome component integration testing nearly unnecessary. Another surprising effect of using re-om was that we were also able to keep less and less state inside controls, which made things like validations or transformations of the state of controls composed of other controls much easier.
A really important characteristic of re-om is that it’s not invasive. Since it was thought from the very beginning to retrofit a legacy SPA to start using an event-driven architecture with an effects and coeffects system, it’s ideal when you want to evolve a code base gradually following a strangler application pattern. The only thing we need to do is initialize re-om passing horizon’s app-state atom. From then on, re-om subscriptions will detect any changes made by the legacy imperative code to re-render the components subscribed to them, and it’ll also be able to use effect handlers we wrote on top of it to mutate the app-state using horizon’s lenses and do other effects that “talk” to the legacy part.
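The mechanics of such non-invasive subscriptions can be sketched as follows. This is a greatly simplified JavaScript sketch, not re-om's real implementation, and all names are made up: a watched, atom-style state container re-runs subscribers whenever any code, legacy or new, swaps its value.

```javascript
// Simplified sketch of atom-style subscriptions (illustrative only).
function createAtom(initialValue) {
  let value = initialValue;
  const watchers = [];
  return {
    deref: () => value,
    // Legacy imperative code and effect handlers both go through swap,
    // so subscribers see every change regardless of who made it.
    swap: (fn) => {
      value = fn(value);
      watchers.forEach((w) => w(value));
    },
    addWatch: (fn) => watchers.push(fn),
  };
}

// A subscription derives a view of the state and notifies only on change.
function subscribe(atom, selector, onChange) {
  let last = selector(atom.deref());
  atom.addWatch((newValue) => {
    const next = selector(newValue);
    if (next !== last) {
      last = next;
      onChange(next); // e.g. re-render the subscribed component
    }
  });
}

// Usage: a "component" re-renders only when its slice of app-state changes.
const appState = createAtom({ user: { name: "Ada" }, counter: 0 });
const renders = [];
subscribe(appState, (s) => s.user.name, (name) => renders.push(name));
appState.swap((s) => ({ ...s, counter: s.counter + 1 })); // name unchanged: no render
appState.swap((s) => ({ ...s, user: { name: "Grace" } })); // triggers a render
```

The key design point is that subscriptions watch the shared state container itself, so they cannot be bypassed by legacy code mutating state behind the framework's back.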
This way we could start carving islands of pure functional code inside our SPA’s imperative soup, and introduced some sanity to make its development more sustainable.
re-om & reffectory
We’ve been using re-om for the last six months and it has really made our lives much easier. Before open-sourcing it, we decided to extract from re-om the code that was independent of any view technology. This code is now part of reffectory, and it might be used as the base for creating frameworks similar to re-om for other view technologies, like for example rum, or even for pure Clojure projects.
Acknowledgements.
We’d like to thank GreenPowerMonitor for open-sourcing re-om and reffectory, and all our colleagues at GreenPowerMonitor, including those now in other companies like André and Joel, for using it, giving feedback and contributing to improve it during all this time. We’d also like to thank the re-frame project, which we think is a really wonderful way of writing SPAs and which heavily inspired re-om.
Give it a try.
Please do have a look and try re-om and reffectory. We hope they might be as useful to you as they have been for us.
Originally published in Codesai's blog.
Saturday, May 19, 2018
Improving legacy Om code (II): Using effects and coeffects to isolate effectful code from pure code
Introduction.
In the previous post, we applied the humble object pattern idea to avoid having to write end-to-end tests for the interesting logic of a hard to test legacy Om view, and managed to write cheaper unit tests instead. Then, we saw how those unit tests were far from ideal because they were highly coupled to implementation details, and how these problems were caused by a lack of separation of concerns in the code design.
In this post we’ll show a solution to those design problems using effects and coeffects that will make the interesting logic pure and, as such, really easy to test and reason about.
Refactoring to isolate side-effects and side-causes using effects and coeffects.
We refactored the code to isolate side-effects and side-causes from pure logic. This way, not only did testing the logic get much easier (the logic now being in pure functions), but the tests also became less coupled to implementation details. To achieve this we introduced the concepts of coeffects and effects.
The basic idea of the new design was:
- Extracting all the needed data from globals (using coeffects for getting application state, getting component state, getting DOM state, etc).
- Using pure functions to compute the description of the side effects to be performed (returning effects for updating application state, sending messages, etc) given what was extracted in the previous step (the coeffects).
- Performing the side effects described by the effects returned by the called pure functions.
The main difference in the code of horizon.controls.widgets.tree.hierarchy after this refactoring was that the event handler functions were moved back into it, and that they used the process-all! and extract-all! functions to perform the side-effects described by effects and to extract the values of the side-causes tracked by coeffects, respectively. The event handler functions are shown in the next snippet (to see the whole code click here):
Now all the logic in the companion namespace was comprised of pure functions, with neither asynchronous nor mutating code:
Thus, its tests became much simpler:
Notice how the pure functions receive a map of coeffects already containing all the extracted values they need from the “world”, and return a map with descriptions of the effects. This makes testing much easier than before, and removes the need to use test doubles.
Notice also how the test code is now around 100 lines shorter. The main reason for this is that the new tests know much less about how the production code is implemented than the previous ones. This made it possible to remove some tests that, in the previous version of the code, exercised branches which seemed reachable when testing implementation details, but which are actually unreachable when considering the whole behaviour.
Now let’s see the code that is extracting the values tracked by the coeffects:
which is using several implementations of the Coeffect protocol:
All the coeffects were created using factories to localize in only one place the “shape” of each type of coeffect. This indirection proved very useful when we decided to refactor the code that extracts the value of each coeffect, replacing its initial implementation as a conditional with its current implementation using polymorphism with a protocol.
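That refactoring, from one conditional to polymorphic dispatch, can be illustrated with a hedged JavaScript sketch (a handler registry standing in for a Clojure protocol; all names are made up):

```javascript
// Before: one conditional that knows about every coeffect type.
function extractCoeffectConditional(world, coeffect) {
  if (coeffect.type === "app-state") return world.appState;
  if (coeffect.type === "local-time") return world.clock();
  throw new Error(`Unknown coeffect: ${coeffect.type}`);
}

// After: each coeffect type registers its own extractor, the JS analogue
// of each type implementing a Coeffect protocol. Adding a new coeffect
// no longer means editing a central conditional.
const extractors = {};
function registerCoeffect(type, extractFn) {
  extractors[type] = extractFn;
}
registerCoeffect("app-state", (world) => world.appState);
registerCoeffect("local-time", (world) => world.clock());

function extractCoeffect(world, coeffect) {
  return extractors[coeffect.type](world, coeffect);
}

// A factory keeps each coeffect's "shape" in one place.
const appStateCoeffect = () => ({ type: "app-state" });

const sampleWorld = { appState: { counter: 7 }, clock: () => 1234 };
```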
These are the coeffects factories:
Now there was only one place where we needed to test side causes (using test doubles for some of them). These are the tests for extracting the coeffects values:
Very similar code processes the side-effects described by effects:
which uses different effects implementing the Effect protocol:
that are created with the following factories:
Finally, these are the tests for processing the effects:
Summary.
We have seen how by using the concept of effects and coeffects, we were able to refactor our code to get a new design that isolates the effectful code from the pure code. This made testing our most interesting logic really easy because it became comprised of only pure functions.
The basic idea of the new design was:
- Extracting all the needed data from globals (using coeffects for getting application state, getting component state, getting DOM state, etc).
- Computing in pure functions the description of the side effects to be performed (returning effects for updating application state, sending messages, etc) given what was extracted in the previous step (the coeffects).
- Performing the side effects described by the effects returned by the called pure functions.
Since the time we did this refactoring, we have decided to go deeper in this way of designing code and we’re implementing a full effects & coeffects system inspired by re-frame.
Acknowledgements.
Many thanks to Francesc Guillén, Daniel Ojeda, André Stylianos Ramos, Ricard Osorio, Ángel Rojo, Antonio de la Torre, Fran Reyes, Miguel Ángel Viera and Manuel Tordesillas for giving me great feedback to improve this post and for all the interesting conversations.