Monoliths and Decomposition: Making Better Architectural Decisions

How failure, lifecycle, and state shape architectural boundaries

1/30/2026 · systems

In my first job as a developer, I was hired to do web.

That part made sense. I had just come out of a frontend bootcamp, I was programming every day, and I felt genuinely confident in my ability to figure things out. I was active in a Discord full of people I’d gone through the program with, and riding the high of finally being “in it.”

The role came through a VP at a financial firm where I was working as a financial advisor, which meant I understood the business problem unusually well for someone so green. I had lived it and experienced his pain points firsthand. When I noticed a clear gap and a need for an internal application, I threw myself at it and, with the unbounded optimism of someone who hadn’t yet been humbled by production systems, sold myself as the solution. After all, how hard could it be?

The first thing I had to do was learn a more performant backend language. I was the quintessential baby dev, thinking I would put on my big boy pants and go toe-to-toe with seasoned kernel maintainers on GitHub. I was imagining getting PRs merged the same way kids imagine they are playing against Michael Jordan when shooting around in the driveway. “3 seconds to the buzzer and DEPLOYED! Charles Deane has won programming! And the crowd goes WILD!”

My first instinct was to go and learn Rust. After all, everyone on Reddit said it was super safe and super secure, and people on Reddit know everything, right? I spent 2 weeks walking over the hot coals of the Rust book, fresh out of a JS bootcamp where mutability had never so much as been mentioned, and I was racking up lock conditions faster than you could say “borrow checker.” It became readily apparent that maybe, for this specific REST backend, Rust might not be the best solution.

So I put my thinking cap back on and looked elsewhere. I landed on Go; unlike Rust, it felt approachable. I worked through the tutorials on the website over a couple of hours and remember thinking, “this is pretty straightforward.” It felt like Python, which I had used teaching kids at summer camp, but with a robust concurrency model that I could quickly wrap my brain around. Not to mention, it was FAST. I got to a place where I could stand up the necessary building blocks relatively quickly.

At this point I was a solo developer, the system was new, I had overcome my first obstacle, and I was extremely confident. A dangerous combination.

The stakeholders were non-technical and enthusiastic. Every time I made something work, I reported back to them proudly. It created a kind of excitement-feedback cycle that you can really only get at your first startup.

What I didn’t realize is that this enthusiasm was paving the way for scope creep of legendary proportions. Every feature I showed seemed to lead to a question about adding yet another feature. Big startup energy. Feature rollout became a constant stream of small, reasonable asks, and rather than organizing them into a backlog, they were all integrated as fast as my fingers could type them.

At the same time, I was hyper-focused on security. This was a regulated environment, and nothing would have been more embarrassing or damaging than a breach. I obsessed over access controls and authentication. I wrote careful transactional queries and did audit after audit of validation. I made sure nothing leaked. In hindsight, I was so focused on preventing the worst possible failure that I barely thought about how the system would change, recover, or evolve.

And so it grew like unattended bamboo.

The first prototype started at around three thousand lines of code on the backend handling basic business logic and authentication. Before I really noticed what was happening, it had grown to fifty thousand. Still one service. Still one binary. Still “temporary.” The software worked. It shipped. People used it. It stayed up.

What bothered me was not system failure. Reliability was never the issue; there weren’t discernible sharp edges anywhere in the codebase. It was that I no longer had a clear way to reason about failure in the system if I encountered it. Feature rollout slowed and required short system-wide outages for deployment. Small changes began to carry unexpected weight. I could steer the ship back on course when needed, but seeing the faults humbled me. I came to the conclusion that I just hadn’t learned how, or when, to ask the right questions, and not asking those questions was my fundamental mistake.

Eventually I paid my debt. All of the code was revisited, refactored, and carefully split into multiple services. The system is still in use today, and it’s still a critical part of the business. It’s also much easier to reason about and more resilient to failure.

That experience changed how I think about monoliths, microservices, and decomposition. Not as architectural preferences to be defended, but as tools for deciding how much failure a system can absorb before it starts to surprise you.

Decomposition is not a style choice

Anyone familiar with Bob Ross knows the whimsy with which he adds “happy little trees” to his paintings. He works systematically from the top of the canvas to the bottom, and once a reasonably proportioned landscape is in place as a substrate, he begins adding trees in the foreground, instantly revealing a perspective and depth that makes his paintings universally majestic.

However, if you have ever followed an episode of The Joy of Painting from behind an easel, brush in hand, and attempted to add a happy little tree of your own, you will quickly find that the delicacy and expertise needed to achieve the same effect as Bob Ross are hard-won. I made an attempt to do this (once), and my happy little tree seemed to defy space-time, residing on a different plane of existence than the scenic lake and mountains I had spent the preceding 3 hours intermittently pausing the show to paint.

It’s easy to liken decomposing a system to adding your own happy little trees. We have all observed it being successfully implemented in the wild, and it seems like a logical next step. So when a system becomes hard to reason about, the instinct to decompose it is almost automatic. Breaking things apart feels like progress. We’re hasty to take out a second mortgage on our system because it promises clarity, ownership, and a way to regain control.

However, in the absence of a clear failure model, decomposition often becomes a stand-in for understanding. We all want to superimpose a happy little tree of our own and gain control over failure states the same way Bob masters perspective. What we don’t want is an incoherent, foreign-looking landscape hanging on our wall. It is important to realize that decomposition does not reduce coupling by default; it reduces proximity. Whether it reduces risk depends entirely on what kind of failure the boundary is meant to contain. If there is no failure story, there is no boundary, and decomposition just adds more moving parts. That is why decomposition is a process and not a design pattern.

Decomposition as failure analysis

As a decision exercise, rather than asking what should be a service, ask the following:

1. What failure am I trying to contain?

2. If component B fails, what must component A do differently?

3. Where does authoritative state live?

4. How do lifecycles interact across this boundary?

5. What does this boundary do to debugging and recovery?

6. How reversible is this decision?

Monoliths, services, and false binaries

To be crystal clear, none of this is an argument for or against monoliths. A large binary can be perfectly reasonable when failure modes are shared, latency matters, or state needs to move together efficiently. A distributed system can be the right choice when failures need to be isolated and orthogonality is real.

What matters isn’t form, it’s intent. A modular monolith with clear failure boundaries can outperform a distributed system with components cosplaying as independent. A set of small services can be more fragile than a single process if they are tightly coupled through ephemeral state and retries.

Say it louder for the people in the back:

Decomposition is a process, not a design pattern.

Moving Forward

When I built that first monolith in the coffee shop, I was solving the problem in front of me with the tools I had. The mistakes I made were not about carelessness or ambition. They were about scope outpacing understanding. What changed over time was not my preference for one architecture over another. It was learning to take a moment before committing to form, and to ask different questions while the system was still small enough to answer them simply.

Today, I try to understand what actually has to hold when something breaks, what can be restarted without ceremony, and what needs to move together to avoid leaving the system in an in-between or brittle state. Once that picture is clear, boundaries tend to suggest themselves. This way of thinking is not complicated, and it is not reserved for special cases. It just takes a commitment to methodology before drawing lines and transparency about what failure would really look like if it showed up tomorrow.

I still think about that first system almost every time I begin a new project. Not with embarrassment, but as a reminder of how much easier the work became once I learned where to be deliberate.