Microservices – Please, Don’t (2016) (riak.com)
136 points by s4i on Sept 11, 2022 | 92 comments


My crazy advice: Build a monolith as microservices.

By which I mean focus on Domain-Driven Design; focus strongly on modularity even when, and this is the key part, it's not required because it's in a monolith.

Microservices bring a great many advantages, like letting teams focus on a subset of the system, but those can still be had with a well-designed monolith, while avoiding the biggest microservice headaches: managing a crazy network of deployments across multiple environments where teams don't have the necessary discipline to communicate.

If at a certain point scalability does become a big enough issue, then it's much more manageable to segment what's needed (and even transition back if needed). It still might take months, but the quality of the transition is much higher.


> By which I mean focus on Domain-Driven Design; focus strongly on modularity even when, and this is the key part, it's not required because it's in a monolith

Hear, hear! If I had a dollar for every team I've seen adopt microservices as a solution to code modularization, I'd have several dollars.

“Hey our code could be cleaner and our git repo seems to be getting big”

“Sweet let’s inject the network in between everything”


Having lived between packaged based repos (Amazon) and mono repo (FB), I've come to greatly appreciate the mono repo life. All my dependencies match up, and I can build everything in one command. Furthermore, I can run all 4007 tests in 13 minutes.


Agreed! I worked at a job where most of the company worked on a monolith, with the frontend in its own repo, then another 25 microservices hanging off the monolith. Everything was different in every repo, including how to deploy and run tests etc. The project I was working on was everything in one repo with a nice deploy.sh and test.sh and no chasing people to find out how to deploy that one microservice that was last touched months ago.


Cloud providers' best trick was to convince everyone that network calls are better than function calls.


> Cloud providers' best trick was to convince everyone that network calls are better than function calls.

Not really. Cloud providers did convince everyone that if you hit a resource limit in one of your boxes then your best strategy is being able to add more boxes, and not move everything and the kitchen sink to a larger box.

Also, not everything sits in the hot path of anything. Sometimes all you want is a long-running job to run somewhere.


I think they are, though. Off the top of my head:

* Network calls are traceable by default — logging all incoming queries is easy, logging all function calls is harder.

* Network calls are easily made — function calls in a backend can require passing some kind of a huge config structure, or might be entirely unavailable if you're using a compiled language, while network calls can be done with nothing more than curl/grpcurl/etc.

The downsides are a) decreased performance b) you have to handle failures c) dev ergonomics, but eh. I would still choose network calls over function calls for a big backend, probably.


> logging all function calls is harder

It's really not, though. Many (most?) languages have some way to annotate function calls with logging logic.

> function calls in a backend can require passing some kind of a huge config structure, or might be entirely unavailable if you're using a compiled language

Not really sure what you're referring to here
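For instance, in Python a decorator can attach call logging without touching the call sites. A minimal sketch (generic, not tied to any particular framework; `traced` and the logger name are made up for illustration):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("calls")

def traced(fn):
    """Log every call to fn with its arguments and result."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.info("call %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        result = fn(*args, **kwargs)
        log.info("return %s -> %r", fn.__name__, result)
        return result
    return wrapper

@traced
def add(a, b):
    return a + b

add(2, 3)  # logged on the way in and on the way out, no curl required
```

Applied at module boundaries, this gets you the "every incoming query is logged" property without putting a network hop in between.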


Oof. Everything in my 12-13 years of engineering has led me to the exact opposite conclusion of literally every point you made here.

Are you genuinely suggesting that making a network call is easier than calling a function in the same code base running in the same process?


Completely disagree if you work on low-latency systems or high volumes of requests: functions are traceable when you need them to be in a crisis: plug in a profiler, run like prod, see the hot paths.

Dev ergonomics explode the cost of everything, and you spend your time trying to figure out what the hell is going on where, rather than just... plugging in the profiler, sending the input, and observing the internals moving around.

If the system can work without micro services, it should.


> Network calls are easily made — function calls in a backend can require passing some kind of a huge config structure, or might be entirely unavailable if you're using a compiled language, while network calls can be done with nothing more than curl/grpcurl/etc.

Define "easy" because curl is not "easy" compared to a function call and involves building a few large datastructures to work at all.


I think this is a blanket statement on my part, so to counter it: networked databases are definitely important. Some microservices may be necessary for things outside of your control (HIPAA, finance audits) and for geo-proximity processing needs. And sometimes good ol' microservices became popular because they were easy business in a giant org chart (Amazon, where every team has an API).

Real world is messy.


I’m not anti micro services. I’m anti micro services as a code organization strategy.

There are very real reasons to need process isolation in larger systems. Code modularity is not one of them.


This is a great approach. I like to add “macroservices” into conversations around this because I think it sums up the end state better. It’s not micro. It’s an entire billing system. It just only has API’s and not a front-end. Integrate it. That’s a macroservice. Or a service that handles the subscription, billing, and entitlements as one. That’s a macroservice and that’s totally acceptable.

What kills me is when there’s a monolith that breaks when other teams commit code. If I have to stash changes and deal with merge conflicts outside my team, it needs to be broken up.

Worry about scaling when you have a need to scale.


Macroservices are the way to go. Currently I work on a system that is basically a macroservice. We have more than one team working on the service, which is great, since that means on-call rotations, upgrades, etc. are much more reasonable. Having 50 people on a service really takes the load off compared to 6, and there is a lot less reinventing the wheel. As long as the CI/CD pipeline is solid, there really aren't any issues with a lot of teams working on the same code base.

As you say, having a "Billing" service is much better than what you would typically see in a microservices architecture, where you'll have Billing-Stripe, Billing-Visa, Billing-Invoices, and so on, creating a web of dependencies.


Exactly. So long as merge conflicts stay within the team (i.e. your macroservice is logically divided within the code; models and services aren't just thrown into a generic models and services package or folder). I really detest merge conflicts when I can't track the work or have to reach out to another team to discern intent.

Having microservices for billing where you have Billing-stripe, Billing-visa, would make me want to scream. Why not a generic abstract or interface and those are simply implementations within the billing service? Macroservices FTW.
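The abstraction described above might look something like this (a sketch with hypothetical names: one billing service with pluggable providers, instead of Billing-Stripe and Billing-Visa deployments):

```python
from abc import ABC, abstractmethod

class PaymentProvider(ABC):
    """One interface; each provider is an implementation detail
    inside the single billing service, not a separate deployment."""
    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int) -> str: ...

class StripeProvider(PaymentProvider):
    def charge(self, customer_id, amount_cents):
        return f"stripe:charged:{customer_id}:{amount_cents}"

class VisaProvider(PaymentProvider):
    def charge(self, customer_id, amount_cents):
        return f"visa:charged:{customer_id}:{amount_cents}"

class BillingService:
    """The macroservice: callers pick a provider by name; swapping or
    adding providers never changes the service boundary."""
    def __init__(self, providers: dict[str, PaymentProvider]):
        self.providers = providers

    def charge(self, provider: str, customer_id: str, amount_cents: int) -> str:
        return self.providers[provider].charge(customer_id, amount_cents)

billing = BillingService({"stripe": StripeProvider(), "visa": VisaProvider()})
```

The web of Billing-* dependencies collapses into a dict lookup behind one API.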


The important advantage is that you can scale parts of the "macroservice" independently when they're microservices. You can isolate stateful stuff into one microservice and then it is much easier to scale the other microservices horizontally.


You might find the concept of Self-contained Systems, or SCS, interesting:

https://scs-architecture.org/

Disclaimer: I helped with the concept a tiny bit.


> You might find the concept of Self-contained Systems, or SCS, interesting:

It sounds like you invested some effort to come up with a brand new buzzword to refer to microservices.


Sounds like microservices to me


I might start calling this a Megalith, since it kinda fits if you squint at the definition.


I do this all the time. I call it a Service-Oriented Monolith, i.e. you use the principles of micro-services in a single code base. It works pretty well because micro-services give you clear segregation.


Does each module get its own separate data store? (Genuine question as I see some micro services with direct access to shared data.)


Not the person you replied to, but I do the same thing and by default no.

So the definition of what a service is shrinks down, really, to a deployment artefact; each "service" is deployed individually as its own process (container). All shared data is persisted via a single REST-style API service. We nearly went all the way and deployed PostgREST for this, but some specifics around security and a few other aspects stopped us. But in spirit, that is how it is architected: data persistence is a service that all the other "services" use.

The nice thing about this is you do still get a lot of the upsides. Because deployment is decoupled, you don't have to have everybody constrained to the same development cycle. Service A can stay pinned at a specific version while Service B is advanced to hotfix a critical change. Meanwhile, Service C isn't blocked from deployment because Service A's tests are still failing. And we don't have to (necessarily) roll back Service D if the prod deployment of Service E failed, etc.


I wouldn't. I would group them in logical groups, with the principle of moving complexity closer to where the data is.

A "Users" service might have crud operations, and maybe a few report generators that can run SQL against the Users database. But the "Users" services should not have access to "Account Balance" service, e.g. Then say, the KYC service can access the Users and the Account Balance services and then do a manual join on the data.

But say KYC is directly accessing the Users database and the User table schema changes drastically: not only do you have to change the Users service, but you have to change KYC and coordinate the KYC team to deploy their changes at the same time your database changes.

The big problem is that if the coupling is too tight, it's hard to refactor in the future. OTOH, if the coupling is too loose, you might be adding lots of computational overhead when you could drastically simplify things with tighter coupling of code.

The right answer I believe is to move the complexity close to where the data is. So maybe there is some complex KYC calculation on the Users service that is trivially solved by a custom SQL statement on the Users table. In this case it should probably be in the Users service even if it's only used by KYC, if this makes sense.
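As a hypothetical sketch of that last point (all names invented for illustration): the KYC-specific logic lives in the Users service, next to the data, and KYC only ever sees the result crossing the boundary:

```python
# Hypothetical sketch: the Users service owns its storage and exposes
# a purpose-built query, so KYC never touches the Users tables directly.
class UsersService:
    def __init__(self, rows):
        self._rows = rows  # stands in for the Users database

    def kyc_risk_flags(self, user_id):
        """Data-local logic that could be a single SQL statement against
        the Users table; only its *result* crosses the service boundary."""
        user = self._rows[user_id]
        flags = []
        if user["country"] in {"XX", "YY"}:   # hypothetical risk list
            flags.append("high_risk_country")
        if user["age_days"] < 30:
            flags.append("new_account")
        return flags

class KycService:
    def __init__(self, users: UsersService):
        self._users = users  # talks to Users via its API, not its DB

    def needs_review(self, user_id):
        return bool(self._users.kyc_risk_flags(user_id))
```

If the User schema changes drastically, only `UsersService` changes; `KycService` keeps consuming the same flags.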


This is similar to the premise of "Righting Software" and volatility-based decomposition, as opposed to functional decomposition.

Your example is functional, but you wouldn't have to change much to have it fit the "engines and managers" pattern the author is fond of.


Yes. But in a single database instance. The Service Oriented Monolith also applies to the database.


To put it differently, logical separation of state doesn't have to mean physical separation of state. One Postgres instance can hold your whole app's state -- just, put each service's data into a separate database within Postgres. I've done this and I like it a lot.
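Mechanically, that can be as simple as handing each component its own connection string into the same instance. A minimal sketch (the host, user naming convention, and `dsn_for` helper are all hypothetical):

```python
# Hypothetical sketch: one Postgres instance, one logical database per
# service. Each service only ever receives its own DSN.
PG_HOST = "db.internal"  # single shared instance (hypothetical host)

def dsn_for(service: str) -> str:
    # Databases created once, e.g.: CREATE DATABASE billing; CREATE DATABASE users;
    return f"postgresql://{service}_app@{PG_HOST}:5432/{service}"

# A service's connection simply can't see the other services' databases,
# so boundary violations fail at the connection level, not in code review.
```

Since a single Postgres connection can't query across logical databases (absent extensions like postgres_fdw), the logical separation is enforced mechanically while everything still lives on one box.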


Guessing you don't need atomic DB transactions in that case.


Well, I don't think you can have transactions open across logical databases any more than you can join across them. And you might anyway design the relevant component to serve multiple clients simultaneously, so you could still be executing multiple simultaneous transactions.

The component is the sole owner of its state. How it interacts with that state is an internal concern.


> Build a monolith as microservices.

See also, Boundaries[1]. Apologies, I still can’t find a transcript since I last linked to it. The general idea is to write most of your code as functional and isolate small parts to handle state. As relevant here, by doing that you can trivially break out functions into services wherever you find it advantageous to do so.

1: https://www.destroyallsoftware.com/talks/boundaries
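The talk's idea, often summarized as "functional core, imperative shell", can be sketched like this (hypothetical example; the names are made up):

```python
# Functional core: pure, trivially testable, and easy to lift into a
# separate service later because it has no hidden dependencies.
def apply_discount(order_total_cents: int, loyalty_years: int) -> int:
    rate = min(loyalty_years * 2, 20)  # 2% per year, capped at 20%
    return order_total_cents * (100 - rate) // 100

# Imperative shell: the thin stateful part that talks to the world.
def checkout(db, user_id: str, order_total_cents: int) -> None:
    years = db.loyalty_years(user_id)                  # I/O at the edge
    total = apply_discount(order_total_cents, years)   # pure decision
    db.record_charge(user_id, total)                   # I/O at the edge
```

Because `apply_discount` touches no state, moving it behind a network boundary later is a mechanical change to the shell, not a rewrite.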


Try to build your app as if you're going to reuse as much as possible in another app. This will make you think about clear boundaries and keeping things as standalone as possible.


> Build a monolith as microservices

This is a case of applying Philippe Kruchten's 4+1 View Model of Architecture slightly differently, such that the logical, development, and process views are not tightly coupled to one another - and it works very well!


There's already a term for it, the "modular monolith".


I'm doing this for my next-gen search engine, but I also coalesce several requests into a single one with my own web service. Is this a known technique in the industry?


My previous company did this and it worked well enough! We used deployment groups to control traffic and specialize the monolith to serve certain parts of the service.


Just me, or does the definition of “Bounded Context” read similarly to typical recommended architectural boundaries for microservices?


That's the point. At my last job, the training for microservice design was training in Domain Driven Design with Bounded Contexts. My predecessors at my current job could have used that training, because what I got was a different microservice for every data type, along with another microservice to essentially perform `JOIN` operations across these other microservices... When in reality, all these different pieces of data are just supporting information for the main entity we actually care about, and modeling it as a singular aggregate works much better.


Microservices is the current mass yak shave exercise.


A lot of these programming fads seem to come and go over the years. I've witnessed the same cycle with OOP, then FP. What a lot of people don't seem to realise is that by trying to split up a system into more smaller pieces in an attempt to decrease complexity, it's often the case that you end up with even more complexity, just hidden in the interactions between the pieces.


> What a lot of people don't seem to realise is that by trying to split up a system into more smaller pieces in an attempt to decrease complexity, it's often the case that you end up with even more complexity, just hidden in the interactions between the pieces.

I think that's a good observation, one I've also made, and I want to add to it. Moving from a monolith to microservices moves the complexity from inside the monolithic system to the interactions between microservices. This means the job of developers becomes easier, since a developer only has to think in terms of the microservice they are developing and ensure that that 'simple' system externally behaves according to spec. But it also means the job of the architect becomes more difficult, because they have to get a grasp on all communication (synchronous and asynchronous) between the different microservices.

So, even though there might be more complexity overall, I do believe that from the point of view of a developer, the complexity is lower.


Read the last paragraph of "Fallacy #4".


Not all of these are the same. Since when is FP a fad? Would you say that structured programming is a fad? Is garbage collection?


2015 was the year of FP. Haskell was all the rage, elm was hot, large scale dismissal of OOP was in the air. To be clear I don’t think FP or OOP are really fads but their popularity waxes and wanes. I’ll see y’all in 2027 when static typing is whack and everyone is going back to dynamic typing.


Employed practices change in response to pain points. As an example, the microservices movement corresponds pretty well to an era of (overly) rapid organizational growth during a tech boom where a technical response to org chart problems became necessary.

In contrast, static versus dynamic typing doesn't seem likely to be so cyclical given the significant improvements in usability for gradually typed systems. Dynamic typing became most recently fashionable when the experience of using statically typed languages was sometimes unpleasant. Static typing has improved to the point where even retrofits onto other systems are very very good.

In areas where a reasonably complete gradual typing solution has emerged for a major dynamically typed language, it's rapidly become or becoming standard practice--Python and TypeScript being the two most obvious ones, but not the only ones. (Ruby may not get there. Its core implementation doesn't seem great. I wish they went with Sorbet.)

I don't see a reason that would unwind to the point where it's a serious conversation again.


I think the most recent popular use of dynamic typing is Elixir, and IMO that's because Phoenix (especially with LiveView) is the most interesting server-side web framework to come along in a while. Yes, you can add type checking with Dialyzer, but that's optional and AFAIK not a widespread default in Phoenix projects. I wish something like Phoenix and LiveView was available in a strongly statically typed language, but for now I just make do with the dynamic typing.


Elixir projects not using Dialyzer is a major reason why I don't use it much; I use it in my code, but not enough others do to make it comfortable. At the same time, my understanding is that Elixir has some pretty interesting gradual typing stuff in the works right now.


Maybe pure FP was a fad around 2014, but practical FP, say, as embodied in React, is very much entrenched and is not going anywhere.

(To say nothing of Excel as the most widespread functional reactive programming tool, but few count it as a "real" programming environment.)


I could make more or less your exact statement, but substituting 2015 for 2004


FP as a whole: interest in it comes in waves, but it never seems to make it to the mainstream.

Many mainstream ideas are incubated in the FP world and wind up in the mainstream: for example, GC, closures, and higher-order functions. But not any of the "functional programming languages" themselves.


What you're describing is literally FP ideas and abstractions going mainstream


uhh...yeah, i thought i was clear on that?


Sounded like you were disagreeing with me on that when you said

> but it never seems to make it to the mainstream


I actually think that’s the key to building a system that everyone can reason about.

The goal is to have these isolated "modules" where stuff is heavily interconnected, and the connections between modules are well-defined protocols. Then people know the common denominator of what's possible and build with that. Meanwhile, you can prove things about the behavior of a module and use the isolation between modules to prove various properties about the whole system.


> it's often the case that you end up with even more complexity, just hidden in the interactions between the pieces

That complexity is not "hidden in the interactions." It is exposed by explicit interactions.

The original ball-of-mud system likely had lots of actual hidden complexity, but it wasn't exposed so you couldn't see it, test it, or reason about it.

The added complexity of microservices is not from its breaking things into pieces, but by all of the tools needed to manage those separate pieces and their interactions. With that complexity comes new superpowers that weren't there before like blue-green deploys. Whether or how you use those powers is up to you.


OOP is still here, and microservices (especially since AWS Lambda) are just getting stronger and stronger every year.


Yes and so is FP; but you don't have the same level of dogmatism for OOP that was prevalent in the mid 90s up to early 2000s.

As for microservices, the fact that it's great for cloud services nickel-and-diming you probably makes them likely to remain popular as long as the cloud computing propaganda continues to have effect.


Curious as to what constitutes microservices on Lambda. Whether you have a dozen Lambda functions where each translates to one function, or one Lambda with multiple features, the outcome is exactly the same.

API Gateway also muddies the water; it no longer makes sense for each Lambda function to maintain things like service endpoints as in Kubernetes.

In fact, if you take away containers, you can completely achieve orchestration without Kubernetes. Simply use Step Functions to coordinate lambda functions or even better, avoid it altogether and have one lambda function coordinate the orchestration procedurally.
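That "one function coordinates procedurally" option can be sketched like this (hypothetical step names; the invoker is injected, so in AWS it would wrap boto3's Lambda client `invoke` call, while locally it can be any callable):

```python
def orchestrate(invoke, order):
    """Procedural orchestration from a single coordinator function.
    `invoke(name, payload) -> payload` abstracts the transport: in AWS
    it would call the named Lambda; here it can be any callable."""
    validated = invoke("validate-order", order)     # hypothetical steps
    charged = invoke("charge-payment", validated)
    return invoke("send-receipt", charged)

# A local stand-in invoker, useful for testing the flow without AWS:
def local_invoke(name, payload):
    return {**payload, "steps": payload.get("steps", []) + [name]}
```

The trade-off vs Step Functions is that retries, timeouts, and state history become your coordinator's problem instead of the platform's.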

My bet is that as time goes on and companies realize the overhead of Kubernetes and the "cloud independence mandate", they will drive more business towards AWS. In particular, Fargate is rapidly progressing, and ECS Anywhere lets you run hybrid setups with complete ease and without the headache of Kubernetes.

I'm just realizing these things as I learn Kubernetes, and I keep thinking "wait, I can just use X from AWS and bypass the need for Kubernetes altogether", but it seems like companies are already knee deep.


This.

Instead of a simple method call, you now have the joys of distributed systems to deal with instead.

I'm a fan of right size architecture. Monoliths have their place. Microservices have their place. Neither are necessarily better than the other. Different tools for potentially different jobs.


There are many excellent problem solving techniques in software engineering. Taken as religion, to extremes, most of these will lead the zealot to destruction. Microservices have their place, as do monoliths.

The author's most valuable point is roughly 'don't believe the hype' - and that's timeless. On the specifics, they're certainly right on each point, but they missed the (imo main) one of 'by forcing a conceptual boundary between different pieces of the application, one can build true silos around competences and so scale their org more effectively'. Put simply, a 200 person monolith will usually feel like it has 200 people working on it. That same org split into 5 groups of 40 can be more nimble. In some cases this is due to infra ('we need to get off Java 8' is easier if the software can be run in 5 pieces). In some cases this is due to problem scope (less intertwining of concepts possible).


“There are many excellent problem solving techniques in software engineering. Taken as religion, to extremes, most of these will lead the zealot to destruction.”

This is actually true for a lot of things like nutrition, health or exercise. They start out as a useful thing but then people with a big ego promote them as the solution for everything.


> Fallacy #5: Better for Scalability

> [The assumption: only by - ] .. packaging your services as discrete units .. [you can] via .. Docker .. [achieve] horizontal scalability.

> However, it’s incorrect ... [to assume it can] only .. [with a] microservice.

Interesting. How can a monolith do the same?

> You can create logical clusters of your monolith

> which only handle a certain subset of your traffic.

Oh... so MS... but with extra steps. lol. not to mention we usually care about the DB scale also, which is harder to scale as a monolith with segmented traffic.


It's not necessarily much in the way of extra steps. The Elasticsearch Cloud-on-K8s operator IMO does this quite well.

You define 'node sets' which each play a role in the cluster. Node sets can have tags applied to them, and this affects the procedures they run.

For example, you could say:

1. Let's have a node set of 3 master nodes.

2. Let's have a node set of 10 hot data nodes, backed with SSDs and tagged with 'hot'.

3. Let's have a node set of 5 cold data nodes, backed with HDDs and tagged with 'cold'.

4. Let's have 4 client nodes for load balancing.

You could also say - let's have 3 nodes that play all roles.

On startup, each node starts only the parts of the app that correspond to the roles it's playing.

It's fairly seamless, and I think this model would expand quite nicely to 'business logic' services.
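Applied to an ordinary backend, that role-gated startup might look like this (a hypothetical sketch: one artifact, with roles chosen per deployment group via an environment variable):

```python
import os

# Hypothetical sketch: one deployable, different roles per node set.
SERVICES = {}

def register(role):
    """Associate a startup function with a role name."""
    def deco(fn):
        SERVICES.setdefault(role, []).append(fn)
        return fn
    return deco

@register("api")
def start_http_api(): return "api up"

@register("worker")
def start_job_worker(): return "worker up"

@register("billing")
def start_billing(): return "billing up"

def boot(roles=None):
    """Start only the parts matching this node's roles.
    Defaults to all roles, so a single node can still run everything."""
    roles = roles or os.environ.get("NODE_ROLES", "api,worker,billing").split(",")
    return [fn() for role in roles for fn in SERVICES.get(role, [])]
```

Each deployment group sets `NODE_ROLES` differently, and you scale the hot roles by adding nodes that boot only those parts.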

Anecdotally, I have seen some people attempt to retrofit a poor man's version of this model into their microservices in response to e.g. performance problems.


> Oh... so MS... but with extra steps.

Not quite: with monoliths that are deployed in parallel, or that have sets of functionality behind feature flags, you're still dealing primarily with one codebase, and each instance should be able to service requests on its own instead of making further network requests across N other instances.

I actually wrote more about this a while ago, "Moduliths: because we need to scale, but we also cannot afford microservices": https://blog.kronis.dev/articles/modulith-because-we-need-to...

A single codebase will generally be easier and quicker to develop against (up to a certain scale), since you don't need so much "internal glue" code between your microservices (network calls and error handling, data type marshalling etc.).

That said, personally I'd prefer microservices for other reasons: so I could keep old parts of the overall system (possibly developed by others) from getting in the way of progress, such as not being able to update the application/service to JDK 11/17 because a PDF library only runs on JDK 8, or something like that.

Furthermore, if you work on a project for long enough, the slow startup and compile times of a large monolith (without the modularization at least, which might not be possible to ensure in certain stacks), the high resource usage and possibly sluggish performance, the numerous approaches that have been utilized to get something like logging or scheduled processes over 5-10 years of development clashing with one another and many other things will just get very annoying and painful to deal with.

Being able to spin up a new service with whatever is easy to work with for the particular case, making it your own little corner of the overall system with good tests and arguably good code (at least for now) is really nice. Until you have 20 of these services, written by people with varying approaches/standards and it's a mess again.

I think there's really no winning with either approach, each has their benefits and shortcomings.

> not to mention we usually care about the DB scale also, which is harder to scale as a monolith with segmented traffic.

This is a fair point!

Though I think that GitLab recently split their database across the boundary of CI functionality and everything else, which was a pretty good example of how far you can get with a single DB (no doubt with replication/clustering and standby instances though): https://about.gitlab.com/blog/2022/06/02/splitting-database-...

In contrast, forgoing something like foreign keys and ending up with distributed transactions (sagas or similar), having to enforce your own data consistency etc. can be asking for trouble before you actually need to care about scaling and split everything up so much.

It's probably a matter of choosing whichever approach works for your scale now and will work for you in the near future.


There is no reason microservices can't be in the same codebase. In fact, for things like infrastructure and testing that may need to touch more than one service at a time, it's incredibly useful to have a monorepo.

I need a t-shirt: monorepos, not monoliths!


At my current job we have this setup: one repo contains 14 different services. Admittedly there are still way more services than there need to be, but having everything in one place helps keep bit rot at bay, and it avoids the need to deploy and synchronise patches across multiple repos.


Sean Kelly (aka Stabby) is one of the most talented engineers I've ever worked with, and someone who understood the challenges and pitfalls of architecture ahead of his time, as this article highlights so well. To this day, I try to channel Sean's approaches to design, coding, and architecture in my own work.

Sean is a treasure in the Boston and Golang community, and folks should seek out his wisdom and guidance.


I reacted to this headline quite negatively. Reading the article, it’s actually quite reasonable. It’s trying to cut through all the incorrect reasons microservices/service oriented architecture is adopted. Kudos to that!

IMO service oriented architectures should be adopted when your engineering organisation has gotten too large to manage a monolith.

The service interface generally can’t be modified after it’s available. You can do the same with a monolith but it’s harder to enforce.


This is part of the problem: people say "do" or "don't" or "considered harmful", and suddenly it becomes a fashion everybody follows no matter what. I saw the same with OOP; at some point it was almost heresy to say there are many cases where OO design is inefficient or problematic. You would be labelled as uneducated or ignorant. For the same reason, I'd never use a goto at work, even though I know it has some limited valid uses. Etc., etc.

One would think that by now everybody understood that each technology has its own advantages and limitations, and that it makes zero sense to follow current trends rather than examine the project well and use the tech that fits best, regardless of what current "evangelists" say.


The issue with these articles is that they assume a bad architect divided the domains into the "wrong microservices", then conclude that "microservices" are a bad strategy.

Step 1: Understand the domain first.

Step 2: Use a microservice for each domain.

If you failed at step 1, step 2 is wrong.

Monolithic architecture just "skips" step 1 and uses one big fat service for everything.


I don't think it's that simple. As he states, domains change and requirements change. Or maybe you find value providing something adjacent to or on top of the domain you're working in. I can't imagine a situation where this doesn't happen to some extent, except in the absolute simplest of domains.

I also strongly agree with the author in that network i/o and distributed transactions (sagas) are huge blockers for all but the most mature organizations. In my experience microservices that aren't in completely isolated domains slow development significantly and accrue much more tech debt, though admittedly these weren't the most mature organizations. You could argue that they simply did it wrong (and, oh BOY, they most certainly did!!), but to some extent these practices need to address that programmers aren't perfect.


Using a microservice for each domain is step 3.

Step 2 is to set up your organization so that single teams can own those domains.

This is where the difficulty of making good microservices lies: your code will match your org structure, and microservices make future changes to that structure harder.


> bad architect divided wrong domains into "wrong microservices", then concluded, "microservice" is bad strategy

The more advanced someone is in their career, the less likely they are to personally take responsibility for failure.

https://thecontentauthority.com/blog/what-does-it-is-a-poor-...


I would recommend just using SAM or Chalice for microservices on AWS, layering on Step Functions/Fargate/ECS Anywhere for hybrid setups if you are not using Kubernetes. The only difference is that you are coupled to AWS, but if you have already been using AWS for everything else, it's very natural and easy to just learn a new product and integrate it into your existing infra setup on AWS.

CDK is also an interesting tool, in that you can create classes that handle existing series of scaffolding and automation, and it feels naturally code-like, but its drift detection still requires improvement vs Terraform's.

If you work for a large company where they have mandated Kubernetes then obviously it's gonna be difficult, but I am seeing a trend where companies are starting to "hedge" their bets, as K8s projects take far longer to get to market and require more experts to keep going, so I'm half-heartedly learning Kubernetes, with the existing AWS solutions in the back of my mind offering far quicker time to market while accomplishing the same thing.


My advice: there is no universal services advice, therefore, don't adopt (or un-adopt) things just because someone wrote about it. At the same time: don't ignore pain, regardless of whether it's because you're too big, too small, too segregated or too monolithic.


Related:

Microservices? Please Don't - https://news.ycombinator.com/item?id=13167188 - Dec 2016 (122 comments)

Microservices – Please, don’t - https://news.ycombinator.com/item?id=12572859 - Sept 2016 (93 comments)

Microservices - https://news.ycombinator.com/item?id=12508655 - Sept 2016 (146 comments)


I spent years reading on here about how microservices are bad, but I don't understand how you can avoid them when everything has to be in a container network to run. Airflow workers, containers that wrap GPL-licensed code, little FastAPI interfaces to Bokeh functions, a customized Keycloak. Everything has its own dependencies and deploy process. We have like eight developers and seventeen git repos, and I have no idea how to keep it from getting worse.


Interestingly, I think that almost all of the downsides listed in the article and in the comments go away if you use a good framework for creating microservices (example for this: [0]):

No internal glue code, no overhead by writing network interfaces, no slow HTTP calls (faster gRPC instead), no manual handling of asynchronous tasks.

[0]: https://github.com/jina-ai/jina


> Microservices – Please, Don’t

Sadly, most OSes, Linux included, are built as a bunch of microservices. Systemd is effectively a management system for microservices.


Moved from a monolith to a microservices project. The main and almost only benefit (and the reason I asked to move) is that startup time is now practically non-existent (from 8-10 minutes to 30-40 seconds). This allows for faster testing and iterative-programming approach, which is the type of programming I do.


Also, micro-frontends


Approaches like Polylith allow you to delay any decision on how or if to split up your services until you are ready. In the meantime you can gradually stake out your boundaries by defining suitable interfaces.
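The gist of that delayed decision can be sketched in plain Python (all names hypothetical): callers depend on an interface, so whether the implementation is an in-process call or a remote client becomes a late, reversible choice.

```python
from typing import Protocol

# Hypothetical illustration: the billing boundary is an interface from day one,
# so callers never know whether it's in-process or behind the network.

class Billing(Protocol):
    def charge(self, user_id: str, cents: int) -> bool: ...

class InProcessBilling:
    """Monolith version: a direct function call."""
    def charge(self, user_id: str, cents: int) -> bool:
        return cents > 0  # stand-in for real billing logic

def checkout(billing: Billing, user_id: str, cents: int) -> str:
    # Depends only on the interface; swapping in an HTTP/gRPC client
    # later requires no change here.
    return "paid" if billing.charge(user_id, cents) else "declined"

print(checkout(InProcessBilling(), "u1", 499))  # → paid
```

If billing later needs to scale independently, a `RemoteBilling` class implementing the same interface is the only new code at this boundary.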


Every time the monolith/microservices debate comes up, there's a git submodule fan out there (somewhere?) with a single tear rolling down their face.


Gotta keep the "devops" teams employed, though!


I love how these paradigms change every two years.


I find the Hacker News obsession with bashing microservices tiresome. We get it: your organization didn't need the scale, had a relatively simple problem, or couldn't fix bad code by switching to microservices.

Sometimes it’s the wrong call. Sometimes it’s the right one. It’s not a panacea, obviously. Can’t we just use our expertise and judgement to determine the correct architecture? There are always trade-offs. Our job is to rigorously evaluate them and make a good decision, not blindly follow or dismiss any particular implementation.


Tired of being bashed in the office when devs out of college want to rewrite everything because it’s not microservices. Microservices are web scale they say.

Management, not knowing any better, pours resources into microservices because it's so hot. Devs who want promotions over-engineer new systems.

Devs who resist are made to maintain the entire monolith themselves, or find another job because they are not team players.

Eventually there end up being two massive piles of code, and all the old and 'new' devs have left by then. Not sure what happens after that, I never stayed to find out.


Why do they really want to rewrite everything?

I'm willing to bet that if everything were clear and easy to modify, they'd prefer not rewriting everything. If it "just worked," they could focus on adding customer-pleasing, revenue-increasing functionality, a far more reliable path to promotion.


First, everyone knows monoliths are bad. Next, understanding existing code is hard work, while writing your own new code is perfectly understandable and fun, because you're the one who wrote it, and you write perfect code! It's so good you don't even need to comment it!

The default thought process is: 1. I don't understand this code and business domain. 2. It is too complex. 3. We should rewrite it as microservices. 4. You've replaced 2% of the system with your microservice. 5. New devs quit because your own service is now too much hassle to maintain. 6. Old devs quit because you wasted enough of their time. 7. The org now has n+1 layers of tech debt, ready for the cycle to repeat.

When I left, n=5. Layer 4 had a good run and we really tried to keep it going. Can't fight the HN zerg sometimes.


Why wouldn’t management know better? Where I work, all management is engineers. And everyone universally agrees, including experienced devs, that we need to finish busting up the monolith. No one wants to work on it or deal with it; everyone wants to work on the new services.


Management doesn't want to go against the latest fad, which is, like you say, 'busting up the monolith' while 'everyone wants to work on new services'.


That’s pretty frustrating, for sure. But I don’t think it’s reasonable to call a hammer worthless just because there are people who think swapping all screwdrivers for hammers is a good idea.

