Really liked reading your blog. Bookmarked for future. One question: for databases, do you recommend using containers as well? In development I love the ease of running databases in Docker Compose, but I always worry about production in terms of resilience. Thoughts?
For databases, I usually host them on a separate server. This could either be through Docker Compose or a managed DB server. If a managed DB is affordable enough I'd reach for it.
It's because I like keeping my servers stateless when possible. It makes it easier to upgrade them in a zero downtime way later.
If your web server has your DB too, then you can't do zero-downtime system upgrades. For example, I would never upgrade Debian 12 to 13 on a live server. Instead, I'd make a new server with 13, get it all ready to go and tested, and then when I'm ready flip over DNS or a floating IP address to the new server. This pattern works because both the old and new server can be writing to a database on a different server.
With all that said, if you were ok with 1 server, then yeah I'd for sure run it in Docker Compose.
It depends on the business use case and requirements.
Using a managed database solves this problem, so that's an option.
If you self host your DB and the data is on block storage, you can at least spin up a new instance and connect that storage device to the new instance with a short period of downtime. This is usually a satisfactory amount of downtime for an event that doesn't happen too frequently.
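As a rough sketch, the move might look something like this (paths and names are just illustrative, and the detach/attach step happens through your cloud provider's console or CLI):

```shell
# On the old instance: stop the DB cleanly and release the volume.
docker compose down
umount /mnt/db-data

# Detach the block-storage volume from the old instance and attach it
# to the new one using your provider's tooling, then on the new instance:
mount /dev/disk/by-id/your-volume /mnt/db-data   # device path varies by provider
docker compose up -d                             # same data, new server
```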
What I like about the above is it'll work with any database and avoids needing to even think about performing real-time or near real-time replication with multiple writers.
There's also the scary truth that there's a ton of stuff out there where compliance requirements aren't enforced. I'm not saying it's a good idea but you can choose not to upgrade too. This is a risk assessment you'd need to do. At the very least if you go down this route, please make sure your server doesn't even have a public IP address. If it's super locked down, that doesn't mean it's safe but you'll want to limit the number of attack vectors as much as you can.
At this stage the volume/persistence configuration for all of the major DBs is arguably extremely well understood and has been for years. The only real risk in running the DB as a container for most people is not configuring volumes for persistence correctly.
For most DBs it's one or two paths in the container, and virtually all DB vendors have a reference Docker Compose example somewhere showing volume config. I can't remember the last time I "natively" installed a DB personally!
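For example, with Postgres it's a single path; a minimal Compose service (image tag and names here are just illustrative, check the official image docs for your DB) only needs a named volume mapped to the data directory:

```yaml
services:
  db:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: example   # use a secret or env file in production
    volumes:
      - db-data:/var/lib/postgresql/data   # Postgres' data directory

volumes:
  db-data:
```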
Do you prefer self hosting a DB with containers, or using a managed service like RDS? I guess both can work depending on your level of comfort, and even though I am a big self host guy, DB hosting is something that makes me nervous and I end up just leaving it to RDS etc.
The answer to this, for me anyway, depends entirely on the size of the solution: what the rest of the stack looks like, how many users there are, what my support contract looks like, whether I have to collaborate with other engineers or it's just me, and so on. Similarly, if you already have a bunch of ops folks managing some RDS stuff, it might make sense to just take advantage. RDS also comes with a ton of features a simple Compose stack won't, especially around redundancy and disaster recovery.
I don't think there's a good one size fits all answer to whether hosting in Compose or RDS is right for you or a given project.
Not OP, but I think it depends heavily on your use case and where you are deploying... I've used containerized DBs as well as leaned into hosted DBs in a given cloud environment. I've tended to favor PostgreSQL for container dev simply because it is well supported in pretty much every first and second tier cloud provider out there.
It really comes down to YMMV... Sometimes for a singular app surface, it's easier to just use a Compose file that includes the database. mailu/mailcow is a good example... you don't necessarily want to commingle email on the same server as other services.
That said, if you need to share a single DB or set of DBs across an application with several instances/deployments, then it makes much more sense to do a central deployment. I almost never do my own host-level install, instead relying on cloud hosting and management. The only real exception is MS-SQL on internal servers... MS-SQL in Docker is barely acceptable for dev, and missing a few key features you may actually want/need.
I probably used the wrong word. I meant more about managing volumes properly so we don't have data loss, plus backups, replication, etc. I assume going managed is easier if you can pay for it (e.g. RDS).
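For the backup piece at least, a containerized DB doesn't change much. For instance, with a Postgres service in Compose (service name, user, and database below are just placeholders), a scheduled logical dump goes a long way:

```shell
# Hypothetical sketch: logical backup of a Compose service named "db".
docker compose exec -T db pg_dump -U postgres myapp | gzip > "backup-$(date +%F).sql.gz"

# Restoring into a fresh container later looks like the reverse:
gunzip -c backup-2025-01-01.sql.gz | docker compose exec -T db psql -U postgres myapp
```

Replication and failover are where managed services like RDS genuinely earn their price, since they handle that out of the box.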