If this scenario isn't a good reason for provisioning an internal 'cloud', either on-premises or in colocation, I don't know what is.
AWS, Azure, and GCP are horrendously expensive for what they are, especially for long-lived services. They do have a use case, but 99% of businesses just do not need it.
The fewer system parts running "if" statements in delivering your service, the better.
The use case of delivering the service to a consumer happens all the time; the use case of "oh, I didn't understand how this works and foot-gunned" is relatively rare.
It's cheaper for AWS to eat that cost than to maintain the "if" statements inline on every request.
The most reliable code is code you don't write at all.
But how can you do that in a way that makes logical sense?
Let's say you have 4 instances, a database, some storage. Every minute you have those things costs you money.
And how do you "stop" the bill? Delete everything? What if you need that data? Does it make sense to delete all your backups from the past year over $1?
> And how do you "stop" the bill? Delete everything? What if you need that data? Does it make sense to delete all your backups from the past year over $1?
What would you do if you were manually monitoring this and taking action yourself? Can't you code an approximation of that for common scenarios?
There must be a middle ground between no help and "delete all your data when you go over your bill limit by $1".
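To make the "code an approximation of what a human would do" idea concrete, here's a minimal sketch. The resource model, names, and categories are entirely hypothetical (not any real cloud provider's API); the point is just that the sane manual response — stop stateless compute, never delete data — is easy to express as a policy:

```python
from dataclasses import dataclass

# Hypothetical resource model; names and categories are illustrative only.
@dataclass
class Resource:
    name: str
    kind: str           # "instance", "database", "storage", "backup"
    hourly_cost: float  # ongoing cost while it exists

# Stateless compute can be stopped and restarted later; data must survive.
STOPPABLE_KINDS = {"instance"}

def plan_over_budget(resources, overage):
    """Approximate what a human on call would do when the bill exceeds
    the limit: stop compute first, and never touch storage or backups."""
    if overage <= 0:
        return [], list(resources)  # under budget: change nothing
    to_stop = [r for r in resources if r.kind in STOPPABLE_KINDS]
    to_keep = [r for r in resources if r.kind not in STOPPABLE_KINDS]
    return to_stop, to_keep
```

A policy like this is exactly the middle ground: going $1 over the limit halts the hourly spend from the four instances while the database, storage, and a year of backups stay intact.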