You are off your rocker if you think most SaaS products can be replaced by Docker 🤣
There is a big gap between running Jellyfin in your basement and securely and reliably maintaining services for other people.
SaaS is a scam developed by venture capital to make their otherwise nominally profitable tech gambits able to bilk clients of cash on a scale not even Barnum could fathom.
👌👍
it’s funny that you use that as a selling point.
In my experience almost no outage happens because of hardware failures. Most outages happen because of bad configurations and/or expired certs, which in turn are a symptom of too much complexity.
Imagine thinking availability is all you need to do.
Your experience must be extremely limited.
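For what it's worth, the expired-cert failure mode is the easy one to automate away. A minimal sketch using only Python's stdlib — the host name and the 30-day threshold are illustrative placeholders, not anything from this thread:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_remaining(not_after: str, now: datetime) -> float:
    """Days until a cert expires, given the 'notAfter' string from
    ssl's getpeercert(), e.g. 'Jun  1 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - now).total_seconds() / 86400

def check_cert(host: str, port: int = 443) -> float:
    """Fetch a server's TLS certificate and return days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_remaining(cert["notAfter"], datetime.now(timezone.utc))

# e.g. run from cron and alert when check_cert("example.com") < 30
```

Cron that on a homelab box and you've already dodged one of the most common production outages.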
Is there 🤔? I’ve seen things in production you wouldn’t believe. Rigs from the stone age: a 30+ year old DEC box still running its vendor’s flavor of UNIX, with people saving files to tape. Why? It’s how it has always been done 🤷. A firewall/router configured back in 2001 that no one has touched since. An Ubuntu 12.04 install running a black-box VM that no one knows the purpose of, except that it was needed back in 2012 for something related to a network upgrade… so don’t touch it, cuz shit might stop working.
Trust me, I’ve seen homelabs that are far better maintained than real-world production stuff. If you’re talking about the 0.2% of companies/banks that actually take care of their infrastructure, they are the exception, not the norm.
Homelabs will always be better maintained. In most cases it’s a one-man show, and the documentation can be just enough hints to help you remember the process when you need it.
Most of the documentation for my homelab server is a README file in the folder next to the docker-compose file. At work I’m forced to write a lengthy explanation in Confluence as to why things are the way they are.
If there is documentation at all… subcontractors come and go; some leave documentation, others don’t.
Most SaaS products, no; most of the software I’ve seen advertised on those kinds of channels, yes.
So you’re telling me all those products built on top of Docker are !!MILITARY GRADE!!?