If you do anything at all on the internet, you rely on technology you have absolutely no control over.
I started self-hosting my own sites because I ran out of space on the free hosts and am too much of a tightwad to pay for it. I have come to love the fact that I am fully responsible for it as much as I can be.
I’m also fully aware that I rely on my ISP to connect to the internet, the people at Apache for writing the server software, the domain registrars, DNSExit for providing the name servers and dynamic DNS, and so on.
Despite that, I still find it worrying that there seems to be a growing trend: whenever someone mentions self-hosting, the advice is always Cloudflare and its services, Docker containers, reverse proxies, VPN tunnels, and so on.
Am I being weird? Too stuck in the past and in need of getting “with it”? Or am I a beacon of self-reliance in a sea of overreliance on other services?
I would say that you’re a “beacon of self-reliance”. I wish I could do more to self-host.
Generally, there seems to be less concern about making things that are reliable over the long term when it comes to technology, because so much of it is motivated by planned obsolescence, corporate trends, etc.
Sometimes, reading the advice, I can’t help but think that people are making so many accounts for these services that they may as well just pay for the hosting or space, especially if they’re doing things like data-hoarding, and let someone else worry about it all.
WOW. How timely! I thought I was finished on 32bit Cafe for the day, so I just hopped over to another self-hosting community. It is alive with discussion of what Cloudflare just did, with people posting alternatives.
Classic enshittification, as Cory Doctorow calls it. You get roped into this service that functions well and takes a bunch of finicky work off your hands and then once you’re embedded and it’d be hard to switch to a competitor, the hostage situation starts. Suddenly the things you need are no longer free but part of a premium plan, ads will start popping up on those captchawalls, and so on.
I know Cheapskate keeps his own blocklists, which is definitely more work, but it’s a lot more stable than relying on Cloudflare’s already incredibly hit-or-miss service. I give up the moment I see that horrific Cloudflare captcha; I can only pass it if I disable all the privacy extensions and settings in my browser, and it’s just not worth the hassle.
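For anyone wondering what “keeping your own blocklists” might look like in practice, here’s a minimal sketch in Python. The file name and the surrounding setup are my own invention, not Cheapskate’s actual system; the idea is just a plain-text list of addresses or ranges checked before serving a request.

    # blocklist_check.py - minimal sketch of a self-maintained IP blocklist:
    # a plain-text file with one address or CIDR range per line, checked
    # before serving a request. File name and setup are hypothetical.
    import ipaddress

    def load_blocklist(path="blocklist.txt"):
        nets = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    nets.append(ipaddress.ip_network(line, strict=False))
        return nets

    def is_blocked(remote_addr, nets):
        addr = ipaddress.ip_address(remote_addr)
        return any(addr in net for net in nets)

    # In a CGI script or WSGI app you would check the visitor up front:
    #   if is_blocked(environ["REMOTE_ADDR"], load_blocklist()):
    #       ... return a 403 instead of the page ...

Pair it with whatever process you like for adding offenders spotted in your server logs; no third party needs to be in the loop.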
So here’s another vote for self-reliant.
I’m also feeling pretty pleased with myself for having hosted my own guestbook for close to a decade with the hokiest possible spam control, and I can count on one hand the number of spam messages I’ve had to remove. No need for a billion-dollar corporation when you have a text field that’s positioned 5000 pixels to the left of the viewport.
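For the curious, that off-screen-field trick (often called a honeypot field) really can be this simple. A rough Python sketch of the server-side check, with hypothetical field names rather than the actual guestbook’s markup:

    # guestbook_check.py - rough sketch of the honeypot idea: the form includes
    # a decoy field (here called "website") that CSS shoves off-screen, e.g.
    # position: absolute; left: -5000px. Humans never see it and leave it
    # empty; naive form-stuffing bots fill in everything they find.
    from urllib.parse import parse_qs

    def is_probably_spam(post_body):
        fields = parse_qs(post_body)
        # Any submission with the decoy field populated gets dropped.
        return bool(fields.get("website", [""])[0].strip())

    if __name__ == "__main__":
        print(is_probably_spam("name=Alice&message=Hi&website="))                # False
        print(is_probably_spam("name=Bot&message=Buy+now&website=spam.example")) # True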
I think your approach is perfectly valid. I’d love to be able to self-host my website, email, and other services. It’s not like I don’t have a machine capable of doing the job; I’ve got an i7 with 32GB of RAM and a terabyte SSD running Slackware whose capacity I don’t come close to using, but good luck self-hosting on residential fiber or cable in the USA; telecom monopolies hate that shit.
Frankly, it’s easier for me to just rent space and capacity on Nearly Free Speech and have Fastmail handle email for my domains. Though I could fire up a VPS on Vultr instead and host my websites there (along with a Gopher hole). It might be a worthwhile winter vacation project.
@starbreaker - US ISPs say in their TOS that you cannot host your own server, but I think they are trying to protect their own network and perhaps to deny any responsibility for what people do. I’ve been running servers from home using Verizon, Frontier, Time Warner, Charter, and now Spectrum. It’s difficult sometimes to remember who my ISP is, as they keep merging or buying each other out.
I’m certain they must realize, or at least have the means of checking, simply from the traffic going in and out.
I was talking to a tech once who had to come out to replace the modem as a local storm destroyed it. He said they know people do it but have a don’t ask, don’t tell policy. Their support is told not to answer anything about servers.
Some ISPs will block ports; both AT&T and Spectrum list the ones they block. More ISPs block port 25 (SMTP) than 80 and 443, because of the risk of it being used for email spam.
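If you want to see what your own connection allows, a quick probe like the sketch below can confirm whether outbound traffic on a port gets through; the hostnames are only placeholders. Blocked inbound ports can only be confirmed from outside your network, e.g. by probing your public IP from a VPS or a friend’s connection.

    # port_probe.py - quick sketch for checking whether outbound traffic on a
    # given port gets through your connection. Hostnames are placeholders.
    import socket

    def can_reach(host, port, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # If 25 fails while 443 works, outbound SMTP is likely being filtered.
        print("port 25:", can_reach("mail.example.com", 25))
        print("port 443:", can_reach("www.example.com", 443))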
I know. That’s why I don’t self-host on my residential connection, and if I decide I’ve outgrown Nearly Free Speech and Fastmail I’ll probably migrate to a VPS on Vultr. I don’t particularly like the situation, but my reps in Congress don’t seem particularly interested in doing anything about it; they seem content to let the internet become QVC with a comments section.
This is a great question! I think on some level the internet has become too complex for an individual to be fully self-reliant for the entire stack (if it was even possible in the first place). In the context of self-hosting, I think self-reliance means designing your solutions in such a way that you are not reliant on a particular company’s services, instead relying on general technologies that can be provided by many companies, so that you always have a backup plan if one goes down the drain for whatever reason. Similarly, this is why using FOSS software for self-hosting is mandatory in my view.
Some of the items you list are just modern technologies. While Docker is a private company, the container technology (OCI) is not owned by them, so if Docker goes to shit someday, all your containers are still usable via podman or some other alternative. Similarly, reverse proxying is something most HTTP server software can do these days, letting you route connections arriving on one port (80/443) to many backends. I would argue you can still be self-reliant using these technologies, because you are not bound to Docker or any one reverse proxy specifically. That said, I do agree that the increased reliance on Cloudflare is problematic, especially given the reddit post you linked elsewhere in the thread. Ultimately I think relying on generic technologies is inevitable, but relying on specific companies should be avoided.
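To make the “one port, many backends” point concrete, here is a toy reverse proxy in Python. It’s purely a sketch of the concept, with made-up hostnames and backend ports; in practice Apache, nginx, or Caddy already do this properly, with TLS, caching, and error handling that are all omitted here.

    # tiny_proxy.py - toy illustration of a reverse proxy, not for production.
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    from urllib.request import Request, urlopen

    # One public port fans out to many local services, chosen by Host header.
    ROUTES = {
        "blog.example.org": "http://127.0.0.1:8001",
        "git.example.org": "http://127.0.0.1:8002",
    }

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "").split(":")[0]
            backend = ROUTES.get(host)
            if backend is None:
                self.send_error(502, "No backend configured for this host")
                return
            # Fetch from the backend and relay the response to the client.
            with urlopen(Request(backend + self.path)) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type",
                                 resp.headers.get("Content-Type", "text/html"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

    if __name__ == "__main__":
        # 8080 here; a real setup would listen on 80/443.
        ThreadingHTTPServer(("", 8080), ProxyHandler).serve_forever()

The point is that nothing in the design is tied to a particular vendor: swap the front-end software or the backends and the arrangement survives.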
If I were running multiple servers rather than just a web server, then the container technology would probably be the way to go.
It seems I got lucky with some of the companies I rely on. The original dynamic DNS provider I used got bought by another company and the free service disappeared. The one I found after that has been great. Some companies are just nice like that.
I’d vote for self-reliance, too, but at the same time I recognise that not everyone is served by learning everything for themselves. Pre-built Docker setups and the like (sometimes community-built ones, not corporate) serve those people better, either because they’re set up more securely or more efficiently, or because they free up time to work on their own site (or anything else).
So I guess I see it more as letting someone else specialize in that thing while I specialize in another thing, strengthening and diversifying the greater whole. But I agree that using corporate services is self-defeating when open, community-driven alternatives usually exist.
I think the capacity for self-reliance is more important than consistently being self-reliant. Sometimes a system or application that somebody else built is good enough that there’s no point in reinventing a perfectly adequate wheel just to prove that you’re a big boy and can do it all by yourself.
But if you’re determined to have something done your way, and existing solutions don’t fit without a lot of struggle, then sometimes it’s better to start from scratch and do it yourself. I build my own website with shell scripts and a makefile I wrote myself, but that came only after I tried existing solutions like WordPress, Jekyll, Pelican, Hugo, and Org Mode’s ox-publish and found them lacking for various reasons.
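As an illustration of how small do-it-yourself site tooling can be: this isn’t the actual shell-and-make setup, just a Python stand-in for the same idea of wrapping page fragments in a template and writing out a finished site. Directory and file names are invented.

    # build_site.py - sketch of a minimal static-site build step.
    from pathlib import Path

    TEMPLATE = """<!DOCTYPE html>
    <html><head><title>{title}</title></head>
    <body>{body}</body></html>"""

    def build(src="pages", out="public"):
        # Wrap each fragment in the template and write it to the output dir.
        Path(out).mkdir(exist_ok=True)
        for page in sorted(Path(src).glob("*.html")):
            html = TEMPLATE.format(title=page.stem, body=page.read_text())
            Path(out, page.name).write_text(html)
            print("built", page.name)

    if __name__ == "__main__":
        build()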
the reason is mainly security: you’re unlikely to be able to “eat it up” if a DDoS hits, and trust me, they can and will happen eventually even if you’re no one important. although i kinda resent CF and haven’t put my VPS behind it.
another reason you may need tunneling and reverse proxies is that the majority of ISPs use CG-NAT, meaning you literally cannot open ports. or if you need to hide the fact you’re hosting something (some ISPs are against that, but the traffic will just look like Cloudflare traffic if you use their tunnels).
there’s nothing wrong with reverse proxies tho. you could set up a VPS as a caching server that reverse proxies to your home server.
you could technically host your own, but it’s a massive pain in the ass