Moving flamedfury.com to Forgejo and Bunny.net

I’m in the middle of moving flamedfury.com from GitHub to a Forgejo instance that has a 500 MB per-repo limit, and I’ve hit a wall with images.

Right now, my repo contains both the original source images and all of the optimised variants generated during the Eleventy build. Together they’re sitting at around 800 MB, which obviously isn’t going to fly in Forgejo. So I’m looking to shift image delivery completely out of the repo and onto Bunny.net, while keeping the Git repo code-only.

Where I’m getting stuck is understanding what people’s actual setup looks like when Bunny storage is separate from the web server.

If you’re using an SSG (bonus points for 11ty), are you still generating responsive images locally during the build and pushing the generated files to Bunny during your deploy, with the final HTML pointing to something like cdn.flamedfury.com? Or have you moved that responsibility entirely to Bunny Optimiser and let it handle resizing, format conversion, and quality on the fly?
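To make the first option concrete, this is roughly the shape I have in mind: keep Eleventy’s image pipeline, but write CDN URLs into the HTML. A config-fragment sketch, not runnable on its own — it assumes the @11ty/eleventy-img package and a hypothetical cdn.flamedfury.com pull zone:

```javascript
// .eleventy.js fragment — generate responsive images locally, but point
// the HTML at the CDN host. Paths and hostname are assumptions.
const Image = require("@11ty/eleventy-img");

module.exports = function (eleventyConfig) {
  eleventyConfig.addAsyncShortcode("image", async function (src, alt, sizes = "100vw") {
    const metadata = await Image(src, {
      widths: [400, 800, 1600],
      formats: ["avif", "webp", "jpeg"],
      outputDir: "./cdn/images/",                    // gitignored, uploaded to Bunny on deploy
      urlPath: "https://cdn.flamedfury.com/images/", // what ends up in the generated HTML
    });
    return Image.generateHTML(metadata, { alt, sizes, loading: "lazy", decoding: "async" });
  });
};
```

The generated variants land in a gitignored folder that gets pushed to Bunny during deploy, so the repo itself stays code-only.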

I’m also unsure what people are doing with their source images. Do you still keep originals in the repo, store them in Bunny and treat that as the source of truth, or pull them in during the build from somewhere else?

The other piece I’m trying to think through is cache invalidation. If an image is replaced, are you relying on hashed filenames from your SSG, Bunny versioning, or another approach?

My main goals are to keep the repo comfortably under the 500 MB limit, maintain a sane, reasonably fast CI/build process, and ideally not lose the benefits of Eleventy’s image pipeline if it still makes sense to use it.

I expect that if the build no longer has to process thousands of images, I could move it to whatever Forgejo’s equivalent of a GitHub Action is and get build times down to under a minute.
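The code-only pipeline I’m imagining on Forgejo Actions (which reuses GitHub-Actions-style workflow syntax) would be roughly this — the runner label, secret name, and deploy script are all assumptions, not tested config:

```yaml
# .forgejo/workflows/deploy.yml — hypothetical sketch, untested.
on: [push]
jobs:
  build:
    runs-on: docker                   # common Forgejo runner label; adjust to yours
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx @11ty/eleventy       # fast, since no images are processed in the repo
      - run: node scripts/deploy.js   # hypothetical script that uploads to Bunny
        env:
          BUNNY_STORAGE_KEY: ${{ secrets.BUNNY_STORAGE_KEY }}
```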

I’d really like to hear how people have structured this in practice rather than the high-level “just use a CDN” answer.

I don’t have any experience with bunny.net, but I do have professional experience with significantly larger image sets and cache handling (about 20 TB of images that expire over time).

If we ever need to expire a cache we generally just don’t bother. If we remove an image and it needs to be gone then we flush it manually and directly. Everything else we can let cycle through for the natural lifetime of the cache… Nothing we do is particularly “we need results right now!” situations.

A 500 MB limit doesn’t sound like a lot to be working with… Is it a hosted instance?

For the images I’d probably use a pre-commit script to squish them down to the largest version I’d want to display, but even then 500 MB is going to run out quite quickly if you have a lot of images.

If you’re definitely stuck with that 500 MB limit I’d go with (pseudo idea, I don’t use 11ty):

/root
/root/cdn
/root/cdn/images/
/root/blahlahetc

I’d have cdn in my gitignore.

I’d link to images in the cdn folder, and in my build script run a replace on image paths starting with /cdn/, swapping in the URL of my CDN.
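The replace itself can be tiny — something along these lines, run over the built HTML (the CDN base URL is a placeholder):

```javascript
// Post-build step: point /cdn/... references at the CDN host instead of
// the local folder. CDN_BASE is a placeholder for your pull-zone hostname.
const CDN_BASE = "https://cdn.flamedfury.com";

function rewriteCdnPaths(html) {
  // Only touches src/href attributes that begin with /cdn/.
  return html.replace(/(src|href)="\/cdn\//g, `$1="${CDN_BASE}/cdn/`);
}
```

Run it over every file in the build output before deploying; links outside /cdn/ are left alone.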

The build script would push the cdn folder to the CDN as-is, including its subfolder structure.

Bish bash bosh?

I’d not bother expiring caches. Nothing is that critical and it’ll sort itself out in the wash. If I really needed it gone I would flush the whole cache.

Thanks for your insights.

This is where I have been thinking of heading. I’ll dig in a bit more and have a play to see if it works. Will cross-post this to the 11ty community to see who’s doing what in there.