AI & Web Development

Born from a sidetrack on this thread. I think we can all agree that using AI to create your entire site is a pretty dumb idea.

But I think there’s a legitimately interesting conversation to be had on the usage of services like ChatGPT when developing a personal website. I also think there’s a lot of swirling emotions surrounding the current AI hype that would make for enthralling discussion: from ethics, to utility, to corporation-adjacency, to potential ramifications on the small web. Consider this the topic to let loose on your feelings, experiences and thoughts, yo!


I have a lot of different thoughts when it comes to LLMs being used for coding, but my main one is: the output probably won’t be accessible, which to me is a non-starter, even ignoring the environmental effects and ethical implications.

Current "AI"s are trained on the Internet and other data, and unfortunately the majority of the web isn’t fully accessible. This post about Copilot and accessibility is quite long, but it’s a very comprehensive look at the problems. He’s looking at things from the perspective of a developer, so he often says “any experienced developer would know this is wrong.” But if you’re new to coding, or even experienced but lacking in accessibility knowledge, how can you tell when an LLM gives you inaccessible code?


Actually, I disagree. If your intention is to make painfully average websites, or to make those terrible SEO-optimised keyword-spam sites full of adverts, it could work. What I think instead is that it’s a dumb idea if you care at all about quality and how you present yourself and your work.

I also see it, surprisingly, as a good thing if you can make things of above-average quality. The issue is that generative AI takes the average of a large set of data, with painfully average and painfully uncontrollable results. It’ll help the rat-race mess of average-quality stuff drown in its own waste and hopefully lead to some actual healing and actual quality work being produced… but I’m not optimistic, because it’s a tool that’s very shiny and very deceiving, and it’s even easier to deceive yourself with since it plays to your interests, your greed and lust, and your desire for shortcuts… which is how interests are ground into soulless money-making machines, and how quality drops.


I think there’s some interesting potential in technologies like Copilot, to be used for things like automating the more ‘boring’ parts of web building; like, I’m thinking of something similar to how SSGs work, but an AI that could pick up on cues in the page to know when to apply different styling or something. idk, I don’t really understand this stuff very well, I’m not very technically inclined; but I think it has potential that’s squandered (like most things) by the profit motive and the desire to just use it to produce More More More.

as it stands, I don’t think there’s much good reason to use AI to any significant extent in webdev, because of the current unethical training methods and how there just doesn’t seem to be much… value in what has been produced with it, aside from sheer mass of content and revenue generation. which I guess is a benefit to a lot of people! but the lack of usable search results lately has really soured me on the potential.


Copilot uses a push model, where it suggests code for you to review based on what you’re trying to make, instead of a pull model, where you tell it what you want to make and it tells you what to do. I argue the push model is exceptionally disruptive. It’s like notifications on your phone, AND it trains you like a monkey to accept its ideas, which are often enough wrong.

The argument against writing lots of boilerplate is partially valid, but AI is the wrong approach to fixing it. It’s a language issue, where the language necessitates boilerplate; see Java as the poster child of this. Using AI or tools to generate away large amounts of boilerplate misses that it’s fundamentally a language issue. Go is that fix for backend webserver code. I don’t believe there’s an equivalent for HTML… which is pretty tragic given how verbose it is, and how much of a nightmare XML/HTML parsers are. So much would be improved if we approached the issue from first principles…

Another thing is that AI is almost always an additive solution, and this is just rolling a snowball up a hill, except that hill is made of crap average stuff you don’t fully understand. In order to improve it effectively, you need to remove stuff, but removing stuff… well… you don’t understand it, so you have nowhere good to begin.

Fundamentally, the issue I have with AI is its use without understanding of its output. You wouldn’t use a hammer against a glass window, because you understand its output… but if all you have (or understand) is a hammer, you’d use it to “refactor” your windows. I think most people use its output without understanding it. Fine in dribs and drabs… but it will blow up on you spectacularly if not done with an eye for quality (like the Glasgow Willy Wonka stunt).


Sorry, not too familiar with this term; by “boilerplate” are you referring to like, the styling and layout of a page/set of pages?

HTML, iirc, isn’t a “true” programming language, so I guess it makes sense that it wouldn’t benefit from generative AI in the same way that things like Java and Go do. I guess I just… don’t understand why? This is a bit outside the scope of the thread, but I’m wondering, like: if, for example, I used some AI to automatically apply styling to a blog post I wrote, what is stopping it from scanning my other blog pages, picking out key words or something, and being like “ohh, when they write about family they use the classes “orange2” and “box3”, and because this has X paragraphs it should be divided into Y sections”? I know it’s not, like, there yet, but I feel like it should be theoretically possible?

Again though I really don’t know much about the technical side of things, my programming knowledge begins and ends at html/css ^^"


I’ve seen a lot of development in programming languages: from the spaghetti code of things like QBasic, with its GOTOs and LABELs instead of functions, to the later development of 4GLs (4th-generation languages) and 5GLs, where you outlined what you wanted and the program produced the main code for you to tweak, and now AI.

I’m excited about where that could go, but at the same time so glad I don’t have to deal with it unless I want to.

WIX introduced AI to their website building tools a few years ago and showcased some of them.

AI generated text and images are everywhere and even whole videos have started to appear.

Just a few days ago I added the text, half-mockingly, to my home page “No AI – Any word salad gibberish on this site is all my own work”. Fairly soon, what the generators produce is going to be indistinguishable from what an actual person did.


I have mixed feelings, mostly negative feelings, about my experiments with generative AI for code and prose.

I feel like the tools we have now are aligned with the trends of the tech industry, always pushing to go faster. @brisray I like your mention of language generations, and the idea that the top layers are more and more about abstracting the details to get what you want. I think that’s a big part of the issue for me. Levels of abstraction.

It’s great to reduce friction, but it worries me when I think about what all the convenience has done to my brain. Years and years of code autocomplete is making me frustrated when I’m in a situation where I have to manually put angle brackets or curly braces around things. It’s not really getting better with the introduction of more generative tools like Copilot. It’s fuelling my ability to go fast, to the detriment of other parts of my process.

I’m a very inexperienced writer and I feel some tension there as well, trying to use ChatGPT as a writing buddy. I’ve experimented a lot with prompts like “rewrite this in a different tone”, “expand on that idea”, “find a title for those paragraphs”, etc. and I’m never happy with the results. It just doesn’t sound like me, because I don’t even know what it is to sound like me. That’s what I’m trying to figure out.


Boilerplate is the stuff you “just have to write to make it run and work”. Compare the following between Java and C:

class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
And in C

#include <stdio.h>

int main() {
   printf("Hello, World!");
   return 0;
}
The worst thing about Java is that everything is OOP… AND idiomatic, so you end up writing hundreds of setters and getters or builders… it’s a mess.
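To make the accessor complaint concrete, here’s a minimal sketch of what idiomatic Java asks for (the `Point` class is a hypothetical example, not something from the thread):

```java
// Hypothetical example: a class holding two ints.
// Idiomatic Java wants private fields plus a getter and
// setter for each -- four methods of pure boilerplate
// before the class does anything interesting.
class Point {
    private int x;
    private int y;

    public int getX() { return x; }
    public void setX(int x) { this.x = x; }

    public int getY() { return y; }
    public void setY(int y) { this.y = y; }
}
```

For what it’s worth, newer Java versions do attack this at the language level: `record Point(int x, int y) {}` generates the accessors and constructor for you, which is exactly the kind of language-side fix being argued for here.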

You say HTML isn’t a programming language… well, boilerplate isn’t exclusive to programming languages. There are websites that generate away a lot of the boilerplate you often have to write in HTML. In my opinion, boilerplate is often the result of too much abstraction: you’ve gotten so far away from the problem you wanted to address (and the average user isn’t in a position to know what the original problem was (hyperlinked documents) or how far the abstractions have deviated). Gemini was a good attempt at fixing it, but it lost all the structuring CSS granted, and it lost multimedia qualities… which are two central elements of the web. I think AI is going to exaggerate and blow out this issue of boilerplate (which, to anybody already working with Java or JavaScript, is already pretty bad).


I’m not against AI in principle, but I absolutely hate misinformation. I find it annoying that one of the first widespread uses of AI was to create that very thing, especially political stuff. There are enough weird things going on without making more up.

My trouble is that I’m older and have a good memory! When I started programming, the internet was hardly a thing, but where I worked had a very good technical library. I can’t help but compare the way things were to how they are now.

Don’t worry though, I’m not one of those people who goes around saying “we were tough in those days because we ate lead paint when we were kids.” Maybe some older people ate too much of it! :slight_smile:


Literacy is going down nowadays, so the number of people who “consult the docs”, or even have the ability to write good documentation, is also going down, as is the number of people who have a technical library; and the rate at which such a library goes out of date is noteworthy.

I find it distinctly interesting that the C programming book has remained relevant for programming C, while various books on other subjects (the various Python ones come to mind) have gone out of date and are no longer relevant. I think this is a symptom of too much complexity: the relevant knowledge ages like milk. It’s also probably a symptom of software developers not being nearly allergic enough to complexity, AND the fact that there’s no (felt) punishment for bad practice. Most of these bad practices are papercuts, generally speaking, but a thousand papercuts bleeds me dry.

You say people had it tough in your day, but if you compare how many safeguards Windows 11 has to stop people being able to program their computer (I find that the lower-level the language, the harder Windows makes it to write) to a Commodore VIC-20… it’s a bit staggering really.


I’ve been thinking about that too recently! Assuming you are referring to The C Programming Language by K&R. For me, it’s a great read for the retro programming feels alone. But more seriously, it’s still relevant now (along with other similar titles) because it’s not merely a tutorial on C and UNIX tools. It’s a guide on how to think through problems and why/how they can be solved using these tools.

In other words, it’s about digital literacy :)


I still have my copy of K&R, along with The Unix Programming Environment. A lot of people who might only know about UNIX because of that scene in Jurassic Park might not get it, but it feels like learning C and UNIX really does provide a sort of computer literacy one can’t get anywhere else.


There’s a class of programmers half-jokingly called Code Monkeys - someone who writes code quickly and without much thought or creativity. I think AI is a little like that still. It doesn’t always take into account “edge cases” and some of the strange things end users sometimes do.

It sounds weird, but programmers have distinct styles of coding. That happens because languages have various ways to do the same thing, different methods of error-checking the values passed into the program or from function to function, and so on. I think there are going to be problems in years to come when the code current AI engines produce has to be updated and the updaters can’t figure out the original programmer’s mindset.

I like discussions about the relationship of HTML, CSS and the various languages. To me, plain HTML is like using a text editor, HTML + CSS is more like a word processor. That’s changing because CSS is getting cleverer, even doing things that once required some sort of scripting, with the browsers doing much more calculation than they were once able to do.

Once you start adding scripts, you’re running on steroids because it opens an entire world of logic and processing.

A bit off the original topic…

RTFM has been a thing for decades; unfortunately a lot of people still don’t. I happen to like knowing why things are done the way they are and applying that to other problems, so I do a lot of reading.

I got rid of a lot of my own coding books. I couldn’t even give them away so took them to a recycle center. Hopefully they were repulped and didn’t end up in landfill.

I kept some though such as Newnes MS-DOS Pocket Book (1985 edition). That’s got an advert in the front for the “new” edition that covers MSDOS 5. When I first got it, the section on batch files was invaluable.

Another I picked up somewhere is Algorithms, Computation and Mathematics (1965 edition). The flow diagrams might still be sort of useful, but what little code it has is for Fortran, Algol and ASM.


AI’s current limitations are, imo, token count and lacking the context of the full codebase, plus the fact that it can only regurgitate the solutions to known problems (and solutions that are trivial build-ups of known problems). If you ask it to do anything non-trivial…

There’s also that new AI called Devin, which apparently scores highly at solving GitHub issues… at 13%. I would guess more than 13% of all issues are trivial, and that it’s those trivial issues it’s solving. Also note these are GitHub issues. Granted, this is me speculating with my finger in the air.


AI will only get better. I keep a loose watch on what the likes of Amazon, Google, and Microsoft are researching and how their interests have changed over the years.

They’re always researching everything, but for a couple of years it was focused on imaging, then robotics.

The most useful to me was when Microsoft developed the Image Composite Editor (ICE). At the time it was the best photo stitcher around, even outshining what Photoshop could do.