Finally, something I can talk about. I’ve been lurking this thread for a while waiting for an opportunity to jump in.
I relate very heavily to this point of view. It can be hard not to go doomsday mode when you see everyone around you suddenly relying so much on a machine that gives very subpar results, especially knowing there’s a better, albeit harder, way.
What I think is that it isn’t AI itself that is making people lazy or uncritical; it’s that these people were frankly never willing to put in the effort anyway. There have always existed (or perhaps capital-C Capitalism has encouraged) people who want the end product without the skillset required to make it. People who use ChatGPT to write fanfiction (which I can vouch is indeed a real thing) do not want to develop writing as a hobby, they want to read more fics of their favourite character. My university peers who use ChatGPT to summarise readings and write essays were already not putting in the effort even before LLMs.
On the other hand, I think people who want to develop a skill aren’t going to be deterred just because an AI can technically make a better-than-beginner-level product. It’s like wanting to learn swimming by watching other people… At least, I do not use AI to generate artworks because the end product is not what I am interested in (and I rarely know how my artworks will turn out anyway); what I enjoy is the process, maybe even more than the final product.
Very much rambling here, but I guess what I want to say is that whatever issues LLMs have become the catalyst for are nothing new. When you say “what if good enough becomes good enough”, the truth is that it has been this way for a while now. The AI-generated ads I see everywhere now are no more lifeless than their stock-image predecessors were. I might just be unreasonably hopeful too lol? People have always reacted this way to new developments (Google convenience triumphing over forums, basically all of web1 vs web2), thinking society will collapse, yet we are still here.
I have more personal anecdotes and thoughts I want to add but I’ve got work
yeah people were definitely still unwilling to put in the effort before (and i dont necessarily blame them, like i think making music would be cool but i dont really want to learn it so i just dont) but like. if they didnt have fanfiction of blorbo and blorbette kissing they would either pick up writing themselves (thats what i did lol) or find a real person willing to write it for them, either for free or for money, instead of just getting the random words machine to do it for them. i know someone who uses chatgpt for literally everything from beta-reading to finding new ideas and just as a friend and like. shes literally in a discord server full of other writers who would gladly do all of those things with her! but shes decided ai is just superior to humans in every way (even at things like empathy…) and refuses to even try to do these things with real people anymore.
im starting to think some of the people who use chatgpt as their bestie want a relationship where they never have to put in any work. when you talk to chatgpt everything is about you and youre the most wonderful and talented human on the planet with the greatest ideas all the time. but when youre friends with a real person they might want to like, talk about how they had a bad day because it was raining and they missed their bus, or tell you when youre being kind of an asshole, or be busy with other things… it just feels like this whole “ai bestie” thing is perfect for the people with the extreme “you dont owe anyone anything” mentality.
i did actually try one of those “your ai best friend” apps in like… 2018 i think? i dont remember the name of the app (edit: it was replika) but there were ads everywhere for it and i think it only kept my interest for like 2 days because it didnt feel like a real interaction at all, since everything was always centered on me and nothing else. idk i like when my friends tell me about their new favourite show or their cool drawing ideas
this ended up really long because i really got my train of thought going lol
the points in your last 2 paragraphs are my new AI fear LOL. I don’t ever keep up with what’s going on with Tiktok but lots of stories about AI as bff, therapist, guide, godly being (???) have popped up everywhere. I know there’s always people falling for the ‘new tech is magic’ thing but I’ve never seen it so bad with people who seem to understand that it’s just a complex algorithm. The worst one for me was about the lady who had her AI recite why it calls her ‘the oracle’. It just kept going on and on about her being the only one who can tell the truth, her actively fighting through an injustice, etc. Real ego-inflating stuff, just like you said, all about you. Call me a Luddite but that shit really creeps me out. I can’t even totally blame her because it’s a direct balm to your insecurities; maybe if I were not a skeptic I’d fall for all that bullshit just as easily.
yeaah its really made to keep people using the chatbot as long as possible in the hope of making them pay for a subscription and of course telling people how amazing and perfect they are makes them want to stay
I can very easily imagine why some people would prefer the convenience and “““privacy””” of getting ChatGPT to make their fanfic, even at the expense of a subpar product.
It does get a little more baffling though in cases like yours, where the person does seem invested in the hobby but still prefers to offload so much work to the LLM. My personal anecdote is a friend-of-a-friend-of-a-friend thing, where they were writing fanfiction using the company ChatGPT account, of all places. And some of the prompts were really long, basically an entire fic of its own! Makes me wonder whether it’s even worth it at that point to fight with the AI versus just writing it yourself.
Exactly what it is! I have heard of acquaintances and others using the AI as a therapist, and I’m not even sure about some of the more extreme examples. And they’re all people with real-life friends as well. You’d be surprised at the level of dissonance people can maintain when receiving their daily affirmations from a bot.
It’s an unfortunate consequence of these companies trying to show investors that their billions of dollars aren’t vanishing into thin air: LLMs like ChatGPT are made to accommodate the user so heavily (e.g. if you insist that something you said is right, even when it’s objectively wrong, the AI will try to agree with you rather than give an actual answer), just so people will use them for longer.
As has already been mentioned in this thread, it’s a shame, because LLMs could be actually useful tools. But if they were useful and only that, they would lose even more money than they already do.
A lot of really good points raised in this thread.
My unsolicited take on all this:
The main thing is that I dislike using the term “AI” by itself. That term is extremely vague, as it can mean so many things. Marketers know this and use it on purpose: the companies win if it’s a simple buzzword that makes their products “special”, and it misleads consumers into thinking they know how it works when they actually don’t.
Regarding LLMs and generative AI in general when it comes to the small Web, I personally see no reason to ever touch them. To me, one of the big points of the small Web is being a means of self-expression, and using LLMs or any sort of GenAI seems completely antithetical to that. Besides the numerous environmental and technical issues others have pointed out, the other huge issue is the dependence people will put on a fundamentally centralized, cloud-based service. What happens if it goes away? People won’t know what to do unless they’ve spread out where they get their information from, like various sites and forums and such.
I really think that in an ideal world we would never need LLMs even for just asking questions. It seems to me that ChatGPT and these other chatbots are a bandage solution to much deeper problems, mainly those of inaccessible documentation, unhelpful communities, and programming languages and tools being too difficult for users who are not already experts. The sad part is that HTML and CSS cover so much ground that there isn’t a simple fix. But some small things can be done to lessen the dependence on LLMs, like the knowledge-sharing methods that were shared earlier.
The only time I’d ever use neural networks or any sort of machine learning would be at a way smaller scale, trained on my own data, and for smaller tasks that would still be hard to do with any other method. Things such as line smoothing in a drawing program, or automated in-betweening in 2D animation. But never to generate entire pieces. More often than not, neural networks are a bandage solution slapped on top of a foundation that desperately needs rethinking and redesigning. Rather than hoping neural networks will help me enjoy 2D animating more, I’m rethinking the entire style and process behind how I make animations in the first place. It gets a lot more interesting and personal that way. After all, creativity comes from constraints. I also have yet to touch neural networks for any of my stuff, and I want to see how far I can go before I actually need them.
I’m very against any form of so-called AI/LLM/etc to the point I’m starting a web host for people who don’t want their stuff crawled. I hope small web/indieweb communities don’t become attached to the AI-slop folk. It pains me to see Mozilla adding this stuff to the browser.
I’m pretty anti-LLM in general. One of the reasons why I enjoy the small web space as much as I do is precisely because I know I’m interacting with other humans, not faceless AI powered bots. Social media is (and has been for some time) a dead zone. The rise of widespread ChatGPT use has only added to that feeling. I can instantly tell when something has been written with ChatGPT or some other LLM, and it immediately makes me irritated (even angry) when I see it. There is SO much LLM slop on Reddit, Substack, and other platforms I previously enjoyed… I’ve tried to aggressively curate my feeds, but the slop still finds a way in. You can only block so many accounts.
The small web has been such a nice refuge from it all, and I’d hate to see it follow in social media’s footsteps. The only way I see LLMs being an actual problem for the small web, though, is if the slop starts to take over our discovery surfaces (like forums, webrings, the Neocities front page, blogroll dot org, etc.). It’s hard enough to find good independent websites as it is, without having to wade through a digital landfill’s worth of AI-generated garbage…
I think a lot of very good points have been brought up in this thread – a really interesting read. I’m low-key less of an AI-hater than many people I interact with (it probably has a place… somewhere?) but I definitely don’t want to interact with it, despite what my (over)use of em-dashes implies (I just love all kinds of punctuation, especially if it lets me cram even more stuff into my already overflowing sentences lmao)
I’ve seen different takes on most of the issues, ranging from “the environmental issues are a bit overblown” (I personally have no clue and am not looking into it since I already don’t use AI*) to “it’s not melting our brains any more than books did” lol. I do know that a lot of the big AI companies are connected to highly unethical companies in the war industry (from my understanding at least), and that’s where it really falls apart for me. Hell no.
As for interacting with AI, I just have no interest in seeing what someone couldn’t be bothered to make. Like no, I don’t want to follow your AI-generated slop page, thanks. I’d rather be, say, on the indie web looking at something that someone took the time to make themselves, because they cared.
As for the learning thing, I think it connects beautifully to the “Question about technical skills and creativity” thread. Yup, learning kinda sucks. I agree. But as someone who’s been doing frustrating hobbies my entire life, having learned is amazing. It was really difficult to find the info I needed at first, but then I learned, and now I can find the right keywords much more easily. Or I can’t!! And I make some jumbled mess that me-in-three-years will think is horrid and be able to fix. Idk, I think I just adore the learning process too much for AI to even have an appeal.
*I did use it once for an assignment where they forced me to. I have never been so frustrated before in my life, lol
Totally agreed! There is so much value in learning something – even (and arguably especially) when it’s a struggle. You come away from the experience knowing more than you did when you started, and this is always a good thing, IMO.
The sense of accomplishment you feel when you finally figure out something that’s tricky or difficult is so good too… I love it so much, for instance, when I’m trying to figure out why a bit of CSS isn’t working or whatever, and that “eureka” moment happens after hours (perhaps even days) of frustration. You just don’t GET THAT EXPERIENCE or have that same sense of pride if you ask Claude or ChatGPT to just do it for you.
I worry that future hobbyists and/or young people who grow up with LLMs will miss out on that experience entirely, because it’ll just be a societal expectation that they’ll get the LLM to do all their creative work or to do anything new that requires extensive learning… I know my life would be a whole lot less enjoyable without the experience of learning and mastering new, difficult things.
I run Libre.fm, which has lots and lots of users. By harnessing the login information from Libre.fm, I’m able to identify a human user better than before.
At this stage my prototype uses Cloudflare as an initial layer, followed by some other data sources I’ve been given access to, and finally my own data.
A lot of it is blocking hosting companies. This also blocks some “VPN” services (let’s call them what they are: proxy servers) that use some of the same hosting companies.
I’m also blocking user agents. And by logging into your existing Libre.fm account you’ll be able to identify yourself as a human.
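For anyone curious what that layering looks like in practice, here’s a rough sketch in Python. To be clear, this is not the actual prototype: the IP ranges, user-agent strings, and function names are all made up for illustration; a real version would source hosting-provider ranges from ASN data and verify the Libre.fm session properly.

```python
# Illustrative sketch of a layered bot filter: allow logged-in users,
# then block hosting-provider IPs, then block known crawler user agents.
from ipaddress import ip_address, ip_network

# Example CIDR ranges for hosting providers (placeholder values from the
# IETF documentation ranges; a real list would be much larger).
HOSTING_RANGES = [
    ip_network("203.0.113.0/24"),
    ip_network("198.51.100.0/24"),
]

# Example substrings of crawler user agents to reject.
BLOCKED_UA_SUBSTRINGS = ["gptbot", "ccbot", "bytespider"]

def allow_request(remote_ip: str, user_agent: str,
                  has_librefm_session: bool) -> bool:
    """Return True if the request should be let through."""
    # A logged-in Libre.fm user is treated as human and skips the filters.
    if has_librefm_session:
        return True
    # Layer 1: drop traffic from hosting companies (this also catches
    # "VPN" services that are really proxies on the same networks).
    addr = ip_address(remote_ip)
    if any(addr in net for net in HOSTING_RANGES):
        return False
    # Layer 2: drop known crawler user agents.
    ua = user_agent.lower()
    if any(s in ua for s in BLOCKED_UA_SUBSTRINGS):
        return False
    return True
```

The ordering matters: checking the session first means a human on a flagged network (say, browsing through a proxy) can still get in by logging in.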
Are there any ethical concerns about using data obtained from one service to enable another in a separate space, or was that consent provided by the original user agreement?
EDIT: Ah, so in a follow-up, you provide a bit more detail. It sounds like you’re just using Libre.fm login info as an additional login method, similar to “Log in with Google.” This is different from my initial interpretation, which was that you were somehow utilizing existing user data (insert arcane hand-waving or ML) to provide a score/signal about a hypothetical new user. I’ll leave my question up for posterity, though.
Ah, no. When people sign up for 1800www.com they’ll need a Libre.fm account. When they first sign up they’ll give permission for this but it’s the only way to get an account.