What are your thoughts on AI-assisted writing processes?

I’d love to hear people’s thoughts on using AI tools to assist in writing. More specifically, I mean beyond spelling or grammar checks. I’m talking about creating outlines, reorganizing sentence structure, offering critiques and edits, and pushing towards one idea or another.

Context: I’ve come across a blog post where the author describes their extensive use of AI tools throughout the writing process. And I don’t feel good about it. What comes to mind is this Joan Didion quote:

All I know about grammar is its infinite power. To shift the structure of a sentence alters the meaning of that sentence, as definitely and inflexibly as the position of a camera alters the meaning of the object photographed. Many people know about camera angles now, but not so many know about sentences. The arrangement of the words matters, and the arrangement you want can be found in the picture in your mind. The picture dictates the arrangement. The picture dictates whether this will be a sentence with or without clauses, a sentence that ends hard or a dying-fall sentence, long or short, active or passive. The picture tells you how to arrange the words and the arrangement of the words tells you, or tells me, what’s going on in the picture. Nota bene:

It tells you.

You don’t tell it.

But I also have to acknowledge that writing isn’t everyone’s passion or interest. And for some folks, for example those facing a language barrier (my first language is Spanish, so I’m familiar with this), these tools can make communication easier and more democratic.

I’m preparing a blog post about this, and I want to make sure I am respectful of the author because it is a direct reply.

What opinions do you have about it? Any and all comments are helpful. Thank you very much in advance.

I wrote my thoughts here: Pablo Enoc - On AI-enhanced Writing

4 Likes

Aside from the general issues with gen AI, I really think it all depends on who the writer is, the style of writing, and exactly what they’re using it for. I don’t think it’s useful for any kind of informal or fictional stuff because that takes genuine humanness and creativity, but for the formal, academic stuff that the LLMs have been trained on, I actually think it’s okay to use it for organisation and critique (again, if we ignore every other problem about it).

I love to write, but I’m not very good at the informal kind. It’s definitely a passion thing for me and I don’t think AI can really help with that.

3 Likes

I used to be a copywriter/technical writer for a few years at my previous job and we HAD to use Grammarly. It was the bane of my existence: it completely removes your writing style, eradicates anything but the most basic and repetitive sentence structure, forces you to use bland synonyms, etc. The problem with that, for me, is that it did not make the text more accessible or easier to read. It made the text drone on, which made it really hard to focus, and it actually made saying things succinctly harder.

I’m dyslexic though (how did I get a job in writing lmao? don’t ask me…) so I always misspell like 100 words on the first draft and only notice them on a re-read with the help of spell-checkers. When it comes to creative writing, I think everything but a spellchecker detracts from your writing. Yes, even for outlines: an outline you synthesized yourself will still make sense to you when you come back to it, which is much less likely if something else wrote it for you.

A lot of people assume that academic writing has to be bland and simple, but I’ve enjoyed papers where an author clearly has a writing style and a specific intent with their structure much more than something that was written carelessly.

I’ve read both posts and agree with your take on this method. One thing I found particularly interesting is what these kinds of AI users count as the text versus not the text. Arun uses Claude or whatever for “fixing incorrect words, spelling, and grammar — without making any significant changes to the text”. But words, spelling, and grammar ARE the text; changing them makes significant changes to the text! Arun claims that AI does the following well: “transforming information from one format to another, rearranging ideas, offering critique”. Like you said, and I agree, this is what the thinking/writing process actually requires, so Arun is breaking one of his own principles. But more importantly to me, that is the surest way to NEVER improve in those areas. Outsourcing them to a machine denies you the learning experience. Many writers, me included, don’t like doing those parts because they are hard… and they get easier with more time spent on them. If you never do them yourself and always leave them to AI, the writing stays bland, boring, and repetitive, and the work stays difficult. I’d love to see the first draft of Arun’s writing; I suspect it looks nothing like what the LLM conjures at the end.

I wish writers like this would just say they don’t like the process of writing and are unwilling to improve or develop their own style. It’s the same gripe I have with “artists” using gen AI: they are not interested in the craft/process at all and only want the end result. But the end result is not what makes art and writing.

7 Likes

Thank you for this response!! I agree wholeheartedly. I distinctly remember the red ink of a teacher who underlined a spelling mistake I made my first year living in the United States: “we HEAR with our EAR” (I had spelled ‘hear’ as ‘here’). I’ve never made that mistake again.

I suspect there’s a fear of public failure. What I mean by this is that it’s embarrassing and humbling to put our writing out there for the whole world to read. We never know who is going to read us. For that Hacker News post (which was submitted by one of my readers), I never read any of the comments. I just know the HN crowd leans heavily towards negativity.

I absolutely love academic writing with character! Cecile Fabre and Helen Sword have great things to say about this.

Thank you also for putting into words the aspect of “words, spelling, grammar” being the text. I’m passionate about this topic because I think of myself as a reader and a writer before any other titles. Whether it’s code, personal correspondence or even comments like this on the internet, what I write under my name represents me, and I want to take full ownership of it.

2 Likes

If someone is describing their practices this way, I take it as an indication that something is missing from their relationship to writing communities.

Maybe they don’t know anyone who has the requisite skills to help them with these tasks. Maybe they know someone, but they’re too insecure to ask. Maybe they know someone and would be willing to ask, but they want someone who’s available 24/7 and that describes nobody. Maybe they think all writing nerds are stuck up and impossible to work with. I don’t know. All I know is that something has ruled out the possibility in their mind or otherwise discouraged them from talking to people.

A lot of people haven’t read much of it. In the humanities there’s a ton of writing that goes for the throat.

Anyway, I’m reminded of this recent research study where people relied on LLM-generated text and didn’t recognize how it was influencing them:

Biased AI [sic] writing assistants shift users’ attitudes on societal issues

Writing assistants powered by large language models (LLMs) are increasingly used to make autocomplete suggestions to people as they write text. Can these AI [sic] writing assistants affect people’s attitudes in this process? In two large-scale preregistered experiments (N = 2582), we exposed participants writing about important societal issues to an AI writing assistant that provided biased autocomplete suggestions. When using the AI assistant, the attitudes participants expressed in a posttask survey converged toward the AI’s position. However, a majority of participants were unaware of the AI suggestions’ bias and their influence. Further, the influence of the AI writing assistant was stronger than the influence of similar suggestions presented as static text, showing that the influence is not fully explained by these suggestions increasing accessibility of the biased information. Last, warning participants about assistants’ bias before or after exposure does not mitigate the attitude-shift effect.

1 Like

Agreed on all counts!

Regarding how some tools make text blander, ostensibly in service of its accessibility: there’s this lovely art piece called new legibility which points out that something being enjoyable to read increases its legibility, and it illustrates that point beautifully.

1 Like

Yeah, I agree with all these points. (And I like that Joan Didion quote; I hadn’t read it before.)

I also simply don’t see the appeal of using “AI” this way. Why would I want the things I write to be a bland smear of words that are statistically most likely to be near one another? Even if I had cause to write marketing copy, which LLMs are so often used for these days, I’d want it to stand out rather than be derivative.

I’ve seen a few folks point out that there seems to be a focus on “ideas” above execution in some tech spaces. It’s a belief that having an idea (for an app, for a blog post) is the important part of making something. I’m not sure how widespread this mindset is, but it feels like generative AI would feed into it. Think of an app idea, a blog post, or a meme, and the “AI” will make code suggestions, organize concepts into an outline, or generate it.

2 Likes

i think as long as ai content isn’t copy-pasted verbatim at length (e.g. grammar checks, single-sentence clarity rewrites and stuff) then i’m fine with that.

If you’re not good with English, write in your own language. We need more variety anyway.

i’d like to address this point in particular though: English is, at least right now, the international lingua franca. Something written in English versus not in English reaches vastly different ranges of readers; otherwise, why would students in most countries have to learn it? Be honest: do you often seek out blogs in languages you don’t speak and read them through Google Translate? Are you reading this very paragraph through Google Translate right now? How does it look? My point is: don’t tell people, in English, that they can just not write in English. It has a bit of that “oh, is living a bit hard for you? then you could just not live” flavor to it. All I’m saying is, as people who speak English and communicate in it without barriers, when someone whose English isn’t great tries to communicate with us in English, no matter how they produced that English, we should not tell them “hey, you can’t use this or that tool; either stop writing in English or go back and study it.”

well that was ranty, feel free to ignore it, but, if you misunderstand me because google translate wasn’t very accurate……

3 Likes

The point of language is to communicate. Eloquence may be appreciated, but it should not be the primary driver of language choice for everyday communication.

Come on now, that’s a little too testy, don’t you think? @Tofutush has a reasonable point.

And yes, I did have to use Google Translate to read that, my Chinese is limited to “xie xie” (I think that’s the right romanization?). :joy:

I just finished an academic paper today. For funsies, I wrote it from scratch myself AND had Cursor Composer 2 write a second version of it. The AI version was inaccurate, misleading, stilted, and just not at all fun to read. I will not be experimenting with AI “writing” again.

The thing with writing outlines is that it’s not just laying a blueprint on paper for your words. It’s also structuring your views in your head, and maybe even teaching you something about what you’re writing when you think deeply about how to organize your words. You lose that when an AI writes an outline for you.

2 Likes

You’re the second person in this topic who has gone out of their way to take my post completely the wrong way. I’m out.

If you rely on a machine to do your thinking for you, you will forget how to think. Writing, in particular, is something you only get better at by doing. Using AI in any step in the process means that you aren’t learning how to do that step.

I agree wholeheartedly with this. In fact, I think it applies to non-creative writing too. GenAI lies. It makes up stuff that sounds good but isn’t factual. If you use AI in your paper/essay/study or whatever, you risk tainting it with these falsities.

AI writing is also the only writing I’ve encountered that doesn’t have a style or voice, which has become its style and voice. When I say “this reads like it was written by AI”, what I mean is that anybody could have written this. There is no personality or creativity. It is soulless and reminds me vaguely of the prose of very new writers who haven’t figured out how or what they want to write yet. AI writing is bad writing. Why would you take critique or advice from something that writes worse than you? Even beginners have better storytelling skills.

My little sister is a perfect example. Her writing skills aren’t the best. She doesn’t quite have a handle on how to make her prose sound good or how to pace scenes or anything so complicated as that. Really, she just knows the most basic rules of grammar. But the story itself is interesting. You can already sense a bit of what her style will be like after she polishes it. Even poorly written, it’s more intriguing than a lot of things I’ve read, both human and AI. And every time she has me read the updated draft, she gets leagues better.

Improving your writing isn’t difficult, in my opinion. It’s literally about volume. Just straight up how many hours you’ve sunk into writing and rewriting and editing. I feel like people who outsource those hours to AI just… don’t like writing. Which is okay! But I’d like it if we stopped pretending it’s something else.

1 Like

I just read that post of yours, and I feel quite moved by the way you phrased your plea to embrace one’s own very human, even when faulty, original writing. If there’s negativity about the post with no actual meaningful critique or input, so long to those guys, you know? I think getting over the embarrassment is key, especially if it leads to you being open to accepting help. I remember getting riled up over any small critique of my writing in school, because I was kind of the literature/grammar kid, so it was like I had to carry that banner everywhere. English is actually not my first language, but it’s the language in which I have dreams, nervous breakdowns, inner monologues, etc. LOL, so it’s my most primal language. If people say I don’t write well in it, it’s like I have nothing… but that’s not true. Everyone makes mistakes, even in their best crafts, and the same is true even for something that seems simple, like a blog post.

I see what you mean! I have the same but I usually feel like I’m an artist first, no matter what. And maybe a cinephile ahah!

True! I wouldn’t have read much of it after uni either, tbh, but did because of my job. That’s a very interesting study, going to have to read that one… studies like that always remind me of how easily we are swayed by what we assume has authority or more knowledge than us, even if there is no evidence of it.

1 Like

very cool link, thank u! made me feel like going thru a bit of a choose-your-own-adventure thing ahah

1 Like

You could, y’know, explain your position and the context you meant. I’m not sure why you felt the need to delete your posts, especially since I didn’t take issue with all of the content, just a single sentence.

This sums up my feeling about AI assistance in any kind of creative work (both writing and visuals). I don’t believe in the “you have to suffer for your art to be good” idea, but taking away the challenge completely cheapens the experience for me?

I’m not the best writer either (my blog posts can be very disorganized), but I do feel like when someone reads my blog, they get a very true representation of what it might be like to speak with me in person.

I’m also not interested in reading writing that is AI-assisted. More than anger or resentment, I just feel disappointment.

2 Likes

I’d rather be a bad writer on my own merits than put my name to an LLM’s idea of “good writing”. I didn’t spend 30 years developing what voice and craft I possess to let a mere computer act as my ghostwriter.

4 Likes

At this point there have been many examples on this front. I’ve been collecting some of them at the Machine-Generated Garbage Hall of Shame.

@enocc I saw that some of the responses to the powRSS poll asked about where to draw the line, re: what is or isn’t LLM “assisted,” and while I think “assisted” is the wrong framing to use, the spirit of that question is something I would address like this:

Don’t Ask [Chatbots] for Plant Advice is an article that contains bot-generated text, in that the author quotes directly from the output of a chatbot, while labeling it as such, as part of an evaluation of the chatbot’s output quality. To me, this lies on one side of the line. On the other side of the line lies the kind of blog post in which the blogger has treated prompting a chatbot as part of the writing “process.”

The thornier question is how to tell when it’s not disclosed, but that question is moot in cases where the blogger just outright says it.

2 Likes

When I say organisation, I don’t mean outlines. I mean advice on structure. AI shouldn’t be doing any actual writing!

As with anything AI-related, it depends on how precise and detailed your manifest is and how specific the task is. It works well with hard science (and even there they use a totally different type of AI), but anything that falls outside the realm of strict definitions (in astrophysics, a star will always have a discrete array of meanings) will inevitably have some corners cut and meanings twisted.

I’m not a professional writer, but I do believe even a grammar fixer can significantly harm the meaning and readability of a text. Even human grammar corrections can stunt the original feeling of a text. I remember when I joined a poetry contest in high school, I asked my teacher to check my work first; she saw some awkward grammar and changed the ending of a haiku to make it more correct. That completely changed the meaning. It was not what I intended to say, and I hated it with a passion. If this is how it works between humans, I reckon it gets worse when a machine is asked to replace one of them.

2 Likes