The way the post is written, I wonder if the author is working for a company going through a growth spurt and where, through sheer size, everything is becoming more "corporate".
There's a huge difference between having AI clean up a text you send privately to someone you have worked closely with for years, versus a broad-spectrum text sent by a VP to hundreds of people or more. The first case is reprehensible, for the reasons the author lays out. But as for the second case, corporate doublespeak has been a meme since long before the advent of AI, and it would remain even in some AI-pocalypse. Just because your boss puts out sanitized language in a mass communication doesn't inherently mean your boss won't still be present and real with you in a more private setting.
borski 2 hours ago [-]
I find that AI is very useful for getting me past the 'blank page' writing block, but inevitably it writes in ways I would never, and so I end up editing it heavily. But, for me, a boy with ADHD, editing something is infinitely easier than writing it from scratch.
I think this is the opposite of how most people tend to use LLMs, and I actually think my way is the "better" way. My issue has never been the act of writing well, or clearly expressing what I mean... it has been the inertia of putting words on a page at all.
(and an LLM had nothing to do with this comment :P)
moondance 19 minutes ago [-]
I can relate to the inclination, but so many new insights and moments of inspiration are necessarily confined to that painstaking iterative line-by-line process of real writing. When you are simply prompting and editing, you will fill the page (and it might even sound like “you”), but you will not have that delightful experience of encountering something unexpected along the way to filling it.
glitchcrab 39 minutes ago [-]
Yes this is my use-case for it too - it's great to generate a structure which I will keep but I always end up reworking all the actual content so it sounds like me. It is a great way to get past the 'getting started' hurdle though.
Nashooo 49 minutes ago [-]
You're the first to articulate my exact use case with AI as well! It really helps get me in 'the zone'. I actually dictate now, have the AI rewrite it, and then start editing. To lower the barrier even more.
santamex 45 minutes ago [-]
I heavily use LLMs for internal communication.
I receive a dozen requests per day from colleagues asking me very specific things by mail or Teams: about processes, setups, master data, my particular experience with approaches, contacts within our big corp, or just general knowledge questions and how I would recommend tackling certain problems. Setting up conditions in SAP, where to find certain info, or just sending them current setups. They also ask me for strategic advice. I use my personal knowledge base to automatically prepare draft answers based on previous answers to other colleagues. Before the LLM era I could barely help all of them. I got x-times more productive. I then digest the emails back into my knowledge system.
People have no problem receiving obviously LLM-written answers. But because of the particular domain knowledge, they know it can only come from me.
Excuse my writing, this did not go through the same system :)
Edit:
And now I forgot the most important part. When the knowledge the LLM retrieves is insufficient to answer a colleague's question, or the agent can't execute the requested task, it asks me just for the missing info or skill, and with me (the human) in the loop the work gets done x-times faster. Eventually it will replace me and all my colleagues one day. Looking forward to doing other stuff then.
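The retrieve-or-ask-the-human loop described above can be sketched very roughly. This is not the commenter's actual setup (which presumably uses embeddings and an LLM); it is a toy stand-in using only the standard library, and every question and answer in it is invented for illustration:

```python
import difflib

# Toy stand-in for a personal knowledge base of past Q&A pairs.
# All content here is invented purely for illustration.
knowledge_base = {
    "How do I set up pricing conditions in SAP?": "Use transaction VK11, then ...",
    "Where is the vendor master data kept?": "Check the shared MM drive, then ...",
}

def draft_reply(new_question, threshold=0.6):
    """Return the answer to the most similar past question, or None if
    nothing is close enough -- the 'ask the human for the missing info'
    case the commenter describes."""
    def similarity(past_question):
        return difflib.SequenceMatcher(
            None, past_question.lower(), new_question.lower()
        ).ratio()

    best = max(knowledge_base, key=similarity)
    return knowledge_base[best] if similarity(best) >= threshold else None
```

A real version would swap the string similarity for semantic retrieval, but the control flow (draft when confident, escalate to the human when not) is the same.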
pmoati 1 hours ago [-]
I totally agreed with you. I'm French (nobody is perfect ^^), I'm not so fluent in english and I'm dyslexic, that why I often write my message, then I ask to Claude to translate it in english because i'm feeling I will lose the credibility of my message if there is too much mistake...
But you're right, so this message is not translated by LLM :D
vaylian 19 minutes ago [-]
> I will lose the credibility
There are grammatical mistakes, and then there is sloppiness. Only the second makes me disregard someone's comment.
> I will lose the credibility of my message if there is too much mistake...
The correct way to write this is "if there are too many mistakes", because mistakes are countable and plural. And it's fine to make grammatical mistakes if English is not your native language. You can only get better by practising :-)
Mordisquitos 20 minutes ago [-]
I'm curious, why would you use an LLM to translate French to English? Why not use a dedicated translator such as DeepL, which will not only save you tokens/energy, but will also be much closer to your personal phrasing?
arjie 4 hours ago [-]
I really don't mind text filtered through an LLM per se. But I prefer high signal-to-token so to speak. The way humans talk and write means that the seemingly extraneous text they add often provides an interesting insight into the thought patterns of the person, and therefore mistakes or even pointless monologues can be interesting.
This is not always true. Once there was an online reaction to short content that made people treat "long-form" content as desirable entirely due to its length. I rather like reading books and the New Yorker's fiction section when I still subscribed, but much of this "long-form" content was token-expansion of a formulaic nature which I did not enjoy. LLMs have mastered this kind of long-form token-expansion.
This is assuming people are using an LLM in good faith, obviously. One day, perhaps LLMs will learn to express what someone is saying in an elegant way that is enjoyable for people like me to read. But even then, I will have difficulty distinguishing whether this is a human speaking through an LLM in good faith or a human who has set up a machine to mimic a human.
The latter is undesirable to me because I have access to the best such machines at a remarkably low cost. Were I to desire a conversation with an LLM, it is trivial for me to find one. I'm not coming here for that[0].
A sufficiently insightful LLM which prompts my thinking in certain ways wouldn't be unwelcome to me, I suppose. There are a couple of friends whose posts I still go on Twitter to read, even after I stopped using the site routinely. If I found out the posts were entirely an LLM, I think I would still read them, simply because I find them useful and sufficiently high signal-to-token.
0: Certainly, if every place only spoke about things I was interested in and never in things I was not interested in, I wouldn't need separation of interest spaces at all. But the variation of interest vectors for different humans has made this impossible.
chistev 1 hours ago [-]
> The way humans talk and write means that the seemingly extraneous text they add often provides an interesting insight into the thought patterns of the person, and therefore mistakes or even pointless monologues can be interesting.
dexterlagan 26 minutes ago [-]
I used to use LLMs to 'clean up' my own writing, and in the end I agree with the author here: it doesn't really help. The reader gets this impression of 'too perfect' and comes away with a diminished feeling of value, of honesty. I think we would benefit from a standardized way of signaling text and content that is exclusively human. Say, some sort of logo that says 'genuine', 'untouched by the hand of AI'. I'll be thinking about a way to do this.
stingraycharles 5 hours ago [-]
Yeah, some colleagues started using ChatGPT for internal communication as well. While we don’t like to mandate or prohibit anyone from using any tools, we did need to make it really clear to everyone that this is not productive. Using Grammarly to make small corrections in messages to external recipients is fine. Using ChatGPT to “polish” your message is not. If you’re not sure about your English abilities, we offer you free English lessons and encourage giving each other feedback during chats.
LLMs shouldn’t be used for communication at all if you want any form of authenticity.
faangguyindia 4 hours ago [-]
You can go one step further and let users write in their own language, then figure out how to make sense of it yourself.
em-bee 37 minutes ago [-]
i do that when i don't trust the person's ability to translate to english without error. if they are using a tool to translate to english, then i might as well use that tool myself, with the benefit that i then also have the original untranslated message and can use it to get a second opinion if the translation doesn't make sense. if all i have is the translation then i am stuck with that.
citizenpaul 4 hours ago [-]
The hard truth is that at work there is no authenticity.
stingraycharles 4 hours ago [-]
Definitely not getting any better if everyone starts using ChatGPT for private communications.
ares623 4 hours ago [-]
Bad thing X has been happening for a while. Let's all work towards making it worse.
add-sub-mul-div 3 hours ago [-]
Why play this word game that has nothing to do with their point? I can write an email about TPS reports in my own voice without caring about the subject matter. That's authentic. I care about performing my job well and with individuality and (no pun intended) agency.
eterevsky 1 hours ago [-]
I don't often use AI to clean up my texts, but when I do, I fully own the output. I make a conscious decision about whether to keep each AI suggestion. The final text _is_ what I want to say.
sutib 58 minutes ago [-]
The point of the article is that it is not what you would've said.
Even though you take responsibility for the result, you were never 100% the origin.
em-bee 42 minutes ago [-]
and the reason for that is that we passively understand more than we actively use, but when reading something we often can not distinguish our active and passive knowledge of an expression. so when you read a filtered text, it will sound fine because you are familiar with the expressions used, but you don't realize that some of those expressions are not actually in your active vocabulary.
This is starting to become my latest pet peeve, people using Claude to write their messages in Slack. I'm going to just stop communicating via text with these people.
It's one thing to have Claude polish a message and another thing for it to write out an entire message.
ahf8Aithaex7Nai 3 hours ago [-]
That’s exactly why I’ve refused to use autocomplete on smartphone keyboards from the very beginning. I want to express myself in my own words.
In a work context, of course, things are a bit different: I want to move the project forward and not jeopardize my future paychecks. Authenticity tends to take a back seat there. However, I’d be more concerned about inefficiency. Is it really necessary to run every piece of communication through ChatGPT to refine the wording? Are you sure nothing gets lost in the process? Doesn’t that end up wasting a lot of work time without adding any real value?
And on top of that, it leads to alienation and frustration. If you talk to me as if you were an LLM, don’t be surprised if I talk to you as if you were an LLM.
DrammBA 4 hours ago [-]
It feels so disrespectful sometimes too, having to read a long paragraph that conveys so little meaning knowing full well the original prompt was probably very short and I'm now wasting extra time parsing the hollow LLM text expansion.
stingraycharles 4 hours ago [-]
Easy fix: use an LLM to summarize it.
(only half-joking, a part of me fears that this is the reality we’re moving towards)
mrwh 4 hours ago [-]
That's absolutely what's happening already: write for me for the writer, summarise this for me for the reader. At some point it will become clear how absurdly wasteful we're being (right now, we're being paid to ignore that waste).
devsda 3 hours ago [-]
> write for me for the writer, summarise this for me for the reader.
It's funny though. For computer-to-computer conversation, we invented (deflate+inflate) algorithms to save bandwidth, time, and money.
On the other hand, for human-to-human communication, we are in the process of inventing an (inflate+deflate) method, and at the same time we are spending insane amounts of time, money, and bandwidth to make it possible!
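The computer-to-computer half of that contrast really is lossless by design; a minimal sketch with Python's zlib (the message text is just an invented placeholder):

```python
import zlib

# deflate: shrink the payload before sending it over the wire
message = b"Status update: the quarterly numbers are in and look fine overall."
compressed = zlib.compress(message)

# inflate: the receiver recovers the exact original, bit for bit
restored = zlib.decompress(compressed)
assert restored == message
```

The LLM pipeline being joked about runs the arrows the other way — expand on send, summarize on receive — with no guarantee that what is "restored" matches the original intent.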
rogerrogerr 3 hours ago [-]
We need to come up with a catchy buzzword salad to market to executives. Something like "increased communication efficiency between workers by direct brain-email-brain interface"
devsda 3 hours ago [-]
Imagine going to work or a social meeting where everyone looks and sounds the same (or nearly so), all with the same perfect tone, body language, and communication style. Sounds like a nightmare, and I would find it hard to relate and get that "perspective" when there is nothing to differentiate one person from another.
I guess everyone using LLMs for text is similar to that. If everyone uses the same LLM style, it's hard to understand where the other person is coming from. This is not a problem for technical and precise communication, though (the choice of LLMs in that context has other risks).
It is also not strictly an LLM capability problem, because they can mimic or retain the original style and just "polish" with enough hints, but that takes time and investment, and people take the path of least resistance. So we all end up with similar text with the typical AI-isms.
There are other reasons to dislike LLM text, like padding and effort asymmetry, that have been discussed here enough.
altilunium 1 hours ago [-]
Last time I did that, I got pointed out as an ESL and got insulted and laughed at.
tbossanova 1 hours ago [-]
Sounds like terrible people. I’ve worked with plenty of people who didn’t start with English and if you give them time they usually excel
Scrapemist 3 hours ago [-]
When I wrote a snarky mail to the MD and I couldn’t suppress my anger, Claude did a great job smoothing it out while keeping it pointy.
Scrapemist 3 hours ago [-]
Once asked Claude to guess what the prompt was that generated a mail.
Didn’t work unfortunately.
quectophoton 3 hours ago [-]
I think there was an SMBC comic about this topic, but I don't think I can find it, and the site doesn't exactly make it easy. I don't even remember if it was pre-2020 or not.
It was about how people would get a thing (a robot?) that would repeat whatever they said but in a fancier way (or something along those lines), to make them sound smarter. Then people would start depending on these robots to communicate at all, to the point that their speech degrades and they start making unintelligible noises that the robots still translate into actual speech.
EDIT: Found it, from 2014: https://smbc-comics.com/index.php?id=3576
There are two ways to write an email. One is to make it so short and to the point that there are obviously no errors; the other is to waffle on and obfuscate the message with an LLM so that the reader's eyes glaze over... or something like that.
Havoc 4 hours ago [-]
In emails...whatever. I can tell it's there but fine whatever, we're just trying to get a message across LLM or otherwise.
But this was the first year I saw it in performance review write-ups which frankly was jarring. Here is feedback supposedly 1:1 that massively affects this person's life and their perception of "worth" so to speak...and it's just AI.
Notably, it was split by geography: EU countries were closest to organic, India was a slop trainwreck, and the US was in the middle.
Sorta made me conclude "ok i guess that's the end of performance reviews that vaguely mean anything & actually get read"
anal_reactor 2 hours ago [-]
I use ChatGPT for communication. It started with "please fix typos" and now it's "write me a slack message about this and that". This is mostly an effect of the communication environment we created - taking risks is rarely rewarded, and mistakes can be very costly. Remember, you're always one misunderstood message away from being fired. Of course there are people whom I trust and I'd never offend them with AI-generated slop, but the rest of the humanity - it is what it is, LLMs help me a lot.
borski 2 hours ago [-]
> Remember, you're always one misunderstood message away from being fired.
If this is true, you really want to be fired. That is a horrendous work environment, and you should quit if at all possible.
Most workplaces (and certainly any good workplace) will seek to understand, not fire you immediately.
anal_reactor 57 minutes ago [-]
Blessed are those who haven't worked corporate.
bigstrat2003 31 minutes ago [-]
I've worked corporate jobs all my life, and I was never one misunderstood message away from being fired. Instead they would've talked to me and, even if they figured it was my fault, they would've given me a warning since it was the first time. No worthwhile employer is firing people for the first offense, corporate or otherwise.
sapphirebreeze 5 hours ago [-]
[dead]
rexpop 5 hours ago [-]
> It robs me of getting to know you.
Ugh, you are not entitled to get to know me. There is a threshold between all that I share with the world and the rest of me. Hell, not every person gets the same picture, and that's deliberate and healthy--my customers don't get to know what my proctologist knows. My mother doesn't get to know what my wife knows.
You don't get to know all of me, because I don't trust you.
This post comes across as sweet, and innocent. It also comes across as absurdly self-entitled, and it's not an OK posture to take towards the world. It's not OK when the police take this posture, it's not OK when private companies take this posture, and it's not OK when strangers on the internet take this posture.
You are entitled to withdraw from relationships that don't fulfill your emotional needs. A reasonable audience for this missive is your girlfriend, your child (who relies on you), or your employer (to whom you are vulnerable).
applfanboysbgon 4 hours ago [-]
Weaponised therapy speak is gross. This article was not asking you to spill your life story to every person you meet, it was asking you to speak with your own voice, which is a perfectly normal and in no way entitled thing to be asking.
stingraycharles 4 hours ago [-]
What are you rambling about? It’s not about your doctor using ChatGPT for his newsletter, it’s about your colleagues using ChatGPT on Slack or email.
I personally think that the people who can’t be bothered to actually write authentic messages, and assume that everyone will just read their word salad full of repetitive AI patterns, are the ones acting entitled.
lich_king 2 hours ago [-]
It is, because of the baked-in asymmetry. "I couldn't be bothered to write it, but you have to read it". Unless your expectation is that I'm going to have my chatbot summarize the messages from your chatbot, in which case, maybe we should just both ride off into the sunset.
ryandrake 3 hours ago [-]
I'm so tired of hearing that word online.
True: Nobody is entitled to be treated nicely. Nobody is entitled to an open, friendly relationship. Nobody is entitled to get to know you. If we only did what we were entitled to do, and received what we were entitled to receive, the world would be an even shittier place than it already is. We have enough people walking around with the "You're not entitled to me being nice, so I'm not gonna be! nyaaaaa!" attitudes.