Conversation

when I find out someone's been using an AI assistant to compose replies to me they drop to the bottom of the queue of people I will talk to

I don't need someone to disrespect me like that. use words you mean

I already get enough information. I get too much information. Too many words.

I don't need more words. I need meaningful words, words that matter to me.

Don't waste my time, don't waste others' time. Say what *you* mean when you talk to someone.

If you're using it to translate things you've written, that's different. That one's ok.

@cwebber i actually don’t know how to choose words that get interpreted by other minds as what I mean. this seems like a minefield

@bri_seven if you are doing your best with your own words, that's better than loading in something which uses its own words, where you're even less sure they're yours

@cwebber I’ve adopted a principle at work when people want to show me something they produced using an LLM: I will not be the first to read the output. If you used an LLM to produce something, have you read it and decided it’s good enough to share with me? If you can’t be arsed to read it, I can’t either.

It is shocking the number of people who take LLM output, copy/paste it, and don’t read it.

I got a really strange reply on Bluesky, and I think it's worth quoting, because it's worth refuting:

> one of the biggest problems in human communication is that people (without assistance) say things they do not mean. it's one of the primary forms of miscommunication out there. AI assistants can, actually, help ensure people _do_ say what they mean, while also anticipating how others may take it.

(cont'd)

Strange reply about AI, continued:

> in this sense what people do with LLMs is not meaningfully different from translation, in fact it is all translation between idiolects and sociolects, colloquial dialects and prestige dialects. there's no linguistically sound distinction between these forms of translation.

(ok, now to refute it)

My reply:

The AI doesn't have access to your internal thoughts to translate from.

If I ask an AI to translate a book, but I keep the book closed, and it can only look at the cover, it can't translate it unless it already knows its contents.

It can guess, but that's not the same.

https://bsky.app/profile/dustyweb.bsky.social/post/3lonsg4gxps2k

So, it's true that communication is largely, even primarily, translation between different mediums of information. Even turning your thoughts into the words from your mouth is translation.

But it's fully strange to say that AI generating thoughts "for you" is "translation" from a source material you cannot examine.

This is like the weird thing of an "AI representation of a deceased defendant" appearing in court.

What absolute, dangerous nonsense. Complete misunderstanding of life, ideas, communication.

I removed the links to the post because the original poster felt linking to it was dogpiling, and fair enough I guess.

But I am troubled by this line of thinking, and I think it *is* a line of thinking people are going down.

@cwebber unrelated to the thread but somewhat related to what that person was saying: is there a reason why you boost your own self-replies? i think the overwhelming majority of fedi apps will show self-replies in timelines, so boosting yourself just makes your posts show up twice in a row, which is unnecessary. (i can understand doing it on bluesky, because bluesky collapses reply chains to max 3 or so, but fedi largely doesn't do that...)

@trwnh I boost my own replies so you see it twice because then it means the reply is double good

@cwebber If you can’t be arsed to write it, why should I bother to read it?

@paco

As a large language model, I can't advise on work practices, but...

😏

@cwebber

@cwebber @trwnh The lack of algorithm means mastodon's feeds are dominated by nature's most implacable algorithm, time. idk if I want to make a general case for this, but I like it when some of the people I follow boost or edit their posts because, if they didn't, I wouldn't see the posts. This is especially the case when the people are in different time zones (i.e., most people).

@cwebber generally, yes. However, I have a (probably, IMO) neuroatypical friend who has always been flummoxed by the unspoken social dynamics of even the most cursory exchanges. She says she regularly spent an hour or more drafting each email, often googling words or phrases along the way to avoid social pitfalls, and still had blowback on a regular basis because she had misunderstood the between-the-lines messages. She has fully embraced LLMs for email. She says it is a massive relief: she spends far less time crafting or worrying about emails, and gets less negative response from others, directly or indirectly.

Asking ChatGPT to reword "your budget summary makes no sense and I think you need to read chapters 3 and 9 of the manual again" in a way that doesn't have negative job consequences has been a game changer for her.

@cwebber oh, that is kind of what my friend does, actually: translate from Direct to Corporate Polite Speak.

@guyjantic @cwebber The solution here is not to rephrase with LLMs, but rather to abolish Corporate Polite Speak and always say what you mean as directly as possible.

@noisytoot @cwebber
Well, yes. However, (a) I don't think that is within her ability to change before Monday, and (b) she has emails to respond to on Monday.
