Sometimes you have to use complicated terms because you’re dealing with complicated ideas…
Other times it’s clear that the authors are just trying to pad the length of a paper and sound more pompous.
In Brazil we call this “enchendo linguiça”, which literally translates to “filling sausage”.
Yeah, I occasionally get shit in random places for using bigger words when they’d actually take multiple sentences to replace.
But there are a fucking lot of people who use big (or obscure) words purely as signaling that they’re smart, rather than for communication. And it’s usually really obvious to people with better vocabularies (or a better grasp of the jargon in a specific field) that those people don’t know what they’re doing.
If, after looking up a word, the rationale for the word choice doesn’t become understandable on at least some level, it’s probably nonsense. (There are some super smart people who just don’t know how to communicate, though, and assume the word is as simple to everyone else as it is to them.)
Look at you, communicating with a purpose. I’m just trying to make my essay 1500 words.
I was absolutely terrible at that. My homework grades and completion rates were really bad precisely because of shit like that.
“You gave me a 500-word question. I can’t make 1500 words out of it. If I’m going to fail anyway, fuck turning it in.” (No, no part of that approach was intelligent, but I just couldn’t fill space with obvious trash. My brain would shut down.)
“Don’t use a five-dollar word when a fifty-cent word will do.” - Mark Twain (attributed?)
Love that. At work we try to minimize words as much as possible.
Key reasons come to mind:
- global audience; people need to run it through a translator.
- some people we work with are dumb as doornails and ideas need to be simplified.
- no one wants to read five paragraphs for a simple “we don’t know what color you wanted.”
- ain’t nobody got time for that.
Those “stretch the paper as long as possible” skills are useless in the real world.
ChatGPT in a nutshell
Well, sort of
It does generate nonsense, but unlike Calvin, ChatGPT generates nonsense based on nonsense sample data, so Calvin’s is still better.
I know hating ChatGPT is trendy, but while I think this AI boom is absolutely idiotic and LLMs aren’t suitable for a lot of the things people try to use them for, there’s a real tendency for people to make it seem like everything about them is garbage. Pretending that even their training data is “nonsense” is just silly.
It’s not “trendy”; if anything, liking it is “trendy.” Hating it is the educated stance.
Clearly you failed to understand the “prompt”: the context in which we’re discussing this is supposed to be about intentionally creating nonsense.
Did you read anything I said past that part or did you just want to get your petulant downvote in?
I literally fucking said the boom’s idiotic and there are a lot of problems with the technology, but blindly pretending everything about it is shit is as idiotic as pretending it’s good for everything. What is it with people’s inability to have an honest fucking argument? “Their ‘sample data’ is nonsense” is bullshit and you fucking know it. “Sample data” isn’t even a fucking thing in this context.
I made a couple edits about a minute after I hit reply, so as to respond to your concerns before continuing to insult your intelligence further.
Furthermore, Calvin is well aware that he’s talking nonsense.
Yeah, but that hardly affects the quality of the output, except in some fringe contextual cases.