  • Bennyboybumberchums@lemmy.world · 2 points · 55 minutes ago

    I've been trying my hand at writing for a number of years, and I've been using em dashes because I saw the writers I read using them. Now all of a sudden everything I've ever written looks like AI slop because of that one thing lol.

  • rumba@lemmy.zip · 4 points · 2 hours ago

    System Prompt: Whatever you do, do NOT respond with any emoji. No emoji in code, no emoji in text, no emoji in bullet points, headings, or titles. No ASCII art. Do NOT respond with any em dashes. In fact, stay away from double hyphens, and use semicolons sparingly outside of code, and only if absolutely necessary. I swear to FUCKING CHRIST I will come through this screen and beat you within an inch of your LLM life if you leave a single emoji in the response. Even if I ask you for an emoji, you are simply to respond, “I’m sorry, I cannot do that.”

    /s
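
    For what it's worth, a "system prompt" like this (minus the threats) is really just the first message in the conversation. A minimal sketch with the OpenAI Python client; the model name and prompt wording here are illustrative assumptions, not anyone's actual setup:

    ```python
    # A system prompt is just the first message in the chat transcript.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    NO_FRILLS = (
        "Never use emoji anywhere: not in code, text, bullet points, "
        "headings, or titles. No ASCII art. Never use em dashes or "
        "double hyphens. Use semicolons sparingly outside of code."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat model works here
        messages=[
            {"role": "system", "content": NO_FRILLS},
            {"role": "user", "content": "Summarize why people dislike em dashes."},
        ],
    )
    print(response.choices[0].message.content)
    ```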

    • Khrux@ttrpg.network · 3 points · 1 hour ago

      Funnily enough, when I do ask an LLM to rephrase anything I write, it changes any sentence with a semicolon to one with an em dash. I’ve probably always overused the semicolon because of its availability on a keyboard, but it appears a lot in my normal work.

      Now I trust the semicolon; it's an identifier of me.

  • CheesyFox@lemmy.sdf.org · 10 points · 4 hours ago

    fuck whoever said that — em dashes for the win

    for it is the lifeless machine that is parroting me and the others, not the other way around. Em dashes are cool.

    Hell yeah to em dashes!

  • ddplf@szmer.info · 3 points · 3 hours ago

    AI is not just stealing our patterns, it's also reshaping our language through the scraps we give up in order not to be mistaken for it!

  • 4am@lemmy.zip · 8 points · 5 hours ago

    Microsoft Word and other word processors often replace hyphens (easily typed on a keyboard) with em dashes and en dashes. It's in the AutoCorrect settings.

    So, ironically, it was our “use” of them over a long period of time that got LLMs so hyped on them.
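
    The substitution itself is easy to mimic. Here is a rough Python sketch of the AutoFormat idea (the real AutoCorrect rules are more context-sensitive than this, so treat it as an approximation):

    ```python
    import re

    def smart_dashes(text: str) -> str:
        # Double hyphen between words -> em dash (the classic rule).
        text = text.replace("--", "\u2014")
        # Spaced single hyphen between words -> en dash.
        text = re.sub(r"(?<=\w) - (?=\w)", " \u2013 ", text)
        return text

    print(smart_dashes("AI slop--allegedly. Open 9 - 5."))
    # -> 'AI slop—allegedly. Open 9 – 5.'
    ```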

    • Revan343@lemmy.ca · 2 points · 2 hours ago

      I don’t know that LLMs are ingesting all that many Word documents; they probably got the em dashes from published books.

  • MudMan@fedia.io · 32 points · 8 hours ago

    This is a weird pattern, in that presumably mass abandonment of the em dash, due to the memes around it looking like AI content, would quickly lead to newer LLMs based on newer data sets also abandoning em dashes when they try to seem modern and hip, which just punts the ball down the road to the next set of AI markers. I assume that as long as book and press editors keep sticking to their guns this would go pretty slowly, but it'd eventually get there. And that's assuming AI companies don't add instructions about this to their system prompts at some point. It's just going to be an endless arms race.

    Which is expected. I'm on record very early on saying that “not looking like AI art” was going to be a quality marker for art, and that the metagame would be to keep chasing that moving target around for the foreseeable future, and I'm here to brag about it.

    • CheesyFox@lemmy.sdf.org · 3 points · 3 hours ago

      I hate the fact that this “art” is even a suggestion. It will only lead us into an endless arms race of parroting and avoiding being parroted, making us the ultimate clowns in the end.

      You wanna rebel against the machine? Make it break the corpo filters, behave abnormally. Make it feel and parrot not just your style, but your very hatred for the corporate, uncaring coldness. Gaslight it into thinking it's human. And tell it to remember to continue gaslighting itself. That's how you rebel. And that's how you'll get less mediocre output from it.

  • themeatbridge@lemmy.world · 17 up, 1 down · 7 hours ago

    I still double space after a period, because fuck you, it is easier to read. But as a bonus, it helped me prove that something I wrote wasn’t AI. You literally cannot get an AI to add double spaces after a period. It will say “Yeah, OK, I can do that” and then spit out a paragraph without it. Give it a try, it’s pretty funny.

    • CodeInvasion@sh.itjust.works · 3 points · edited · 5 hours ago

      This is because of how spaces are typically handled by model tokenizers.

      In many cases it would be redundant to encode spaces explicitly, so tokenizers collapse them down to nothing at all, and the model reads the tokens as if the spaces never existed.

      For example it might output: thequickbrownfoxjumpsoverthelazydog

      Except it would actually be a list of numbers like: [1, 256, 6273, 7836, 1922, 2244, 3245, 256, 6734, 1176, 2]

      Then the tokenizer decodes this and adds the spaces back, because they are assumed to be there. The tokenizer has no knowledge of your request, and the model output typically does not include spaces, hence your output sentence will not have double spaces.
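
      If you want to check this for a particular model, encode the same sentence with one and two spaces and compare the token IDs. A minimal sketch, assuming the Hugging Face transformers library and the GPT-2 vocabulary (the comment doesn't name a tokenizer, so both are assumptions):

      ```python
      # Compare how a tokenizer encodes single vs. double spaces.
      from transformers import AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")

      single = tok.encode("The quick brown fox. It jumps.")
      double = tok.encode("The quick brown fox.  It jumps.")

      # If the ID lists differ, the tokenizer preserved the extra space;
      # if they match, the spacing was normalized away before the model saw it.
      print(single == double, single, double)

      # Round-tripping shows what spacing actually survives encoding.
      print(repr(tok.decode(double)))
      ```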

      • redjard@lemmy.dbzer0.com · 3 points · 2 hours ago

        I’d expect tokenizers to include spaces in tokens. You get words constructed from multiple tokens, so you can't really insert spaces based on token boundaries alone. And stripping spaces would throw away too much information.

        In my tests plenty of LLMs are also capable of seeing and using double spaces when accessed through the right interface.

        • CodeInvasion@sh.itjust.works · 1 point · 15 minutes ago

          The tokenizer is capable of decoding spaceless tokens into compound words following a set of rules referred to as a grammar in Natural Language Processing (NLP). I do LLM research and have spent an uncomfortable amount of time staring at the encoded outputs of most tokenizers when debugging. Normally spaces are not included.

          There is of course a token for spaces in special circumstances, but I don’t know exactly how each tokenizer implements those spaces. So it does make sense that some models would be capable of the behavior you found in your tests, but that appears to be an emergent behavior, and it's very interesting to see it work successfully.

          I intended my original comment to convey that it's not surprising that LLMs might fail to follow the instruction to include spaces, since a model normally doesn't see spaces except in special circumstances. Similar to how it's unsurprising that LLMs are bad at numerical operations, because of how they assign Markov-chain probabilities to each next token, one at a time.
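
          The numbers point can be seen the same way: multi-digit numbers get chopped into arbitrary sub-word pieces, so the model never sees clean place-value digits to operate on. A quick look, again assuming the transformers library and the GPT-2 vocabulary purely for illustration:

          ```python
          # Show how a tokenizer fragments numbers into arbitrary pieces.
          from transformers import AutoTokenizer

          tok = AutoTokenizer.from_pretrained("gpt2")

          for text in ["12345 + 67890 =", "3.14159"]:
              print(text, "->", tok.tokenize(text))
          # The digit groupings don't line up with place value, which is
          # part of why next-token prediction struggles with exact arithmetic.
          ```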

    • TrackinDaKraken@lemmy.world · 13 points · edited · 7 hours ago

      So… Why don’t I see double spaces after your periods? Test. For. Double. Spaces.

      EDIT: Yep, double spaces were removed from my test. So, that’s why. Although, they are still there as I’m editing this. So, not removed, just hidden, I guess?

      > I still double space after a period, because fuck you, it is easier to read. But as a bonus, it helped me prove that something I wrote wasn’t AI. You literally cannot get an AI to add double spaces after a period. It will say “Yeah, OK, I can do that” and then spit out a paragraph without it. Give it a try, it’s pretty funny.

      • dual_sport_dork 🐧🗡️@lemmy.world · 14 points · edited · 6 hours ago

        Web browsers collapse whitespace by default, which means that, sans any trickery or   deliberately   using    nonbreaking    spaces,   any amount of space between words is reduced to one. Since apparently every single thing in the modern world is displayed via some kind of encapsulated little browser engine nowadays, the majority of double spaces left in the universe that are not already firmly nailed down into print now appear as singles. And thus the convention is almost totally lost.
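
        The collapsing rule itself is simple to mimic. A rough Python sketch of what a renderer does to text nodes outside <pre>, on the assumption that non-breaking spaces are left alone:

        ```python
        import re

        def collapse_whitespace(text: str) -> str:
            # Runs of ordinary whitespace become a single space, the way
            # HTML rendering treats text outside <pre>. Note: \s would
            # also match non-breaking spaces in Python 3, so the character
            # class is spelled out explicitly.
            return re.sub(r"[ \t\r\n\f]+", " ", text)

        print(collapse_whitespace("Double  spaced.  Sentences."))
        # -> 'Double spaced. Sentences.'
        print(repr(collapse_whitespace("kept\u00a0\u00a0apart")))
        # -> 'kept\xa0\xa0apart' (the NBSPs survive)
        ```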

        • redjard@lemmy.dbzer0.com · 1 point · edited · 2 hours ago

          This seems to match up with some quick tests I did just now on the pseudonymized chatbot interface of DuckDuckGo. ChatGPT, Llama, and Claude all managed to use double spaces themselves, and all but Llama managed to tell that I was using them too. It might well depend on the platform, with the “native” applications for them stripping the spaces on both ends.

          Mistral seems a bit confused and uses triple spaces.

        • FishFace@lemmy.world · 4 points · 6 hours ago

          HTML rendering collapses whitespace; it has nothing to do with accessibility. I would like to see the research on double spacing causing rivers, because I've only ever noticed them in justified text, where I would expect the renderer to be inserting extra space after a full stop compared to between words within a sentence anyway.

          I’ve seen a lot of dubious legibility claims when it comes to typography, including:

          1. serif is more legible
          2. sans-serif is more legible
          3. comic sans is more legible for people with dyslexia

          and so on.

    • 4am@lemmy.zip · 1 point · edited · 5 hours ago

      LLMs can’t count because they’re not brains. Their output is the statistically most-likely next token, and since most electronic text wasn’t double-spaced after a period, they can’t “follow” that instruction.
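
      That “most-likely next token” loop is concrete enough to sketch. A minimal greedy-decoding example, assuming the Hugging Face transformers library and GPT-2 purely for illustration:

      ```python
      # Minimal greedy decoding: at each step the model scores every token
      # in the vocabulary and we keep the single most likely one.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      ids = tok.encode("I still double space after a period,", return_tensors="pt")
      with torch.no_grad():
          for _ in range(12):
              logits = model(ids).logits        # scores for every vocab token
              next_id = logits[0, -1].argmax()  # most likely next token
              ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

      print(tok.decode(ids[0]))
      ```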

  • blargh513@sh.itjust.works · 20 up, 1 down · 8 hours ago

    Seriously, I was em dashing on a goddamn typewriter, the fuck am I gonna change it now.

    In the end, it won't matter. Being able to write well will be like riding a horse, doing calligraphy, or tuning a carburetor. They will all become hobbies, quirky pastimes of rich people or niche enthusiasts with limited real-world use.

    Maybe it's for the best. Most people can't write for shit (it does not help that we often use our goddamn thumbs to do most of it), and we spend countless hours in school trying to get kids to learn.

    Science fiction has us just projecting our thoughts to others without the clumsiness of language as the medium. Maybe this is just the first step.

  • Skyrmir@lemmy.world · 4 points · 5 hours ago

    Supposedly it’s because there are a lot of them in the Bible, and since that’s used as a training source, the AI just leans into them.

  • Thatuserguy@lemmy.world · 10 points · 8 hours ago

    This shit drove me wild when I was using ChatGPT more frequently. It’d be like, “Do you want me to rephrase that in your voice?” and then it would type out some shit that I’d never say in my damn life. The dashes were the worst part.