• a9cx34udP4ZZ0@lemmy.world
    3 days ago

    Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.

    And people agree with me and tell me they’ve seen the same. But then they don’t hesitate to turn to AI for “quick answers” on subjects they aren’t experts in. These aren’t stupid people, either. I just don’t understand.

    • ricecake@sh.itjust.works
      3 days ago

      Uses for this current wave of AI: converting machine language to human language, converting human language to machine language, sentiment analysis, and summarizing text.

      People have way overinvested in one of the least functional parts of what it can do, because it’s the part that looks the most “magic” if you don’t know what it’s doing.

      The most helpful and least used way of using them is to identify what information the user is looking for and then point them to resources they can use to find out for themselves, maybe with a note on which resource best fits each part of the question.
      It’s easy to be wrong when you’re answering a question, and a lot harder when you hand someone a book and say you think the answer is in chapter four.

    • REDACTED@infosec.pub
      3 days ago

      Because the alternative for me is googling the question with “reddit” appended, half the time. I still do that a lot. For more complicated or serious problems/questions, I’ve set it to only use the search function and navigate scientific sites like NCBI and PubMed while using deep think. It then gives me the sources, and I cross-check the relevant information at random; so far I personally haven’t noticed any errors. You gotta realize how much time this saves.

      When it comes to data privacy, I honestly don’t see the potential dangers in the data I submit to OpenAI, though of course that’s different for everyone. I don’t submit any personal info or talk about my life. It’s a tool.

      • ganryuu@lemmy.ca
        3 days ago

        Simply from the questions you ask, and the way you ask them, they can infer a lot of information. Just because you’re not giving them the raw data about you doesn’t mean they can’t get at least some of it. They’ve gotten pretty good at that.

        • REDACTED@infosec.pub
          3 days ago

          I really don’t have any counter-arguments, as you have a good point; I tend to turn a blind eye to that uncomfortable fact. It’s worth it, I guess? Realistically, I’m having a hard time thinking of worst-case scenarios.

      • verdigris@lemmy.ml
        3 days ago

        If it saves time but you still have to double-check its answers, does it really save time? At least many reddit comments call out their own uncertainty or link to better resources; I can’t trust a single thing AI outputs, so I just ignore it as much as possible.