Developer and refugee from Reddit

  • 4 Posts
  • 360 Comments
Joined 2 years ago
Cake day: July 2, 2023



  • That’s an odd thing to say. For one thing, there are plenty of physical activities you could get a reasonable description of from ChatGPT, but if you can’t actually do them or understand the steps, you’re gonna have a bad time.

    Example: I’ve never seen any evidence that ChatGPT can properly clean and sterilize beakers in an autoclave for a chemical engineering laboratory, even if it can describe the process. If you turn in homework cribbed from ChatGPT and don’t actually know how to do it, your future lab partners aren’t going to be happy that you passed your course by letting ChatGPT do all the work on paper.

    There’s also the issue that ChatGPT is frequently wrong. The whole point here is that these cheaters are getting caught because their papers have all the hallmarks of having been written by a large language model, and don’t show any comprehension of the study material by the student.

    And finally, if you’re cheating to get a degree in a field you don’t actually want to know anything about… Why?


  • Why fight against it? Because some of these students will be going into jobs of life-or-death importance and won’t know how to do what they’re hired to do.

    There’s nothing wrong with using a large language model to check your essay for errors and clumsy phrasing. There’s a lot wrong with trying to make it do your homework for you. If you graduate with a degree indicating you know your field, and you don’t actually know your field, you and everyone you work with are going to have a bad time.


  • I genuinely don’t know what to do with people like him. On the one hand… Yeah. He knowingly hired undocumented people, making him a hypocrite, and he just voted to have those people forcibly deported against his own interests, making him a fucking dumbass.

    At the same time, he seems to be showing actual remorse, and that should definitely be encouraged. The only - only - way this country has even the slightest shot at recovery is by flipping large numbers of the orange shit-gibbon’s supporters, like this guy.

    I really want to believe that’s possible. I don’t think it is, but I want to believe it.

    Edit: Missed the part in the article where these guys had valid work visas.


  • But it still manages to fuck it up.

    I’ve been experimenting with Claude’s Sonnet model in Copilot’s agent mode for my job, and one thing that’s become abundantly clear is that certain behaviors are so heavily represented in the model that it assumes you want them even if you explicitly tell it you don’t.

    Say you’re working in a yarn workspaces project, and you instruct Copilot to build and test a new dashboard using an instruction file. You’ll need to include explicit, repeated reminders throughout the file to use yarn, not NPM, because even though yarn is very popular today, there are so many older examples of NPM usage in its model that it’s just going to assume that’s what you actually want - thereby fucking up your codebase.
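
    Something like this - a made-up example of the kind of instruction file I mean, not my actual setup:

    ```markdown
    <!-- .github/copilot-instructions.md (hypothetical example) -->
    # Project conventions
    - This is a **yarn workspaces** monorepo. Always use `yarn`, never `npm`.
    - Install dependencies with `yarn install`, NOT `npm install`.
    - Add a dependency to a workspace with `yarn workspace <name> add <pkg>`, NOT `npm install <pkg>`.
    - Run scripts with `yarn <script>` or `yarn workspace <name> <script>`, NOT `npm run <script>`.
    - Never create or commit a `package-lock.json`; this repo uses `yarn.lock` only.
    - Reminder, again: `yarn`, not `npm`.
    ```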

    I’ve also had lots of cases where I tell it I don’t want it to edit any code, just to analyze and explain something that’s there and how to update it… and then I have to stop it from editing code anyway, because halfway through it forgot that I didn’t want edits, just explanations.


  • Not true. Not entirely false, but not true.

    Large language models have their legitimate uses. I’m currently in the middle of a project I’m building with assistance from Copilot for VS Code, for example.

    The problem is that people think LLMs are actual AI. They’re not.

    My favorite example - and the reason I often cite for why companies that try to fire all their developers are run by idiots - is the capacity for joined-up thinking.

    Consider these two facts:

    1. Humans are mammals.
    2. Humans build dams.

    Those two facts are unrelated except insofar as both involve humans, but if I were to say “Can you list all the dam-building mammals for me,” you would first think of beavers and then - given a moment’s thought - accurately answer that humans build dams as well.

    Here’s how it goes with Gemini right now:

    [Screenshot: Gemini’s reply leaves humans off its list of dam-building mammals.]

    Now Gemini clearly has the information that humans are mammals somewhere in its model. It also clearly has the information that humans build dams somewhere in its model. But it has no means of joining those two tidbits together.
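
    To be clear about what I mean by “joining”: it’s the kind of trivial lookup-and-combine you could sketch in a couple of lines of code. Purely illustrative - obviously not how an LLM works internally:

    ```python
    # Purely illustrative: "joined-up thinking" as a simple join over two known facts.
    mammals = {"human", "beaver", "dog"}     # fact 1: humans are mammals
    dam_builders = {"human", "beaver"}       # fact 2: humans build dams
    print(sorted(mammals & dam_builders))    # ['beaver', 'human'] - the join Gemini fails to make
    ```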

    Some LLMs do better on this simple test of joined-up thinking, and worse on other similar tests. It’s kind of a crapshoot, and doesn’t instill confidence that LLMs are up for the task of complex thought.

    And of course, the information-scraping bots that feed LLMs like Gemini and ChatGPT will find conversations like this one and update their models accordingly. In a few months, Gemini will probably include humans in its list. But that’s not a sign of being able to engage in novel joined-up thinking; it’s just an increase in the size and complexity of the dataset.