I feel like the LLMs really encourage that, too. They’ll deliver some garbage and then you tell them to make it less garbage and they’ll be like “You clever son of a removed, why didn’t I think of that?”.
I’ve had some success taking the buggy output of Claude (which Claude gets stuck in a loop trying to fix), fixing it with GoogleAI, then feeding the result back to Claude which can then follow the working patterns and make working extensions…
Still, anything past “microservices” seems inadvisable to entrust to any current AI.