I’m not convinced that LLMs as they exist today can’t prioritize sources. If trained naively, sure, but these days they can, for instance, integrate search results, and can update on new information.
Well, it just includes the text from the search results in the prompt; it’s not actually updating any internal state (the network weights). A new “conversation” starts from scratch.
Yes, that’s right: LLMs are stateless; they don’t carry internal state between conversations. When I say “update on new information” I really mean “when new information is available in its context window, its response takes that into account.”
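To make the distinction concrete, here’s a minimal sketch. The `fake_llm` function below is a hypothetical stand-in for a real model (not any actual API): like an LLM at inference time, it is a fixed function of its input text, with no memory between calls. “Updating on new information” then just means the application pastes new text (e.g. search results) into the prompt.

```python
def fake_llm(context: str) -> str:
    # Stand-in for a real LLM: a pure function of its context.
    # Same (frozen) weights + same input -> same answer; nothing
    # persists between calls.
    if "search result: Paris is the capital of France" in context:
        return "Paris"
    return "I don't know"

# Turn 1: no search results in the context -> can't answer.
print(fake_llm("Q: What is the capital of France?"))  # I don't know

# Turn 2: the app injects search results into the prompt.
augmented = (
    "search result: Paris is the capital of France\n"
    "Q: What is the capital of France?"
)
print(fake_llm(augmented))  # Paris

# Turn 3: a fresh "conversation" starts from scratch -- nothing
# from turn 2 carried over, because no weights were changed.
print(fake_llm("Q: What is the capital of France?"))  # I don't know
```

The point of the toy: the model “learned” nothing in turn 2; the new information lived only in the prompt, and disappears when the conversation does.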
That’s not true for the commercial AIs; we don’t know what they are doing.