I agree. We can use ML models to identify possible malware; there should be more than enough examples of bad code out there to train an LLM to spot injection risks, missing input sanitization, assignment and inheritance issues, and use-after-free bugs. And cleaning THOSE things up in a code base would fix the majority of security issues.
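To make the kind of pattern concrete (a purely illustrative sketch, not anything from an actual training set or tool): here's what the classic "unsanitized input feeding a query" case looks like in Python with the standard sqlite3 module, next to the parameterized fix a scanner would be expected to suggest.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Injection risk: input like "x' OR '1'='1" rewrites the query's logic,
    # because the user-supplied string is concatenated into the SQL text.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value separately, so the
    # input can't change the statement's structure.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```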
LLMs could also review algorithms for logic issues in larger code bases, where a human might not be able to hold the entire system in their head at once.