GitHub Copilot Generated Insecure Code In 40% Of Circumstances During Experiment
Is that higher or lower than the average for human-written code?

My questions are: what percentage of projects in the training set contained vulnerabilities? In other words, is Copilot introducing the same number of security vulnerabilities as its training set, more, or fewer? And how many security vulnerabilities does a human programmer introduce per project? Only with that information can we determine whether, from a secure-coding perspective, Copilot is better than humans, worse than humans, or a wash.