One of the biggest tech releases this year has to be GitHub Copilot, the AI-based tool from GitHub that predicts the code you are writing.
From a technical point of view it’s very exciting, but from a security point of view I have concerns.
I have seen issues posted where Copilot was used to expose secrets, for example by getting it to predict API keys. These appeared to be valid keys when tested, which is pretty damn concerning. It’s especially worrying because GitHub claims the model is trained only on public repositories, and while there are plenty of secrets exposed in public repositories on GitHub, the secrets it spat out weren’t exposed publicly, indicating it’s possible they are training on private repositories too.
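To make the risk concrete, here is a minimal sketch of how you might screen an AI suggestion for leaked credentials before accepting it. The patterns and the entropy threshold are my own illustrative choices, not anything Copilot or GitHub actually ships; real secret scanners cover far more token formats.

```python
import math
import re

# Illustrative patterns only -- two well-known credential formats.
KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID format
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token
]

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; near-random secrets score high."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def looks_like_secret(suggestion: str) -> bool:
    """Return True if a code suggestion contains a likely credential."""
    if any(p.search(suggestion) for p in KEY_PATTERNS):
        return True
    # Fallback: long quoted tokens that look random are suspicious too.
    for token in re.findall(r"['\"]([A-Za-z0-9+/=_\-]{20,})['\"]", suggestion):
        if shannon_entropy(token) > 4.0:
            return True
    return False

print(looks_like_secret('api_key = "AKIAIOSFODNN7EXAMPLE"'))  # True
print(looks_like_secret('greeting = "hello world"'))          # False
```

Even a crude check like this would have flagged the predicted keys in that thread; the deeper problem is that the keys were in the training data at all.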
[https://twitter.com/pkell7/status/1411058236321681414](https://twitter.com/pkell7/status/1411058236321681414) (interesting thread)
In addition to that, I’m also wondering about the quality of the code it is being trained on; a large share of public code contains security vulnerabilities. Could this tool spread poor coding practices, especially since junior engineers will likely trust AI-generated code blindly as high quality (at least I would have)?
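As one example of what I mean, a pattern that is all over public repositories, and therefore plausible autocomplete output, is SQL built by string concatenation. This is my own illustrative sketch, not an actual Copilot suggestion:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# The kind of code a model trained on public repos could plausibly
# suggest, because it is so common: SQL assembled from user input.
def find_user_unsafe(name):
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"  # injectable
    ).fetchall()

# The safe version: a parameterized query, so the driver treats the
# input strictly as data rather than as SQL.
def find_user_safe(name):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: injection succeeded
print(find_user_safe(payload))    # returns []: input treated as data
```

Both versions "work" on happy-path input, which is exactly why a junior engineer accepting the first suggestion might never notice the difference.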
I am curious to hear opinions from others about this feature and how it could introduce more security issues.