July 6, 2021

GitHub Copilot exposing secrets + other risks??


One of the biggest tech releases this year has to be GitHub Copilot, the AI-based tool from GitHub that predicts the code you are writing.

From a technical point of view, it’s very exciting, but from a security point of view, I have concerns.

I have seen some issues posted where Copilot was used to expose secrets, by getting it to predict API keys, for example. These appeared to be valid keys when tested, which is pretty damn concerning. Especially because GitHub claims the model is only trained on public repositories, and while there are plenty of secrets exposed in public repositories on GitHub, the keys it spat out weren’t exposed publicly, which suggests it’s possible they are training on private repositories too.

[https://twitter.com/pkell7/status/1411058236321681414](https://twitter.com/pkell7/status/1411058236321681414) (interesting thread)
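One practical defence against the scenario above is to scan suggested code for credential-shaped strings before accepting it. Here’s a minimal sketch of that idea; the regex patterns are illustrative examples of well-known key formats, not an exhaustive secret detector:

```python
import re

# Flag strings that look like API keys in a chunk of suggested code.
# Patterns are illustrative only (AWS, Stripe-style, GitHub token formats).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID shape
    re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),  # Stripe-style live key shape
    re.compile(r"ghp_[0-9a-zA-Z]{36}"),       # GitHub personal access token shape
]

def looks_like_secret(line: str) -> bool:
    """Return True if the line contains anything matching a known key shape."""
    return any(p.search(line) for p in SECRET_PATTERNS)
```

Real tools like gitleaks or truffleHog do this far more thoroughly, but even a crude check like this would catch the obvious cases of an AI assistant auto-completing a plausible-looking key.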

In addition to that, I’m also wondering about the quality of the code it was trained on; a large share of public code contains security vulnerabilities. Could this tool spread poor coding practices, especially because junior engineers will likely blindly trust AI-generated code as high quality (at least I would have)?
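To make that concern concrete, here’s a hypothetical example of the kind of insecure pattern that is abundant in public code (and therefore in training data), next to the safe alternative a reviewer would hope for:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable pattern, common in public repos: string interpolation
    # into SQL allows injection via the `name` value.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # Safe pattern: a parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

If a code-completion model has seen the first form thousands of times, it will happily suggest it, and a junior engineer may not know to reach for the second.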

I am curious to hear others’ opinions on this feature and what further security issues it could surface.
