I am very interested in this topic, particularly in Adversarial Machine Learning. How can we protect our models from attackers inside or outside our organizations?
Here are some articles to help discuss whether this is a real threat:
* [Creating an AI Red Team to Protect Critical Infrastructure](https://www.mitre.org/publications/project-stories/creating-an-ai-red-team-to-protect-critical-infrastructure)
* [Facebook’s ‘Red Team’ Hacks Its Own AI Programs](https://www.wired.com/story/facebooks-red-team-hacks-ai-programs/)
* [AI Red Teaming with GPUs](https://www.nvidia.com/en-us/on-demand/session/gtcfall20-a21317/)
* [AI Village DefCon](https://aivillage.org/)
* [Adversarial Robustness Toolbox (ART) demo from IBM](https://art-demo.mybluemix.net/)
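To make the threat concrete: red-teaming an ML model often starts with an evasion attack such as the Fast Gradient Sign Method (FGSM), one of the attacks implemented in toolkits like IBM's ART. The sketch below does not use ART's API; it is a toy logistic-regression "victim" model with made-up weights and inputs, chosen only to show why a small, targeted perturbation can flip a prediction:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy logistic-regression victim model: p(y=1|x) = sigmoid(w.x + b).
# The weights, input, and epsilon below are illustrative assumptions.
w = [2.0, -3.0]
b = 0.5

def predict_proba(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def predict(x):
    return 1 if predict_proba(x) >= 0.5 else 0

def fgsm(x, y, eps):
    """Perturb x by eps in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of
    the loss w.r.t. the input is (p - y) * w, so no autodiff is needed.
    """
    p = predict_proba(x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [1.0, 0.5]            # clean input, true label 1
x_adv = fgsm(x, y=1, eps=0.5)

print(predict(x))         # 1: the clean input is classified correctly
print(predict(x_adv))     # 0: a small perturbation flips the prediction
```

Defenses discussed in the articles above, such as adversarial training, work by generating examples like `x_adv` during training and teaching the model to classify them correctly anyway.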