Any cybersecurity ML engineers here care to give some insights on this?

Normally, security datasets consist only of attacks that have already hit the organization (hence they have the data). But what about attacks that have not occurred BUT COULD occur? Can attack detection via ML be done in a more proactive way?

Is pentesting data currently used in the industry for ML-based attack detection? If not, why not? A rough sketch of what I mean follows below.
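To make the question concrete, here is a minimal sketch of what folding pentest-generated attacks into a detector's training set might look like. Everything here is assumed for illustration: the features are synthetic random vectors standing in for featurized telemetry, and in practice the pentest samples would come from red-team engagements or adversary-emulation tools (e.g. Caldera, Atomic Red Team) replayed in a lab and featurized the same way as production logs.

```python
# Hypothetical sketch: augmenting an attack classifier's training data with
# labeled pentest traffic. Features are synthetic stand-ins, not real telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Benign traffic plus historically observed attacks (what most orgs train on).
X_benign  = rng.normal(0.0, 1.0, size=(5000, 8))
X_attacks = rng.normal(2.0, 1.0, size=(200, 8))

# Pentest-generated samples: attacks that *could* occur but haven't yet,
# labeled malicious and added to the training pool.
X_pentest = rng.normal(1.5, 1.2, size=(300, 8))

X = np.vstack([X_benign, X_attacks, X_pentest])
y = np.concatenate([np.zeros(5000), np.ones(200), np.ones(300)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "attack"]))
```

The point of the sketch is only the data-augmentation step: whether the extra pentest labels actually improve detection of unseen attack classes is exactly what I'm asking about.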

And what are the buzzwords for job roles pertaining to this line of work?


1 Comment

  • cdhamma

    November 21, 2021

    Seems like you’re talking about attacks of a purely electronic nature, and not hacks involving social engineering. Correct?

    Anything could occur. Human error or malicious internal actors have enormous potential to expose even a system that is otherwise performing to its security expectations.

    Human error can also occur in other organizations and affect yours, simply by means of a BGP whoopsie (an upstream route leak or hijack, say).

    There are an enormous number of ways a system could be affected by internal or external threats, and the challenge with ML or any other automated threat detection is the sheer volume of false positives. The output ends up having very low value because it isn't actionable intelligence; it's overwhelming garbage.
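    To put rough numbers on why the false positives dominate (all figures below are assumed for illustration, not measured): when true attacks are rare, even a detector with strong per-event accuracy produces alerts that are almost all false.

    ```python
    # Back-of-envelope base-rate arithmetic. All inputs are illustrative
    # assumptions, chosen only to show the shape of the problem.
    events_per_day = 1_000_000   # telemetry events scored per day (assumed)
    attack_rate    = 1e-5        # fraction of events truly malicious (assumed)
    tpr            = 0.99        # detector true-positive rate (assumed)
    fpr            = 0.01        # detector false-positive rate (assumed)

    attacks      = events_per_day * attack_rate          # 10 real attacks/day
    true_alerts  = tpr * attacks                         # ~9.9 of them caught
    false_alerts = fpr * (events_per_day - attacks)      # ~10,000 false alarms
    precision    = true_alerts / (true_alerts + false_alerts)

    print(f"alerts/day: {true_alerts + false_alerts:,.0f}, "
          f"precision: {precision:.2%}")
    # -> alerts/day: 10,010, precision: 0.10%
    ```

    Under these assumed numbers, roughly one alert in a thousand is a real attack, which is the sense in which the information stops being actionable.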

