I’m going to be starting a thesis for my computer science master’s soon and am thinking about topics. I’m a software engineer now with some systems experience, and in my master’s I’ve been doing a lot of ML and systems courses (distributed systems and system security). Since I’ve always been interested in cybersecurity, I was thinking about doing my thesis on an application of ML in security (it’s that or an ML application in finance).

My question is: if I ever want to move over to something in the security space, would a thesis relating to cybersecurity help in any way? Or is it pretty irrelevant, and what matters is getting hands-on work experience and certs? Thanks!

3 Comments

  • typeHonda

    October 18, 2021

The only way I can see it helping is if your thesis speaks to practical uses of ML for SIEM data analysis or SOAR automated responses. There is a need for at least a working knowledge of ML in organizations that have a strong base security function and are looking to further mature and utilize their current products. The other end of the spectrum would be organizations trying to maximize the effectiveness of smaller security teams that won’t necessarily be able to grow past a certain size.

So as far as a future job search goes, I would suggest looking for companies that describe their security as mature or talk about trying to better utilize current resources.

IMHO, even if your thesis isn’t focused on security-minded ML use cases, if you interviewed and spoke to security use cases you have in mind, that would suffice.

  • emasculine

    October 18, 2021

i think it should first be established whether ML is even appropriate for various security tasks. cybersecurity is not just one thing, it’s a million and one little(r) things. and ML is the buzzword of the moment, thankfully replacing blockchain, so some amount of skepticism is warranted.

the main issue is around false negatives and false positives. take the example of Facebook, which uses ML for moderation. i read an article claiming its FN rate is something like 90%. it also has a horrible FP rate, where it’s obvious that their AI is not up to anything that approaches nuance — it seems to be a glorified regex. i’m sure they spend lots of money on it too, more than you or i could.

now apply this to security tasks. what would be tolerable false rates? what are the consequences of each? and we all know that ML depends on training data, but security is a cat-and-mouse game where attackers are always looking for a new way to attack. how would an ML system withstand those novel attacks? would it be better or worse than more conventional methods? i would bet on it being worse, but who knows.
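    (To put a number on why false-positive rates matter so much here: the base-rate problem. A back-of-the-envelope sketch, with entirely hypothetical numbers, showing that even a seemingly good detector drowns analysts in false alarms when real attacks are rare:)

    ```python
    # Toy illustration of the base-rate problem for security alerting.
    # All rates below are hypothetical, not from any real deployment.

    def precision(tpr: float, fpr: float, prevalence: float) -> float:
        """Fraction of flagged events that are truly malicious."""
        tp = tpr * prevalence            # true positives per event
        fp = fpr * (1.0 - prevalence)    # false positives per event
        return tp / (tp + fp)

    # A detector that catches 99% of attacks with only a 1% false-positive
    # rate still produces mostly noise when attacks are 1 in 10,000 events:
    p = precision(tpr=0.99, fpr=0.01, prevalence=0.0001)
    print(f"{p:.4f}")  # prints 0.0098 -- ~99% of alerts are false alarms
    ```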

  • lightning407

    October 18, 2021

Maybe check out Silverfort? They use a bit of ML in their product, if I recall correctly.

