I am an Associate Professor at Northeastern University in the Khoury College of Computer Sciences. I joined Northeastern University in Fall 2016. Before that, I was a Consultant Research Scientist at RSA Laboratories, working on threat detection, machine learning for security, cloud security, and applied cryptography. I received the Technology Review TR35 award in 2011 for research in cloud security, and the Google Security and Privacy Award in 2019.


Adversarial machine learning in critical environments

The wide adoption of machine learning and deep learning in many critical applications introduces strong incentives for motivated adversaries to manipulate the results and models generated by these automated methods. For instance, attackers can deliberately influence the training data to manipulate the predictions of a model in poisoning attacks, induce misclassification at testing time in evasion attacks, or infer private information about the training data in privacy attacks. More research is needed to understand in depth the vulnerabilities of machine learning against a wide range of adversaries in critical environments. My group develops new threat models for adversarial attacks on machine learning and designs machine learning methods that are resilient to such attacks. We also study adversarial attacks against machine learning, and defenses, in constrained domains such as threat detection and malware classification.
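
As a toy illustration of an evasion attack, the sketch below perturbs a test input against a linear (logistic-regression) classifier using the sign of the loss gradient, in the style of the fast gradient sign method. The model weights, the input, and the perturbation budget `eps` are all synthetic placeholders for illustration, not artifacts of any system described above.

```python
import numpy as np

# Hypothetical trained model and test point (all values are synthetic).
rng = np.random.default_rng(0)
w = rng.normal(size=5)          # weights of a logistic-regression model
b = 0.1                         # bias term
x = rng.normal(size=5)          # a test input scored by the model

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability assigned to the positive class."""
    return sigmoid(w @ x + b)

def evade(x, y=1, eps=0.5):
    """FGSM-style evasion: for cross-entropy loss, the gradient w.r.t.
    the input of a logistic model is (p - y) * w; move eps in its sign."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x_adv = evade(x)
print(predict(x), predict(x_adv))  # the adversarial score is strictly lower
```

For a linear model the attack provably lowers the positive-class score by `eps * sum(|w|)` in logit space; against deep networks the same one-step idea is only a heuristic, which is part of what makes robust defenses hard.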

Funding: Army Research Lab (ARL), Toyota ITC, FireEye
Machine learning in cyber security

My group works on applications of machine learning and artificial intelligence in cyber security, with a particular focus on techniques to understand, model, and predict the behavior of advanced attacks, and to design proactive defenses. We have designed AI-enabled threat detection systems that analyze network logs or endpoint data, learn profiles of legitimate behavior, and proactively detect anomalous behavior in a network. We have explored both supervised and unsupervised learning methods in this context, as well as word embedding techniques and graph algorithms. Our algorithms were deployed in industry and on two university networks, where they detected previously unknown malicious activity that was confirmed by Security Operations Center (SOC) investigation. We are also interested in detecting coordinated attack campaigns across multiple networks, and in designing information-sharing methods for more resilient global defenses.
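
As a minimal illustration of profiling legitimate behavior and flagging deviations, the sketch below fits a per-feature baseline to synthetic "benign" log-derived features and scores new observations by their largest z-score. The feature names, values, and threshold are invented for illustration and are far simpler than the embedding- and graph-based methods described above.

```python
import numpy as np

# Synthetic training window: feature vectors (connections/hour, bytes out,
# distinct destinations) extracted from logs of legitimate behavior.
rng = np.random.default_rng(1)
benign = rng.normal(loc=[50.0, 1e4, 8.0], scale=[5.0, 1e3, 2.0], size=(500, 3))

# Learned profile of normal behavior: per-feature mean and spread.
mu = benign.mean(axis=0)
sigma = benign.std(axis=0)

def anomaly_score(x):
    """Largest absolute z-score across features: distance from the profile."""
    return np.max(np.abs((x - mu) / sigma))

# A new observation with an unusually high number of distinct destinations
# (e.g., internal scanning) lands far outside the benign population.
suspicious = np.array([52.0, 1.1e4, 40.0])
threshold = 4.0  # flag anything more than 4 standard deviations out
print(anomaly_score(suspicious), anomaly_score(suspicious) > threshold)
```

Real deployments replace the hand-picked threshold and independent-feature assumption with learned models, but the workflow (fit a profile on benign traffic, score deviations, send alerts to the SOC) is the same.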

Funding: DARPA CHASE program, Cisco
Intrusion-resilient enterprise networks

The recent prevalence of advanced cyber attacks has caused enterprise breaches with severe consequences in critical sectors such as national defense, manufacturing, and the financial industry. This project seeks to harden enterprise security against advanced threats by systematically designing a new multi-layer intrusion-resilient framework that addresses enterprise defense using cryptography, game theory, and machine learning. We designed a novel game-theoretic framework called FlipIt to model the defender-attacker interactions encountered in stealthy attacks, and showed that intelligent defensive strategies can benefit from reinforcement learning algorithms. We leverage graph mining algorithms to measure infrastructure and application dependencies in an enterprise, and to increase enterprise resilience in the face of failures and cyber attacks.
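
A minimal simulation of a FlipIt-style game conveys the interaction the framework models: both players repeatedly "flip" control of a shared resource without observing its current state. The strategies below (a periodic defender, an attacker with exponentially distributed inter-move times) and all rates are illustrative assumptions, not parameters from the FlipIt paper.

```python
import random

def simulate(defender_period=1.0, attacker_rate=0.5, horizon=10_000.0, seed=0):
    """Fraction of time the defender controls the resource when a periodic
    defender plays against an exponential (memoryless) attacker."""
    rng = random.Random(seed)
    t, owner, defender_time = 0.0, "defender", 0.0
    next_def = defender_period                  # next defender move
    next_att = rng.expovariate(attacker_rate)   # next attacker move
    while t < horizon:
        nxt = min(next_def, next_att, horizon)
        if owner == "defender":
            defender_time += nxt - t            # credit elapsed control time
        t = nxt
        if t >= horizon:
            break
        if next_def <= next_att:                # defender flips (reclaims)
            owner = "defender"
            next_def += defender_period
        else:                                   # attacker flips (compromises)
            owner = "attacker"
            next_att = t + rng.expovariate(attacker_rate)
    return defender_time / horizon

print(simulate())  # defender's share of control under these parameters
```

In the full game each move also carries a cost, and the interesting question is which move schedules maximize control time net of cost; reinforcement learning can search that strategy space for the defender.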

Funding: NSF, Google, PWC