

Tech researchers developing adversarial malware analysis modules for cybersecurity

Tennessee Tech Assistant Professor Maanak Gupta, left, and graduate student Kshitiz Aryal discuss their research in adversarial malware analysis.

(From Eagle Drive magazine, December 2023)

Artificial intelligence is getting smarter, but it can be fooled. The bad guys never tire of trying.

Neither do the good guys – like Tennessee Tech Assistant Professor Maanak Gupta – when it comes to foiling their plans.

“It’s a great challenge,” said Gupta, who leads a multi-university interdisciplinary team of artificial intelligence, cybersecurity and education experts in researching and developing adversarial malware analysis modules through a National Science Foundation grant. At Tech, he’s been working with Ph.D. student Kshitiz Aryal and assistant professors Cory Gleasman of the College of Education and Indranil Bhattacharya of the Department of Electrical and Computer Engineering. 

Tech was awarded $300,000, and collaborating institution North Carolina Agricultural and Technical State University received $200,000. Over the next three years, the teams will research the problem and develop curriculum modules that can be integrated into computer science courses.

“While AI performs well on many tasks, it is often vulnerable to corrupt inputs that produce inaccurate responses from learning, reasoning or planning systems,” Gupta said. “Deep learning methods can be fooled by small amounts of input noise crafted by an adversary. Such capabilities allow adversaries to control the systems with little fear of detection.”
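To make the kind of attack Gupta describes concrete, here is a minimal sketch of one standard white-box technique, the fast gradient sign method (FGSM), which nudges every input feature slightly in the direction that increases the model’s loss. The model, inputs and labels below are illustrative placeholders, not the team’s actual detector.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus a small adversarial perturbation bounded by epsilon."""
    # Placeholder setup: model is any differentiable classifier,
    # x a batch of inputs, y their true class labels.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # sign() keeps every per-feature change uniform and tiny;
    # epsilon controls how perceptible the perturbation is.
    return (x + epsilon * x.grad.sign()).detach()

Because epsilon is small, the perturbed input looks almost identical to the original, yet the model’s prediction can flip – exactly the failure mode the quote describes.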

Sometimes the bad guys will disguise viruses, making them seem harmless.

“With minor changes, the AI or machine learning system will classify the file as benign rather than malicious,” Gupta said. “Our research is to develop robust and trustworthy AI for cybersecurity.”
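As a hypothetical illustration of that kind of evasion – not the team’s actual method – the sketch below appends inert filler bytes to the end of a file until a byte-level detector’s score drops below its alert threshold. Overlay bytes like these change what a static model sees without changing what the program does when run; the score_file callable, threshold and filler are all stand-ins.

def evade_by_padding(malware_bytes, score_file, filler=b"\x00" * 256,
                     threshold=0.5, max_append=4096):
    """Append filler bytes until the (stand-in) detector stops flagging the file."""
    candidate = bytes(malware_bytes)
    while (score_file(candidate) >= threshold
           and len(candidate) - len(malware_bytes) < max_append):
        candidate += filler  # grow the overlay in small increments
    return candidate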

Adversaries may even tamper with road signs – a big problem for someone in an autonomous car whose AI reads a doctored “stop” sign as “yield.”

As dependency on AI technology increases, it’s vital to keep it secure.

“Our work aims to test the robustness of existing machine-learning models used in malware detection and classification against adversarial poisoning and adversarial evasion attacks,” Gupta said. “The project includes the entire life cycle of malware analysis, including malware data collection, feature analysis, training machine learning-based malware detectors, carrying out different white-box and black-box adversarial attacks on malware detectors and eventually making the models robust against such attacks.”
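The poisoning half of that threat model can be shown with a toy experiment: flip a fraction of training labels and measure how far the resulting detector degrades. The synthetic data and logistic-regression “detector” below are placeholders chosen for brevity, not the project’s models.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in benign/malicious labels

poisoned = y.copy()
flipped = rng.choice(len(y), size=100, replace=False)
poisoned[flipped] = 1 - poisoned[flipped]  # attacker flips 10% of labels

clean = LogisticRegression().fit(X, y).score(X, y)
dirty = LogisticRegression().fit(X, poisoned).score(X, y)
print(f"accuracy trained clean:    {clean:.2f}")
print(f"accuracy trained poisoned: {dirty:.2f}")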

Aryal, his student, is happy to be one of the good guys.

“It has been a great learning experience for me to work on two of the most cutting-edge topics of the current time: machine learning and malware analysis,” he said. “I am exploring the limitations of machine learning models and the vulnerabilities in the structure of malware files. We are implementing our research findings to devise novel adversarial attacks and defense methods.”
