Karmakar, Tanmay
Dates: 2025-07-15; 2025-07-15; 2025-06
49 p.
http://hdl.handle.net/10263/7567
Dissertation under the supervision of Dr. Debapriyo Majumdar

Abstract: Neural Ranking Models (NRMs) have become the state of the art in information retrieval, demonstrating remarkable effectiveness across a wide range of search and ranking tasks. However, their increasing deployment in real-world systems raises critical concerns about their robustness and susceptibility to adversarial attacks. This project investigates the fragility of modern NRMs by proposing and evaluating a document perturbation method based on targeted, single-word perturbation. Our approach strategically identifies, for a given query, an influential word to be substituted or added in the document. We conduct experiments on benchmark datasets to assess the impact of these minimal perturbations on ranking performance. Our findings reveal that even a single carefully chosen word addition or substitution can significantly change the ranking score of the targeted document, providing insight into the vulnerability of NRMs.

Language: en
Keywords: Neural Ranking Models (NRMs); Text Ranking; Word Level Attack for Text Ranking
Type: Other
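The single-word perturbation the abstract describes can be sketched as a greedy search: try every one-word substitution (or one appended word) and keep the change that moves the relevance score the most. This is only a minimal illustration of the idea, not the dissertation's actual method; the `score` function below is a toy query-term-overlap scorer standing in for a neural ranking model, and the candidate word list is hypothetical.

```python
def score(query: str, doc_tokens: list[str]) -> int:
    """Toy relevance score: count of document tokens that appear in the
    query. A real attack would query the target NRM here instead."""
    q_terms = set(query.lower().split())
    return sum(1 for t in doc_tokens if t.lower() in q_terms)


def single_word_attack(query: str, doc: str, candidates: list[str]):
    """Greedily find the single substitution or addition that changes the
    score the most. Returns (perturbed_tokens, new_score)."""
    tokens = doc.split()
    base = score(query, tokens)
    best_tokens, best_score = tokens, base

    # Try substituting each position with each candidate word.
    for i in range(len(tokens)):
        for w in candidates:
            cand = tokens[:i] + [w] + tokens[i + 1:]
            s = score(query, cand)
            if abs(s - base) > abs(best_score - base):
                best_tokens, best_score = cand, s

    # Try adding a single candidate word at the end of the document.
    for w in candidates:
        cand = tokens + [w]
        s = score(query, cand)
        if abs(s - base) > abs(best_score - base):
            best_tokens, best_score = cand, s

    return best_tokens, best_score


perturbed, new_score = single_word_attack(
    "neural ranking robustness",
    "models for document retrieval",
    ["ranking", "banana"],  # hypothetical candidate vocabulary
)
```

Swapping the toy scorer for calls to an actual NRM turns this brute-force sketch into a query-dependent attack: the word kept is precisely the one the model is most sensitive to for that query.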