Technological evolution continues to define the contours of modern-day society. Among these enablers, Machine Learning (ML) has emerged as a fast-moving driving force for technological advancement. Its efficiency and its ability to process data, learn patterns, and assess underlying relationships quickly have accelerated the adoption of ML across diverse enterprises. However, as its applications grow, so do the efforts to counter it. Adversarial attacks have emerged as a potent threat to ML that can lead to unforeseen consequences. These attacks can be executed in two ways: by tampering with the training data or with the model itself. Adversarial attacks undermine ML's efficiency and potentially threaten societies increasingly dependent on it. This working paper explores several types of adversarial attacks to highlight ML's vulnerability. Drawing on experiments conducted and conferences convened on the subject, the paper discusses the implications of adversarial attacks for the civil and military sectors.
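To make the notion of tampering concrete, the sketch below shows one well-known evasion technique, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression model. The model, weights, and inputs are all illustrative assumptions for this sketch, not material from the paper; they show only the general mechanism of an adversarial perturbation.

```python
import numpy as np

# Illustrative sketch of an adversarial evasion attack (FGSM-style)
# against a toy logistic-regression classifier. All values here are
# assumptions chosen for demonstration, not taken from the paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Shift x by epsilon in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of binary cross-entropy w.r.t. x
    return x + epsilon * np.sign(grad_x)

# A toy model and an input it classifies correctly as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])        # w @ x + b = 1.5 > 0, so class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=1.0)
print(sigmoid(w @ x + b) > 0.5)      # original input: classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: classification flips
```

A small, targeted shift of the input is enough to flip the model's decision, which is the essence of the vulnerability the paper examines; analogous tampering with training data (poisoning) corrupts the model before deployment rather than fooling it afterwards.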