Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate their adversarial robustness. In practice, attack algorithms are artificially selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Attack (CAA) is pro...