Adversarial attacks on Graph Neural Networks (GNNs) reveal their security vulnerabilities, limiting their adoption in safety-critical applications. However, existing attack strategies rely on knowledge of either the GNN model being used or the predictive task being attacked. Is this knowledge necessary? For example, a graph may be used for multiple downstream tasks unknown to a practical attacker. It is thus important to test the vulner...