We propose a differentiable successive-halving method for relaxing the top-k operator, making gradient-based optimization possible. Tournament-style selection avoids the need to apply softmax iteratively over the entire vector of scores. As a result, our method achieves both a better approximation and a lower computational cost than previous approaches.
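To illustrate the tournament idea only (this is not the proposed operator itself), the minimal NumPy sketch below softly "halves" a score vector: each round pairs up entries and replaces every pair with a sigmoid-weighted convex combination, so gradients flow to both competitors and no softmax over the full vector is ever needed. The function names and the `temperature` parameter are illustrative assumptions, and the sketch reduces to a single soft top-1 winner rather than a full top-k selection.

```python
import numpy as np

def soft_pairwise_winner(a, b, temperature=1.0):
    # Soft "max" of each pair: a two-way softmax (sigmoid of the
    # score gap) weights a convex combination of the competitors,
    # so the result is differentiable in both a and b.
    w = 1.0 / (1.0 + np.exp(-(a - b) / temperature))
    return w * a + (1.0 - w) * b

def soft_tournament(scores, temperature=1.0):
    # Successively halve the vector with soft pairwise comparisons;
    # each round touches only local pairs, never the whole vector.
    s = np.asarray(scores, dtype=float)
    while s.size > 1:
        if s.size % 2:                  # pad odd-length rounds
            s = np.append(s, s[-1])
        s = soft_pairwise_winner(s[0::2], s[1::2], temperature)
    return s[0]

print(soft_tournament([0.1, 2.0, -1.3, 0.7], temperature=0.1))
```

As `temperature` approaches zero, each round approaches a hard pairwise max and the output approaches the true maximum; larger temperatures trade accuracy for smoother gradients.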