Recently developed pure transformer architectures have attained promising accuracy on point cloud learning benchmarks compared to convolutional neural networks. However, existing transformers are computationally expensive because they spend a significant amount of time structuring irregular data. To address this shortcoming, we present the Sparse Window Attention module to gather coarse-grained ...