Vision Transformers (ViTs) have achieved state-of-the-art performance across a wide range of computer vision tasks. Their adaptation to spiking neural networks (SNNs) has also shown strong potential in applications such as classification, object detection, and transfer learning. However, deploying these models on neuromorphic hardware such as Intel Loihi requires fixed-point implementations, making model quantization essential for maintaining energy efficiency. Current state-of-the-art quantization approaches for spiking ViTs rely on quantization-aware training (QAT), which is computationally expensive due to the O(T^2) time complexity of backpropagation through time (BPTT). To address this challenge, we propose S-OPTQ, a fast and accurate post-training quantization method based on OPTQ, tailored to spiking neural networks. S-OPTQ significantly reduces computational overhead while maintaining competitive accuracy, with only minimal performance degradation.
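For context, OPTQ performs layer-wise, column-by-column weight quantization with Hessian-based error compensation from a small calibration set. The sketch below illustrates that underlying procedure in simplified form; the function name, the symmetric quantizer, and the dampening constant are illustrative assumptions, and this is not the S-OPTQ implementation itself.

```python
# Minimal sketch of layer-wise OPTQ-style quantization (illustrative only).
import numpy as np

def quantize_layer_optq(W, X, n_bits=8, damp=0.01):
    """Quantize weights W (out_features x in_features) using calibration
    activations X (n_samples x in_features) collected from the layer input."""
    W = W.astype(np.float64).copy()
    d = W.shape[1]

    # Hessian proxy of the layer-wise reconstruction loss: H = 2 * X^T X,
    # dampened on the diagonal for numerical stability.
    H = 2.0 * X.T @ X
    H += damp * np.mean(np.diag(H)) * np.eye(d)
    Hinv = np.linalg.inv(H)

    # Symmetric uniform quantizer (per-tensor scale, assumed for brevity).
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(W)) / qmax
    quant = lambda w: np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

    Q = np.zeros_like(W)
    for j in range(d):
        # Quantize one column, then spread its quantization error over the
        # remaining (not yet quantized) columns using the inverse Hessian.
        q = quant(W[:, j])
        Q[:, j] = q
        err = (W[:, j] - q) / Hinv[j, j]
        W[:, j:] -= np.outer(err, Hinv[j, j:])
    return Q
```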