Resistive random-access memory (RRAM)-based computing-in-memory (CIM) has emerged as a promising solution for accelerating neuromorphic computing and artificial intelligence (AI) applications. However, conventional direct-current (DC) input schemes suffer from limited parallelism, which constrains computational precision and energy efficiency. To address these challenges, this work introduces an AC-driven computational framework that integrates frequency-division multiplexing (FDM) and quadrature amplitude modulation (QAM) to enhance parallel processing. Multiple input samples are modulated onto different frequency components and processed simultaneously, providing an effective means of improving throughput. Additionally, we propose orthogonal component separation for QAM, in which each pair of bits is independently modulated onto the sine and cosine components (the Q and I axes, respectively). This reduces the dynamic range of multiply-accumulate (MAC) outputs in the crossbar arrays, lowering analog-to-digital converter (ADC) power and improving energy efficiency. Experimental results show that our design achieves up to 10.15× higher throughput and 6.13× higher energy efficiency, together with a 77% reduction in area cost, compared with similar techniques. These gains enable higher parallelism and richer intelligent functionality for area- and energy-constrained edge computing devices. Furthermore, we observe that the selected convolutional neural network (CNN) workloads tolerate 1% phase noise during inference on our system. Among them, LeNet5-MNIST shows the highest tolerance, with an accuracy loss of less than 1% even when the variation reaches 3%.
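The core idea of the framework can be illustrated numerically. The sketch below is an idealized simulation, not the paper's hardware: the carrier frequencies, signal dimensions, and signed-weight encoding are all assumptions made for illustration. Each of K input samples rides on its own carrier (FDM), with the in-phase (I) payload on the cosine axis and the quadrature (Q) payload on the sine axis (QAM). The crossbar column simply sums weighted row voltages, and coherent demodulation against each carrier separates the K parallel MAC results per axis.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 8, 4                      # weights per column, samples multiplexed in parallel
fs, T = 10_000, 1.0              # simulation sample rate (Hz) and integration window (s)
freqs = np.array([50.0, 120.0, 210.0, 330.0])  # one assumed carrier per input sample
t = np.arange(0, T, 1 / fs)

w = rng.uniform(-1, 1, N)        # idealized signed crossbar conductances
I = rng.uniform(-1, 1, (K, N))   # in-phase payloads  (cosine / I axis)
Q = rng.uniform(-1, 1, (K, N))   # quadrature payloads (sine / Q axis)

# FDM/QAM superposition driven onto each of the N row lines.
v = np.zeros((N, t.size))
for k in range(K):
    v += np.outer(I[k], np.cos(2 * np.pi * freqs[k] * t))
    v += np.outer(Q[k], np.sin(2 * np.pi * freqs[k] * t))

# Column current: weighted sum of row voltages (Kirchhoff's current law).
i_out = w @ v

# Coherent demodulation: correlating with each carrier isolates one sample
# and one axis at a time, since the carriers are mutually orthogonal over T.
mac_I = np.array([(2 / t.size) * i_out @ np.cos(2 * np.pi * f * t) for f in freqs])
mac_Q = np.array([(2 / t.size) * i_out @ np.sin(2 * np.pi * f * t) for f in freqs])
```

After demodulation, `mac_I[k]` and `mac_Q[k]` recover the dot products `I[k] @ w` and `Q[k] @ w`, i.e., 2K MAC results are extracted from a single analog readout. Splitting each sample across the I and Q axes is what keeps the per-axis output swing, and hence the required ADC dynamic range, smaller than a DC scheme would need.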