Static random-access memories (SRAMs) are key building blocks in modern systems and are increasingly used as embedded storage and compute fabrics in compute-in-memory (CIM) and deep neural network (DNN) accelerators. However, aggressive technology scaling significantly increases SRAM sensitivity to radiation-induced soft errors, which manifest as single-event upsets (SEUs) and, in advanced technology nodes, multiple-node upsets (MNUs). This work evaluates the robustness-performance trade-offs of three radiation-hardened SRAM designs implemented in a 7 nm FinFET technology. We perform a design-space exploration over transistor sizing configurations to identify optimal operating points for each cell. A circuit-level single-event transient (SET) injection framework is used to extract the critical charge, $Q_{\mathrm{crit}}$, across the explored design space. In addition to $Q_{\mathrm{crit}}$, we characterize CIM-relevant metrics including cell energy, read delay, dynamic read noise margin, and area. To enable fair cross-design comparison and guide operating-point selection, we introduce a multi-objective, figure-of-merit-based evaluation framework that identifies Pareto-optimal design points under competing reliability and overhead constraints. Results show that the ZBMA cell provides the most balanced trade-off between robustness and efficiency, lying in the knee region of the Pareto front. The impact of soft errors is further evaluated at the CIM macro level and at the system level by analyzing inference accuracy degradation in DNN workloads. When integrated into a CIM macro, hardened ZBMA-based designs significantly improve DNN inference robustness compared to conventional non-recoverable 6T SRAM. Under soft-error injection at very high particle flux, they achieve accuracy gains of up to 86.2 percentage points (pp) for LeNet on MNIST and 74.8 pp for VGG9 on CIFAR-10.
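The Pareto-optimal selection described in the abstract can be sketched as follows. This is a minimal illustration of multi-objective dominance filtering, assuming $Q_{\mathrm{crit}}$ is maximized while energy, read delay, and area are minimized; the metric values and the non-ZBMA cell names are hypothetical placeholders, not data or the figure-of-merit definition from the paper.

```python
# Illustrative sketch: Pareto-front extraction over SRAM design points under
# competing reliability (maximize Q_crit) and overhead (minimize energy,
# delay, area) objectives. All numbers below are hypothetical placeholders.

def dominates(a, b, maximize, minimize):
    """True if point `a` dominates `b`: at least as good in every
    objective and strictly better in at least one."""
    no_worse = (all(a[m] >= b[m] for m in maximize) and
                all(a[m] <= b[m] for m in minimize))
    better = (any(a[m] > b[m] for m in maximize) or
              any(a[m] < b[m] for m in minimize))
    return no_worse and better

def pareto_front(points, maximize=("qcrit",),
                 minimize=("energy", "delay", "area")):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p, maximize, minimize)
                       for q in points if q is not p)]

# Hypothetical design points, normalized to the 6T baseline.
candidates = [
    {"cell": "6T",    "qcrit": 1.0, "energy": 1.0, "delay": 1.0, "area": 1.0},
    {"cell": "ZBMA",  "qcrit": 3.0, "energy": 1.4, "delay": 1.2, "area": 1.6},
    {"cell": "cellX", "qcrit": 2.5, "energy": 1.6, "delay": 1.3, "area": 1.7},
]
front = pareto_front(candidates)
# "cellX" is dominated by "ZBMA" (worse or equal in every objective),
# so the front retains only "6T" and "ZBMA".
```

A figure-of-merit can then be evaluated on the surviving front points only, which is what makes a knee-region cell such as ZBMA stand out once all objectives are weighed jointly.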