The multi-party fabrication of semiconductor products has exposed Machine Learning (ML) hardware accelerators to significant security threats, particularly malicious hardware modifications aimed at denial of service. While traditional adversaries typically insert targeted backdoors, this paper investigates a more insidious class of attacks: gradual accuracy-degradation attacks. We propose a novel sensitivity analysis method based on partial derivatives that quantitatively assesses the security implications of the weights of a hardware-implemented ML model. This analysis not only identifies the most critical parameters to target but also provides a foundation for protecting ML accelerators. Leveraging this method, we design, implement, and evaluate six novel hardware Trojans (HTs) that tamper with model parameters to gradually degrade accuracy over time. The proposed HTs are demonstrated on a LeNet-5 CNN accelerator deployed on a Xilinx Zybo Z7-20 FPGA. Our hardware evaluation shows that these stealthy HTs can drive final accuracy down to 15.2% (a drop of up to 83.6 percentage points) while adding at most 136 LUTs (~0.30%) and less than 0.025 W of power overhead. These results demonstrate both the practical feasibility of highly effective and stealthy HTs capable of evading conventional testing and the dual role of the proposed sensitivity analysis as a tool for exposing vulnerabilities and enabling targeted defense mechanisms.
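
The partial-derivative-based sensitivity analysis mentioned above can be illustrated with a minimal sketch: ranking weights by the magnitude of the loss gradient, |∂L/∂w|, over a representative batch, so that the most sensitive weights surface as attractive HT targets and as candidates for prioritized protection. The model, data, and scoring below are hypothetical stand-ins (a toy PyTorch network and random inputs), not the paper's LeNet-5 accelerator or its exact metric.

```python
# Minimal sketch of partial-derivative-based weight sensitivity ranking.
# Assumes a small PyTorch model and a representative data batch; the paper's
# actual method and target model (a LeNet-5 FPGA accelerator) may differ.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a hardware-mapped CNN (hypothetical, not the paper's LeNet-5).
model = nn.Sequential(
    nn.Conv2d(1, 4, 3), nn.ReLU(), nn.Flatten(), nn.Linear(4 * 26 * 26, 10)
)
criterion = nn.CrossEntropyLoss()

# Representative inputs/labels (random here; a real analysis would use a
# calibration set drawn from the model's operating distribution).
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))

# Backpropagate the loss to obtain partial derivatives w.r.t. every weight.
loss = criterion(model(x), y)
loss.backward()

# Sensitivity score per weight: magnitude of dLoss/dW. Larger values mark
# weights whose perturbation most affects the model's output.
scores = []
for name, p in model.named_parameters():
    if p.grad is None:
        continue
    g = p.grad.detach().abs().flatten()
    for idx in range(g.numel()):
        scores.append((g[idx].item(), name, idx))

# Report the top-5 most sensitive weights.
for s, name, idx in sorted(scores, reverse=True)[:5]:
    print(f"{name}[{idx}] sensitivity={s:.4e}")
```

In this sketch the score is a first-order approximation of how much a small perturbation of each weight changes the loss; averaging gradients over several batches would give a more stable ranking.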