As specialized machine learning (ML) accelerators become increasingly prevalent in edge devices, new and significant security challenges emerge, because adversaries are far more likely to gain physical access to the hardware. Such access enables reverse engineering, model extraction, and hardware-level attacks that are infeasible in cloud or data-center environments. Edge devices, moreover, often process sensitive data such as biometric features or medical images. In this work, we present an electromagnetic (EM) side-channel attack targeting the Google Coral Edge TPU, a commercial low-power neural network inference engine. We develop a dedicated EM measurement setup to capture high-resolution leakage signals from the device during neural network inference. Leveraging this setup, we perform an input recovery attack that combines profiled side-channel analysis with a generative neural network and a custom-designed loss function. Our method reconstructs images processed by different network architectures deployed on the Edge TPU, demonstrating that sensitive user input data can be recovered from physical leakage. This finding highlights the significant privacy risks of deploying ML models on edge devices without adequate side-channel countermeasures.