Neuromorphic embodied Artificial Intelligence (AI) robots have shown the capability to perform associative learning, emulating how animals learn through interactions with their environments. Through such a process, a neuromorphic embodied AI robot can memorize concurrent stimuli, such as vibrations and visual cues. However, these sensory inputs often contain perturbations, such as visual adversarial stickers, that can distort perception and interfere with learned associations. This study presents a robust vision processing framework, inspired by the animal visual system and in particular the primary visual cortex (V1), that enables a neuromorphic robot to detect and suppress interference from adversarial stickers. The proposed system integrates this visual processing module and an associative learning mechanism into a neuromorphic embodied AI robot. The robot is evaluated in an open-field maze, where it learns to associate neutral visual landmarks with vibration as an aversive stimulus. Both simulation and experimental results demonstrate that the proposed approach maintains high navigation accuracy and stable spatial memory even in the presence of adversarial stickers. Analyses of trajectories and neural firing patterns further show that the neuromorphic embodied AI robot with the enhanced visual system effectively resists deceptive input and preserves accurate associative learning.