Artificial intelligence (AI) is increasingly deployed in real-time and energy-constrained environments, driving demand for hardware platforms that deliver both high performance and power efficiency. While CPUs and GPUs have traditionally served as the primary inference engines, their general-purpose nature often leads to inefficiencies under strict latency or power budgets. Field-Programmable Gate Arrays (FPGAs) offer a promising alternative by enabling workload-tailored parallelism and hardware-level optimization. However, mapping AI workloads to FPGAs remains challenging due to the complexity of hardware-software co-design and data orchestration. This paper presents the AI FPGA Agent, an agent-driven framework that simplifies the integration and acceleration of deep neural network inference on FPGAs. The proposed system employs a runtime software agent that dynamically partitions AI models, schedules compute-intensive layers for hardware offload, and manages data transfers with minimal developer intervention. The hardware component is a parameterizable accelerator core optimized for high-throughput inference using quantized arithmetic. Experimental results show that the AI FPGA Agent reduces latency by more than 10× relative to CPU baselines and improves energy efficiency by 2–3× over GPU implementations, while keeping classification accuracy within 0.2% of full-precision references. These findings underscore the potential of AI-FPGA co-design for scalable, energy-efficient AI deployment.
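
To give a concrete picture of the partitioning step the abstract describes, the following minimal Python sketch shows one plausible way a runtime agent could assign layers to the FPGA or the CPU based on operator type and compute cost. All names here (Layer, FpgaAgent, OFFLOADABLE_OPS, offload_threshold_macs) are hypothetical illustrations, not the framework's actual interfaces, which the abstract does not specify.

```python
from dataclasses import dataclass

# Hypothetical sketch of the layer-partitioning idea from the abstract.
# Layer, FpgaAgent, and OFFLOADABLE_OPS are invented names for illustration.

OFFLOADABLE_OPS = {"conv2d", "matmul"}  # assumed to map well to the accelerator core


@dataclass
class Layer:
    name: str
    op: str    # operator type, e.g. "conv2d", "relu", "softmax"
    macs: int  # multiply-accumulate count, a proxy for compute cost


class FpgaAgent:
    """Assigns each layer of a model to the FPGA or the host CPU at runtime."""

    def __init__(self, offload_threshold_macs: int = 1_000_000):
        # Layers cheaper than this threshold are assumed not worth the
        # host<->FPGA data-transfer overhead (an illustrative heuristic).
        self.threshold = offload_threshold_macs

    def partition(self, model: list[Layer]) -> dict[str, list[Layer]]:
        plan: dict[str, list[Layer]] = {"fpga": [], "cpu": []}
        for layer in model:
            if layer.op in OFFLOADABLE_OPS and layer.macs >= self.threshold:
                plan["fpga"].append(layer)  # compute-intensive: offload
            else:
                plan["cpu"].append(layer)   # cheap or unsupported: keep on host
        return plan


if __name__ == "__main__":
    model = [
        Layer("conv1", "conv2d", macs=90_000_000),
        Layer("relu1", "relu", macs=200_000),
        Layer("fc1", "matmul", macs=4_000_000),
        Layer("softmax", "softmax", macs=10_000),
    ]
    plan = FpgaAgent().partition(model)
    for target, layers in plan.items():
        print(target, [layer.name for layer in layers])
```

Under this sketch's assumptions, the heavy conv2d and matmul layers land on the FPGA while cheap elementwise layers stay on the CPU; the paper's actual agent may use a more sophisticated cost model and scheduling policy than this single-threshold heuristic.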