HD2FPGA: Automated Framework for Accelerating Hyperdimensional Computing on FPGAs

Tianqi Zhang1, Sahand Salamat2, Behnam Khaleghi2, Justin Morris3, Baris Aksanli4, Tajana Rosing1
1,2University of California, San Diego, 3California State University San Marcos, 4San Diego State University


Hyperdimensional (HD) computing is a novel computational paradigm that emulates brain functionality in performing cognitive tasks. The underlying computations of HD involve a substantial number of elementwise operations (e.g., additions and multiplications) on ultrawide hypervectors (HVs), which can be effectively parallelized and pipelined. Although different HD applications might vary in the number of input features and output classes (labels), they generally follow the same computation flow. However, building a highly efficient FPGA accelerator for HD computing is tedious work that requires Register Transfer Level (RTL) programming and verification. An inexperienced designer might waste significant time finding the best resource allocation scheme to achieve the target performance under resource constraints, especially for edge applications. In this paper, we propose HD2FPGA, an automated tool that generates fast and highly efficient FPGA-based accelerators for HD classification and clustering. HD2FPGA eliminates the arduous task of handcrafted design of hardware accelerators by leveraging a template of optimized processing elements to automatically generate an FPGA implementation as a function of application specifications and user constraints. For HD classification, HD2FPGA provides, on average, 1.5× (up to 2.5×) speedup compared to the state-of-the-art FPGA-based accelerator, and 36.6× speedup with 5.4× higher energy efficiency compared to a GPU-based one. For HD clustering, HD2FPGA is 2.2× faster than the GPU framework.
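To make the computation flow referenced above concrete, the following is a minimal sketch of HD encoding and classification: binding (elementwise multiply), bundling (elementwise add), and similarity search over class hypervectors. The dimensionality, bipolar encoding scheme, and all names here are illustrative assumptions, not HD2FPGA's actual design.

```python
import numpy as np

D = 10_000                     # hypervector dimensionality (ultrawide; assumed value)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector, a common HD building block (assumption)."""
    return rng.choice([-1, 1], size=D)

def encode(sample, id_hvs, level_hvs):
    """Encode a sample given as (feature_index, quantized_level) pairs:
    bind each feature's ID HV with its level HV (elementwise multiply),
    then bundle the results (elementwise add)."""
    return sum(id_hvs[i] * level_hvs[v] for i, v in sample)

def classify(query_hv, class_hvs):
    """Return the label whose class hypervector is most similar
    to the query, using the dot product as the similarity metric."""
    return max(class_hvs, key=lambda label: np.dot(query_hv, class_hvs[label]))
```

Every operation here is elementwise or a reduction across the D dimensions, which is why, as the abstract notes, the flow parallelizes and pipelines well on FPGA fabric.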