“Column-oriented Database Acceleration using FPGAs” (2019)
Summary: The authors present one of the most sophisticated FPGA-based database acceleration systems to date. NVMe over PCIe is used as the storage medium, and modified 16 KB PAX pages store the tables. Data can be encoded using run-length and dictionary encoding. The accelerators are driven by microcode, which allows for great flexibility; as a result, a large variety of data types can be processed. GroupBy is implemented using a hash table, with collisions resolved in software. Results show that the in-memory performance of well-known DBMSs can be achieved with this approach.
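The hybrid GroupBy scheme can be sketched in software: a fixed-size hash table without chaining stands in for the FPGA side, and colliding tuples are spilled to a list that the software side aggregates afterwards. The names and sizes (`TABLE_SIZE`, `groupby_hybrid`) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a hybrid GroupBy (sum per key): a fixed-size hash table
# without chaining models the FPGA side; colliding tuples are spilled
# and resolved in software. Names and sizes are illustrative only.

TABLE_SIZE = 8  # kept small on purpose, to force collisions

def groupby_hybrid(tuples):
    """Sum `value` per `key` for an iterable of (key, value) pairs."""
    slots = [None] * TABLE_SIZE   # each slot: [key, running_sum]
    spilled = []                  # tuples the "hardware" could not place

    # "FPGA" pass: one probe per tuple, no collision chaining.
    for key, value in tuples:
        idx = hash(key) % TABLE_SIZE
        if slots[idx] is None:
            slots[idx] = [key, value]
        elif slots[idx][0] == key:
            slots[idx][1] += value
        else:
            spilled.append((key, value))  # collision -> software

    # "Software" pass: resolve the spilled tuples with an ordinary dict.
    result = {s[0]: s[1] for s in slots if s is not None}
    for key, value in spilled:
        result[key] = result.get(key, 0) + value
    return result
```

The aggregate is correct no matter how many collisions occur; only the share of work done by the software pass changes.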
“In-RDBMS Hardware Acceleration of Advanced Analytics” (2018)
Summary: A sophisticated FPGA-based acceleration system. The authors employ a DSL to automatically generate accelerator designs and the corresponding execution binaries. The accelerators and the data-access modules (striders) are controlled by a custom instruction set generated for each individual query (UDF). Complex analytics (e.g., SVM) are therefore possible.
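The idea of a per-query instruction stream driving a generic execution engine can be illustrated with a tiny interpreter: a "compiler" emits a short instruction list for a UDF, and a fixed loop applies it to every row. The opcode set (LOAD, MULW, ACC) and all function names are invented for illustration and are not the paper's actual ISA.

```python
# Illustrative sketch of per-query instruction generation plus a generic
# execution loop, loosely analogous to microcoded accelerators/striders.
# The opcode set (LOAD, MULW, ACC) is an invented assumption.

def compile_dot_product(n_cols):
    """Emit instructions computing sum_i row[i] * weight[i] per row."""
    program = []
    for i in range(n_cols):
        program.append(("LOAD", i))   # fetch column i of the row
        program.append(("MULW", i))   # multiply by weight i
        program.append(("ACC",))      # add to the accumulator
    return program

def execute(program, rows, weights):
    """Run the same instruction stream over every input row."""
    results = []
    for row in rows:
        acc, top = 0.0, 0.0
        for instr in program:
            op = instr[0]
            if op == "LOAD":
                top = row[instr[1]]
            elif op == "MULW":
                top *= weights[instr[1]]
            elif op == "ACC":
                acc += top
        results.append(acc)
    return results
```

Because only the instruction stream changes between queries, the execution loop (the "hardware") stays fixed, which is what makes diverse UDFs feasible on one design.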
“Relational Query Processing on OpenCL-based FPGAs” (2016)
Summary: The authors generate multiple reconfigurable accelerators from OpenCL kernels. An execution plan is derived using a cost model, and partial reconfiguration is then used to reconfigure the FPGA according to that plan. They present an algorithm that finds a good execution plan from the cost estimates and report measurements for a number of different queries and scale factors.
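Cost-model-driven plan selection with reconfiguration overhead can be sketched as follows; the candidate plans, the per-kernel cost estimates, and the fixed reconfiguration penalty are all made-up assumptions, not values from the paper.

```python
# Sketch of picking an execution plan when some kernels are already on
# the FPGA and others require partial reconfiguration first.
# All cost numbers and names are made-up illustrative assumptions.

RECONFIG_COST = 50.0  # assumed fixed penalty for loading one kernel

def plan_cost(plan, estimated_cost, loaded_kernels):
    """Total cost = per-kernel estimates + reconfiguration penalties."""
    total, loaded = 0.0, set(loaded_kernels)
    for kernel in plan:
        if kernel not in loaded:      # kernel must be configured first
            total += RECONFIG_COST
            loaded.add(kernel)
        total += estimated_cost[kernel]
    return total

def best_plan(candidate_plans, estimated_cost, loaded_kernels):
    """Pick the candidate plan with the lowest estimated total cost."""
    return min(candidate_plans,
               key=lambda p: plan_cost(p, estimated_cost, loaded_kernels))
```

Note that a plan using a nominally slower kernel can still win if that kernel is already configured, which is exactly why reconfiguration cost belongs in the model.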
“doppioDB: A Hardware Accelerated Database” (2017)
Summary: The paper discusses the extension of MonetDB with hardware accelerators for UDFs (user-defined functions). Currently the system can perform string operations (LIKE and REGEXP_LIKE) as well as analytical operations (SKYLINE and SGD). The system runs on a shared-memory CPU-FPGA platform, with components that allocate an operator on the FPGA and execute it. Their solution demonstrates flexible operator placement as well as the efficiency of a hardware-based accelerator.
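Of the accelerated operators, SKYLINE is easy to state in software: keep only the points not dominated by any other point. The nested-loop formulation below, the "smaller is better in every dimension" convention, and the function names are our assumptions for illustration, not doppioDB's implementation.

```python
# Sketch of the SKYLINE operator: return the points not dominated by
# any other point. "Smaller is better" in every dimension is assumed.

def dominates(p, q):
    """p dominates q if p is <= q in every dimension and < in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Naive nested-loop skyline; preserves input order of survivors."""
    result = []
    for p in points:
        if any(dominates(q, p) for q in points if q != p):
            continue  # p is dominated, drop it
        result.append(p)
    return result
```

For example, among hotels described as (price, distance) pairs, the skyline is the set of hotels for which no other hotel is both cheaper and closer.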