Hacker News | past | comments | ask | show | jobs | submit | kaigai's comments

Yes, a Tesla or Quadro is required. In addition, the Tesla P40/P100 are recommended due to the larger size of their I/O-mappable (PCI BAR) region.


Early versions of PG-Strom didn't plan for SSD-to-GPU Direct SQL Execution; however, some users were concerned not only about CPU performance but about I/O as well. So I began implementing a feature to accelerate I/O, too. Its development started in Dec 2015.


RAM-to-GPU is always faster than SSD-to-GPU. This is a solution for situations where the data does not fit in RAM (or where the user's budget doesn't stretch to enough RAM; for reference, an Intel SSD 750 (400GB) can be purchased for about 300 USD).


For the scenario you're targeting (databases), this makes a ton of sense: database data regularly exceeds the size of RAM, and the operations you want to run on it are fairly static, in the sense that they're the SQL operators.

In deep learning you are usually doing much more custom processing, and your datasets are usually not as big, so just buying more RAM is often cost-effective.


Naming it is the headache. It might be called "SSD-to-GPU P2P DMA".


Its kernel module provides some special APIs, and the userspace application (PostgreSQL) is enhanced to use them. From the user's point of view, SQL is still the interface for accessing the data.


The NVMe SSD acts as the DMA controller (DMAC) in this case. All the GPU does is map its own device memory onto the PCI BAR area.
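The division of labor can be sketched as pseudocode. `cuMemAlloc()` and `nvidia_p2p_get_pages()` are real CUDA / GPUDirect RDMA entry points; the ioctl name and the rest of the flow are illustrative assumptions, not PG-Strom's actual API:

```
userspace (PostgreSQL + PG-Strom):
    dbuf = cuMemAlloc(chunk_size)               // GPU device memory
    ioctl(fd, SSD2GPU_READ, {dbuf, blocks})     // hypothetical entry point

kernel module:
    pages = nvidia_p2p_get_pages(dbuf, ...)     // pin GPU pages; they become
                                                // visible in the GPU's PCI
                                                // BAR aperture
    for each block to read:
        submit NVMe READ whose destination PRP entries point at
        the bus addresses of pages[i]

NVMe SSD:
    its DMA engine writes the payload over PCIe directly into GPU
    memory, never touching host RAM
```

The GPU is passive throughout: it only exposes device memory through the BAR window, while the SSD's own DMA engine moves the data.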

