Early versions of PG-Strom did not plan for SSD-to-GPU Direct SQL Execution; however, some users were concerned not only about CPU performance but about I/O as well. So I began implementing a feature to accelerate I/O, too. Its development started in December 2015.
RAM-to-GPU is always faster than SSD-to-GPU. This is a solution for situations where the data does not fit in RAM (or where the user lacks the budget to purchase enough RAM; for reference, an Intel SSD 750 (400GB) can be bought for about 300 USD).
For the scenario you're targeting (databases), this makes a tonne of sense: database data regularly exceeds the size of RAM, and the operations you want to run on the data are fairly static, in the sense that they're the SQL operators.
In deep learning you are usually doing a lot more custom processing, and your datasets are usually not as big, so just buying more RAM is often cost-effective.
Its kernel module provides some special APIs, and the userspace application (PostgreSQL) is enhanced to use them. From the user's point of view, SQL is still the interface for accessing the data.
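To illustrate the point that the interface is unchanged: a user would run an ordinary query such as the following (table and column names are purely illustrative), and the planner decides internally whether the scan is dispatched over the SSD-to-GPU direct path; no special syntax or API call appears at the SQL level.

```sql
-- An ordinary analytic query; nothing here refers to the GPU or the SSD.
-- (Hypothetical table "logs" and columns for illustration only.)
SELECT category, count(*), avg(latency_ms)
  FROM logs
 WHERE latency_ms > 100.0
 GROUP BY category;
```

This transparency is the design choice being described: the acceleration lives below the SQL layer, in the kernel module and the enhanced executor, so existing applications need no changes.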