DBSCAN dpex_k implementation does not work under compute follows data. #140

Open
1 of 2 tasks
diptorupd opened this issue Oct 6, 2022 · 0 comments

Comments


diptorupd commented Oct 6, 2022

The dpex_k implementation for DBSCAN currently fails execution with a rather cryptic message stating:

"Datatypes of array passed to @numba_dpex.kernel has to be the same. Passed datatypes: "...

The error message needs fixing and I am working on a dpex PR to address that.

What the error message is really telling us is that the implementation does not follow the "compute follows data" programming model. Under compute follows data, the execution queue for a kernel should be discoverable from its input array arguments.
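Conceptually, queue deduction under compute follows data looks roughly like the sketch below. This is a pure-Python illustration with toy stand-ins (`FakeQueue`, `FakeUsmArray`, `deduce_execution_queue` are all hypothetical names, not the real dpctl/numba_dpex API); the real logic lives inside dpctl/numba_dpex:

```python
from dataclasses import dataclass

# Toy stand-ins for dpctl.SyclQueue and dpctl.tensor.usm_ndarray.
@dataclass(frozen=True)
class FakeQueue:
    device: str

@dataclass
class FakeUsmArray:
    data: list
    sycl_queue: FakeQueue

def deduce_execution_queue(*arrays):
    """Compute follows data: every argument must carry the same queue;
    otherwise the kernel launch is ambiguous and must be rejected."""
    queues = set()
    for a in arrays:
        q = getattr(a, "sycl_queue", None)
        if q is None:
            # e.g. a plain numpy.ndarray has no execution queue at all
            raise TypeError(
                f"{type(a).__name__} has no execution queue; all kernel "
                "arguments must be usm_ndarray-like"
            )
        queues.add(q)
    if len(queues) != 1:
        raise ValueError("kernel arguments are bound to different queues")
    return queues.pop()

gpu = FakeQueue("gpu")
a = FakeUsmArray([1, 2, 3], gpu)
b = FakeUsmArray([4, 5, 6], gpu)
assert deduce_execution_queue(a, b) == gpu
```

Passing a host `numpy.ndarray` alongside a `usm_ndarray` corresponds to the `TypeError` branch above, which is exactly the situation the dpex_k DBSCAN benchmark runs into.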

There are two problems with the current implementation.

  • In the dpex_k implementation, the arguments to the kernel are n_samples, min_pts, assignments, sizes, indices_list. Of these, sizes and indices_list are not allocated in the initialize function and therefore are never converted to usm_ndarray. The kernel inputs end up a mix of numpy.ndarray and dpctl.tensor.usm_ndarray, so there is no way to infer the execution queue using compute follows data. Hence the dpex error. To fix the issue, the creation of these two arrays needs to be moved into the initialize call.

  • Fixing the first problem exposes a second issue that is currently hidden by the first failure. Only the get_neighborhood function is a kernel; the compute_cluster function is an njit function. njit functions currently cannot consume usm_ndarray, so to make it work we would have to copy data back to the host after the get_neighborhood call. Doing so would skew the timing measurement. Moreover, implementing dbscan_dpex_k this way makes the comparison of the kernel implementation against other implementations inaccurate, because the benchmark never runs entirely on a device/GPU; comparing its timing with any other implementation would not be an apples-to-apples comparison. We either need to implement compute_cluster as a kernel or remove the dbscan_dpex_k module.
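A minimal sketch of the fix for the first bullet, using toy stand-ins rather than the real dpctl API (`Queue`, `UsmArray`, and this `initialize` signature are illustrative; in the real benchmark the allocations would use dpctl.tensor with a shared SYCL queue): initialize allocates all three arrays, including sizes and indices_list, bound to one queue, so queue deduction over the kernel arguments can succeed.

```python
class Queue:
    """Toy stand-in for dpctl.SyclQueue."""
    def __init__(self, device):
        self.device = device

class UsmArray:
    """Toy stand-in for dpctl.tensor.usm_ndarray: data plus its queue."""
    def __init__(self, data, sycl_queue):
        self.data = list(data)
        self.sycl_queue = sycl_queue

def initialize(n_samples, queue):
    # Allocate *all* kernel array arguments here, bound to one queue --
    # including sizes and indices_list, which the current code creates as
    # host-side numpy.ndarray outside initialize().
    assignments = UsmArray([-1] * n_samples, queue)
    sizes = UsmArray([0] * n_samples, queue)
    indices_list = UsmArray([0] * (n_samples * n_samples), queue)
    return assignments, sizes, indices_list

q = Queue("gpu")
assignments, sizes, indices_list = initialize(4, q)
# Every array argument now shares one queue, so the launch queue for the
# @numba_dpex.kernel call is unambiguous.
assert {a.sycl_queue for a in (assignments, sizes, indices_list)} == {q}
```

The scalar arguments (n_samples, min_pts) carry no queue, which is fine: only the array arguments participate in queue deduction under compute follows data.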
