Alpaka container and algorithm library #102
Is it possible to use thrust directly as the implementation for the CUDA backend? Or would that interfere with the use of alpaka buffers? As for naming suggestions: since it emulates the "look and feel" of the STL, what about "Alpaka Standard Template Library" (ASTL), with a corresponding namespace?
I am still not sure about using thrust together with alpaka:
we could rewrite the thrust containers and algorithms we need in alp.
Alp sounds great. What do you think about creating a gist and collecting the interface there?
A container somewhat similar to thrust::device_vector would reduce our code base again and would make algorithms on its memory safer. For HASEonGPU we only need a container equivalent to std::array, so no dynamic resizing is necessary.
On the algorithm side, we need a reduce and an exclusive scan/prefix sum algorithm. Basing them on alpaka buffers, together with a wrapper for the container, would be perfect 😸
I would suggest creating a separate repository for these containers and algorithms. Name suggestions? 🐯