Easy way to port C++ code using std::array to use CUDA thrust?

I have existing C++11 code using std::array in the following form:

#include <array>
const unsigned int arraySize = 1024;
#define ARRAY_DEF std::array<int, arraySize>

int main()
{
    ARRAY_DEF x;
    x.fill(1);

    return 0;
}

Throughout the code I use ARRAY_DEF for readability and to keep the array type easy to maintain. No problems there.

Now I'd like to port the code to run in CUDA on the GPU. The problem is that std::array cannot be used in device code.

I think I need to use thrust::device_vector, but I can't see an easy way to declare a vector of static size inside a #define. (The only way I see is to pass the size to the constructor after the variable name, which defeats the point of using the #define.)
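For example, the only form I can see is roughly this (a minimal sketch of what I mean):

#include <thrust/device_vector.h>

const unsigned int arraySize = 1024;
#define ARRAY_DEF thrust::device_vector<int>

int main()
{
    // The size is a constructor argument, not part of the type,
    // so every declaration has to repeat it after the variable name:
    ARRAY_DEF x(arraySize);

    return 0;
}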

Is there another approach to declaring a vector with a static size inside a #define?

Or is there perhaps another class in the CUDA libraries that mimics std::array and can run on the device?
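(One candidate I've come across, though I'm not sure it fits: newer CUDA toolkits ship NVIDIA's libcu++, whose cuda::std::array is intended to mirror the std::array interface and be usable in device code. A minimal sketch, assuming libcu++ is available:)

#include <cuda/std/array>

const unsigned int arraySize = 1024;
#define ARRAY_DEF cuda::std::array<int, arraySize>

__global__ void kernel()
{
    // Each thread gets its own copy in local memory, just like a
    // std::array on the stack in host code.
    ARRAY_DEF x;
    x.fill(1);
}

int main()
{
    kernel<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}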



Solution 1:[1]

Thanks all! Sadly, none of these answers fit my need. I took matters into my own hands and created a class which mimics std::array (mostly), can run in device/kernel functions, and was largely a find-and-replace to adopt. (OK, I also needed to replace some other STL functions, but that's another question.) https://github.com/MikeBSilverman/CUDAHostDeviceArray (dead link)

Hope it helps someone else.
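The repository link above is dead; the class there presumably looked something like the following sketch (a guess at the general shape, not the actual code from the repository):

#include <cstddef>

// A minimal std::array work-alike usable in both host and device code.
// The real class presumably implemented more of the std::array interface.
template <typename T, std::size_t N>
struct HostDeviceArray
{
    T elems[N];

    __host__ __device__ T&       operator[](std::size_t i)       { return elems[i]; }
    __host__ __device__ const T& operator[](std::size_t i) const { return elems[i]; }

    __host__ __device__ std::size_t size() const { return N; }
    __host__ __device__ T*          begin()      { return elems; }
    __host__ __device__ T*          end()        { return elems + N; }

    __host__ __device__ void fill(const T& value)
    {
        for (std::size_t i = 0; i < N; ++i)
            elems[i] = value;
    }
};

const unsigned int arraySize = 1024;
#define ARRAY_DEF HostDeviceArray<int, arraySize>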

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources
[1] Solution 1: Thibault