Description
I am wondering why complex numbers are not considered in the Array API, and whether we could give them a second thought and make them native dtypes in the API.
The Dataframe API is not considered in the rest of this issue 🙂
I spent quite some time making sure complex numbers are first-class citizens in CuPy, as many scientific computing applications require them. In quantum mechanics, for example, complex numbers are a cornerstone and we can't live without them. Even in some of the machine learning / deep learning work that we do, whether classical or quantum (yes, for those who don't know yet, there is quantum machine learning 😁), we need complex numbers in various places, such as building tensors or communicating with simulations, especially those using physics-aware neural networks. So it is a great pain for us not to be able to build and operate on complex arrays natively.
To date, complex numbers are also an integral part of mainstream programming languages. For example, C has had them since C99, and so has C++ (`std::complex`). Our beloved Python has `complex` too, so it is just so weird IMHO that when we talk about native dtypes they're being excluded.
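For context, here is a minimal sketch (assuming only the standard library and NumPy) of how naturally the built-in `complex` type and NumPy's existing complex dtypes already interoperate today:

```python
import numpy as np

# Python's built-in complex type is a native scalar type.
z = 1 + 2j
print(z * z.conjugate())        # (5+0j)

# NumPy promotes Python complex scalars to complex128 by default,
# and complex64 is available as the single-precision counterpart.
a = np.array([1 + 2j, 3 - 4j])   # dtype=complex128
b = a.astype(np.complex64)       # single precision
print(a.dtype, b.dtype)          # complex128 complex64
print(np.abs(a), np.angle(a))    # magnitude and phase
```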
As for language extensions supporting GPUs: in CUDA we have `thrust::complex` (which currently supports `complex64`/`complex128`) as a clone of `std::complex`, and it is likely that `libcu++` will take over this role from Thrust; in ROCm there is also a Thrust clone as well as native support in HIP. So at least on NVIDIA/AMD GPUs we are in good shape.
Turning to library support, as far as I know:

- NumPy supports `complex64`/`complex128`, but not `complex32` (ENH: half precision complex numpy/numpy#14753)
- CuPy supports `complex64`/`complex128`, and `complex32` is being evaluated (ex: [WIP] Add `cupy.complex32` cupy/cupy#4454)
- PyTorch's support for `complex32`/`complex64`/`complex128` is catching up (I am unaware of any meta-issue summarizing the status quo, but the label `module: complex` is a good reference)
- SciPy / `cupyx.scipy` has many components supporting complex numbers, the most recent prominent case being the extensive `ndimage` overhaul (ex: ENH: Support complex-valued images and kernels for many ndimage filters scipy/scipy#12725) done by @grlee77 for image processing (yes, image processing also needs complex numbers!)
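To make the above concrete, here is a minimal sketch (assuming a CUDA GPU, CuPy, and a PyTorch version recent enough to ship complex support; exact minimum versions are not checked here) showing that the same complex data can already be expressed in each library, just not through a standardized API:

```python
import numpy as np
import cupy as cp
import torch

data = [1 + 2j, 3 - 4j]

x_np = np.asarray(data, dtype=np.complex64)        # NumPy: complex64/complex128
x_cp = cp.asarray(data, dtype=cp.complex64)        # CuPy: complex64/complex128 on the GPU
x_th = torch.tensor(data, dtype=torch.complex64)   # PyTorch: complex64/complex128

# Elementwise complex math works in all three, but each library spells its
# dtypes and functions slightly differently -- exactly what standardized
# complex dtypes in the Array API would unify.
print(np.conj(x_np), cp.conj(x_cp), torch.conj(x_th))
```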
The reason I also mention `complex32` above is that CUDA now provides `complex32` support in some CUDA libraries like cuBLAS and cuFFT. With dedicated hardware acceleration for `float16`, it is expected that `complex32` can benefit as well; see the preliminary FFT test being done in cupy/cupy#4407. Hopefully, with complex number support in ML/DL frameworks (`complex64` and `complex128` are enough to start), many more applications can benefit too.
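Since none of the libraries above ships a native `complex32` dtype yet, the best one can do today is an awkward emulation. The sketch below is a hypothetical workaround (not an API of NumPy or any other library): it keeps the real and imaginary parts as separate `float16` arrays and upcasts whenever actual complex arithmetic such as an FFT is needed.

```python
import numpy as np

# Hypothetical "complex32" emulation: store the real and imaginary
# parts as two separate float16 arrays.
real16 = np.random.rand(8).astype(np.float16)
imag16 = np.random.rand(8).astype(np.float16)

# To actually compute (e.g. an FFT) we must upcast to a supported
# complex dtype, giving up most of the memory/bandwidth savings
# that half precision was supposed to buy us.
x64 = real16.astype(np.float32) + 1j * imag16.astype(np.float32)   # complex64
y = np.fft.fft(x64)
print(real16.dtype, x64.dtype, y.dtype)
```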
I am aware that the Array API picks DLPack as the primary protocol for zero-copy data exchange, and that DLPack currently lacks complex number support. This is one of the reasons I do not like DLPack. While I will create a separate issue to discuss alternatives to DLPack, I think revising DLPack's format to add complex dtypes is fairly straightforward (and should be done ASAP regardless of the Array API standardization, due to the needs of ML/DL libraries).
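To illustrate the gap, here is a rough sketch (assuming a CUDA GPU and versions of CuPy and PyTorch recent enough to provide `cupy.from_dlpack` and the `__dlpack__` protocol) of why the missing complex type code in DLPack hurts: the real-valued exchange is zero-copy, while the complex one is expected to fail until DLPack's dtype enum gains a complex kind.

```python
import cupy as cp
import torch

# Zero-copy exchange of a real-valued GPU array works today.
x = torch.arange(4, dtype=torch.float32, device="cuda")
y = cp.from_dlpack(x)                      # shares the same GPU memory, no copy
print(y.dtype, y.data.ptr == x.data_ptr())

# The same exchange with a complex tensor is expected to fail, since
# DLPack's data type enum currently has no complex kind.
z = torch.zeros(4, dtype=torch.complex64, device="cuda")
try:
    w = cp.from_dlpack(z)
    print("complex exchange worked:", w.dtype)
except Exception as exc:                   # exact error depends on library versions
    print("complex exchange not supported:", exc)
```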
Disclaimer: This issue reflects my own research interests (relevant to my and my colleagues' work) and is not driven by CuPy, one of the Array API stakeholders I will represent.