Description
Running the array-api-tests suite on dpctl.tensor
(https://github.com/IntelPython/dpctl) with

```shell
ONEAPI_DEVICE_SELECTOR=*:cpu ARRAY_API_TESTS_MODULE=dpctl.tensor python -m pytest array_api_tests/test_creation_functions.py
```

two new test failures occur consistently (after updating from f82c7bc to the recent tip of main, i.e. 9d7777b):
```
FAILED array_api_tests/test_creation_functions.py::test_eye - hypothesis.errors.DeadlineExceeded: Test took 1470.80ms, which exceeds the deadline of 800.00ms
FAILED array_api_tests/test_creation_functions.py::test_linspace - hypothesis.errors.DeadlineExceeded: Test took 864.40ms, which exceeds the deadline of 800.00ms
```
Here is my understanding of what happens. Hypothesis generates a sufficiently large array, e.g. eye(47, 100), but the output is then checked for correctness one element at a time. This is a suboptimal access pattern for offloading libraries, since it triggers 4700 submissions of small kernels, and the check takes much longer than the creation of the array itself:
```
In [7]: import dpctl, dpctl.tensor as dpt

In [8]: %time eye_m = dpt.eye(47, 100)
CPU times: user 3.23 ms, sys: 0 ns, total: 3.23 ms
Wall time: 2.76 ms

In [9]: %time all(eye_m[i, j] == (1 if i == j else 0) for i in range(eye_m.shape[0]) for j in range(eye_m.shape[1]))
CPU times: user 6.68 s, sys: 1.98 s, total: 8.67 s
Wall time: 2.09 s
Out[9]: True
```
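For comparison, checking the whole result in a single vectorized comparison avoids the per-element kernel submissions entirely. A minimal sketch using NumPy as a stand-in for dpctl.tensor (the sizes mirror the eye(47, 100) case above; the same idiom applies to any array-API namespace):

```python
import numpy as np

# Build the 47x100 identity-like matrix and its expected value,
# then compare with one vectorized call instead of 4700 scalar reads.
m = np.eye(47, 100)
expected = np.zeros((47, 100))
k = min(47, 100)
expected[np.arange(k), np.arange(k)] = 1.0  # ones on the main diagonal
ok = bool((m == expected).all())
print(ok)  # True
```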
The remedy would be to restrict the maximal dimension size that Hypothesis is allowed to draw for these tests.
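A hedged sketch of what such a cap might look like in a Hypothesis-based test. MAX_SIDE and the test body are illustrative assumptions, not the actual array-api-tests configuration, and NumPy again stands in for dpctl.tensor:

```python
import numpy as np
from hypothesis import given, settings, strategies as st

MAX_SIDE = 32  # assumed cap on the per-dimension size; not from array-api-tests

sides = st.integers(min_value=0, max_value=MAX_SIDE)

@given(n_rows=sides, n_cols=sides)
@settings(deadline=None, max_examples=20)  # bounded sizes keep each example fast
def test_eye_bounded(n_rows, n_cols):
    m = np.eye(n_rows, n_cols)
    expected = np.zeros((n_rows, n_cols))
    k = min(n_rows, n_cols)
    expected[np.arange(k), np.arange(k)] = 1.0
    # One vectorized comparison per drawn example.
    assert (m == expected).all()

test_eye_bounded()
```

Alternatively (or additionally), the per-test deadline could be raised via `settings(deadline=...)`, but capping the dimension size addresses the underlying cost rather than masking it.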