
ENH: Review exported symbols; redesign test_all #315


Merged: 22 commits merged into data-apis:main from crusaderky:test_all, Jun 6, 2025

Conversation

@crusaderky (Contributor) commented Apr 21, 2025

#288 introduced __dir__, which completely neutered test_all.
Instead of reverting the change, this PR attempts to reinvent the test to be more useful.

CC @jorenham @ev-br
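
For context, a minimal sketch (names illustrative, not the actual code) of how a module-level __dir__ neuters a test that compares dir() against __all__:

__all__ = ["asarray", "astype"]

def __dir__() -> list[str]:
    return __all__

# dir(module) now returns exactly __all__, so
# set(dir(module)) == set(module.__all__) holds by construction, and the
# old test_all can no longer catch missing or leaked exports.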

Copilot AI review requested due to automatic review settings, April 21, 2025 10:19

@Copilot (Copilot AI) left a comment

Pull Request Overview

This PR overhauls the testing of exported names by replacing the old test_all function with new, more focused tests and by standardizing the __dir__ implementations across multiple array_api_compat modules. Key changes include:

  • Replacing the test_all function with new tests (test_dir and test_builtins_collision) that better validate module exports.
  • Removing redundant _all_ignore variables and cleaning up __all__ definitions and __dir__ implementations in various modules.
  • Consolidating __all__ handling in the numpy, torch, dask, cupy, and common modules for more consistent behavior.

Reviewed Changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated no comments.

Summary per file:

  • tests/test_all.py: New tests using NAMES/XFAILS and updated parameterizations, replacing test_all.
  • array_api_compat/torch/linalg.py: Removed _all_ignore and cleaned up __all__ and __dir__ definitions.
  • array_api_compat/torch/fft.py: Removed _all_ignore; standardized the __dir__ definition.
  • array_api_compat/torch/_aliases.py: Removed _all_ignore to simplify export handling.
  • array_api_compat/numpy/linalg.py: Revised __all__ composition and removed redundant concatenation of __all__.
  • array_api_compat/numpy/fft.py: Modified __all__ concatenation and removed extra deletion lines for cleanup.
  • array_api_compat/numpy/_typing.py: Removed _all_ignore.
  • array_api_compat/numpy/_aliases.py: Updated __all__ assembly and removed _all_ignore for consistency.
  • array_api_compat/dask/array/linalg.py: Removed _all_ignore and added a __dir__ function returning __all__.
  • array_api_compat/dask/array/fft.py: Removed _all_ignore and standardized the __dir__ implementation.
  • array_api_compat/dask/array/_aliases.py: Removed _all_ignore.
  • array_api_compat/cupy/_typing.py: Removed _all_ignore.
  • array_api_compat/cupy/_aliases.py: Removed _all_ignore.
  • array_api_compat/common/_linalg.py: Removed _all_ignore.
  • array_api_compat/common/_helpers.py: Removed _all_ignore.
  • array_api_compat/common/_aliases.py: Removed _all_ignore.
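
The standardized __dir__ pattern referenced in several of these summaries is essentially a one-liner; a minimal sketch (matching the descriptions above, not copied from the diff):

def __dir__() -> list[str]:
    return __all__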

@crusaderky changed the title from "Overhaul test_all" to "TST: Redesign test_all" on Apr 21, 2025

Review thread on NAMES = { (tests/test_all.py)

@crusaderky (Contributor, Author):
Maybe a better version of this test could automatically scrape data-apis/array-api/?

@crusaderky (Contributor, Author) commented Apr 24, 2025:

Are you suggesting that array-api-compat should add array-api as a git submodule, for testing only?
If so, do you agree that such a change is best left to a follow-up?

Reply (Member):

Are you suggesting that array-api-compat should add array-api as a git submodule, for testing only?

Definitely not.

If so, do you agree that such a change is best left to a follow-up?

Absolutely yes.
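
A rough sketch of what such a follow-up could look like; the array_api_stubs module path below is an assumption about the data-apis/array-api repository layout, not something this PR introduces:

# Hypothetical: derive the spec-mandated names from the array-api stubs
# instead of hard-coding them in NAMES.
import array_api_stubs._2023_12 as stubs  # assumed module path

SPEC_NAMES = {name for name in dir(stubs) if not name.startswith("_")}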

ev-br previously requested changes, Apr 22, 2025

@ev-br (Member) left a comment

IIUC, the purpose of test_all --- with all its considerable sins! --- was three-fold:

  • make sure that the user-visible __dir__/__all__ lists contain everything they should
  • make sure that unwanted names do not bleed into the user-visible __dir__/__all__ lists
  • make sure that internal implementation modules' __all__ lists are sensible

This PR seems to work for the first item; the second one seems to still allow some strange things:

In [11]: import array_api_compat.numpy as anp

In [12]: "Final" in dir(anp)
Out[12]: True

In [13]: import numpy as np

In [14]: "Final" in dir(np)
Out[14]: False

For the third item, maybe we should somehow check that __all__ lists do not contain duplicate items? This would be useful for development (for one recent example, I'm not entirely sure whether #317 handles its __all__ lists correctly; it would be nice to have test support for this).
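
A minimal sketch of such a duplicate check (test name and module list illustrative):

import importlib

import pytest

@pytest.mark.parametrize("mod", ["array_api_compat.numpy", "array_api_compat.torch"])
def test_no_duplicates_in_all(mod):
    # __all__ should not list the same name twice.
    all_ = importlib.import_module(mod).__all__
    assert len(all_) == len(set(all_))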

@crusaderky (Contributor, Author):

I've heavily reworked the PR and fixed many issues where array-api-compat was hiding objects declared in the wrapped module.
Output of the new test vs. the old array_api_compat is as follows:

FAILED tests/test_all.py::test_array_api_names[numpy-] - AssertionError: Missing exports: {'__array_api_version__', '__array_namespace_info__'}
FAILED tests/test_all.py::test_array_api_names[cupy-] - AssertionError: Missing exports: {'__array_api_version__', '__array_namespace_info__'}
FAILED tests/test_all.py::test_array_api_names[torch-] - AssertionError: Missing exports: {'__array_api_version__', '__array_namespace_info__'}
FAILED tests/test_all.py::test_array_api_names[dask.array-] - AssertionError: Missing exports: {'__array_api_version__', '__array_namespace_info__'}
FAILED tests/test_all.py::test_compat_doesnt_hide_names[numpy-] - AssertionError: Non-Array API names have been hidden: {'test', 'kernel_version', 'numarray', 'polynomial', 'compat', 'typing', 'testing', 'version', 'dtypes', 'l...
FAILED tests/test_all.py::test_compat_doesnt_hide_names[numpy-fft] - AssertionError: Non-Array API names have been hidden: {'test', 'helper'}
FAILED tests/test_all.py::test_compat_doesnt_hide_names[numpy-linalg] - AssertionError: Non-Array API names have been hidden: {'test', 'linalg'}
FAILED tests/test_all.py::test_compat_doesnt_hide_names[torch-] - AssertionError: Non-Array API names have been hidden: {'cpu', 'cuda'}
FAILED tests/test_all.py::test_compat_doesnt_hide_names[torch-fft] - AssertionError: Non-Array API names have been hidden: {'ihfftn', 'common_args', 'hfft2', 'factory_common_args', 'ihfft2', 'sys', 'hfftn'}
FAILED tests/test_all.py::test_compat_doesnt_hide_names[dask.array-] - AssertionError: Non-Array API names have been hidden: {'einsumfuncs', 'annotations', 'utils', 'optimization', 'chunk_types', 'warnings', 'wrap', 'reductions', 'r...
FAILED tests/test_all.py::test_compat_doesnt_add_names[numpy-] - AssertionError: array-api-compat is adding non-Array API names: {'is_pydata_sparse_namespace', 'is_numpy_array', 'is_ndonnx_namespace', 'is_cupy_namespace', 'is_...
FAILED tests/test_all.py::test_compat_doesnt_add_names[torch-fft] - AssertionError: array-api-compat is adding non-Array API names: {'annotations', 'Sequence', 'Union', 'Literal', 'Array'}
FAILED tests/test_all.py::test_compat_doesnt_add_names[dask.array-] - AssertionError: array-api-compat is adding non-Array API names: {'Final'}
FAILED tests/test_all.py::test_compat_doesnt_add_names[dask.array-fft] - AssertionError: array-api-compat is adding non-Array API names: {'get_xp', 'fft_all', 'da'}
FAILED tests/test_all.py::test_compat_doesnt_add_names[dask.array-linalg] - AssertionError: array-api-compat is adding non-Array API names: {'Literal', 'linalg_all', 'get_xp', 'da'}

@crusaderky:

@ev-br gentle ping

@crusaderky:

@ev-br gentle ping

@ev-br (Member) commented May 15, 2025

Yes, thanks for the ping. My plan is to look at this and gh-321 right after 1.12 is out the door (so that pytorch==2.7 is usable).

@ev-br (Member) commented Jun 4, 2025

Sorry for the delay.

This PR seems to have some visible effects on which names are available from the wrapped namespaces. Are these intended?

On main
-------

In [13]: for bare_ns in [np, cp, da, torch]:
    ...:     xp = array_namespace(bare_ns.arange(3))
    ...:     bare_names = set(dir(bare_ns))
    ...:     xp_names = set(dir(xp))
    ...:     print(f"{xp.__name__}, {len(bare_names - xp_names)} {len(xp_names - bare_names)}")
    ...: 
array_api_compat.numpy, 24 7
array_api_compat.cupy, 28 33
array_api_compat.dask.array, 2 37
array_api_compat.torch, 441 25

In [16]: for bare_ns in [np, cp, da, torch]:
    ...:     xp = array_namespace(bare_ns.arange(3))
    ...:     bare_names = set([x for x in dir(bare_ns) if not x.startswith("_")])
    ...:     xp_names = set([x for x in dir(xp) if not x.startswith("_")])
    ...:     print(f"{xp.__name__}, {len(bare_names - xp_names)} {len(xp_names - bare_names)}")
    ...: 
array_api_compat.numpy, 0 4
array_api_compat.cupy, 0 28
array_api_compat.dask.array, 0 33
array_api_compat.torch, 2 20


On branch
---------



In [1]: import numpy as np

In [2]: import torch
In [3]: import cupy as cp

In [4]: import dask.array as da

In [5]: from array_api_compat import array_namespace


In [7]: for bare_ns in [np, cp, da, torch]:
   ...:     xp = array_namespace(bare_ns.arange(3))
   ...:     bare_names = set([x for x in dir(bare_ns) ])
   ...:     xp_names = set([x for x in dir(xp) ])
   ...:     print(f"{xp.__name__}, {len(bare_names - xp_names)} {len(xp_names - bare_names)}")
   ...: 
array_api_compat.numpy, 33 3
array_api_compat.cupy, 37 30
array_api_compat.dask.array, 11 34
array_api_compat.torch, 447 22

In [6]: for bare_ns in [np, cp, da, torch]:
   ...:     xp = array_namespace(bare_ns.arange(3))
   ...:     bare_names = set([x for x in dir(bare_ns) if not x.startswith("_")])
   ...:     xp_names = set([x for x in dir(xp) if not x.startswith("_")])
   ...:     print(f"{xp.__name__}, {len(bare_names - xp_names)} {len(xp_names - bare_names)}")
   ...: 
array_api_compat.numpy, 0 3
array_api_compat.cupy, 0 28
array_api_compat.dask.array, 0 32
array_api_compat.torch, 0 20

@crusaderky (Contributor, Author) commented Jun 4, 2025

@ev-br this may be clearer. It shows that all changes in visibility are desirable:

import yaml
from array_api_compat import array_namespace
import numpy as np
import cupy as cp
import dask.array as da
import torch

out = {}
for bare_ns in [np, cp, da, torch]:
    xp = array_namespace(bare_ns.arange(3))
    bare_names = set(dir(bare_ns))
    xp_names = set(dir(xp))
    hides = sorted(bare_names - xp_names)
    adds = sorted(xp_names - bare_names)
    out[f"array-api-compat hides from {bare_ns.__name__}"] = hides
    out[f"array-api-compat adds to {bare_ns.__name__}"] = adds

print(yaml.dump(out))
Then, in the shell:

git checkout main
python dump.py > main.txt
git checkout test_all
python dump.py > test_all.txt
diff -c99999 main.txt test_all.txt
*** main.txt	2025-06-04 12:35:57.385160311 +0100
--- test_all.txt	2025-06-04 12:35:48.335020354 +0100
***************
*** 1,628 ****
  array-api-compat adds to cupy:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
- - _aliases
- - _info
- - _typing
  - acos
  - acosh
  - asin
  - asinh
  - astype
  - atan
  - atan2
  - atanh
  - bitwise_invert
  - bitwise_left_shift
  - bitwise_right_shift
  - bool
  - concat
  - cumulative_prod
  - cumulative_sum
  - isdtype
  - matrix_transpose
  - permute_dims
  - pow
  - unique_all
  - unique_counts
  - unique_inverse
  - unique_values
  - unstack
  - vecdot
  array-api-compat adds to dask.array:
- - Final
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
- - _aliases
- - _info
  - acos
  - acosh
  - argsort
  - asin
  - asinh
  - astype
  - atan
  - atan2
  - atanh
  - bitwise_invert
  - bitwise_left_shift
  - bitwise_right_shift
  - can_cast
  - concat
  - cumulative_prod
  - cumulative_sum
  - finfo
  - iinfo
  - isdtype
  - matrix_transpose
  - permute_dims
  - pow
  - sort
  - unique_all
  - unique_counts
  - unique_inverse
  - unique_values
  - unstack
  - vecdot
  array-api-compat adds to numpy:
- - Final
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
- - __annotations__
- - _aliases
- - _info
  array-api-compat adds to torch:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
- - _aliases
- - _info
- - _typing
  - astype
  - bitwise_invert
  - broadcast_arrays
  - cumulative_prod
  - cumulative_sum
  - expand_dims
  - isdtype
  - matrix_transpose
  - permute_dims
  - repeat
  - take_along_axis
  - unique_all
  - unique_counts
  - unique_inverse
  - unique_values
  - unstack
  - vecdot
  array-api-compat hides from cupy:
  - __getattr__
  - __version__
  - _binary
  - _core
  - _creation
  - _cupy
  - _cupyx
  - _default_memory_pool
  - _default_pinned_memory_pool
  - _deprecated_apis
  - _embed_signatures
  - _environment
  - _functional
  - _functools
  - _indexing
  - _io
  - _logic
  - _manipulation
  - _math
  - _misc
  - _numpy
  - _padding
  - _sorting
  - _statistics
  - _sys
  - _template
  - _util
  - _version
  array-api-compat hides from dask.array:
- - ARRAY_EXPR_ENABLED
  - __all__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
- - annotations
- - chunk
- - chunk_types
- - core
- - creation
- - dispatch
- - einsumfuncs
- - importlib
- - numpy_compat
- - optimization
- - reductions
- - routines
- - slicing
- - tiledb_io
- - ufunc
- - utils
- - warnings
- - wrap
  array-api-compat hides from numpy:
  - _CopyMode
  - _NoValue
  - __NUMPY_SETUP__
  - __all__
  - __config__
  - __dir__
  - __expired_attributes__
  - __former_attrs__
  - __future_scalars__
  - __getattr__
  - __numpy_submodules__
  - _array_api_info
  - _core
  - _distributor_init
  - _expired_attrs_2_0
  - _globals
  - _int_extended_msg
  - _mat
  - _msg
  - _pyinstaller_hooks_dir
  - _pytesttester
  - _specific_msg
  - _type_info
  - _utils
  array-api-compat hides from torch:
  - _Any
  - _C
  - _Callable
  - _GLOBAL_DEVICE_CONTEXT
  - _InputT
  - _Optional
  - _ParamSpec
  - _RetT
  - _TorchCompileInductorWrapper
  - _TorchCompileWrapper
  - _TritonLibrary
  - _TypeIs
  - _TypeVar
  - _Union
  - _VF
  - __all__
  - __all_and_float_types
  - __annotations__
  - __config__
  - __future__
  - __getattr__
  - __version__
  - _adaptive_avg_pool2d
  - _adaptive_avg_pool3d
  - _add_batch_dim
  - _add_relu
  - _add_relu_
  - _addmm_activation
  - _aminmax
  - _amp_foreach_non_finite_check_and_unscale_
  - _amp_update_scale_
  - _as_tensor_fullprec
  - _assert
  - _assert_async
  - _assert_scalar
  - _assert_tensor_metadata
  - _awaits
  - _batch_norm_impl_index
  - _cast_Byte
  - _cast_Char
  - _cast_Double
  - _cast_Float
  - _cast_Half
  - _cast_Int
  - _cast_Long
  - _cast_Short
  - _check
  - _check_index
  - _check_is_size
  - _check_not_implemented
  - _check_tensor_all
  - _check_tensor_all_with
  - _check_type
  - _check_value
  - _check_with
  - _choose_qparams_per_tensor
  - _chunk_cat
  - _classes
  - _coalesce
  - _compile
  - _compute_linear_combination
  - _conj
  - _conj_copy
  - _conj_physical
  - _constrain_as_size
  - _convert_indices_from_coo_to_csr
  - _convert_indices_from_csr_to_coo
  - _convert_weight_to_int4pack
  - _convert_weight_to_int4pack_for_cpu
  - _convolution
  - _convolution_mode
  - _copy_from
  - _copy_from_and_resize
  - _cslt_compress
  - _cslt_sparse_mm
  - _cslt_sparse_mm_search
  - _ctc_loss
  - _cudnn_ctc_loss
  - _cudnn_init_dropout_state
  - _cudnn_rnn
  - _cudnn_rnn_flatten_weight
  - _cufft_clear_plan_cache
  - _cufft_get_plan_cache_max_size
  - _cufft_get_plan_cache_size
  - _cufft_set_plan_cache_max_size
  - _cummax_helper
  - _cummin_helper
  - _custom_op
  - _custom_ops
  - _debug_has_internal_overlap
  - _decomp
  - _deprecated_attrs
  - _dim_arange
  - _dirichlet_grad
  - _disable_dynamo
  - _disable_functionalization
  - _dispatch
  - _dyn_quant_matmul_4bit
  - _dyn_quant_pack_4bit_weight
  - _efficientzerotensor
  - _embedding_bag
  - _embedding_bag_forward_only
  - _empty_affine_quantized
  - _empty_per_channel_affine_quantized
  - _enable_functionalization
  - _environment
  - _euclidean_dist
  - _export
  - _fake_quantize_learnable_per_channel_affine
  - _fake_quantize_learnable_per_tensor_affine
  - _fake_quantize_per_tensor_affine_cachemask_tensor_qparams
  - _fft_c2c
  - _fft_c2r
  - _fft_r2c
  - _fill_mem_eff_dropout_mask_
  - _foobar
  - _foreach_abs
  - _foreach_abs_
  - _foreach_acos
  - _foreach_acos_
  - _foreach_add
  - _foreach_add_
  - _foreach_addcdiv
  - _foreach_addcdiv_
  - _foreach_addcmul
  - _foreach_addcmul_
  - _foreach_asin
  - _foreach_asin_
  - _foreach_atan
  - _foreach_atan_
  - _foreach_ceil
  - _foreach_ceil_
  - _foreach_clamp_max
  - _foreach_clamp_max_
  - _foreach_clamp_min
  - _foreach_clamp_min_
  - _foreach_copy_
  - _foreach_cos
  - _foreach_cos_
  - _foreach_cosh
  - _foreach_cosh_
  - _foreach_div
  - _foreach_div_
  - _foreach_erf
  - _foreach_erf_
  - _foreach_erfc
  - _foreach_erfc_
  - _foreach_exp
  - _foreach_exp_
  - _foreach_expm1
  - _foreach_expm1_
  - _foreach_floor
  - _foreach_floor_
  - _foreach_frac
  - _foreach_frac_
  - _foreach_lerp
  - _foreach_lerp_
  - _foreach_lgamma
  - _foreach_lgamma_
  - _foreach_log
  - _foreach_log10
  - _foreach_log10_
  - _foreach_log1p
  - _foreach_log1p_
  - _foreach_log2
  - _foreach_log2_
  - _foreach_log_
  - _foreach_max
  - _foreach_maximum
  - _foreach_maximum_
  - _foreach_minimum
  - _foreach_minimum_
  - _foreach_mul
  - _foreach_mul_
  - _foreach_neg
  - _foreach_neg_
  - _foreach_norm
  - _foreach_pow
  - _foreach_pow_
  - _foreach_reciprocal
  - _foreach_reciprocal_
  - _foreach_round
  - _foreach_round_
  - _foreach_rsqrt
  - _foreach_rsqrt_
  - _foreach_sigmoid
  - _foreach_sigmoid_
  - _foreach_sign
  - _foreach_sign_
  - _foreach_sin
  - _foreach_sin_
  - _foreach_sinh
  - _foreach_sinh_
  - _foreach_sqrt
  - _foreach_sqrt_
  - _foreach_sub
  - _foreach_sub_
  - _foreach_tan
  - _foreach_tan_
  - _foreach_tanh
  - _foreach_tanh_
  - _foreach_trunc
  - _foreach_trunc_
  - _foreach_zero_
  - _freeze_functional_tensor
  - _from_functional_tensor
  - _functional_assert_async
  - _functional_assert_scalar
  - _functional_sym_constrain_range
  - _functional_sym_constrain_range_for_size
  - _functionalize_apply_view_metas
  - _functionalize_are_all_mutations_hidden_from_autograd
  - _functionalize_are_all_mutations_under_no_grad_or_inference_mode
  - _functionalize_commit_update
  - _functionalize_enable_reapply_views
  - _functionalize_get_storage_size
  - _functionalize_has_data_mutation
  - _functionalize_has_metadata_mutation
  - _functionalize_is_multi_output_view
  - _functionalize_is_symbolic
  - _functionalize_mark_mutation_hidden_from_autograd
  - _functionalize_replace
  - _functionalize_set_storage_changed
  - _functionalize_sync
  - _functionalize_unsafe_set
  - _functionalize_was_inductor_storage_resized
  - _functionalize_was_storage_changed
  - _functorch
  - _fused_adagrad_
  - _fused_adam_
  - _fused_adamw_
  - _fused_dropout
  - _fused_moving_avg_obs_fq_helper
  - _fused_sdp_choice
  - _fused_sgd_
  - _fw_primal_copy
  - _get_cuda_dep_paths
  - _get_origin
  - _grid_sampler_2d_cpu_fallback
  - _guards
  - _has_compatible_shallow_copy_type
  - _higher_order_ops
  - _histogramdd_bin_edges
  - _histogramdd_from_bin_cts
  - _histogramdd_from_bin_tensors
  - _import_device_backends
  - _import_dotted_name
  - _index_put_impl_
  - _indices_copy
  - _initExtension
  - _int_mm
  - _is_all_true
  - _is_any_true
  - _is_device_backend_autoload_enabled
  - _is_functional_tensor
  - _is_functional_tensor_base
  - _is_zerotensor
  - _jit_internal
  - _lazy_clone
  - _lazy_modules
  - _library
  - _linalg_check_errors
  - _linalg_det
  - _linalg_eigh
  - _linalg_slogdet
  - _linalg_solve_ex
  - _linalg_svd
  - _linalg_utils
  - _load_global_deps
  - _lobpcg
  - _log_softmax
  - _log_softmax_backward_data
  - _logcumsumexp
  - _logging
  - _lowrank
  - _lstm_mps
  - _lu_with_info
  - _make_dep_token
  - _make_dual
  - _make_dual_copy
  - _make_per_channel_quantized_tensor
  - _make_per_tensor_quantized_tensor
  - _masked_scale
  - _masked_softmax
  - _meta_registrations
  - _mirror_autograd_meta_to
  - _mixed_dtypes_linear
  - _mkldnn
  - _mkldnn_reshape
  - _mkldnn_transpose
  - _mkldnn_transpose_
  - _mps_convolution
  - _mps_convolution_transpose
  - _namedtensor_internals
  - _native_batch_norm_legit
  - _native_batch_norm_legit_no_training
  - _native_multi_head_attention
  - _neg_view
  - _neg_view_copy
  - _nested_compute_contiguous_strides_offsets
  - _nested_from_padded
  - _nested_from_padded_and_nested_example
  - _nested_from_padded_tensor
  - _nested_get_jagged_dummy
  - _nested_get_lengths
  - _nested_get_max_seqlen
  - _nested_get_min_seqlen
  - _nested_get_offsets
  - _nested_get_ragged_idx
  - _nested_get_values
  - _nested_get_values_copy
  - _nested_tensor_from_mask
  - _nested_tensor_from_mask_left_aligned
  - _nested_tensor_from_tensor_list
  - _nested_tensor_softmax_with_shape
  - _nested_view_from_buffer
  - _nested_view_from_buffer_copy
  - _nested_view_from_jagged
  - _nested_view_from_jagged_copy
  - _nnpack_available
  - _nnpack_spatial_convolution
  - _ops
  - _overload
  - _pack_padded_sequence
  - _pad_packed_sequence
  - _pin_memory
  - _preload_cuda_deps
  - _prelu_kernel
  - _prims
  - _prims_common
  - _print
  - _propagate_xla_data
  - _refs
  - _register_device_module
  - _remove_batch_dim
  - _reshape_alias_copy
  - _reshape_from_tensor
  - _resize_output_
  - _rowwise_prune
  - _running_with_deploy
  - _safe_softmax
  - _sample_dirichlet
  - _saturate_weight_to_fp16
  - _scaled_dot_product_attention_math
  - _scaled_dot_product_attention_math_for_mps
  - _scaled_dot_product_cudnn_attention
  - _scaled_dot_product_efficient_attention
  - _scaled_dot_product_flash_attention
  - _scaled_dot_product_flash_attention_for_cpu
  - _scaled_grouped_mm
  - _scaled_mm
  - _segment_reduce
  - _shape_as_tensor
  - _sobol_engine_draw
  - _sobol_engine_ff_
  - _sobol_engine_initialize_state_
  - _sobol_engine_scramble_
  - _softmax
  - _softmax_backward_data
  - _sources
  - _sparse_broadcast_to
  - _sparse_broadcast_to_copy
  - _sparse_csr_prod
  - _sparse_csr_sum
  - _sparse_log_softmax_backward_data
  - _sparse_semi_structured_addmm
  - _sparse_semi_structured_apply
  - _sparse_semi_structured_apply_dense
  - _sparse_semi_structured_linear
  - _sparse_semi_structured_mm
  - _sparse_semi_structured_tile
  - _sparse_softmax_backward_data
  - _sparse_sparse_matmul
  - _sparse_sum
  - _stack
  - _standard_gamma
  - _standard_gamma_grad
  - _storage_classes
  - _strobelight
  - _subclasses
  - _sym_acos
  - _sym_asin
  - _sym_atan
  - _sym_cos
  - _sym_cosh
  - _sym_log2
  - _sym_sin
  - _sym_sinh
  - _sym_sqrt
  - _sym_tan
  - _sym_tanh
  - _sync
  - _tensor
  - _tensor_classes
  - _tensor_str
  - _test_autograd_multiple_dispatch
  - _test_autograd_multiple_dispatch_view
  - _test_autograd_multiple_dispatch_view_copy
  - _test_check_tensor
  - _test_functorch_fallback
  - _test_parallel_materialize
  - _test_serialization_subcmul
  - _to_cpu
  - _to_functional_tensor
  - _to_sparse_semi_structured
  - _transform_bias_rescale_qkv
  - _transformer_encoder_layer_fwd
  - _trilinear
  - _triton_multi_head_attention
  - _triton_scaled_dot_attention
  - _unique
  - _unique2
  - _unpack_dual
  - _unsafe_index
  - _unsafe_index_put
  - _unsafe_masked_index
  - _unsafe_masked_index_put_accumulate
  - _use_cudnn_ctc_loss
  - _use_cudnn_rnn_flatten_weight
  - _utils
  - _utils_internal
  - _validate_compressed_sparse_indices
  - _validate_sparse_bsc_tensor_args
  - _validate_sparse_bsr_tensor_args
  - _validate_sparse_compressed_tensor_args
  - _validate_sparse_coo_tensor_args
  - _validate_sparse_csc_tensor_args
  - _validate_sparse_csr_tensor_args
  - _values_copy
  - _vendor
  - _vmap_internals
  - _warn_typed_storage_removal
  - _weight_int4pack_mm
  - _weight_int4pack_mm_for_cpu
  - _weight_int8pack_mm
  - _weight_norm
  - _weight_norm_interface
  - _weights_only_unpickler
  - _wrapped_linear_prepack
  - _wrapped_quantized_linear_prepacked
- - cpu
- - cuda
  array-api-compat adds to cupy:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
  - acos
  - acosh
  - asin
  - asinh
  - astype
  - atan
  - atan2
  - atanh
  - bitwise_invert
  - bitwise_left_shift
  - bitwise_right_shift
  - bool
  - concat
  - cumulative_prod
  - cumulative_sum
  - isdtype
  - matrix_transpose
  - permute_dims
  - pow
  - unique_all
  - unique_counts
  - unique_inverse
  - unique_values
  - unstack
  - vecdot
  array-api-compat adds to dask.array:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
  - acos
  - acosh
  - argsort
  - asin
  - asinh
  - astype
  - atan
  - atan2
  - atanh
  - bitwise_invert
  - bitwise_left_shift
  - bitwise_right_shift
  - can_cast
  - concat
  - cumulative_prod
  - cumulative_sum
  - finfo
  - iinfo
  - isdtype
  - matrix_transpose
  - permute_dims
  - pow
  - sort
  - unique_all
  - unique_counts
  - unique_inverse
  - unique_values
  - unstack
  - vecdot
  array-api-compat adds to numpy:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  array-api-compat adds to torch:
  - UniqueAllResult
  - UniqueCountsResult
  - UniqueInverseResult
  - __array_api_version__
  - __array_namespace_info__
  - astype
  - bitwise_invert
  - broadcast_arrays
  - cumulative_prod
  - cumulative_sum
  - expand_dims
  - isdtype
  - matrix_transpose
  - permute_dims
  - repeat
  - take_along_axis
  - unique_all
  - unique_counts
  - unique_inverse
  - unique_values
  - unstack
  - vecdot
  array-api-compat hides from cupy:
+ - __builtins__
+ - __cached__
+ - __doc__
+ - __file__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  - __version__
  - _binary
  - _core
  - _creation
  - _cupy
  - _cupyx
  - _default_memory_pool
  - _default_pinned_memory_pool
  - _deprecated_apis
  - _embed_signatures
  - _environment
  - _functional
  - _functools
  - _indexing
  - _io
  - _logic
  - _manipulation
  - _math
  - _misc
  - _numpy
  - _padding
  - _sorting
  - _statistics
  - _sys
  - _template
  - _util
  - _version
  array-api-compat hides from dask.array:
  - __all__
+ - __annotations__
+ - __cached__
+ - __doc__
+ - __file__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
  array-api-compat hides from numpy:
  - _CopyMode
  - _NoValue
  - __NUMPY_SETUP__
  - __all__
+ - __cached__
  - __config__
  - __dir__
+ - __doc__
  - __expired_attributes__
+ - __file__
  - __former_attrs__
  - __future_scalars__
  - __getattr__
+ - __loader__
+ - __name__
  - __numpy_submodules__
+ - __package__
+ - __path__
+ - __spec__
  - _array_api_info
  - _core
  - _distributor_init
  - _expired_attrs_2_0
  - _globals
  - _int_extended_msg
  - _mat
  - _msg
  - _pyinstaller_hooks_dir
  - _pytesttester
  - _specific_msg
  - _type_info
+ - _typing
  - _utils
  array-api-compat hides from torch:
  - _Any
  - _C
  - _Callable
  - _GLOBAL_DEVICE_CONTEXT
  - _InputT
  - _Optional
  - _ParamSpec
  - _RetT
  - _TorchCompileInductorWrapper
  - _TorchCompileWrapper
  - _TritonLibrary
  - _TypeIs
  - _TypeVar
  - _Union
  - _VF
  - __all__
  - __all_and_float_types
  - __annotations__
+ - __cached__
  - __config__
+ - __doc__
+ - __file__
  - __future__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  - __version__
  - _adaptive_avg_pool2d
  - _adaptive_avg_pool3d
  - _add_batch_dim
  - _add_relu
  - _add_relu_
  - _addmm_activation
  - _aminmax
  - _amp_foreach_non_finite_check_and_unscale_
  - _amp_update_scale_
  - _as_tensor_fullprec
  - _assert
  - _assert_async
  - _assert_scalar
  - _assert_tensor_metadata
  - _awaits
  - _batch_norm_impl_index
  - _cast_Byte
  - _cast_Char
  - _cast_Double
  - _cast_Float
  - _cast_Half
  - _cast_Int
  - _cast_Long
  - _cast_Short
  - _check
  - _check_index
  - _check_is_size
  - _check_not_implemented
  - _check_tensor_all
  - _check_tensor_all_with
  - _check_type
  - _check_value
  - _check_with
  - _choose_qparams_per_tensor
  - _chunk_cat
  - _classes
  - _coalesce
  - _compile
  - _compute_linear_combination
  - _conj
  - _conj_copy
  - _conj_physical
  - _constrain_as_size
  - _convert_indices_from_coo_to_csr
  - _convert_indices_from_csr_to_coo
  - _convert_weight_to_int4pack
  - _convert_weight_to_int4pack_for_cpu
  - _convolution
  - _convolution_mode
  - _copy_from
  - _copy_from_and_resize
  - _cslt_compress
  - _cslt_sparse_mm
  - _cslt_sparse_mm_search
  - _ctc_loss
  - _cudnn_ctc_loss
  - _cudnn_init_dropout_state
  - _cudnn_rnn
  - _cudnn_rnn_flatten_weight
  - _cufft_clear_plan_cache
  - _cufft_get_plan_cache_max_size
  - _cufft_get_plan_cache_size
  - _cufft_set_plan_cache_max_size
  - _cummax_helper
  - _cummin_helper
  - _custom_op
  - _custom_ops
  - _debug_has_internal_overlap
  - _decomp
  - _deprecated_attrs
  - _dim_arange
  - _dirichlet_grad
  - _disable_dynamo
  - _disable_functionalization
  - _dispatch
  - _dyn_quant_matmul_4bit
  - _dyn_quant_pack_4bit_weight
  - _efficientzerotensor
  - _embedding_bag
  - _embedding_bag_forward_only
  - _empty_affine_quantized
  - _empty_per_channel_affine_quantized
  - _enable_functionalization
  - _environment
  - _euclidean_dist
  - _export
  - _fake_quantize_learnable_per_channel_affine
  - _fake_quantize_learnable_per_tensor_affine
  - _fake_quantize_per_tensor_affine_cachemask_tensor_qparams
  - _fft_c2c
  - _fft_c2r
  - _fft_r2c
  - _fill_mem_eff_dropout_mask_
  - _foobar
  - _foreach_abs
  - _foreach_abs_
  - _foreach_acos
  - _foreach_acos_
  - _foreach_add
  - _foreach_add_
  - _foreach_addcdiv
  - _foreach_addcdiv_
  - _foreach_addcmul
  - _foreach_addcmul_
  - _foreach_asin
  - _foreach_asin_
  - _foreach_atan
  - _foreach_atan_
  - _foreach_ceil
  - _foreach_ceil_
  - _foreach_clamp_max
  - _foreach_clamp_max_
  - _foreach_clamp_min
  - _foreach_clamp_min_
  - _foreach_copy_
  - _foreach_cos
  - _foreach_cos_
  - _foreach_cosh
  - _foreach_cosh_
  - _foreach_div
  - _foreach_div_
  - _foreach_erf
  - _foreach_erf_
  - _foreach_erfc
  - _foreach_erfc_
  - _foreach_exp
  - _foreach_exp_
  - _foreach_expm1
  - _foreach_expm1_
  - _foreach_floor
  - _foreach_floor_
  - _foreach_frac
  - _foreach_frac_
  - _foreach_lerp
  - _foreach_lerp_
  - _foreach_lgamma
  - _foreach_lgamma_
  - _foreach_log
  - _foreach_log10
  - _foreach_log10_
  - _foreach_log1p
  - _foreach_log1p_
  - _foreach_log2
  - _foreach_log2_
  - _foreach_log_
  - _foreach_max
  - _foreach_maximum
  - _foreach_maximum_
  - _foreach_minimum
  - _foreach_minimum_
  - _foreach_mul
  - _foreach_mul_
  - _foreach_neg
  - _foreach_neg_
  - _foreach_norm
  - _foreach_pow
  - _foreach_pow_
  - _foreach_reciprocal
  - _foreach_reciprocal_
  - _foreach_round
  - _foreach_round_
  - _foreach_rsqrt
  - _foreach_rsqrt_
  - _foreach_sigmoid
  - _foreach_sigmoid_
  - _foreach_sign
  - _foreach_sign_
  - _foreach_sin
  - _foreach_sin_
  - _foreach_sinh
  - _foreach_sinh_
  - _foreach_sqrt
  - _foreach_sqrt_
  - _foreach_sub
  - _foreach_sub_
  - _foreach_tan
  - _foreach_tan_
  - _foreach_tanh
  - _foreach_tanh_
  - _foreach_trunc
  - _foreach_trunc_
  - _foreach_zero_
  - _freeze_functional_tensor
  - _from_functional_tensor
  - _functional_assert_async
  - _functional_assert_scalar
  - _functional_sym_constrain_range
  - _functional_sym_constrain_range_for_size
  - _functionalize_apply_view_metas
  - _functionalize_are_all_mutations_hidden_from_autograd
  - _functionalize_are_all_mutations_under_no_grad_or_inference_mode
  - _functionalize_commit_update
  - _functionalize_enable_reapply_views
  - _functionalize_get_storage_size
  - _functionalize_has_data_mutation
  - _functionalize_has_metadata_mutation
  - _functionalize_is_multi_output_view
  - _functionalize_is_symbolic
  - _functionalize_mark_mutation_hidden_from_autograd
  - _functionalize_replace
  - _functionalize_set_storage_changed
  - _functionalize_sync
  - _functionalize_unsafe_set
  - _functionalize_was_inductor_storage_resized
  - _functionalize_was_storage_changed
  - _functorch
  - _fused_adagrad_
  - _fused_adam_
  - _fused_adamw_
  - _fused_dropout
  - _fused_moving_avg_obs_fq_helper
  - _fused_sdp_choice
  - _fused_sgd_
  - _fw_primal_copy
  - _get_cuda_dep_paths
  - _get_origin
  - _grid_sampler_2d_cpu_fallback
  - _guards
  - _has_compatible_shallow_copy_type
  - _higher_order_ops
  - _histogramdd_bin_edges
  - _histogramdd_from_bin_cts
  - _histogramdd_from_bin_tensors
  - _import_device_backends
  - _import_dotted_name
  - _index_put_impl_
  - _indices_copy
  - _initExtension
  - _int_mm
  - _is_all_true
  - _is_any_true
  - _is_device_backend_autoload_enabled
  - _is_functional_tensor
  - _is_functional_tensor_base
  - _is_zerotensor
  - _jit_internal
  - _lazy_clone
  - _lazy_modules
  - _library
  - _linalg_check_errors
  - _linalg_det
  - _linalg_eigh
  - _linalg_slogdet
  - _linalg_solve_ex
  - _linalg_svd
  - _linalg_utils
  - _load_global_deps
  - _lobpcg
  - _log_softmax
  - _log_softmax_backward_data
  - _logcumsumexp
  - _logging
  - _lowrank
  - _lstm_mps
  - _lu_with_info
  - _make_dep_token
  - _make_dual
  - _make_dual_copy
  - _make_per_channel_quantized_tensor
  - _make_per_tensor_quantized_tensor
  - _masked_scale
  - _masked_softmax
  - _meta_registrations
  - _mirror_autograd_meta_to
  - _mixed_dtypes_linear
  - _mkldnn
  - _mkldnn_reshape
  - _mkldnn_transpose
  - _mkldnn_transpose_
  - _mps_convolution
  - _mps_convolution_transpose
  - _namedtensor_internals
  - _native_batch_norm_legit
  - _native_batch_norm_legit_no_training
  - _native_multi_head_attention
  - _neg_view
  - _neg_view_copy
  - _nested_compute_contiguous_strides_offsets
  - _nested_from_padded
  - _nested_from_padded_and_nested_example
  - _nested_from_padded_tensor
  - _nested_get_jagged_dummy
  - _nested_get_lengths
  - _nested_get_max_seqlen
  - _nested_get_min_seqlen
  - _nested_get_offsets
  - _nested_get_ragged_idx
  - _nested_get_values
  - _nested_get_values_copy
  - _nested_tensor_from_mask
  - _nested_tensor_from_mask_left_aligned
  - _nested_tensor_from_tensor_list
  - _nested_tensor_softmax_with_shape
  - _nested_view_from_buffer
  - _nested_view_from_buffer_copy
  - _nested_view_from_jagged
  - _nested_view_from_jagged_copy
  - _nnpack_available
  - _nnpack_spatial_convolution
  - _ops
  - _overload
  - _pack_padded_sequence
  - _pad_packed_sequence
  - _pin_memory
  - _preload_cuda_deps
  - _prelu_kernel
  - _prims
  - _prims_common
  - _print
  - _propagate_xla_data
  - _refs
  - _register_device_module
  - _remove_batch_dim
  - _reshape_alias_copy
  - _reshape_from_tensor
  - _resize_output_
  - _rowwise_prune
  - _running_with_deploy
  - _safe_softmax
  - _sample_dirichlet
  - _saturate_weight_to_fp16
  - _scaled_dot_product_attention_math
  - _scaled_dot_product_attention_math_for_mps
  - _scaled_dot_product_cudnn_attention
  - _scaled_dot_product_efficient_attention
  - _scaled_dot_product_flash_attention
  - _scaled_dot_product_flash_attention_for_cpu
  - _scaled_grouped_mm
  - _scaled_mm
  - _segment_reduce
  - _shape_as_tensor
  - _sobol_engine_draw
  - _sobol_engine_ff_
  - _sobol_engine_initialize_state_
  - _sobol_engine_scramble_
  - _softmax
  - _softmax_backward_data
  - _sources
  - _sparse_broadcast_to
  - _sparse_broadcast_to_copy
  - _sparse_csr_prod
  - _sparse_csr_sum
  - _sparse_log_softmax_backward_data
  - _sparse_semi_structured_addmm
  - _sparse_semi_structured_apply
  - _sparse_semi_structured_apply_dense
  - _sparse_semi_structured_linear
  - _sparse_semi_structured_mm
  - _sparse_semi_structured_tile
  - _sparse_softmax_backward_data
  - _sparse_sparse_matmul
  - _sparse_sum
  - _stack
  - _standard_gamma
  - _standard_gamma_grad
  - _storage_classes
  - _strobelight
  - _subclasses
  - _sym_acos
  - _sym_asin
  - _sym_atan
  - _sym_cos
  - _sym_cosh
  - _sym_log2
  - _sym_sin
  - _sym_sinh
  - _sym_sqrt
  - _sym_tan
  - _sym_tanh
  - _sync
  - _tensor
  - _tensor_classes
  - _tensor_str
  - _test_autograd_multiple_dispatch
  - _test_autograd_multiple_dispatch_view
  - _test_autograd_multiple_dispatch_view_copy
  - _test_check_tensor
  - _test_functorch_fallback
  - _test_parallel_materialize
  - _test_serialization_subcmul
  - _to_cpu
  - _to_functional_tensor
  - _to_sparse_semi_structured
  - _transform_bias_rescale_qkv
  - _transformer_encoder_layer_fwd
  - _trilinear
  - _triton_multi_head_attention
  - _triton_scaled_dot_attention
  - _unique
  - _unique2
  - _unpack_dual
  - _unsafe_index
  - _unsafe_index_put
  - _unsafe_masked_index
  - _unsafe_masked_index_put_accumulate
  - _use_cudnn_ctc_loss
  - _use_cudnn_rnn_flatten_weight
  - _utils
  - _utils_internal
  - _validate_compressed_sparse_indices
  - _validate_sparse_bsc_tensor_args
  - _validate_sparse_bsr_tensor_args
  - _validate_sparse_compressed_tensor_args
  - _validate_sparse_coo_tensor_args
  - _validate_sparse_csc_tensor_args
  - _validate_sparse_csr_tensor_args
  - _values_copy
  - _vendor
  - _vmap_internals
  - _warn_typed_storage_removal
  - _weight_int4pack_mm
  - _weight_int4pack_mm_for_cpu
  - _weight_int8pack_mm
  - _weight_norm
  - _weight_norm_interface
  - _weights_only_unpickler
  - _wrapped_linear_prepack
  - _wrapped_quantized_linear_prepacked

@ev-br (Member) commented Jun 4, 2025

Okay, thanks. So, if I read this right, as compared to main this PR

  • adds some symbols (Final for dask.array)
  • hides some symbols present in the bare namespace (torch.cuda, ~400 private symbols on torch, something on other backends, too).

Extra symbols would be nice to hide, and previously the package went to considerable lengths to hide them. It's a nice-to-have though.
Hiding things is a bit more problematic IMO. If array-api-compat's job is to extend the namespace to be compatible with the spec, it should not second-guess the namespace on what should be visible and what shouldn't. The target namespace should contain just what is in the bare namespace, plus spec-mandated symbols, IMO.
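
One way to phrase that invariant as a check (xp and bare_ns as in the earlier snippets, SPEC_NAMES standing for the spec-mandated symbols; illustrative only, not a proposed test):

# The compat namespace equals the bare namespace plus spec symbols:
assert set(dir(xp)) == set(dir(bare_ns)) | SPEC_NAMES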

<aside>
It's a bit suboptimal that a PR which claims to add type hints changes unrelated properties of the namespace, and then a PR which claims to improve tests has to undo some of those unrelated changes (and of course adds some more changes too, because it's just too messy otherwise).
</aside>

@crusaderky (Contributor, Author) commented Jun 4, 2025

Okay, thanks. So, if I read this right, as compared to main this PR

  • adds some symbols (Final for dask.array)
  • hides some symbols present in the bare namespace (torch.cuda, ~400 private symbols on torch, something on other backends, too).

No, it's the other way around.
This PR no longer adds Final and a bunch of private symbols.
It no longer hides these public symbols from Dask:

  array-api-compat hides from dask.array:
- - ARRAY_EXPR_ENABLED
  - __all__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
- - annotations
- - chunk
- - chunk_types
- - core
- - creation
- - dispatch
- - einsumfuncs
- - importlib
- - numpy_compat
- - optimization
- - reductions
- - routines
- - slicing
- - tiledb_io
- - ufunc
- - utils
- - warnings
- - wrap

It no longer hides these public symbols from torch:

- - cpu
- - cuda

It starts hiding a handful of extra private symbols of no importance:

  array-api-compat hides from cupy:
+ - __builtins__
+ - __cached__
+ - __doc__
+ - __file__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
[...]
  array-api-compat hides from dask.array:
  - __all__
+ - __annotations__
+ - __cached__
+ - __doc__
+ - __file__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
  - _array_expr_enabled
  - _reductions_generic
  - _shuffle
  array-api-compat hides from numpy:
  - _CopyMode
  - _NoValue
  - __NUMPY_SETUP__
  - __all__
+ - __cached__
  - __config__
  - __dir__
+ - __doc__
  - __expired_attributes__
+ - __file__
  - __former_attrs__
  - __future_scalars__
  - __getattr__
+ - __loader__
+ - __name__
  - __numpy_submodules__
+ - __package__
+ - __path__
+ - __spec__
[...]
+ - _typing
  - _utils
  array-api-compat hides from torch:
[...]
+ - __cached__
  - __config__
+ - __doc__
+ - __file__
  - __future__
  - __getattr__
+ - __loader__
+ - __name__
+ - __package__
+ - __path__
+ - __spec__
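
These are module metadata attributes that drop out of dir() once __dir__ returns __all__; they remain reachable through attribute access. A quick check, assuming the post-PR behaviour shown in the diff above:

import array_api_compat.numpy as anp

assert "__file__" not in dir(anp)  # no longer listed by dir()
anp.__file__  # ...but still accessible as an attribute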

@crusaderky changed the title from "TST: Redesign test_all" to "ENH: Review exported symbols; redesign test_all" on Jun 4, 2025
@ev-br (Member) commented Jun 4, 2025

Great, thanks. I misread then.

What still exists is hiding private symbols (see below for torch). Would it be difficult to remove the filter and pass through whatever the library has in its dir() list?

In [12]: from array_api_compat import torch as xp

In [13]: set(dir(torch)) - set(dir(xp))
Out[13]: 
{'_Any',
 '_C',
 '_Callable',
 '_GLOBAL_DEVICE_CONTEXT',
 '_InputT',
 '_Optional',
 '_ParamSpec',
 '_RetT',
 '_TorchCompileInductorWrapper',
 '_TorchCompileWrapper',
 '_TritonLibrary',
 '_TypeIs',
 '_TypeVar',
 '_Union',
 '_VF',
 '__all__',
 '__all_and_float_types',
 '__annotations__',
 '__cached__',
 '__config__',
 '__doc__',
 '__file__',
 '__future__',
 '__getattr__',
 '__loader__',
 '__name__',
 '__package__',
 '__path__',
 '__spec__',
 '__version__',
 '_adaptive_avg_pool2d',
 '_adaptive_avg_pool3d',
 '_add_batch_dim',
 '_add_relu',
 '_add_relu_',
 '_addmm_activation',
 '_aminmax',
 '_amp_foreach_non_finite_check_and_unscale_',
 '_amp_update_scale_',
 '_as_tensor_fullprec',
 '_assert',
 '_assert_async',
 '_assert_scalar',
 '_assert_tensor_metadata',
 '_awaits',
 '_batch_norm_impl_index',
 '_cast_Byte',
 '_cast_Char',
 '_cast_Double',
 '_cast_Float',
 '_cast_Half',
 '_cast_Int',
 '_cast_Long',
 '_cast_Short',
 '_check',
 '_check_index',
 '_check_is_size',
 '_check_not_implemented',
 '_check_tensor_all',
 '_check_tensor_all_with',
 '_check_type',
 '_check_value',
 '_check_with',
 '_choose_qparams_per_tensor',
 '_chunk_cat',
 '_classes',
 '_coalesce',
 '_compile',
 '_compute_linear_combination',
 '_conj',
 '_conj_copy',
 '_conj_physical',
 '_constrain_as_size',
 '_convert_indices_from_coo_to_csr',
 '_convert_indices_from_csr_to_coo',
 '_convert_weight_to_int4pack',
 '_convert_weight_to_int4pack_for_cpu',
 '_convolution',
 '_convolution_mode',
 '_copy_from',
 '_copy_from_and_resize',
 '_cslt_compress',
 '_cslt_sparse_mm',
 '_cslt_sparse_mm_search',
 '_ctc_loss',
 '_cudnn_ctc_loss',
 '_cudnn_init_dropout_state',
 '_cudnn_rnn',
 '_cudnn_rnn_flatten_weight',
 '_cufft_clear_plan_cache',
 '_cufft_get_plan_cache_max_size',
 '_cufft_get_plan_cache_size',
 '_cufft_set_plan_cache_max_size',
 '_cummax_helper',
 '_cummin_helper',
 '_custom_op',
 '_custom_ops',
 '_debug_has_internal_overlap',
 '_decomp',
 '_deprecated_attrs',
 '_dim_arange',
 '_dirichlet_grad',
 '_disable_dynamo',
 '_disable_functionalization',
 '_dispatch',
 '_dyn_quant_matmul_4bit',
 '_dyn_quant_pack_4bit_weight',
 '_efficientzerotensor',
 '_embedding_bag',
 '_embedding_bag_forward_only',
 '_empty_affine_quantized',
 '_empty_per_channel_affine_quantized',
 '_enable_functionalization',
 '_environment',
 '_euclidean_dist',
 '_export',
 '_fake_quantize_learnable_per_channel_affine',
 '_fake_quantize_learnable_per_tensor_affine',
 '_fake_quantize_per_tensor_affine_cachemask_tensor_qparams',
 '_fft_c2c',
 '_fft_c2r',
 '_fft_r2c',
 '_fill_mem_eff_dropout_mask_',
 '_foobar',
 '_foreach_abs',
 '_foreach_abs_',
 '_foreach_acos',
 '_foreach_acos_',
 '_foreach_add',
 '_foreach_add_',
 '_foreach_addcdiv',
 '_foreach_addcdiv_',
 '_foreach_addcmul',
 '_foreach_addcmul_',
 '_foreach_asin',
 '_foreach_asin_',
 '_foreach_atan',
 '_foreach_atan_',
 '_foreach_ceil',
 '_foreach_ceil_',
 '_foreach_clamp_max',
 '_foreach_clamp_max_',
 '_foreach_clamp_min',
 '_foreach_clamp_min_',
 '_foreach_copy_',
 '_foreach_cos',
 '_foreach_cos_',
 '_foreach_cosh',
 '_foreach_cosh_',
 '_foreach_div',
 '_foreach_div_',
 '_foreach_erf',
 '_foreach_erf_',
 '_foreach_erfc',
 '_foreach_erfc_',
 '_foreach_exp',
 '_foreach_exp_',
 '_foreach_expm1',
 '_foreach_expm1_',
 '_foreach_floor',
 '_foreach_floor_',
 '_foreach_frac',
 '_foreach_frac_',
 '_foreach_lerp',
 '_foreach_lerp_',
 '_foreach_lgamma',
 '_foreach_lgamma_',
 '_foreach_log',
 '_foreach_log10',
 '_foreach_log10_',
 '_foreach_log1p',
 '_foreach_log1p_',
 '_foreach_log2',
 '_foreach_log2_',
 '_foreach_log_',
 '_foreach_max',
 '_foreach_maximum',
 '_foreach_maximum_',
 '_foreach_minimum',
 '_foreach_minimum_',
 '_foreach_mul',
 '_foreach_mul_',
 '_foreach_neg',
 '_foreach_neg_',
 '_foreach_norm',
 '_foreach_pow',
 '_foreach_pow_',
 '_foreach_reciprocal',
 '_foreach_reciprocal_',
 '_foreach_round',
 '_foreach_round_',
 '_foreach_rsqrt',
 '_foreach_rsqrt_',
 '_foreach_sigmoid',
 '_foreach_sigmoid_',
 '_foreach_sign',
 '_foreach_sign_',
 '_foreach_sin',
 '_foreach_sin_',
 '_foreach_sinh',
 '_foreach_sinh_',
 '_foreach_sqrt',
 '_foreach_sqrt_',
 '_foreach_sub',
 '_foreach_sub_',
 '_foreach_tan',
 '_foreach_tan_',
 '_foreach_tanh',
 '_foreach_tanh_',
 '_foreach_trunc',
 '_foreach_trunc_',
 '_foreach_zero_',
 '_freeze_functional_tensor',
 '_from_functional_tensor',
 '_functional_assert_async',
 '_functional_assert_scalar',
 '_functional_sym_constrain_range',
 '_functional_sym_constrain_range_for_size',
 '_functionalize_apply_view_metas',
 '_functionalize_are_all_mutations_hidden_from_autograd',
 '_functionalize_are_all_mutations_under_no_grad_or_inference_mode',
 '_functionalize_commit_update',
 '_functionalize_enable_reapply_views',
 '_functionalize_get_storage_size',
 '_functionalize_has_data_mutation',
 '_functionalize_has_metadata_mutation',
 '_functionalize_is_multi_output_view',
 '_functionalize_is_symbolic',
 '_functionalize_mark_mutation_hidden_from_autograd',
 '_functionalize_replace',
 '_functionalize_set_storage_changed',
 '_functionalize_sync',
 '_functionalize_unsafe_set',
 '_functionalize_was_inductor_storage_resized',
 '_functionalize_was_storage_changed',
 '_functorch',
 '_fused_adagrad_',
 '_fused_adam_',
 '_fused_adamw_',
 '_fused_dropout',
 '_fused_moving_avg_obs_fq_helper',
 '_fused_sdp_choice',
 '_fused_sgd_',
 '_fw_primal_copy',
 '_get_cuda_dep_paths',
 '_get_origin',
 '_grid_sampler_2d_cpu_fallback',
 '_guards',
 '_has_compatible_shallow_copy_type',
 '_higher_order_ops',
 '_histogramdd_bin_edges',
 '_histogramdd_from_bin_cts',
 '_histogramdd_from_bin_tensors',
 '_import_device_backends',
 '_import_dotted_name',
 '_index_put_impl_',
 '_indices_copy',
 '_initExtension',
 '_int_mm',
 '_is_all_true',
 '_is_any_true',
 '_is_device_backend_autoload_enabled',
 '_is_functional_tensor',
 '_is_functional_tensor_base',
 '_is_zerotensor',
 '_jit_internal',
 '_lazy_clone',
 '_lazy_modules',
 '_library',
 '_linalg_check_errors',
 '_linalg_det',
 '_linalg_eigh',
 '_linalg_slogdet',
 '_linalg_solve_ex',
 '_linalg_svd',
 '_linalg_utils',
 '_load_global_deps',
 '_lobpcg',
 '_log_softmax',
 '_log_softmax_backward_data',
 '_logcumsumexp',
 '_logging',
 '_lowrank',
 '_lstm_mps',
 '_lu_with_info',
 '_make_dep_token',
 '_make_dual',
 '_make_dual_copy',
 '_make_per_channel_quantized_tensor',
 '_make_per_tensor_quantized_tensor',
 '_masked_scale',
 '_masked_softmax',
 '_meta_registrations',
 '_mirror_autograd_meta_to',
 '_mixed_dtypes_linear',
 '_mkldnn',
 '_mkldnn_reshape',
 '_mkldnn_transpose',
 '_mkldnn_transpose_',
 '_mps_convolution',
 '_mps_convolution_transpose',
 '_namedtensor_internals',
 '_native_batch_norm_legit',
 '_native_batch_norm_legit_no_training',
 '_native_multi_head_attention',
 '_neg_view',
 '_neg_view_copy',
 '_nested_compute_contiguous_strides_offsets',
 '_nested_from_padded',
 '_nested_from_padded_and_nested_example',
 '_nested_from_padded_tensor',
 '_nested_get_jagged_dummy',
 '_nested_get_lengths',
 '_nested_get_max_seqlen',
 '_nested_get_min_seqlen',
 '_nested_get_offsets',
 '_nested_get_ragged_idx',
 '_nested_get_values',
 '_nested_get_values_copy',
 '_nested_tensor_from_mask',
 '_nested_tensor_from_mask_left_aligned',
 '_nested_tensor_from_tensor_list',
 '_nested_tensor_softmax_with_shape',
 '_nested_view_from_buffer',
 '_nested_view_from_buffer_copy',
 '_nested_view_from_jagged',
 '_nested_view_from_jagged_copy',
 '_nnpack_available',
 '_nnpack_spatial_convolution',
 '_ops',
 '_overload',
 '_pack_padded_sequence',
 '_pad_packed_sequence',
 '_pin_memory',
 '_preload_cuda_deps',
 '_prelu_kernel',
 '_prims',
 '_prims_common',
 '_print',
 '_propagate_xla_data',
 '_refs',
 '_register_device_module',
 '_remove_batch_dim',
 '_reshape_alias_copy',
 '_reshape_from_tensor',
 '_resize_output_',
 '_rowwise_prune',
 '_running_with_deploy',
 '_safe_softmax',
 '_sample_dirichlet',
 '_saturate_weight_to_fp16',
 '_scaled_dot_product_attention_math',
 '_scaled_dot_product_attention_math_for_mps',
 '_scaled_dot_product_cudnn_attention',
 '_scaled_dot_product_efficient_attention',
 '_scaled_dot_product_flash_attention',
 '_scaled_dot_product_flash_attention_for_cpu',
 '_scaled_grouped_mm',
 '_scaled_mm',
 '_segment_reduce',
 '_shape_as_tensor',
 '_sobol_engine_draw',
 '_sobol_engine_ff_',
 '_sobol_engine_initialize_state_',
 '_sobol_engine_scramble_',
 '_softmax',
 '_softmax_backward_data',
 '_sources',
 '_sparse_broadcast_to',
 '_sparse_broadcast_to_copy',
 '_sparse_csr_prod',
 '_sparse_csr_sum',
 '_sparse_log_softmax_backward_data',
 '_sparse_semi_structured_addmm',
 '_sparse_semi_structured_apply',
 '_sparse_semi_structured_apply_dense',
 '_sparse_semi_structured_linear',
 '_sparse_semi_structured_mm',
 '_sparse_semi_structured_tile',
 '_sparse_softmax_backward_data',
 '_sparse_sparse_matmul',
 '_sparse_sum',
 '_stack',
 '_standard_gamma',
 '_standard_gamma_grad',
 '_storage_classes',
 '_strobelight',
 '_subclasses',
 '_sym_acos',
 '_sym_asin',
 '_sym_atan',
 '_sym_cos',
 '_sym_cosh',
 '_sym_log2',
 '_sym_sin',
 '_sym_sinh',
 '_sym_sqrt',
 '_sym_tan',
 '_sym_tanh',
 '_sync',
 '_tensor',
 '_tensor_classes',
 '_tensor_str',
 '_test_autograd_multiple_dispatch',
 '_test_autograd_multiple_dispatch_view',
 '_test_autograd_multiple_dispatch_view_copy',
 '_test_check_tensor',
 '_test_functorch_fallback',
 '_test_parallel_materialize',
 '_test_serialization_subcmul',
 '_to_cpu',
 '_to_functional_tensor',
 '_to_sparse_semi_structured',
 '_transform_bias_rescale_qkv',
 '_transformer_encoder_layer_fwd',
 '_trilinear',
 '_triton_multi_head_attention',
 '_triton_scaled_dot_attention',
 '_unique',
 '_unique2',
 '_unpack_dual',
 '_unsafe_index',
 '_unsafe_index_put',
 '_unsafe_masked_index',
 '_unsafe_masked_index_put_accumulate',
 '_use_cudnn_ctc_loss',
 '_use_cudnn_rnn_flatten_weight',
 '_utils',
 '_utils_internal',
 '_validate_compressed_sparse_indices',
 '_validate_sparse_bsc_tensor_args',
 '_validate_sparse_bsr_tensor_args',
 '_validate_sparse_compressed_tensor_args',
 '_validate_sparse_coo_tensor_args',
 '_validate_sparse_csc_tensor_args',
 '_validate_sparse_csr_tensor_args',
 '_values_copy',
 '_vendor',
 '_vmap_internals',
 '_warn_typed_storage_removal',
 '_weight_int4pack_mm',
 '_weight_int4pack_mm_for_cpu',
 '_weight_int8pack_mm',
 '_weight_norm',
 '_weight_norm_interface',
 '_weights_only_unpickler',
 '_wrapped_linear_prepack',
 '_wrapped_quantized_linear_prepacked'}

@crusaderky (Contributor, Author):

Great, thanks. I misread then.

What still exists is hiding private symbols (see below for torch). Would it be difficult to remove the filter and pass through whatever the library has in its dir() list?

It's non-trivial, because right now dir() and __all__ are one and the same. And you don't want the private symbols in __all__. Since it is not a regression, I'd much rather leave it to a follow-up.
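
A possible shape for that follow-up, sketched under the assumption that __dir__ gets decoupled from __all__ (hypothetical, not what this PR implements):

import torch as _torch

__all__ = ["astype", "matrix_transpose"]  # illustrative: wrapped exports only

def __getattr__(name: str):
    # Delegate anything not wrapped here to the bare library.
    return getattr(_torch, name)

def __dir__() -> list[str]:
    # Advertise our exports plus whatever the bare library advertises,
    # without putting private names into __all__.
    return sorted(set(__all__) | set(dir(_torch)))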

@ev-br (Member) commented Jun 4, 2025

One problem is that it is a regression. On main (unless I'm being dense again):

In [3]: set(dir(xp)) - set(dir(torch))
Out[3]: 
{'UniqueAllResult',
 'UniqueCountsResult',
 'UniqueInverseResult',
 '__array_api_version__',
 '__array_namespace_info__',
 '_aliases',
 '_info',
 '_typing',
 'astype',
 'bitwise_invert',
 'broadcast_arrays',
 'cumulative_prod',
 'cumulative_sum',
 'expand_dims',
 'isdtype',
 'matrix_transpose',
 'permute_dims',
 'repeat',
 'take_along_axis',
 'unique_all',
 'unique_counts',
 'unique_inverse',
 'unique_values',
 'unstack',
 'vecdot'}

Or are you saying it brings the status quo back to one of previous versions?

EDIT: Never mind, I am being dense. All these private functions are safely hidden on main, too.

In [5]: len(set(dir(torch)) - set(dir(xp)))
Out[5]: 442

@ev-br (Member) commented Jun 6, 2025

Okay, let's merge this and see about exporting private items separately. (Checked: they have been hidden since at least 1.9.1, so it's rather low priority; let's wait for if and when it becomes a problem.)

Thanks @crusaderky

@ev-br merged commit cddc9ef into data-apis:main on Jun 6, 2025
23 checks passed
@ev-br added this to the 1.13 milestone on Jun 6, 2025
@crusaderky deleted the test_all branch on June 6, 2025 11:24