
Commit 42e31c4

Fix broken references
1 parent 4e55e0e commit 42e31c4

7 files changed: +8, -16 lines changed


doc/core_development_guide.rst

Lines changed: 1 addition & 1 deletion
@@ -26,4 +26,4 @@ some of them might be outdated though:
 
 * :ref:`unittest` -- Tutorial on how to use unittest in testing PyTensor.
 
-* :ref:`sparse` -- Description of the ``sparse`` type in PyTensor.
+* :ref:`libdoc_sparse` -- Description of the ``sparse`` type in PyTensor.
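Most of the fixes in this commit follow the same basic rule: the target of a :ref: role has to match a label that is actually defined somewhere in the docs. A minimal sketch of how the two sides pair up in reStructuredText (the file placement and heading shown here are illustrative assumptions, not the actual PyTensor sources):

    .. Assumed label definition at the top of the sparse library page:

    .. _libdoc_sparse:

    Sparse
    ======

    .. Any other page can then cross-reference it by that exact name:

    See :ref:`libdoc_sparse` for a description of the ``sparse`` type.

A :ref: whose target matches no defined label (here, the old ``sparse``) typically shows up as an "undefined label" warning and a dead link in the built docs, which is what this commit cleans up.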

doc/extending/creating_a_c_op.rst

Lines changed: 2 additions & 2 deletions
@@ -923,7 +923,7 @@ pre-defined macros. These section tags have no macros: ``init_code``,
   discussed below.
 
 * ``APPLY_SPECIFIC(str)`` which will automatically append a name
-  unique to the :ref:`Apply` node that applies the `Op` at the end
+  unique to the :ref:`apply` node that applies the `Op` at the end
   of the provided ``str``. The use of this macro is discussed
   further below.
 
@@ -994,7 +994,7 @@ Apply node in their own names to avoid conflicts between the different
 versions of the apply-specific code. The code that wasn't
 apply-specific was simply defined in the ``c_support_code`` method.
 
-To make indentifiers that include the :ref:`Apply` node name use the
+To make indentifiers that include the :ref:`apply` node name use the
 ``APPLY_SPECIFIC(str)`` macro. In the above example, this macro is
 used when defining the functions ``vector_elemwise_mult`` and
 ``vector_times_vector`` as well as when calling function

doc/extending/creating_an_op.rst

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ Creating a new :class:`Op`: Python implementation
 So suppose you have looked through the library documentation and you don't see
 a function that does what you want.
 
-If you can implement something in terms of an existing :ref:`Op`, you should do that.
+If you can implement something in terms of an existing :ref:`op`, you should do that.
 Odds are your function that uses existing PyTensor expressions is short,
 has no bugs, and potentially profits from rewrites that have already been
 implemented.

doc/extending/inplace.rst

Lines changed: 1 addition & 1 deletion
@@ -200,7 +200,7 @@ input(s)'s memory). From there, go to the previous section.
 certainly lead to erroneous computations.
 
 You can often identify an incorrect `Op.view_map` or :attr:`Op.destroy_map`
-by using :ref:`DebugMode`.
+by using :ref:`DebugMode <debugmode>`.
 
 .. note::
     Consider using :class:`DebugMode` when developing
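This hunk uses the role's explicit-title form, :ref:`DebugMode <debugmode>`, which keeps "DebugMode" as the visible link text while resolving against the label itself. A short sketch of the pairing (the label definition below is an assumption for illustration, not copied from the PyTensor docs):

    .. _debugmode:

    DebugMode
    =========

    .. Elsewhere, keep the capitalised display text but point at the label:

    You can often identify an incorrect view_map by using :ref:`DebugMode <debugmode>`.

The bare :ref:`DebugMode` form only resolves when a matching label is defined, which presumably was not the case here, hence the broken reference.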

doc/extending/other_ops.rst

Lines changed: 1 addition & 1 deletion
@@ -197,7 +197,7 @@ Want C speed without writing C code for your new Op? You can use Numba
 to generate the C code for you! Here is an `example
 Op <https://gist.github.com/nouiz/5492778#file-theano_op-py>`_ doing that.
 
-.. _alternate_PyTensor_types:
+.. _alternate_pytensor_types:
 
 Alternate PyTensor Types
 ========================

doc/library/tensor/random/index.rst

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ Low-level objects
 .. automodule:: pytensor.tensor.random.op
     :members: RandomVariable, default_rng
 
-..automodule:: pytensor.tensor.random.type
+.. automodule:: pytensor.tensor.random.type
     :members: RandomType, RandomGeneratorType, random_generator_type
 
 .. automodule:: pytensor.tensor.random.var
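The change here is only a missing space, but it decides whether the line is markup at all: reStructuredText treats a line as explicit markup (a directive, comment, or target) only when the two leading dots are followed by whitespace. A small sketch of the distinction, reusing the module name from the hunk (the comments are annotations, not part of the PyTensor source):

    .. The corrected form is a directive: two dots, a space, a directive name.

    .. automodule:: pytensor.tensor.random.type
        :members: RandomType, RandomGeneratorType, random_generator_type

    .. Without the space, "..automodule::" is not explicit markup; it is parsed
       as ordinary paragraph text, so the directive never runs and no API entries
       are generated for the module.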

doc/tutorial/examples.rst

Lines changed: 1 addition & 9 deletions
@@ -347,15 +347,7 @@ afterwards compile this expression to get functions,
 using pseudo-random numbers is not as straightforward as it is in
 NumPy, though also not too complicated.
 
-The way to think about putting randomness into PyTensor's computations is
-to put random variables in your graph. PyTensor will allocate a NumPy
-`RandomStream` object (a random number generator) for each such
-variable, and draw from it as necessary. We will call this sort of
-sequence of random numbers a *random stream*. *Random streams* are at
-their core shared variables, so the observations on shared variables
-hold here as well. PyTensor's random objects are defined and implemented in
-:ref:`RandomStream<libdoc_tensor_random_utils>` and, at a lower level,
-in :ref:`RandomVariable<libdoc_tensor_random_basic>`.
+The general user-facing API is documented in :ref:`RandomStream<libdoc_tensor_random_basic>`
 
 For a more technical explanation of how PyTensor implements random variables see :ref:`prng`.
