Commit 3204e69
committed: [maskedtensor] Sparsity tutorial [2/4]
1 parent 12ea814 commit 3204e69

2 files changed (+318, -0 lines)
Lines changed: 310 additions & 0 deletions
@@ -0,0 +1,310 @@
# -*- coding: utf-8 -*-

"""
(Prototype) MaskedTensor Sparsity
=================================
"""

######################################################################
# Before working on this tutorial, please make sure to review our
# `MaskedTensor Overview tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_overview.html>`__.
#
# Introduction
# ------------
#
# Sparsity has been an area of rapid growth and importance within PyTorch; if any sparsity terms are confusing below,
# please refer to the `sparsity documentation <https://pytorch.org/docs/stable/sparse.html>`__ for additional details.
#
# Sparse storage formats have proven to be powerful in a variety of ways. As a primer, the first use case
# most practitioners think about is when the majority of elements are equal to zero (a high degree of sparsity),
# but even in cases of lower sparsity, certain formats (e.g. BSR) can take advantage of substructures within a matrix.
#
# .. note::
#
#     At the moment, MaskedTensor supports COO and CSR tensors with plans to support additional formats
#     (such as BSR and CSC) in the future. If you have any requests for additional formats,
#     please file a feature request `here <https://github.com/pytorch/pytorch/issues>`__!
#
# Principles
# ----------
#
# When creating a :class:`MaskedTensor` with sparse tensors, there are a few principles that must be observed
# (a small validation sketch follows the list):
#
# 1. ``data`` and ``mask`` must have the same storage format, whether that's :attr:`torch.strided`, :attr:`torch.sparse_coo`, or :attr:`torch.sparse_csr`
# 2. ``data`` and ``mask`` must have the same size, indicated by :func:`size()`
#
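######################################################################
# As a minimal sketch of these two principles, here is a small validation
# helper. Note that ``check_masked_inputs`` is our own illustration and is
# not part of the MaskedTensor API:
#

def check_masked_inputs(data, mask):
    # Principle 1: the storage formats (layouts) must match
    assert data.layout == mask.layout, "data and mask must share a storage format"
    # Principle 2: the sizes must match
    assert data.size() == mask.size(), "data and mask must share a size"

######################################################################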
# .. _sparse-coo-tensors:
#
# Sparse COO tensors
# ------------------
#
# In accordance with Principle #1, a sparse COO MaskedTensor is created by passing in two sparse COO tensors,
# which can be initialized by any of their constructors, for example :func:`torch.sparse_coo_tensor`.
#
# As a recap of `sparse COO tensors <https://pytorch.org/docs/stable/sparse.html#sparse-coo-tensors>`__, the COO format
# stands for "coordinate format", where the specified elements are stored as tuples of their indices and the
# corresponding values. That is, the following are provided:
#
# * ``indices``: array of size ``(ndim, nse)`` and dtype ``torch.int64``
# * ``values``: array of size ``(nse,)`` with any integer or floating point dtype
#
# where ``ndim`` is the dimensionality of the tensor and ``nse`` is the number of specified elements.
#
# For both sparse COO and CSR tensors, you can construct a :class:`MaskedTensor` by doing either:
#
# 1. ``masked_tensor(sparse_tensor_data, sparse_tensor_mask)``
# 2. ``dense_masked_tensor.to_sparse_coo()`` or ``dense_masked_tensor.to_sparse_csr()``
#
# The second method is easier to illustrate, so we've shown that below; for more on the first and the nuances behind
# the approach, please read the :ref:`Sparse COO Appendix <sparse-coo-appendix>`.
#

import torch
from torch.masked import masked_tensor

values = torch.tensor([[0, 0, 3], [4, 0, 5]])
mask = torch.tensor([[False, False, True], [False, False, True]])
mt = masked_tensor(values, mask)
sparse_coo_mt = mt.to_sparse_coo()
print("mt:\n", mt)
print("mt (sparse coo):\n", sparse_coo_mt)
print("mt data (sparse coo):\n", sparse_coo_mt.get_data())
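
######################################################################
# The mask is carried along in the same layout as the data. As a quick
# sketch, we can peek at both halves: ``get_mask`` mirrors ``get_data``,
# and we call :meth:`Tensor.coalesce` first so that ``indices()`` /
# ``values()`` are guaranteed to be available on the underlying COO tensor:
#

coo_data = sparse_coo_mt.get_data().coalesce()
print("mt mask (sparse coo):\n", sparse_coo_mt.get_mask())
print("indices (ndim x nse):\n", coo_data.indices())
print("values (nse,):\n", coo_data.values())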

######################################################################
# Sparse CSR tensors
# ------------------
#
# Similarly, :class:`MaskedTensor` also supports the
# `CSR (Compressed Sparse Row) <https://pytorch.org/docs/stable/sparse.html#sparse-csr-tensor>`__
# sparse tensor format. Instead of storing tuples of indices like sparse COO tensors do, sparse CSR tensors
# aim to decrease memory requirements by storing compressed row indices.
# In particular, a CSR sparse tensor consists of three 1-D tensors:
#
# * ``crow_indices``: array of compressed row indices with size ``(size[0] + 1,)``. This array indicates which row
#   a given entry in ``values`` lives in. The last element is the number of specified elements,
#   while ``crow_indices[i+1] - crow_indices[i]`` indicates the number of specified elements in row ``i``.
# * ``col_indices``: array of size ``(nnz,)``. Indicates the column index of each value.
# * ``values``: array of size ``(nnz,)``. Contains the values of the CSR tensor.
#
# Of note, both sparse COO and CSR tensors are in a `beta <https://pytorch.org/docs/stable/index.html>`__ state.
#
# By way of example:
#

mt_sparse_csr = mt.to_sparse_csr()
print("mt (sparse csr):\n", mt_sparse_csr)
print("mt data (sparse csr):\n", mt_sparse_csr.get_data())
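
######################################################################
# To tie this back to the three 1-D tensors above, a quick sketch inspecting
# the underlying CSR data via the standard sparse accessors
# :meth:`Tensor.crow_indices`, :meth:`Tensor.col_indices`, and :meth:`Tensor.values`:
#

csr_data = mt_sparse_csr.get_data()
print("crow_indices:", csr_data.crow_indices())  # row pointers, size (nrows + 1,)
print("col_indices:", csr_data.col_indices())    # column index per specified element
print("values:", csr_data.values())              # the specified values themselves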

######################################################################
# Supported Operations
# ++++++++++++++++++++
#
# Unary
# -----
# All `unary operators <https://pytorch.org/docs/master/masked.html#unary-operators>`__ are supported, e.g.:
#

mt.sin()
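
######################################################################
# The same unary operators can also be applied to the sparse variants; a
# quick sketch reusing ``sparse_coo_mt`` from above (we assume, consistent
# with the masked docs, that the operator acts on the specified values only):
#

print("sparse_coo_mt.sin():\n", sparse_coo_mt.sin())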

######################################################################
# Binary
# ------
# `Binary operators <https://pytorch.org/docs/master/masked.html#binary-operators>`__ are also supported, but the
# input masks from the two masked tensors must match. For more information on why this decision was made, please
# see our `MaskedTensor: Advanced Semantics tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_advanced_semantics.html>`__.
#
# Please find an example below:
#

i = [[0, 1, 1],
     [2, 0, 2]]
v1 = [3, 4, 5]
v2 = [20, 30, 40]
m = torch.tensor([True, False, True])

s1 = torch.sparse_coo_tensor(i, v1, (2, 3))
s2 = torch.sparse_coo_tensor(i, v2, (2, 3))
mask = torch.sparse_coo_tensor(i, m, (2, 3))

mt1 = masked_tensor(s1, mask)
mt2 = masked_tensor(s2, mask)

print("mt1:\n", mt1)
print("mt2:\n", mt2)
print("torch.div(mt2, mt1):\n", torch.div(mt2, mt1))
print("torch.mul(mt1, mt2):\n", torch.mul(mt1, mt2))
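
######################################################################
# Conversely, binary operations between masked tensors whose masks do not
# match are rejected. A hedged sketch below -- the exact exception type and
# message may vary by release, so we catch broadly:
#

m_other = torch.tensor([True, True, False])
mask_other = torch.sparse_coo_tensor(i, m_other, (2, 3))
mt3 = masked_tensor(s1, mask_other)
try:
    torch.mul(mt1, mt3)
except Exception as e:
    print("mismatched masks raised:", type(e).__name__)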

######################################################################
# Reductions
# ----------
# Finally, `reductions <https://pytorch.org/docs/master/masked.html#reductions>`__ are supported:
#

print("mt:\n", mt)
print("mt.sum():\n", mt.sum())
print("mt.sum(dim=1):\n", mt.sum(dim=1))
print("mt.amin():\n", mt.amin())

######################################################################
# MaskedTensor Helper Methods
# ---------------------------
# For convenience, :class:`MaskedTensor` has a number of methods to help convert between the different layouts
# and identify the current layout:
#
# Setup:
#

v = [[3, 0, 0],
     [0, 4, 5]]
m = [[True, False, False],
     [False, True, True]]

mt = masked_tensor(torch.tensor(v), torch.tensor(m))

######################################################################
# :meth:`MaskedTensor.to_sparse_coo()`
#

mt_sparse_coo = mt.to_sparse_coo()
mt_sparse_coo

######################################################################
# :meth:`MaskedTensor.to_sparse_csr()`
#

mt_sparse_csr = mt.to_sparse_csr()
mt_sparse_csr

######################################################################
# :meth:`MaskedTensor.to_dense()`
#

mt_dense = mt_sparse_coo.to_dense()
mt_dense
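
######################################################################
# As a quick sanity check, a dense -> sparse -> dense round trip should
# leave the mask untouched, since only unspecified elements are dropped
# along the way; a minimal sketch:
#

print("mask preserved by round trip:", torch.equal(mt_dense.get_mask(), mt.get_mask()))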

######################################################################
# :meth:`MaskedTensor.is_sparse()` -- this will check if the :class:`MaskedTensor`'s layout
# matches any of the supported sparse layouts (currently COO and CSR).
#

print("mt_dense.is_sparse: ", mt_dense.is_sparse())
print("mt_sparse_coo.is_sparse: ", mt_sparse_coo.is_sparse())
print("mt_sparse_csr.is_sparse: ", mt_sparse_csr.is_sparse())

######################################################################
# :meth:`MaskedTensor.is_sparse_coo()`
#

print("mt_dense.is_sparse_coo: ", mt_dense.is_sparse_coo())
print("mt_sparse_coo.is_sparse_coo: ", mt_sparse_coo.is_sparse_coo())
print("mt_sparse_csr.is_sparse_coo: ", mt_sparse_csr.is_sparse_coo())

######################################################################
# :meth:`MaskedTensor.is_sparse_csr()`
#

print("mt_dense.is_sparse_csr: ", mt_dense.is_sparse_csr())
print("mt_sparse_coo.is_sparse_csr: ", mt_sparse_coo.is_sparse_csr())
print("mt_sparse_csr.is_sparse_csr: ", mt_sparse_csr.is_sparse_csr())
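
######################################################################
# These predicates make it easy to write layout-generic helpers; a toy
# sketch (``describe_layout`` below is our own illustration, not a
# MaskedTensor method):
#

def describe_layout(t):
    # dispatch on the layout helpers shown above
    if t.is_sparse_coo():
        return "sparse COO"
    if t.is_sparse_csr():
        return "sparse CSR"
    return "dense (strided)"

for name, t in [("mt_dense", mt_dense), ("mt_sparse_coo", mt_sparse_coo), ("mt_sparse_csr", mt_sparse_csr)]:
    print(name, "->", describe_layout(t))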

######################################################################
# Appendix
# ++++++++
#
# .. _sparse-coo-appendix:
#
# Sparse COO construction
# -----------------------
#
# Recall that in our :ref:`original example <sparse-coo-tensors>`, we created a :class:`MaskedTensor`
# and then converted it to a sparse COO MaskedTensor with :meth:`MaskedTensor.to_sparse_coo`.
#
# Alternatively, we can also construct a sparse COO MaskedTensor directly by passing in two sparse COO tensors:
#

values = torch.tensor([[0, 0, 3], [4, 0, 5]]).to_sparse()
mask = torch.tensor([[False, False, True], [False, False, True]]).to_sparse()
mt = masked_tensor(values, mask)
print("values:\n", values)
print("mask:\n", mask)
print("mt:\n", mt)

######################################################################
# Instead of using :meth:`torch.Tensor.to_sparse`, we can also create the sparse COO tensors directly,
# which brings us to a warning:
#
# .. warning::
#
#     When using a function like :meth:`MaskedTensor.to_sparse_coo` (analogous to :meth:`Tensor.to_sparse`),
#     if the user does not specify the indices like in the above example,
#     then the 0 values will be "unspecified" by default.
#
# Below, we explicitly specify the 0's:
#

i = [[0, 1, 1],
     [2, 0, 2]]
v = [3, 4, 5]
m = torch.tensor([True, False, True])
values = torch.sparse_coo_tensor(i, v, (2, 3))
mask = torch.sparse_coo_tensor(i, m, (2, 3))
mt2 = masked_tensor(values, mask)
print("values:\n", values)
print("mask:\n", mask)
print("mt2:\n", mt2)

######################################################################
# Note that ``mt`` and ``mt2`` look identical on the surface and, in the vast majority of operations, will yield the same
# result. But this brings us to a detail of the implementation:
#
# ``data`` and ``mask`` -- only for sparse MaskedTensors -- can have a different number of elements (:func:`nnz`)
# **at creation**, but the indices of ``mask`` must then be a subset of the indices of ``data``. In this case,
# ``data`` will assume the shape of ``mask`` by ``data = data.sparse_mask(mask)``; in other words, any of the elements
# in ``data`` that are not ``True`` in ``mask`` (that is, not specified) will be thrown away.
#
# Therefore, under the hood, the data looks slightly different; ``mt2`` has the "4" value masked out and ``mt``
# is completely without it. Their underlying data has different shapes,
# which would make operations like ``mt + mt2`` invalid.
#

print("mt data:\n", mt.get_data())
print("mt2 data:\n", mt2.get_data())
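
######################################################################
# A minimal sketch of :meth:`Tensor.sparse_mask`, the mechanism referenced
# above (shown here with a strided input, per its documented signature):
# the *indices* of the sparse argument act as the filter -- its values are
# ignored -- so only the entries at those indices survive:
#

dense_data = torch.tensor([[0., 0., 3.], [4., 0., 5.]])
filter_pattern = torch.sparse_coo_tensor([[0, 1], [2, 2]], [1., 1.], (2, 3)).coalesce()
print("filtered:\n", dense_data.sparse_mask(filter_pattern))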

######################################################################
# .. _sparse-csr-appendix:
#
# Sparse CSR construction
# -----------------------
#
# We can also construct a sparse CSR MaskedTensor using sparse CSR tensors,
# and like the example above, this results in a similar treatment under the hood.
#

crow_indices = torch.tensor([0, 2, 4])
col_indices = torch.tensor([0, 1, 0, 1])
values = torch.tensor([1, 2, 3, 4])
mask_values = torch.tensor([True, False, False, True])

csr = torch.sparse_csr_tensor(crow_indices, col_indices, values, dtype=torch.double)
mask = torch.sparse_csr_tensor(crow_indices, col_indices, mask_values, dtype=torch.bool)

mt = masked_tensor(csr, mask)
print("mt:\n", mt)
print("mt data:\n", mt.get_data())

######################################################################
# Conclusion
# ----------
# In this tutorial, we have introduced how to use :class:`MaskedTensor` with sparse COO and CSR formats and
# discussed some of the subtleties under the hood in case users decide to access the underlying data structures
# directly. Sparse storage formats and masked semantics indeed have strong synergies, so much so that they are
# sometimes used as proxies for each other (as we will see in the next tutorial). In the future, we certainly plan
# to continue investing and developing in this direction.
#
# Further Reading
# ---------------
#
# To continue learning more, you can find our
# `Efficiently writing "sparse" semantics for Adagrad with MaskedTensor tutorial <https://pytorch.org/tutorials/prototype/maskedtensor_adagrad.html>`__
# to see an example of how MaskedTensor can simplify existing workflows with native masking semantics.
#

prototype_source/prototype_index.rst

Lines changed: 8 additions & 0 deletions
@@ -141,6 +141,13 @@ Prototype features are not available as part of binary distributions like PyPI o
    :link: ../prototype/nestedtensor.html
    :tags: NestedTensor

+.. customcarditem::
+   :header: Masked Tensor Sparsity
+   :card_description: Learn about how to leverage sparse layouts (e.g. COO and CSR) in MaskedTensor
+   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
+   :link: ../prototype/maskedtensor_sparsity.html
+   :tags: MaskedTensor
+
 .. End of tutorial card section

 .. raw:: html
@@ -172,3 +179,4 @@ Prototype features are not available as part of binary distributions like PyPI o
    prototype/vmap_recipe.html
    prototype/vulkan_workflow.html
    prototype/nestedtensor.html
+   prototype/maskedtensor_sparsity.html
