
Commit 297e33e

Svetlana Karslioglu authored and george-qi committed

move preparation

1 parent 9d44ef8 commit 297e33e

File tree

1 file changed: +22 -8 lines changed

prototype_source/maskedtensor_overview.py

+22 -8
@@ -20,7 +20,24 @@
 #
 # Using MaskedTensor
 # ++++++++++++++++++
+#
+# In this section we discuss how to use MaskedTensor including how to construct, access, the data
+# and mask, as well as indexing and slicing.
+#
+# Preparation
+# -----------
+#
+# We'll begin by doing the necessary setup for the tutorial:
 #
+
+import torch
+from torch.masked import masked_tensor, as_masked_tensor
+import warnings
+
+# Disable prototype warnings and such
+warnings.filterwarnings(action='ignore', category=UserWarning)
+
+######################################################################
 # Construction
 # ------------
 #
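For context on the section this hunk introduces, here is a minimal sketch (not part of the diff) of constructing a MaskedTensor and reading back its data and mask, using the setup added above. The get_data()/get_mask() accessors are an assumption based on the rest of the tutorial file rather than something shown in this commit:

import torch
from torch.masked import masked_tensor

# Build a small data tensor and a boolean mask of the same shape
data = torch.arange(6).reshape(2, 3).float()
mask = data % 2 == 0

# Construct the MaskedTensor and read back its two components
mt = masked_tensor(data, mask)
print(mt)
print(mt.get_data())
print(mt.get_mask())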
@@ -52,9 +69,6 @@
 # as :class:`torch.Tensor`. Below are some examples of common indexing and slicing patterns:
 #

-import torch
-from torch.masked import masked_tensor, as_masked_tensor
-
 data = torch.arange(24).reshape(2, 3, 4)
 mask = data % 2 == 0

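A short sketch (again, not part of the commit) of the indexing and slicing patterns this hunk refers to, using the same data and mask as above and assuming MaskedTensor follows the usual torch.Tensor indexing semantics, as the surrounding comment states:

import torch
from torch.masked import masked_tensor

data = torch.arange(24).reshape(2, 3, 4).float()
mask = data % 2 == 0
mt = masked_tensor(data, mask)

# Indexing and slicing mirror torch.Tensor behavior
print(mt[0])           # first element along dim 0
print(mt[:, :, 2:4])   # slice along the last dimension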
@@ -174,7 +188,6 @@
 x = torch.tensor([1., 1.], requires_grad=True)
 div = torch.tensor([0., 1.])
 y = x/div # => y is [inf, 1]
->>>
 mask = (div != 0) # => mask is [0, 1]
 loss = as_masked_tensor(y, mask)
 loss.sum().backward()
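For readers skimming the diff, the snippet above (with the stray ">>>" removed) runs as-is once the imports from the new Preparation section are in scope. A self-contained version, with the gradient inspected at the end, might look like the following; the exact printed value is not asserted here:

import torch
from torch.masked import as_masked_tensor

x = torch.tensor([1., 1.], requires_grad=True)
div = torch.tensor([0., 1.])
y = x / div              # => y is [inf, 1]
mask = (div != 0)        # => mask is [0, 1]

# Masking out the division-by-zero entry before reducing keeps the inf
# from contributing to the reduction, which is the behavior this part
# of the tutorial is illustrating
loss = as_masked_tensor(y, mask)
loss.sum().backward()
print(x.grad)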
@@ -213,7 +226,7 @@
 # Safe Softmax
 # ------------
 #
-# Safe softmax is another great example of `an issue <https://github.com/pytorch/pytorch/issues/55056>`_
+# Safe softmax is another great example of `an issue <https://github.com/pytorch/pytorch/issues/55056>`__
 # that arises frequently. In a nutshell, if there is an entire batch that is "masked out"
 # or consists entirely of padding (which, in the softmax case, translates to being set `-inf`),
 # then this will result in NaNs, which can lead to training divergence.
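To make the problem this paragraph describes concrete, here is a hedged sketch (not part of the commit): a row that is entirely padding turns into NaNs under a plain softmax over -inf, whereas a MaskedTensor keeps those positions masked out. It assumes MaskedTensor supports softmax, as the tutorial's safe-softmax example implies:

import torch
from torch.masked import masked_tensor

data = torch.randn(3, 3)
# The last row is entirely masked out (all padding)
mask = torch.tensor([[True, False, True],
                     [True, True, False],
                     [False, False, False]])

# Plain softmax over -inf padding: the fully padded row becomes NaN
padded = data.masked_fill(~mask, float('-inf'))
print(padded.softmax(-1))

# With a MaskedTensor, those entries simply stay masked out
mt = masked_tensor(data, mask)
print(mt.softmax(-1))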
@@ -247,15 +260,16 @@

 ######################################################################
 # Implementing missing torch.nan* operators
-# --------------------------------------------------------------------------------------------------------------
+# -----------------------------------------
 #
 # In `Issue 61474 <<https://github.com/pytorch/pytorch/issues/61474>`__,
 # there is a request to add additional operators to cover the various `torch.nan*` applications,
 # such as ``torch.nanmax``, ``torch.nanmin``, etc.
 #
 # In general, these problems lend themselves more naturally to masked semantics, so instead of introducing additional
-# operators, we propose using :class:`MaskedTensor`s instead. Since
-# `nanmean has already landed <https://github.com/pytorch/pytorch/issues/21987>`_, we can use it as a comparison point:
+# operators, we propose using :class:`MaskedTensor`s instead.
+# Since `nanmean has already landed <https://github.com/pytorch/pytorch/issues/21987>`__,
+# we can use it as a comparison point:
 #

 x = torch.arange(16).float()
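As a companion to the comparison point mentioned above, a rough sketch (not part of the diff) of contrasting torch.nanmean with a MaskedTensor reduction; the NaN placement below is illustrative rather than taken from the tutorial, and it assumes MaskedTensor supports the mean reduction:

import torch
from torch.masked import masked_tensor

x = torch.arange(16).float()
y = x.clone()
y[::4] = float('nan')              # sprinkle in some NaN entries

# torch.nanmean skips the NaN entries
print(torch.nanmean(y))

# The masked-semantics equivalent: mask the NaN positions out explicitly
mt = masked_tensor(y, ~torch.isnan(y))
print(mt.mean())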
