
Commit 9edd639

Svetlana Karslioglu authored and george-qi committed
fix typo
1 parent 9d44ef8 commit 9edd639

File tree

1 file changed: +8 -5

prototype_source/maskedtensor_overview.py

+8 -5
@@ -20,6 +20,9 @@
 #
 # Using MaskedTensor
 # ++++++++++++++++++
+#
+# In this section we discuss how to use MaskedTensor including how to construct, access, the data
+# and mask, as well as indexing and slicing.
 #
 # Construction
 # ------------
@@ -174,7 +177,6 @@
 x = torch.tensor([1., 1.], requires_grad=True)
 div = torch.tensor([0., 1.])
 y = x/div # => y is [inf, 1]
->>>
 mask = (div != 0) # => mask is [0, 1]
 loss = as_masked_tensor(y, mask)
 loss.sum().backward()
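
For reference, the snippet this hunk edits boils down to the following runnable sketch, assuming the prototype ``torch.masked`` API that the surrounding tutorial imports (the final ``print`` is added here purely for illustration):

    import torch
    from torch.masked import as_masked_tensor

    x = torch.tensor([1., 1.], requires_grad=True)
    div = torch.tensor([0., 1.])
    y = x / div                       # y is [inf, 1.]
    mask = (div != 0)                 # mask is [False, True]
    loss = as_masked_tensor(y, mask)
    loss.sum().backward()             # only the unmasked element contributes
    print(x.grad)                     # the div-by-zero position stays masked out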
@@ -213,7 +215,7 @@
 # Safe Softmax
 # ------------
 #
-# Safe softmax is another great example of `an issue <https://github.com/pytorch/pytorch/issues/55056>`_
+# Safe softmax is another great example of `an issue <https://github.com/pytorch/pytorch/issues/55056>`__
 # that arises frequently. In a nutshell, if there is an entire batch that is "masked out"
 # or consists entirely of padding (which, in the softmax case, translates to being set `-inf`),
 # then this will result in NaNs, which can lead to training divergence.
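
To make the failure mode concrete, here is a small sketch of the scenario this hunk's text describes, again assuming the prototype ``torch.masked`` API; the particular mask pattern is illustrative, not from the commit:

    import torch
    from torch.masked import masked_tensor

    data = torch.randn(3, 3)
    mask = torch.tensor([[False, True, True],
                         [False, False, True],
                         [False, True, True]])   # column 0 is fully masked out

    # Plain softmax over dim 0: the fully padded column becomes all NaN.
    print(data.masked_fill(~mask, float('-inf')).softmax(0))

    # A MaskedTensor keeps those entries masked instead of producing NaN.
    print(masked_tensor(data, mask).softmax(0))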
@@ -247,15 +249,16 @@
 
 ######################################################################
 # Implementing missing torch.nan* operators
-# --------------------------------------------------------------------------------------------------------------
+# -----------------------------------------
 #
 # In `Issue 61474 <<https://github.com/pytorch/pytorch/issues/61474>`__,
 # there is a request to add additional operators to cover the various `torch.nan*` applications,
 # such as ``torch.nanmax``, ``torch.nanmin``, etc.
 #
 # In general, these problems lend themselves more naturally to masked semantics, so instead of introducing additional
-# operators, we propose using :class:`MaskedTensor`s instead. Since
-# `nanmean has already landed <https://github.com/pytorch/pytorch/issues/21987>`_, we can use it as a comparison point:
+# operators, we propose using :class:`MaskedTensor`s instead.
+# Since `nanmean has already landed <https://github.com/pytorch/pytorch/issues/21987>`__,
+# we can use it as a comparison point:
 #
 
 x = torch.arange(16).float()
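
As a point of reference for the ``nanmean`` comparison this hunk's prose sets up, a minimal sketch (assuming the prototype ``torch.masked`` API; the NaN-seeding construction is illustrative) might look like:

    import torch
    from torch.masked import masked_tensor

    x = torch.arange(16).float()
    y = x * x.fmod(4)
    y = y.masked_fill(y == 0, float('nan'))   # sprinkle NaNs into the data

    # torch.nanmean skips the NaN entries...
    print(y.nanmean())

    # ...and a MaskedTensor that masks out exactly those entries agrees:
    print(masked_tensor(y, ~torch.isnan(y)).mean())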
