💡 [REQUEST] - What is purpose of out.backward(torch.randn(1, 10)) in neural_networks_tutorial #3017

Open
@Lovkush-A

Description

🚀 Describe the improvement or the new tutorial

In neural networks tutorial for beginners, we have the following:

Zero the gradient buffers of all parameters and backprops with random gradients:

net.zero_grad()
out.backward(torch.randn(1, 10))

What is the purpose of this? It is not part of standard ML workflows and can be confusing to beginners. (As evidence, I am helping some people learn the basics of ML and I received questions about this line. That is how I found out about it!)
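For context on what the line computes mechanically: when `backward` is called on a non-scalar tensor, PyTorch requires a `gradient` argument of the same shape as the output, and it computes the vector-Jacobian product of that argument with the output's Jacobian. The tutorial's random tensor is just a placeholder vector for that product. Below is a minimal numpy sketch (not the tutorial's code; a hand-rolled illustration for a single linear layer `y = x @ W`) of the quantity `out.backward(v)` would accumulate into the layer's `.grad`:

```python
import numpy as np

# backward(v) on a non-scalar output computes a vector-Jacobian product,
# not a full Jacobian. For y = x @ W, the gradient of (v . y) w.r.t. W
# is the outer product x^T v, which has the same shape as W.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))    # input, shape (1, 4)
W = rng.standard_normal((4, 10))   # weights, shape (4, 10)
y = x @ W                          # output, shape (1, 10)

v = rng.standard_normal((1, 10))   # plays the role of torch.randn(1, 10)
grad_W = x.T @ v                   # vector-Jacobian product w.r.t. W

assert grad_W.shape == W.shape     # gradient matches the parameter shape
```

So the tutorial line is a demonstration of the `backward(gradient)` API rather than a step in a training loop, which is exactly why it reads as out of place to beginners.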

If there is no good reason for it, then I suggest:

  • dropping these few lines
  • changing the wording of other parts of the page if needed, e.g. 'at this point we covered... calling backward'

Existing tutorials on this topic

No response

Additional context

No response

cc @subramen @albanD

Metadata

Assignees

No one assigned

    Labels

    core (Tutorials of any level of difficulty related to the core pytorch functionality), intro, question
