
[Not to Land (Hopefully)] Revert PT Pin to 0912 #5987


Open · wants to merge 1 commit into main

Conversation

Jack-Khuu (Contributor)

Summary:
Original PR: https://github.com/pytorch/executorch/pull/5824/files

Changes picked up in the revert:
f005dd5#diff-3b0b2409eb2a7cb2dfd94e84c17f54f48243649eb0874b78422b2f1411283d43L169

Differential Revision: D64053532


pytorch-bot bot commented Oct 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5987

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures

As of commit b2e7687 with merge base 77b1f08:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label Oct 8, 2024
facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D64053532

Jack-Khuu changed the title from "Revert PT Pin to 0912" to "[Not to Land (Hopefully)] Revert PT Pin to 0912" Oct 8, 2024
Jack-Khuu (Contributor, Author)

Not to land (hopefully)

Testing CI sanity (checking whether stray dependency updates were being pulled in)

facebook-github-bot pushed a commit that referenced this pull request Oct 8, 2024
Summary:
Did a bunch of debugging on OSS CI: https://github.com/pytorch/executorch/actions/runs/11241297226/job/31252590975

Was able to confirm that although the problem surfaces in `ConvertToLinear`, the root cause is that the graph is partitioned differently between the two PyTorch nightlies: dev20240916 and dev20240917.

The exported graph looks the same, but the partitioner behaves differently and causes the `ConvertToLinear` pass to error out.

We can't really revert to the dev20240916 nightly because it breaks other CI jobs; see #5987.

The current approach avoids decomposing linear by using the `to_edge_lower_and_transform` API, which sidesteps the rabbit hole of debugging the partitioning & tagging logic.

Differential Revision: D64074891
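For context, the sketch below illustrates roughly what "lowering without decomposing linear" looks like in an export flow. It is an illustrative sketch, not the change from #6026: the toy model is made up, and it assumes the `to_edge_transform_and_lower` entry point and `XnnpackPartitioner` import path documented for ExecuTorch (the commit message refers to the API as `to_edge_lower_and_transform`).

```python
# Illustrative sketch only (assumed names: to_edge_transform_and_lower,
# XnnpackPartitioner, and a toy model); not the code from #6026.
import torch
from executorch.exir import to_edge_transform_and_lower
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner


class TinyLinear(torch.nn.Module):
    """Stand-in module with a single nn.Linear, the op handled by ConvertToLinear."""

    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(8, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)


example_inputs = (torch.randn(1, 8),)
exported = torch.export.export(TinyLinear().eval(), example_inputs)

# Partition and lower in a single step: linear is claimed by the XNNPACK
# delegate before the edge pipeline can decompose it, so the backend's
# passes see the original linear op rather than a decomposed pattern.
edge = to_edge_transform_and_lower(exported, partitioner=[XnnpackPartitioner()])
executorch_program = edge.to_executorch()
```

Compared with calling `to_edge()` first and lowering afterwards, this path lets the partitioner tag ops before edge decompositions run, which appears to be the behavior the commit relies on.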
facebook-github-bot pushed a commit that referenced this pull request Oct 9, 2024
Summary:
Pull Request resolved: #6026

Reviewed By: digantdesai, Jack-Khuu, tugsbayasgalan

Differential Revision: D64074891

fbshipit-source-id: c434f9f5fc240b9268a7419fc66ee4a365ae1664
Labels: ciflow/trunk, CLA Signed, fb-exported