Downstream #1288 #87


Closed · wants to merge 18 commits

Conversation

@twiecki (Member) commented Dec 6, 2022

Downstreaming aesara-devs/aesara#1288 as per request of @ColtAllen.

@ColtAllen (Contributor)

I've added some commits and a request in the upstream branch. Feel free to reply in the discussion for this PR.

@twiecki (Member, Author) commented Dec 6, 2022

@ColtAllen There were some odd errors trying to apply the patch so not everything might be ported. Can you check what's missing?

@ColtAllen (Contributor)

> @ColtAllen There were some odd errors trying to apply the patch so not everything might be ported. Can you check what's missing?

Looks like some black formatting did not carry over, and the tests added in tests.tensor.test_math_scipy are apparently duplicated.

@twiecki (Member, Author) commented Dec 6, 2022

@ColtAllen can you fix that up and take over?

@ColtAllen (Contributor)

@twiecki I've forked this repo and made the revisions, but I'm blocked from pushing any commits until my permissions are updated. Shall we continue force-pushing to the upstream PR, or update my permissions for this repo?

@ColtAllen (Contributor)

Next steps are to get these tests passing; I could use some assistance.

Here's how the tests are set up for Hyp2F1:

_good_broadcast_quaternary_hyp2f1 = dict(
    normal=(
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 0.5, (2, 3)),
    ),
)

TestHyp2F1Broadcast = makeBroadcastTester(
    op=at.hyp2f1,
    expected=expected_hyp2f1,
    good=_good_broadcast_quaternary_hyp2f1,
    grad=_good_broadcast_quaternary_hyp2f1,
    eps=2e-10,
)

TestHyp2F1InplaceBroadcast = makeBroadcastTester(
    op=inplace.hyp2f1_inplace,
    expected=expected_hyp2f1,
    good=_good_broadcast_quaternary_hyp2f1,
    grad=_good_broadcast_quaternary_hyp2f1,
    eps=2e-10,
    inplace=True,
)
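As a quick sanity check outside the test harness (assuming `expected_hyp2f1` is, or wraps, SciPy's reference implementation), the same parameter ranges can be evaluated directly with `scipy.special.hyp2f1`; the ranges here are narrowed from the test's (0, 1000) purely to keep values comfortably inside float64:

```python
import numpy as np
from scipy.special import hyp2f1

# Known closed form: hyp2f1(1, 1; 2; z) == -log(1 - z) / z
z = 0.5
assert np.isclose(hyp2f1(1.0, 1.0, 2.0, z), -np.log(1 - z) / z)

# Evaluate on arrays shaped like the test data above (narrower ranges
# than the test's (0, 1000) to avoid float64 overflow in this sketch)
rng = np.random.default_rng(0)
vals = hyp2f1(
    rng.uniform(0, 10, (2, 3)),
    rng.uniform(0, 10, (2, 3)),
    rng.uniform(1, 10, (2, 3)),
    rng.uniform(0, 0.5, (2, 3)),
)
assert np.isfinite(vals).all()
```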

They are returning this jumbled error for test_grad:

TypeError: float() argument must be a string or a number, not 'ScalarVariable'
Apply node that caused the error:
  Elemwise{hyp2f1_der}(input 0, input 1, input 2, input 3, TensorConstant{(1, 1) of 0.0})
Toposort index: 0
Inputs types: [TensorType(float64, (2, 3)), TensorType(float64, (2, 3)), TensorType(float64, (2, 3)), TensorType(float64, (2, 3)), TensorType(float64, (1, 1))]
Inputs shapes: [(2, 3), (2, 3), (2, 3), (2, 3), (1, 1)]
Inputs strides: [(24, 8), (24, 8), (24, 8), (24, 8), (8, 8)]
Inputs values: ['not shown', 'not shown', 'not shown', 'not shown', array([[0.]])]
Outputs clients: [[Elemwise{Mul}[(0, 1)](random_projection, Elemwise{hyp2f1_der}.0)]]

Backtrace when the node is created (use Aesara flag traceback__limit=N to make it longer):
  File "/mnt/c/Users/colta/portfolio/aesara/tests/tensor/utils.py", line 585, in test_grad
    utt.verify_grad(
  File "/mnt/c/Users/colta/portfolio/aesara/tests/unittest_tools.py", line 70, in verify_grad
    orig_verify_grad(op, pt, n_tests, rng, *args, **kwargs)
  File "/mnt/c/Users/colta/portfolio/aesara/aesara/gradient.py", line 1854, in verify_grad
    symbolic_grad = grad(cost, tensor_pt, disconnected_inputs="ignore")
  File "/mnt/c/Users/colta/portfolio/aesara/aesara/gradient.py", line 623, in grad
    rval: Sequence[Variable] = _populate_grad_dict(
  File "/mnt/c/Users/colta/portfolio/aesara/aesara/gradient.py", line 1434, in _populate_grad_dict
    rval = [access_grad_cache(elem) for elem in wrt]
  File "/mnt/c/Users/colta/portfolio/aesara/aesara/gradient.py", line 1434, in <listcomp>
    rval = [access_grad_cache(elem) for elem in wrt]
  File "/mnt/c/Users/colta/portfolio/aesara/aesara/gradient.py", line 1387, in access_grad_cache
    term = access_term_cache(node)[idx]
  File "/mnt/c/Users/colta/portfolio/aesara/aesara/gradient.py", line 1213, in access_term_cache
    input_grads = node.op.L_op(inputs, node.outputs, new_output_grads)

HINT: Use the Aesara flag `exception_verbosity=high` for a debug print-out and storage map footprint of this Apply node.

Test Elemwise{hyp2f1,no_inplace}::normal: Error occurred while computing the gradient on the following inputs:
[array([[764.16214925, 550.49823533, 542.19109217],
        [613.93532095, 341.6967123 , 284.38215306]]),
 array([[764.16214925, 550.49823533, 542.19109217],
        [613.93532095, 341.6967123 , 284.38215306]]),
 array([[764.16214925, 550.49823533, 542.19109217],
        [613.93532095, 341.6967123 , 284.38215306]]),
 array([[0.38208107, 0.27524912, 0.27109555],
        [0.30696766, 0.17084836, 0.14219108]])]
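For cross-checking whatever gradient implementation ends up being used: the derivative of 2F1 with respect to z has a simple closed form, d/dz 2F1(a, b; c; z) = (ab/c) 2F1(a+1, b+1; c+1; z) (the derivatives with respect to a, b, c have no comparably simple form). A hedged sketch verifying it against a finite difference with SciPy, independent of the Aesara Op under discussion:

```python
import numpy as np
from scipy.special import hyp2f1

def hyp2f1_dz(a, b, c, z):
    """Analytic d/dz of 2F1: (a*b/c) * 2F1(a+1, b+1; c+1; z)."""
    return (a * b / c) * hyp2f1(a + 1, b + 1, c + 1, z)

# Central finite difference for comparison
a, b, c, z = 1.5, 2.0, 3.5, 0.25
eps = 1e-6
fd = (hyp2f1(a, b, c, z + eps) - hyp2f1(a, b, c, z - eps)) / (2 * eps)
assert np.isclose(hyp2f1_dz(a, b, c, z), fd, rtol=1e-5)
```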

Here's the testing setup for the gradients Op Hyp2F1Der:

_good_broadcast_pentanary_hyp2f1_der = dict(
    normal=(
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 0.5, (2, 3)),
        integers_ranged(-1, 3, (2, 3)),
    ),
)

TestHyp2F1DerBroadcast = makeBroadcastTester(
    op=at.hyp2f1_der,
    expected=expected_hyp2f1,
    good=_good_broadcast_pentanary_hyp2f1_der,
    eps=2e-10,
)

The last parameter for this Op is a flag indicating which variable the derivative is taken wrt, but the makeBroadcastTester utility doesn't seem to like it:
numpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'hyp2f1' output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
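The casting error itself is a generic NumPy ufunc rule rather than anything hyp2f1-specific: under the default casting="same_kind", a ufunc refuses to write float64 results into an integer output buffer, which is what happens when the integer wrt flag drives the inferred output dtype. A minimal reproduction with a stock ufunc:

```python
import numpy as np

x = np.array([1.0, 4.0, 9.0])
out_int = np.empty(3, dtype=np.int64)

try:
    np.sqrt(x, out=out_int)  # float64 result into an int64 buffer
except TypeError as e:       # _UFuncOutputCastingError subclasses TypeError
    print(type(e).__name__)

# Forcing unsafe casting makes it run (truncating), which is why test
# inputs are usually kept all-float instead:
np.sqrt(x, out=out_int, casting="unsafe")
print(out_int)  # [1 2 3]
```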

@twiecki (Member, Author) commented Dec 6, 2022

> @twiecki I've forked this repo and made the revisions, but I'm blocked from pushing any commits until my permissions are updated. Shall we proceed with continuing to force-push the upstream PR, or update my permissions for this repo?

Just push to your fork and make a new PR.

Comment on lines +1392 to +1394
@scalar_elemwise
def hyp2f1_der(a, b, c, z):
    """Derivatives for Gaussian hypergeometric function."""
Member review comment:

You don't need to provide this as an Elemwise. It's enough to implement the Scalar Op which will be used in the gradient of hyp2f1. We don't expect users to use this directly.
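For intuition (an analogy, not Aesara's actual machinery): an Elemwise is essentially the broadcasting wrapper generated from a scalar op, much as np.vectorize lifts a scalar Python function to arrays, so only the scalar kernel ever needs to be written by hand:

```python
import numpy as np
from scipy.special import hyp2f1 as scalar_hyp2f1

def hyp2f1_dz_scalar(a, b, c, z):
    # Scalar kernel: d/dz 2F1 via the closed-form identity
    return (a * b / c) * scalar_hyp2f1(a + 1, b + 1, c + 1, z)

# np.vectorize plays the role of Elemwise here: it broadcasts the scalar
# kernel over array arguments automatically.
hyp2f1_dz = np.vectorize(hyp2f1_dz_scalar)

out = hyp2f1_dz(1.5, 2.0, 3.5, np.linspace(0.0, 0.4, 5))
print(out.shape)  # (5,)
```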

Comment on lines +766 to +774
_good_broadcast_pentanary_hyp2f1_der = dict(
    normal=(
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 1000, (2, 3)),
        random_ranged(0, 0.5, (2, 3)),
        integers_ranged(-1, 3, (2, 3)),
    ),
)
Member review comment:

Similarly, you don't need to test this one directly; the Scalar Op will be exercised by the hyp2f1 tests.

@ricardoV94 (Member) commented Dec 7, 2022

@ColtAllen I left some comments above. Basically, you don't need to implement the Elemwise of the derivative scalar Ops, nor test them directly. If you remove those, the important tests (TestHyp2F1Broadcast and TestHyp2F1InplaceBroadcast) are already passing.

Feel free to push into your fork and open a PR. I'll answer any further questions you have there.

@twiecki (Member, Author) commented Dec 7, 2022

Closing this in favor of a PR that @ColtAllen will open.

@twiecki twiecki closed this Dec 7, 2022
@ricardoV94 ricardoV94 added the aesara downstream PRs that were downstreamed from Aesara label Dec 13, 2022
@ricardoV94 ricardoV94 deleted the downstream_1288 branch June 13, 2024 12:44