Add LinCE sentiment analysis prompts #757


Merged

merged 2 commits into bigscience-workshop:eval-hackathon on May 9, 2022

Conversation

RosenZhang

This PR contains new prompt edits addressing comments from the previous PR:

Does this same prompt apply to all other subsets of LinCE? Or are we only asked to evaluate the sa_spaeng subset?
Some prompts are missing the Answer Choices field: positive ||| negative ||| neutral
This prompt's wording could be more natural; e.g.,

"The following post expresses what sentiment?"

could become

"What sentiment does the following post express? Positive, negative, or neutral?"

(in that case, you should also mark the "Choices in template" flag. That is, models are explicitly told the choices "Positive, negative, or neutral?" in the input.)

We're looking for at least 5 original task prompts. You're missing one.

Thanks a lot for the comments!

  1. The templates in this PR apply only to the sa_spaeng subset; the other subsets are for linguistic structure tasks, namely language identification, POS tagging, and NER. As discussed at standup, more work needs to be done on those tasks, and more prompts for them are coming soon. I was thinking of starting a separate PR for those, since they're quite different from the sentiment analysis task (they require a label per word).

  2. As for Answer Choices, apologies for the inconsistency. Would answer choices still be required if they are not referenced in the target template?

3 & 4. The updated prompts in this PR improve the wording and add an extra original-task prompt.
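
For context, here is roughly what a template with the "Choices in template" flag set looks like in promptsource's YAML. This is an illustrative sketch only: the id, name, and exact wording are placeholders, not the templates committed in this PR (the dataset fields words and sa are the real ones used by the sa_spaeng templates below).

    answer_choices: positive ||| negative ||| neutral
    id: 00000000-0000-0000-0000-000000000000  # placeholder, not a real template id
    jinja: "{{ words | join(\" \") }}. What sentiment does the previous post express?
      Positive, negative, or neutral? ||| {{ sa }}"
    metadata: !TemplateMetadata
      choices_in_prompt: true
      metrics:
      - Accuracy
      original_task: true
    name: express sentiment with choices (illustrative)
    reference: ''

Because the choices appear verbatim in the input, choices_in_prompt is true; the target {{ sa }} must still match one of the answer_choices entries.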

@RosenZhang changed the base branch from main to eval-hackathon on April 28, 2022 14:38
@awebson self-assigned this on Apr 28, 2022
@awebson self-requested a review on April 29, 2022 02:36
Contributor

@awebson left a comment

Thanks! Almost there!

Would answer choices still be required if they are not referenced in the target template?

Did you mean if the choices are not mentioned in the input template? If so, yes: choices should be given to models for all classification datasets. Models score which answer choice is the most probable target sequence conditioned on the input sequence. Currently, the negation template is still missing choices.
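
Concretely, this scoring is rank classification; a minimal sketch of the idea with Hugging Face transformers (the checkpoint and details here are illustrative, not the actual evaluation harness):

    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    # Illustrative checkpoint; any seq2seq LM works the same way here.
    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
    model.eval()

    def choice_logprob(prompt: str, choice: str) -> float:
        # Sum log-probability of `choice` as the target sequence, conditioned on `prompt`.
        inputs = tokenizer(prompt, return_tensors="pt")
        labels = tokenizer(choice, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(**inputs, labels=labels).loss  # mean NLL per label token
        return -loss.item() * labels.size(-1)

    prompt = "What sentiment does the following post express? Positive, negative, or neutral? I love this!"
    prediction = max(["positive", "negative", "neutral"], key=lambda c: choice_logprob(prompt, c))

The model never generates freely; it only ranks the given answer choices, which is why every classification template needs the Answer Choices field.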

If I understand correctly, negation wouldn't count as an original task, right?

Correct.

Can you put a period at the end of the main sentence for "express sentiment 2" and the others?

If you're interested, you can vary your wording even more. Currently they all basically say "sentiment" and "express". You could say, for example, "The author seems positive, neutral, or negative?" or "The previous sentence has a good, bad, or neutral feeling?", etc.

@RosenZhang
Author


Thanks again for the super helpful comments on writing prompts! The fixes are reflected in the new commit, which also adds new prompts based on your suggestions!

Contributor

@awebson left a comment

Looks good now! Thanks so much!

@awebson awebson merged commit dcff8f6 into bigscience-workshop:eval-hackathon May 9, 2022
Comment on lines +29 to +40
answer_choices: positive ||| negative ||| neutral
id: 52708ad1-0029-4d97-a5e9-e179da16e452
jinja: "{{ words | join(\" \") }}. This is definitely \n||| \n{% if sa == \"negative\"\
  \ %} \nnot a positive post. \n{% elif sa == \"positive\" %} \nnot a negative\
  \ post. \n{% else %} \na neutral post.\n{% endif %}"
metadata: !TemplateMetadata
  choices_in_prompt: false
  metrics:
  - Accuracy
  original_task: false
name: negation template
reference: imdb
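
To see which target each label maps to, the jinja above can be rendered directly; a minimal sketch (promptsource's own rendering additionally handles answer choices and whitespace):

    from jinja2 import Environment

    src = ('{{ words | join(" ") }}. This is definitely \n||| \n'
           '{% if sa == "negative" %} \nnot a positive post. \n'
           '{% elif sa == "positive" %} \nnot a negative post. \n'
           '{% else %} \na neutral post.\n{% endif %}')
    template = Environment().from_string(src)

    for label in ["positive", "negative", "neutral"]:
        _, target = template.render(words=["example", "post"], sa=label).split("|||")
        print(label, "->", " ".join(target.split()))
    # positive -> not a negative post.
    # negative -> not a positive post.
    # neutral -> a neutral post.

This makes the mismatch flagged below easy to see: the rendered targets are "not a ... post" phrases, while answer_choices still lists the raw labels.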

Hello, @RosenZhang! The answer_choices for the negation template do not align with its designed targets: e.g., the "positive" entry in answer_choices should actually be "not a negative post". Is there any chance we could get a fix for this? Thanks!
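
Keeping the original positive ||| negative ||| neutral slot order, the fix would be along these lines (a sketch of the intended change, not necessarily the committed diff; the strings would also need to match the rendered targets' punctuation exactly):

    answer_choices: not a negative post ||| not a positive post ||| a neutral post

That is, the slot for positive holds "not a negative post" and the slot for negative holds "not a positive post", mirroring the {% if %} branches in the jinja above.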

@RosenZhang
Author

Thanks for the catch; I've just opened a new PR to fix this! Let me know if that works!
