Add LinCE sentiment analysis prompts #757
Conversation
Thanks! Almost there!
Would answer choices still be required if they are not referred to in the target template?
Did you mean if choices are not mentioned in the input template? If so, yes. Choices should be given to models for all classification datasets. Models will score which answer choice is the most probable target sequence conditioned on the input sequence. Currently, the negation template is still missing choices.
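For illustration, here is a minimal sketch of that scoring step, assuming a hypothetical score_fn that returns the model's log-likelihood of a candidate target given the rendered input (none of these names come from promptsource):

def rank_classify(input_text, answer_choices, score_fn):
    """Return the answer choice whose target sequence the model scores
    highest, conditioned on the rendered input sequence."""
    scores = {choice: score_fn(input_text, choice) for choice in answer_choices}
    return max(scores, key=scores.get)

# Toy scorer standing in for a real model's conditional log-likelihood.
toy_scores = {"positive": -1.2, "negative": -3.5, "neutral": -2.8}
score_fn = lambda inp, choice: toy_scores[choice]

choices = ["positive", "negative", "neutral"]
print(rank_classify("what a great day. This is definitely", choices, score_fn))
# -> positive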
If I understand correctly, the negation template wouldn't be considered an original task, right?
Correct.
Can you put a period after the end of the main sentence for "express sentiment 2" and others?
If you're interested, you can vary your wording even more. Currently they all basically say "sentiment" and "express". You can say for example "The author seems positive, neutral, or negative?", "The previous sentence has a good, bad, or neutral feeling?", etc.
Thanks again for the super helpful comments on writing prompts! The fixes are reflected in the new commit, which also adds new prompts based on your suggestions!
Looks good now! Thanks so much!
answer_choices: positive ||| negative ||| neutral
id: 52708ad1-0029-4d97-a5e9-e179da16e452
jinja: "{{ words | join(\" \") }}. This is definitely \n||| \n{% if sa == \"negative\"\
  \ %} \nnot a positive post. \n{% elif sa == \"positive\" %} \nnot a negative\
  \ post. \n{% else %} \na neutral post.\n{% endif %}"
metadata: !TemplateMetadata
  choices_in_prompt: false
  metrics:
  - Accuracy
  original_task: false
name: negation template
reference: imdb
Hello, @RosenZhang! The answer_choices for the negation template do not align with the designed targets. E.g. the "positive" in answer_choices should actually be "not a negative". Is there any chance we could get a fix for this? Thanks!
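To make the mismatch concrete, here is a hedged sketch that renders the negation template with jinja2; the example words and label are made up for illustration:

from jinja2 import Template

negation = Template(
    '{{ words | join(" ") }}. This is definitely \n||| \n'
    '{% if sa == "negative" %} \nnot a positive post. \n'
    '{% elif sa == "positive" %} \nnot a negative post. \n'
    '{% else %} \na neutral post.\n{% endif %}'
)

# Illustrative example: a positive post from the sa_spaeng subset.
rendered = negation.render(words=["what", "a", "great", "day"], sa="positive")
prompt, target = rendered.split("|||")
print(target.strip())
# -> not a negative post.   (while answer_choices lists "positive")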
Thanks for the catch, just opened a new PR to fix this! Let me know if that works!
This new PR contains new prompt edits addressing the previous PR comments. Thanks a lot for the comments!
1. The templates in this PR apply only to the sa_spaeng subset; the other subsets are for linguistic structure tasks, namely language identification, POS, and NER. After standup, more work needs to be done on those tasks, and more prompts for them are coming soon. I was thinking of starting another PR for those since they're quite different from the sentiment analysis task (they require a label per word).
2. For Answer Choices, apologies for the inconsistency. Would answer choices still be required if they are not referred to in the target template?
3 & 4. The updated prompts in this PR improve wording and add an extra original-task prompt.