
The more NLI pre-training the better

Combining NLI training datasets helps (also in TACRED)
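As a rough illustration of what combining NLI training sets can look like in practice (the slides do not name the exact mix; MNLI and SNLI, the Hugging Face `datasets` library, and the column handling below are assumptions for the sketch), one might simply pool the entailment corpora before fine-tuning:

```python
# Sketch: pooling several NLI corpora into one training set before fine-tuning.
# MNLI + SNLI are assumptions for illustration; the slides do not list the mix.
from datasets import load_dataset, concatenate_datasets

mnli = load_dataset("multi_nli", split="train")
snli = load_dataset("snli", split="train")

# Keep only the columns the two corpora share so they can be concatenated.
keep = ["premise", "hypothesis", "label"]
mnli = mnli.remove_columns([c for c in mnli.column_names if c not in keep])
snli = snli.filter(lambda ex: ex["label"] != -1)  # drop pairs without a gold label

combined = concatenate_datasets([mnli, snli]).shuffle(seed=42)
print(f"{len(combined):,} premise-hypothesis pairs for NLI training")
```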

Is this because of a brilliant domain-expert?

● We gave the task to a PhD computational linguist

● Very similar results across all training regimes

● Replicable, robust to variations in prompts

● She also found writing prompts to be user-friendly: “Writing templates is more natural and rewarding than annotating examples, which is more repetitive, stressful and tiresome.”

“When writing templates, I was thinking in an abstract manner, trying to find generalizations. When doing annotation I was paying attention to concrete cases.”
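To make the template-writing workflow she describes concrete, here is a minimal sketch of how hand-written relation templates can be scored as NLI hypotheses against a sentence, so the entailment model picks the relation whose template the sentence supports. The model name (`roberta-large-mnli`), the template wording, and the example sentence are illustrative assumptions, not taken from the talk:

```python
# Sketch: verbalize candidate relations as hypotheses and let an off-the-shelf
# NLI model decide which template the sentence entails. Model name, templates
# and the example sentence are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

# Hand-written templates for two TACRED-style relations (hypothetical wording).
TEMPLATES = {
    "per:city_of_birth": "{subj} was born in {obj}.",
    "per:city_of_death": "{subj} died in {obj}.",
}

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: contradiction, neutral, entailment
    return logits.softmax(dim=-1)[0, 2].item()

sentence = "Billy Mays, the TV pitchman, was born in McKees Rocks."
for relation, template in TEMPLATES.items():
    hypothesis = template.format(subj="Billy Mays", obj="McKees Rocks")
    print(f"{relation}: {entailment_prob(sentence, hypothesis):.3f}")
```

Writing or tweaking one template line per relation is the entire annotation effort here, which is what makes the process feel more like finding generalizations than labeling concrete cases.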

