I am an AI Resident at Google, working on natural language processing.
Vitæ | Google Scholar | Personal

Finetuned language models are zero-shot learners.
{J. Wei, M. Bosma, V. Zhao, K. Guu}, A. Yu, B. Lester, N. Du, A. Dai, and Q. Le.
A recipe for arbitrary text style transfer with large language models.
{Emily Reif, Daphne Ippolito}, Ann Yuan, Andy Coenen, Chris Callison-Burch, and Jason Wei.
EMNLP '21: Frequency effects on syntactic rule learning in transformers.
Jason Wei, Dan Garrette, Tal Linzen, and Ellie Pavlick.
EMNLP '21: Good-enough example extrapolation.
Jason Wei.
ACL '21: A cognitive regularizer for language modeling.
Jason Wei, Clara Meister, and Ryan Cotterell.
ACL '21: Language model augmented relevance score.
Ruibo Liu, Jason Wei, and Soroush Vosoughi.
ACL '21 (Findings): A survey of data augmentation approaches for NLP.
{S. Feng, V. Gangal}, J. Wei, S. Chandar, S. Vosoughi, T. Mitamura, and E. Hovy.
NAACL '21: Linguistic complexity loss in text-based therapy.
Jason Wei, Kelly Finn, Emma Templeton, Thalia Wheatley, and Soroush Vosoughi.
NAACL '21: Few-shot text classification with triplet networks, data augmentation, and curriculum learning.
Jason Wei, Chengyu Huang, Soroush Vosoughi, Yu Cheng, and Shiqi Xu.
EACL '21: Text augmentation in a multi-task view.
Jason Wei, Chengyu Huang, Shiqi Xu, and Soroush Vosoughi.
AAAI '21: Mitigating political bias in language models through reinforced calibration.
R. Liu, C. Jia, J. Wei, G. Xu, L. Wang, and S. Vosoughi. Outstanding paper award.
EMNLP '19: Easy data augmentation techniques for boosting performance on text classification tasks.
Jason Wei and Kai Zou.