Publications

  • Counterfactual reasoning: Testing language models’ understanding of hypothetical scenarios

    Jiaxuan Li, Lang Yu, Allyson Ettinger

    [ACL 2023]

  • Counterfactual reasoning: Do language models need world knowledge for causal understanding?

    Jiaxuan Li, Lang Yu, Allyson Ettinger

    [nCSI workshop at NeurIPS 2022]

  • “No, They Did Not”: Dialogue Response Dynamics in Pre-trained Language Models

    Sanghee J. Kim, Lang Yu, Allyson Ettinger

    [COLING 2022]

  • Analyzing and Improving Compositionality in Neural Language Models

    Lang Yu

    [PhD Thesis]

  • On the Interplay Between Fine-tuning and Composition in Transformers

    Lang Yu and Allyson Ettinger

    [Findings of ACL: ACL-IJCNLP 2021] [Code] [Poster]

  • Assessing Phrasal Representation and Composition in Transformers

    Lang Yu and Allyson Ettinger

    [EMNLP 2020] [Code] [Talk]

  • VinaSC: Scalable Autodock Vina with fine-grained scheduling on heterogeneous platform

    Lang Yu, Zhongzhi Luan, Xiangzheng Sun, Zhe Wang, and Hailong Yang

    [BIBM 2016] [Paper]