Yunsu Kim

I am a Research Scientist at Lilt. I have worked on cross-lingual, semi-supervised, and unsupervised learning for neural machine translation. Currently, I’m focusing on the personalized adaptation of translation models and an efficient computer-assisted workflow for human translators.

I have a long-held dream of building an agent that understands and speaks the languages of any country or planet, in any style or tone. I want the agent to be compassionate and thoughtful, with a good understanding of the situation, like Gerty.


news
Nov 6, 2021 Check out this blog post on my work and life at Lilt!
Jul 19, 2021 The new homepage is launched! ✨

selected publications

  1. When and Why is Unsupervised Neural Machine Translation Useless?
    Yunsu Kim, Miguel Graça, and Hermann Ney
    Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT 2020)
  2. Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages
    Yunsu Kim, Petre Petrov, Pavel Petrushkov, Shahram Khadivi, and Hermann Ney
    Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019)
  3. When and Why is Document-level Context Useful in Neural Machine Translation?
    Yunsu Kim, Duc Thanh Tran, and Hermann Ney
    Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)
  4. Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies
    Yunsu Kim, Yingbo Gao, and Hermann Ney
    Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)