I am currently a third-year PhD candidate under the joint supervision of Assoc. Prof. Trevor Cohn and Dr. Reza Haffari. Prior to coming to Melbourne, I spent approximately seven years (2008-2015) in Singapore, first studying at the National University of Singapore (NUS) and then working as a senior research engineer in the HLT Department at the Institute for Infocomm Research (I2R), A*STAR. Before that, I was a student, teaching & research assistant, and lecturer at the University of Science, Vietnam National University, Ho Chi Minh City, Vietnam. During this time, I was also a research intern at the National Institute of Informatics in Tokyo, Japan, working under Dr. Nigel Collier on a bio-text mining project (BioCaster).
My primary research interests lie in Natural Language Processing and Applied Machine Learning. My current focus is on Deep Learning models (e.g., sequence-to-sequence learning and inference) applied to structured prediction problems such as Statistical Machine Translation, Abstractive Summarisation, and Parsing.
*** I've joined Speak.AI (a new startup headquartered in WA, USA) as an AI scientist, working on developing solutions for on-device conversational AI.
*** I was a research intern at NAVER LABS Europe (formerly Xerox Research Centre Europe) from March to June 2018, working with Marc Dymetman on the project "Globally-driven Training Techniques for Neural Machine Translation".
*** Transformer-DyNet is my latest *humble* neural sequence-to-sequence toolkit (written in C++ with a DyNet backend). It implements Google's state-of-the-art Transformer architecture in a simplified manner. It is fast and efficient, and achieves performance consistent with Google's tensor2tensor and Amazon's Sockeye. To my knowledge, it is the first C++ implementation of the Transformer in DyNet (correct me if I am wrong!).
*** I received the Google Australia PhD Travel Scholarship for my trip to EMNLP 2017. Special thanks to Google Australia.
*** I participated in the 2017 Jelinek Summer Workshop on Speech and Language Technology (JSALT) at CMU, June-August 2017, in Pittsburgh, PA, USA. My main research focus was Neural Machine Translation under low- and zero-resource conditions.