Zilong Wang
PhD Student at UC San Diego, CSE.
Welcome! I am a fifth-year PhD student at UC San Diego, advised by Prof. Jingbo Shang. I have had the pleasure of doing research at Amazon Foundation Models, Google Cloud AI, Google DeepMind, Google Research, Adobe Research, and Microsoft Research Asia. I received my B.S. in Computer Science from Peking University in 2020, where I was advised by Prof. Xiaojun Wan.
My research centers on leveraging language models for reasoning and planning, with a focus on:
1. LLM + Retrieval: enhancing factual accuracy and minimizing hallucinations through retrieval-augmented generation (RAG).
2. LLM + Code: improving the automatic debugging capabilities and coding proficiency of language models.
3. LLM + Multimodal: empowering language models to tackle multimodal tasks via autonomous agents, such as table-based reasoning and visually rich document extraction.
My earlier research focused on visually rich document understanding, where I enabled language models to encode multimodal features and understand the rich content of documents in various formats, such as forms, receipts, and web pages.
news
| Sep 1, 2024 | Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting. New paper alert! It achieves state-of-the-art accuracy and efficiency for RAG. |
| --- | --- |
| Jul 26, 2024 | OFFICEBENCH: Benchmarking Language Agents across Multiple Applications for Office Automation. New paper alert! Check out our latest LLM agent benchmark for office automation! |
| May 27, 2024 | Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step-by-step has been accepted by ACL 2024 Findings! |
| Jan 16, 2024 | Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding has been accepted by ICLR 2024! |
| Dec 9, 2023 | A Study on Robustness and Reliability of Large Language Model Code Generation has been accepted by AAAI 2024! |