Tianpei Gu

Research Scientist @ ByteDance

San Jose, California

I enjoy building and shipping things, and I currently work on full-stack generative model research across data, training, and deployment. Previously, I worked at startups such as Krea and Lexica.


Research

I work on human-centric video generation and real-time interactive systems. My current research focuses on making video generation models more intelligent and able to run in real time, helping them understand the world and interact with humans. Previously, I built data pipelines and trained foundation models at AI startups.

X-Streamer

X-Streamer: Unified Human World Modeling with Audiovisual Interaction

You Xie, Tianpei Gu, Zenan Li, Chenxu Zhang, Guoxian Song, Xiaochen Zhao, Chao Liang, Jianwen Jiang, Hongyi Xu, Linjie Luo

arXiv, 2025

Lynx

Lynx: Towards High-Fidelity Personalized Video Generation

Shen Sang*, Tiancheng Zhi*, Tianpei Gu, Jing Liu, Linjie Luo

arXiv, 2025

X-UniMotion: Animating Human Images with Expressive, Unified and Identity-Agnostic Motion Latents

Guoxian Song, Hongyi Xu, Xiaochen Zhao, You Xie, Tianpei Gu, Zenan Li, Chenxu Zhang, Linjie Luo

SIGGRAPH Asia, 2025

X-Actor: Emotional and Expressive Long-Range Portrait Acting from Audio

Chenxu Zhang, Zenan Li, Hongyi Xu, You Xie, Xiaochen Zhao, Tianpei Gu, Guoxian Song, Xin Chen, Chao Liang, Jianwen Jiang, Linjie Luo

SIGGRAPH Asia, 2025

Duolando Project

Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment

Li Siyao, Tianpei Gu, Zhitao Yang, Zhengyu Lin, Ziwei Liu, Henghui Ding, Lei Yang, and Chen Change Loy

ICLR, 2024

Bailando++ Project

Bailando++: 3D Dance GPT With Choreographic Memory

Li Siyao, Weijiang Yu, Tianpei Gu, Chunze Lin, Quan Wang, Chen Qian, Chen Change Loy, and Ziwei Liu

TPAMI, 2023

Line Inbetweening Project

Deep Geometrized Cartoon Line Inbetweening

Li Siyao, Tianpei Gu, Weiye Xiao, Henghui Ding, Ziwei Liu, and Chen Change Loy

ICCV, 2023

MID Project

Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion

Tianpei Gu*, Guangyi Chen*, Junlong Li, Chunze Lin, Yongming Rao, Jie Zhou, and Jiwen Lu

CVPR, 2022

Bailando Project

Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory

Li Siyao, Weijiang Yu, Tianpei Gu, Chunze Lin, Quan Wang, Chen Qian, Chen Change Loy, and Ziwei Liu

CVPR Oral, 2022

APNet Project

Person Re-Identification via Attention Pyramid

Guangyi Chen, Tianpei Gu, Jiwen Lu, Jin-An Bao, and Jie Zhou

T-IP, 2021

Work Experience

ByteDance US

Research Scientist

Human-centric video generation, large-scale data pipelines, and real-time interactive generation.

San Jose, CA | Nov 2024 - Present

Krea

Machine Learning Engineer

Built data pipelines and trained video models.

San Francisco | Apr 2024 - Nov 2024

Lexica

Research Engineer

Built data pipelines and trained foundation image/video models from scratch.

San Francisco | Jun 2023 - Apr 2024

SenseTime Research

Research Intern

Research on GANs and their latent spaces.

Beijing | May 2021 - Nov 2021

Education

University of California, Los Angeles

Master's Degree, Computer Science

2021 - 2022

University of Maryland

Bachelor's Degree, Computer Science and Mathematics

2017 - 2021

About Me

01 / BACKGROUND

Hey there, thanks for visiting my website. I am Tianpei Gu, currently a Research Scientist at ByteDance, based in San Jose, California. I like writing code and building things, and I believe everyone should be full-stack when developing AI. Previously, I spent a couple of years at GenAI startups in SF training foundation image/video models and shipping products.

02 / RESEARCH

My current research focuses on human-centric video generation and real-time interactive systems, also known as world models. I believe image and video generation models are far more than toys where you enter a prompt and get an image: like LLMs, they will have a great impact on our daily lives. These models should possess intelligence, not just act as render engines. For every AI product, I apply my own "mom benchmark": could this product be used by ordinary people like my mom? To make video models more accessible, they have to be real-time and interactive. To make them go viral, they have to generate content that is not only visually appealing but also emotionally engaging. With that goal in mind, I am mainly working on the following areas.

  • Data Pipeline: The development of efficient and scalable data pipelines is still under-explored, and the people doing data work rarely get enough credit. I have built image and video data pipelines at multiple companies that scale to thousands of GPUs and process trillions of data points, serving a large number of researchers and powering critical projects.
  • Real-time Interactive Human Models: The final form of real-time human models consists of the following components:
    • Long-context memory and consistency
    • Intelligent modality interaction and alignment
    • Unified understanding and generation
    • Video model acceleration
    All of the above are very challenging and require systematic effort, but I believe we will eventually get there. In [X-Streamer], we introduced the first human world model that can talk to users in real time indefinitely, with intelligence and memory.
  • Motion and Expression Modeling: I study how to capture and model complex human motions, facial expressions, and emotional states in video content. This includes precise body motion modeling [X-UniMotion], human identity preservation [Lynx], and facial expression modeling synced with audio [X-Actor].

03 / MISC

Apart from research, I enjoy exploring new technologies and creative applications of AI. I also enjoy cooking and playing Dota 2. I have a cute cat named "贝壳" (Bayker in English), and I love playing with him. Below is a gallery of Bayker; you are welcome to see more on Instagram and Xiaohongshu.

Services

Reviewer for:

Get in Touch

The quickest way to reach me is by messaging me on X at @gutianpei_. If you prefer a more serious medium, feel free to send me an email at gutianpei@ucla.edu. For work-related inquiries, please send me an email at tianpei.gu@bytedance.com.

I make it a point to respond to every message I receive. Some of my friends started out as strangers I decided to message on a whim.
