Space-Time Diffusion Features for Zero-Shot
Text-Driven Motion Transfer

Weizmann Institute of Science
*Indicates Equal Contribution
CVPR 2024

Abstract

We present a new method for text-driven motion transfer: synthesizing a video that complies with an input text prompt describing the target objects and scene, while maintaining the input video's motion and scene layout. Prior methods are confined to transferring motion across two subjects within the same or closely related object categories and are applicable to limited domains (e.g., humans). In this work, we consider a significantly more challenging setting in which the target and source objects differ drastically in shape and fine-grained motion characteristics (e.g., translating a jumping dog into a dolphin). To this end, we leverage a pre-trained and fixed text-to-video diffusion model, which provides us with generative and motion priors. The pillar of our method is a new space-time feature loss derived directly from the model. This loss guides the generation process to preserve the overall motion of the input video while complying with the target object in terms of shape and fine-grained motion traits.

Space-Time Analysis of Diffusion Features

We focus our analysis on features extracted from the intermediate-layer activations of the video model. To gain a better understanding of what the features $\{\boldsymbol{f}(\boldsymbol{x}_t)\}_{t=1}^T$ extracted at each diffusion timestep $t$ encode, we adopt the concept of “feature inversion”. We observe that videos produced by feature inversion nearly reconstruct the original frames, regardless of the random initialization (i.e., different seeds). This suggests that the features encode the original objects' pose, shape, and appearance.
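For illustration, below is a minimal sketch of how such a feature-inversion experiment could be set up: starting from a randomly initialized latent, sampling is guided so that the model's intermediate features match the pre-extracted targets. The return_features hook, the guidance weight, and the diffusers-style scheduler interface are assumptions made for this sketch, not the paper's exact procedure.

    import torch

    def feature_inversion(model, scheduler, target_feats, shape, guidance_weight=100.0, seed=0):
        # target_feats: dict mapping diffusion timestep t -> features f(x_t) saved from the
        # original video. model(x, t, return_features=True) is a hypothetical hook returning
        # (noise_pred, features); the scheduler is assumed to follow a diffusers-style
        # step(noise_pred, t, x).prev_sample interface.
        generator = torch.Generator().manual_seed(seed)   # different seeds -> different initializations
        x = torch.randn(shape, generator=generator)
        for t in scheduler.timesteps:
            x = x.detach().requires_grad_(True)
            noise_pred, feats = model(x, t, return_features=True)
            loss = torch.nn.functional.mse_loss(feats, target_feats[int(t)])
            grad = torch.autograd.grad(loss, x)[0]         # pull the features toward the targets
            x = scheduler.step(noise_pred, t, x).prev_sample - guidance_weight * grad
        return x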
To reduce dependency on pixel-level information and enhance robustness to variations in appearance and shape, we introduce a new feature descriptor termed Spatial Marginal Mean (SMM):

$$\texttt{SMM}[\boldsymbol{f}(\boldsymbol{x}_t)] = \frac{1}{M\cdot N} \sum_{i=1}^M \sum_{j=1}^N \boldsymbol{f}(\boldsymbol{x}_t)_{i,j}$$

This descriptor is obtained by averaging the space-time features along the spatial dimensions. Despite collapsing the spatial dimensions, the SMM features retain information about objects' pose and semantic layout, showcasing robustness to appearance and shape variations.
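Concretely, for space-time features of shape F x M x N x D, the SMM is simply a mean over the two spatial axes. The following is a minimal, runnable illustration; the tensor shapes are placeholders, not the model's actual dimensions.

    import torch

    def spatial_marginal_mean(feats: torch.Tensor) -> torch.Tensor:
        # feats: [F, M, N, D] (frames x height x width x channels)
        # returns: [F, D] -- one descriptor per frame, spatial dimensions collapsed
        return feats.mean(dim=(1, 2))

    # toy usage with random tensors standing in for diffusion activations
    f = torch.randn(16, 24, 40, 1280)      # illustrative shapes only
    smm = spatial_marginal_mean(f)         # -> torch.Size([16, 1280])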
The following videos demonstrate sample results for feature inversion using the original features and the SMM features. Each row represents a different random starting point.

[Video grid: columns show the original video, feature inversion with the space-time feature loss, and feature inversion with the SMM feature loss; rows correspond to Seed 1 and Seed 2.]

Method

Given an input video, we apply DDIM inversion and extract space-time features $\boldsymbol{f}\in \mathbb{R}^{F\times M \times N \times D}$ from intermediate layer activations. We obtain our Spatial Marginal Mean (SMM) feature $\texttt{SMM}[\boldsymbol{f}] \in \mathbb{R}^{F \times D}$ by computing the mean over the spatial dimensions. We observed that directly optimizing for the SMM features often prevents the output from deviating from the original appearance. To circumvent this problem, we propose an objective function that aims to preserve the pairwise differences of the SMM features, rather than their exact values, and use it as guidance during sampling. See Sec. 4 of the paper for more details.
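One plausible reading of this objective, assuming SMM features of shape [F, D] and pairwise differences taken across frames, is sketched below; the gradient of this loss would be injected into the sampler in the same guidance fashion as in the inversion sketch above. This is an illustration, not the paper's exact formulation (see Sec. 4 of the paper).

    import torch

    def smm_pairwise_diff_loss(smm_orig: torch.Tensor, smm_gen: torch.Tensor) -> torch.Tensor:
        # smm_orig, smm_gen: [F, D] SMM descriptors of the original video and of the
        # current generation. For every pair of frames (i, j), compare the difference
        # vectors smm[i] - smm[j] between the two videos instead of the raw features.
        diff_orig = smm_orig[:, None, :] - smm_orig[None, :, :]   # [F, F, D]
        diff_gen = smm_gen[:, None, :] - smm_gen[None, :, :]      # [F, F, D]
        return torch.nn.functional.mse_loss(diff_gen, diff_orig)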

[Figure: method pipeline]

Measuring Motion Fidelity

We aim to assess how faithfully our results preserve the original motion. Since our task involves structural deviations, there is no pixel-wise alignment between the original and output videos. Consequently, traditional metrics such as comparing optical-flow fields are unsuitable for our use case. We thus introduce a new metric based on the similarity between unaligned long trajectories. For two sets of trajectories $\mathcal{T}$ and $\tilde{\mathcal{T}}$, with $n=|\mathcal{T}|$ and $m=|\tilde{\mathcal{T}}|$, extracted from the original and output videos, respectively, the metric is defined as: $$\frac{1}{m}\sum_{\tilde{\tau}\in \tilde{\mathcal{T}}} \underset{\tau \in \mathcal{T}}{\text{max}} \ \textbf{corr}(\tau,\tilde{\tau}) + \frac{1}{n}\sum_{\tau\in \mathcal{T}} \underset{\tilde{\tau} \in \tilde{\mathcal{T}}}{\text{max}} \ \textbf{corr}(\tau,\tilde{\tau})$$ where the correlation $\textbf{corr}(\tau,\tilde{\tau})$ between two tracklets is computed from their per-frame velocities $(v_k^x, v_k^y)$ and $(\tilde{v}_k^x, \tilde{v}_k^y)$, similarly to [1]: $$\textbf{corr}(\tau,\tilde{\tau}) = \dfrac{1}{F}\sum_{k=1}^F\dfrac{v_k^x \cdot \tilde{v}_k^x + v_k^y \cdot \tilde{v}_k^y}{\sqrt{(v_k^x)^2 + (v_k^y)^2} \cdot \sqrt{(\tilde{v}_k^x)^2 + (\tilde{v}_k^y)^2}}$$ Our method outperforms the baselines, achieving both high fidelity to the target text prompt (CLIP text-similarity score) and to the original motion (Motion-Fidelity score).
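A minimal, runnable sketch of this metric follows, assuming trajectories are given as per-frame (x, y) point locations and taking velocities as per-frame displacements (a slight simplification of the 1/F normalization above).

    import torch

    def tracklet_correlation(traj_a: torch.Tensor, traj_b: torch.Tensor) -> torch.Tensor:
        # traj_a, traj_b: [F, 2] point locations over F frames.
        # Mean cosine similarity between the per-frame velocity vectors of the two tracklets.
        v_a = traj_a[1:] - traj_a[:-1]
        v_b = traj_b[1:] - traj_b[:-1]
        return torch.nn.functional.cosine_similarity(v_a, v_b, dim=-1, eps=1e-8).mean()

    def motion_fidelity_score(trajs_orig: torch.Tensor, trajs_out: torch.Tensor) -> torch.Tensor:
        # trajs_orig: [n, F, 2], trajs_out: [m, F, 2] -- unaligned trajectory sets.
        corr = torch.stack([
            torch.stack([tracklet_correlation(t, t_out) for t in trajs_orig])
            for t_out in trajs_out
        ])                                                       # [m, n]
        # best-matching correlation in each direction, averaged over the respective set
        return corr.max(dim=1).values.mean() + corr.max(dim=0).values.mean()

    # toy usage: random-walk trajectories standing in for tracked points
    score = motion_fidelity_score(torch.randn(50, 16, 2).cumsum(1),
                                  torch.randn(40, 16, 2).cumsum(1))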

[Figure: Motion-Fidelity score vs. CLIP text-similarity score, comparing our method against the baselines]

BibTeX

@article{yatim2023spacetime,
        title   = {Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer},
        author  = {Yatim, Danah and Fridman, Rafail and Bar-Tal, Omer and Kasten, Yoni and Dekel, Tali},
        journal = {arXiv preprint arXiv:2311.17009},
        year    = {2023}
}
[1] Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, and Edward H. Adelson. Motion magnification. ACM Transactions on Graphics (TOG), 24(3):519–526, 2005.