ComfyUI Info

ComfyUI-Frame-Interpolation

ComfyUI Frame Interpolation is a custom node set for video frame interpolation (VFI) in ComfyUI. It features improved memory management that reduces RAM and VRAM usage, and its VFI nodes accept scheduling multipliers. It provides a range of VFI nodes such as GMFSS Fortuna VFI and RIFE VFI. It can be installed via ComfyUI Manager or the command line, with experimental support for non-CUDA devices. All VFI nodes live in the ComfyUI-Frame-Interpolation/VFI category and require at least 2 frames. Simple and complex workflow examples are provided, and the relevant research is cited for each VFI node.

Fannovel16

v1.0.7
721 stars
GitHub

Description

ComfyUI Frame Interpolation (ComfyUI VFI) (WIP)

A custom node set designed for Video Frame Interpolation within ComfyUI.

UPDATE: Memory management has been enhanced; this extension now consumes less RAM and VRAM than before.

UPDATE 2: VFI nodes now support scheduling multiplier values.

Interpolation scheduling

Testing VFI scheduling
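As a rough illustration of what multiplier scheduling implies for output length, here is a hypothetical sketch (the helper and its one-multiplier-per-gap convention are our assumptions, not the node's documented semantics):

def scheduled_frame_count(multipliers):
    # One multiplier per gap between consecutive input frames; a gap with
    # multiplier m is expanded into m segments, gaining (m - 1) new frames.
    originals = len(multipliers) + 1
    added = sum(m - 1 for m in multipliers)
    return originals + added

print(scheduled_frame_count([2, 2, 4]))  # 4 input frames -> 9 output frames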

Nodes

  • KSampler Gradually Adding More Denoise (efficient)
  • GMFSS Fortuna VFI
  • IFRNet VFI
  • IFUnet VFI
  • M2M VFI
  • RIFE VFI (4.0 - 4.9) (Note that the fast_mode option will have no effect from v4.5+ as contextnet has been removed)
  • FILM VFI
  • Sepconv VFI
  • AMT VFI
  • Make Interpolation State List
  • STMFNet VFI (requires at least 4 frames and can currently only perform 2x interpolation)
  • FLAVR VFI (has the same conditions as STMFNet)

Install

ComfyUI Manager

The incompatibility issue with ComfyUI Manager has now been resolved.

Follow this guide to install this extension: How to use ComfyUI-Manager

Command-line

Windows

Run install.bat.

For Windows users experiencing issues with cupy, please run install.bat instead of install-cupy.py or python install.py.

Linux

Open your shell application and activate the virtual environment if it is used for ComfyUI. Then run:

python install.py

Support for non-CUDA device (experimental)

If you don't have an NVIDIA card, you can try the taichi ops backend powered by Taichi Lang.

On Windows, you can install it by running install.bat. On Linux, run pip install taichi.

Then, change the value of ops_backend from cupy to taichi in config.yaml.
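For reference, the relevant line in config.yaml would then read (assuming ops_backend is a top-level key, as the instruction above implies):

ops_backend: taichi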

If a NotImplementedError occurs, it means a VFI node in the workflow is not supported by Taichi.

Usage

If the installation is successful, all VFI nodes can be found in the category ComfyUI-Frame-Interpolation/VFI. These nodes require an IMAGE containing frames (at least 2, or at least 4 for STMFNet/FLAVR).

For STMFNet and FLAVR, if you only have two or three frames, use the following pipeline: Load Images -> another VFI node (FILM is recommended in this case) with multiplier=4 -> STMFNet VFI/FLAVR VFI.
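A quick arithmetic check of why multiplier=4 suffices here (the helper below is ours, assuming the common convention that a multiplier of m expands each gap into m segments):

def frames_after_vfi(n_frames, multiplier):
    # (n_frames - 1) gaps, each expanded into `multiplier` segments,
    # plus the final input frame.
    return (n_frames - 1) * multiplier + 1

print(frames_after_vfi(2, 4))  # 5 frames -> clears STMFNet/FLAVR's 4-frame minimum
print(frames_after_vfi(3, 4))  # 9 frames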

The clear_cache_after_n_frames option is used to prevent out-of-memory errors. Decreasing this value reduces the risk but also increases the processing time.
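The trade-off behaves roughly like the loop below, a schematic sketch of the pattern rather than the extension's actual implementation (torch.cuda.empty_cache() is the standard PyTorch call for releasing cached VRAM):

import torch

def interpolate_pairs(frames, vfi_model, clear_cache_after_n_frames=10):
    # Insert one in-between frame per consecutive pair, periodically
    # releasing cached VRAM. Freeing the cache costs time, which is why
    # smaller values are safer against out-of-memory errors but slower.
    out = []
    for i in range(len(frames) - 1):
        out.append(frames[i])
        out.append(vfi_model(frames[i], frames[i + 1]))  # interpolated middle frame
        if (i + 1) % clear_cache_after_n_frames == 0 and torch.cuda.is_available():
            torch.cuda.empty_cache()
    out.append(frames[-1])
    return out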

It is recommended to use LoadImages (LoadImagesFromDirectory) from ComfyUI-Advanced-ControlNet and ComfyUI-VideoHelperSuite in conjunction with this ComfyUI-Frame-Interpolation extension.

Example

Simple workflow

Workflow metadata is not embedded. Download the two images, anime0.png and anime1.png, and place them in a folder such as E:\test, as shown in this image. Example Workflow

Complex workflow

This workflow is used with AnimateDiff and can load workflow metadata. Complex Workflow

Credit

We express our sincere gratitude to styler00dollar for creating VSGAN-tensorrt-docker. Approximately 99% of the code in this ComfyUI-Frame-Interpolation repository is derived from it.

Citation for each VFI node:

GMFSS Fortuna

The All-In-One GMFSS: Dedicated for Anime Video Frame Interpolation (GMFSS Fortuna GitHub)

IFRNet

@InProceedings{Kong_2022_CVPR, 
  author = {Kong, Lingtong and Jiang, Boyuan and Luo, Donghao and Chu, Wenqing and Huang, Xiaoming and Tai, Ying and Wang, Chengjie and Yang, Jie}, 
  title = {IFRNet: Intermediate Feature Refine Network for Efficient Frame Interpolation}, 
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, 
  year = {2022}
}

IFUnet

RIFE with IFUNet, FusionNet and RefineNet (IFUnet GitHub)

M2M

@InProceedings{hu2022m2m,
  title={Many-to-many Splatting for Efficient Video Frame Interpolation},
  author={Hu, Ping and Niklaus, Simon and Sclaroff, Stan and Saenko, Kate},
  booktitle={CVPR},
  year={2022}
}

RIFE

@inproceedings{huang2022rife,
  title={Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}

FILM

Frame interpolation in PyTorch

@inproceedings{reda2022film,
 title = {FILM: Frame Interpolation for Large Motion},
 author = {Fitsum Reda and Janne Kontkanen and Eric Tabellion and Deqing Sun and Caroline Pantofaru and Brian Curless},
 booktitle = {European Conference on Computer Vision (ECCV)},
 year = {2022}
}
@misc{film-tf,
  title = {Tensorflow 2 Implementation of "FILM: Frame Interpolation for Large Motion"},
  author = {Fitsum Reda and Janne Kontkanen and Eric Tabellion and Deqing Sun and Caroline Pantofaru and Brian Curless},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/google-research/frame-interpolation}}
}

Sepconv

[1]  @inproceedings{Niklaus_WACV_2021,
         author = {Simon Niklaus and Long Mai and Oliver Wang},
         title = {Revisiting Adaptive Convolutions for Video Frame Interpolation},
         booktitle = {IEEE Winter Conference on Applications of Computer Vision},
         year = {2021}
     }
[2]  @inproceedings{Niklaus_ICCV_2017,
         author = {Simon Niklaus and Long Mai and Feng Liu},
         title = {Video Frame Interpolation via Adaptive Separable Convolution},
         booktitle = {IEEE International Conference on Computer Vision},
         year = {2017}
     }
[3]  @inproceedings{Niklaus_CVPR_2017,
         author = {Simon Niklaus and Long Mai and Feng Liu},
         title = {Video Frame Interpolation via Adaptive Convolution},
         booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
         year = {2017}
     }

AMT

@inproceedings{licvpr23amt,
      title={AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation},
      author={Li, Zhen and Zhu, Zuo-Liang and Han, Ling-Hao and Hou, Qibin and Guo, Chun-Le and Cheng, Ming-Ming},
      booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2023}
   }

ST-MFNet

@InProceedings{Danier_2022_CVPR,
    author    = {Danier, Duolikun and Zhang, Fan and Bull, David},
    title     = {ST-MFNet: A Spatio-Temporal Multi-Flow Network for Frame Interpolation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {3521-3531}
}

FLAVR

@article{kalluri2021flavr,
  title={FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation},
  author={Kalluri, Tarun and Pathak, Deepak and Chandraker, Manmohan and Tran, Du},
  journal={arXiv preprint},
  year={2021}
}
