
ComfyUI ControlNet Aux Preprocessors

Plug-and-play ComfyUI ControlNet Aux node set for creating ControlNet hint images

ControlNet Aux Overview

What are ControlNet Aux Preprocessors?

This is a plug-and-play ControlNet Aux node set designed for ComfyUI to create ControlNet hint images. The code is copied from the corresponding folders of the ControlNet project and linked to 🤗 Hub.

All credit and copyright belong to lllyasviel: https://github.com/lllyasviel

ComfyUI ControlNet Aux Banner

"Anime style, street protest, cyberpunk city, a woman with pink hair and golden eyes (looking at the viewer) holding a sign with neon pink bold text 'ComfyUI ControlNet Aux'"

ControlNet Aux Updates

Visit the updates page to see the latest ControlNet Aux updates.

Updates Page

ControlNet Aux Installation Guide

Install using ComfyUI Manager (Recommended)

Install ComfyUI Manager and follow the steps described therein to install this repository.

ComfyUI Manager

Alternative Method

If you are running on Linux or using a non-admin account on Windows, you need to ensure that /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions.

Note: There is now an install.bat file that you can run to install into the portable version (if detected). Otherwise, it defaults to a system-wide installation, assuming you followed ComfyUI's manual installation steps.

If you cannot run install.bat (e.g., you are a Linux user)

Open CMD/Shell and do the following:

  1. Navigate to your /ComfyUI/custom_nodes/ folder
  2. Run git clone https://github.com/Fannovel16/comfyui_controlnet_aux/
  3. Navigate to your comfyui_controlnet_aux folder
  4. Portable/venv:
    path/to/ComfyUI/python_embeded/python.exe -s -m pip install -r requirements.txt
  5. Using system Python:
    pip install -r requirements.txt
  6. Start ComfyUI

ControlNet Aux Nodes

Please note that this ControlNet Aux repository only provides preprocessors that create hint images (e.g., stick-figure poses, Canny edges).

Note: Except for Inpaint, all ControlNet Aux preprocessors are integrated into the AIO Aux Preprocessor node. That node lets you switch preprocessors quickly, but it does not expose each preprocessor's own threshold parameters; to set thresholds, use the preprocessor's dedicated node directly.
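For illustration, here is a minimal sketch of the difference as API-format workflow entries, expressed as Python dicts. The node class names AIO_Preprocessor and CannyEdgePreprocessor and their input names are assumptions based on this extension; check NODE_CLASS_MAPPINGS in your installed copy before relying on them:

# AIO node: quick preprocessor selection, but no threshold inputs.
aio_node = {
    "class_type": "AIO_Preprocessor",  # assumed class name
    "inputs": {"image": ["1", 0], "preprocessor": "CannyEdgePreprocessor"},
}

# Dedicated Canny node: the thresholds can only be set here.
canny_node = {
    "class_type": "CannyEdgePreprocessor",  # assumed class name
    "inputs": {"image": ["1", 0], "low_threshold": 100, "high_threshold": 200},
}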

Line Extractors

Preprocessor Node       | sd-webui-controlnet/other                        | ControlNet/T2I-Adapter
Binary Lines            | binary                                           | control_scribble
Canny Edge              | canny                                            | control_v11p_sd15_canny, control_canny, t2iadapter_canny
HED Soft-Edge Lines     | hed                                              | control_v11p_sd15_softedge, control_hed
Standard Lineart        | standard_lineart                                 | control_v11p_sd15_lineart
Realistic Lineart       | lineart (or lineart_coarse if coarse is enabled) | control_v11p_sd15_lineart
Anime Lineart           | lineart_anime                                    | control_v11p_sd15s2_lineart_anime
Manga Lineart           | lineart_anime_denoise                            | control_v11p_sd15s2_lineart_anime
M-LSD Lines             | mlsd                                             | control_v11p_sd15_mlsd, control_mlsd
PiDiNet Soft-Edge Lines | pidinet                                          | control_v11p_sd15_softedge, control_scribble
Scribble Lines          | scribble                                         | control_v11p_sd15_scribble, control_scribble
Scribble XDoG Lines     | scribble_xdog                                    | control_v11p_sd15_scribble, control_scribble
Fake Scribble Lines     | scribble_hed                                     | control_v11p_sd15_scribble, control_scribble
TEED Soft-Edge Lines    | teed                                             | controlnet-sd-xl-1.0-softedge-dexined, control_v11p_sd15_softedge (theoretically)
Scribble PiDiNet Lines  | scribble_pidinet                                 | control_v11p_sd15_scribble, control_scribble
AnyLine Lineart         |                                                  | mistoLine_fp16.safetensors, mistoLine_rank256, control_v11p_sd15s2_lineart_anime, control_v11p_sd15_lineart

Normal and Depth Estimators

Preprocessor Node           | sd-webui-controlnet/other | ControlNet/T2I-Adapter
MiDaS Depth Map             | (normal) depth            | control_v11f1p_sd15_depth, control_depth, t2iadapter_depth
LeReS Depth Map             | depth_leres               | control_v11f1p_sd15_depth, control_depth, t2iadapter_depth
Zoe Depth Map               | depth_zoe                 | control_v11f1p_sd15_depth, control_depth, t2iadapter_depth
MiDaS Normal Map            | normal_map                | control_normal
BAE Normal Map              | normal_bae                | control_v11p_sd15_normalbae
MeshGraphormer Hand Refiner | depth_hand_refiner        | control_sd15_inpaint_depth_hand_fp16
Depth Anything              | depth_anything            | Depth-Anything
Zoe Depth Anything          | depth_anything            | Depth-Anything
Normal DSINE                |                           | control_normal, control_v11p_sd15_normalbae
Metric3D Depth              |                           | control_v11f1p_sd15_depth, control_depth, t2iadapter_depth
Metric3D Normal             |                           | control_v11p_sd15_normalbae
Depth Anything V2           |                           | Depth-Anything

Face and Pose Estimators

Preprocessor Node   | sd-webui-controlnet/other | ControlNet/T2I-Adapter
DWPose Estimator    | dw_openpose_full          | control_v11p_sd15_openpose, control_openpose, t2iadapter_openpose
OpenPose Estimator  | openpose (detect_body), openpose_hand (detect_body + detect_hand), openpose_faceonly (detect_face), openpose_full (detect_hand + detect_body + detect_face) | control_v11p_sd15_openpose, control_openpose, t2iadapter_openpose
MediaPipe Face Mesh | mediapipe_face            | controlnet_sd21_laion_face_v2
Animal Estimator    | animal_openpose           | control_sd15_animal_openpose_fp16

Optical Flow Estimators

Preprocessor Node     | sd-webui-controlnet/other | ControlNet/T2I-Adapter
Unimatch Optical Flow |                           | DragNUWA

Semantic Segmentation

Preprocessor Node          | sd-webui-controlnet/other | ControlNet/T2I-Adapter
OneFormer ADE20K Segmentor | oneformer_ade20k          | control_v11p_sd15_seg
OneFormer COCO Segmentor   | oneformer_coco            | control_v11p_sd15_seg
UniFormer Segmentor        | segmentation              | control_sd15_seg, control_v11p_sd15_seg

T2IAdapter Only

Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter
Color Palette     | color                     | t2iadapter_color
Content Shuffle   | shuffle                   | t2iadapter_style

Recoloring

Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter
Image Luminance   | recolor_luminance         | ioclab_sd15_recolor, sai_xl_recolor_256lora, bdsqlsz_controlllite_xl_recolor_luminance
Image Intensity   | recolor_intensity         | may be the same as above

How to get OpenPose format JSON?

User Side

This workflow saves images to ComfyUI's output folder (the same location as output images). If you don't see the Save Pose Keypoints node, please update this extension.

Save Pose Keypoints Example

Developer Side

The OpenPose-format JSON array corresponding to each frame in the IMAGE batch can be obtained from the DWPose and OpenPose nodes using the UI's app.nodeOutputs or the /history API endpoint. AnimalPose's JSON output follows a format similar to OpenPose JSON:

[
    {
        "version": "ap10k",
        "animals": [
            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],
            [[x1, y1, 1], [x2, y2, 1],..., [x17, y17, 1]],
            ...
        ],
        "canvas_height": 512,
        "canvas_width": 768
    },
    ...
]
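
For illustration, a minimal Python sketch that walks this structure; the variable openpose_json_string is hypothetical and stands for the JSON string the node outputs:

import json

# openpose_json_string is hypothetical: it stands in for the node's JSON output.
openpose_json_string = '[{"version": "ap10k", "animals": [[[12, 34, 1], [56, 78, 1]]], "canvas_height": 512, "canvas_width": 768}]'
frames = json.loads(openpose_json_string)
for frame in frames:
    width, height = frame["canvas_width"], frame["canvas_height"]
    for animal in frame["animals"]:
        # Each animal is a list of AP-10K keypoints [x, y, score] (17 per animal in real output)
        for x, y, score in animal:
            print(x / width, y / height, score)  # normalized coordinates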

For extension developers (e.g., Openpose editor):

const poseNodes = app.graph._nodes.filter(node => ["OpenposePreprocessor", "DWPreprocessor", "AnimalPosePreprocessor"].includes(node.type))
for (const poseNode of poseNodes) {
    const openposeResults = JSON.parse(app.nodeOutputs[poseNode.id].openpose_json[0])
    console.log(openposeResults) //Array containing Openpose JSON for each frame
}

For API users:

Javascript
import fetch from "node-fetch" //Remember to add "type": "module" in "package.json"
async function main() {
    const promptId = '792c1905-ecfe-41f4-8114-83e6a4a09a9f' //Too lazy, don't want to POST /queue
    let history = await fetch(`http://127.0.0.1:8188/history/${promptId}`).then(re => re.json())
    history = history[promptId]
    const nodeOutputs = Object.values(history.outputs).filter(output => output.openpose_json)
    for (const nodeOutput of nodeOutputs) {
        const openposeResults = JSON.parse(nodeOutput.openpose_json[0])
        console.log(openposeResults) //Array containing Openpose JSON for each frame
    }
}
main()
Python
import json, urllib.request

server_address = "127.0.0.1:8188"
prompt_id = '' #Too lazy, don't want to POST /queue

def get_history(prompt_id):
    with urllib.request.urlopen("http://{}/history/{}".format(server_address, prompt_id)) as response:
        return json.loads(response.read())

history = get_history(prompt_id)[prompt_id]
for node_id in history['outputs']:
    node_output = history['outputs'][node_id]
    if 'openpose_json' in node_output:
        print(json.loads(node_output['openpose_json'][0])) #List containing Openpose JSON for each frame
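
If you don't already have a prompt_id, one way to obtain it is to queue an API-format workflow through ComfyUI's POST /prompt endpoint, which returns the id in its JSON response. A minimal sketch, where workflow is a placeholder for your own API-format prompt graph (exported via "Save (API Format)" in the UI):

import json, urllib.request

def queue_prompt(workflow, server_address="127.0.0.1:8188"):
    # POST the workflow to ComfyUI and read the prompt_id back from the response
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request("http://{}/prompt".format(server_address), data=data)
    with urllib.request.urlopen(req) as response:
        return json.loads(response.read())["prompt_id"]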

ControlNet Aux Examples

ControlNet Aux - A picture is worth a thousand words

Example 1
Example 2

ControlNet Aux FAQ

Why do some nodes not appear after installing the ControlNet Aux repository?

This ControlNet Aux repository has a new mechanism that skips any custom node that fails to import. If you encounter this situation, please open an issue on the Issues tab and attach the command-line log.

DWPose/AnimalPose only uses the CPU, so it's slow. How can I make it use the GPU?

There are two methods to speed up DWPose: using TorchScript checkpoints (.torchscript.pt) or ONNXRuntime (.onnx). The TorchScript method is slightly slower than ONNXRuntime but doesn't require any additional libraries and is still much faster than CPU.

A TorchScript bounding box detector is compatible with an ONNX pose estimator, and vice versa.

TorchScript

Set bbox_detector and pose_estimator according to this image. If your input images are clean, you can try other bounding box detectors ending in .torchscript.pt to reduce bounding box detection time.

TorchScript Example

ONNXRuntime

If onnxruntime is installed successfully and the checkpoint used ends with .onnx, it will replace the default cv2 backend and use the GPU. Note that if you are using an NVIDIA card, this method currently only works on CUDA 11.8 (ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z) unless you compile onnxruntime yourself.

ONNX Example
Choose the onnxruntime build that matches your hardware:
  • NVIDIA CUDA 11.x or below / AMD GPU: onnxruntime-gpu
  • NVIDIA CUDA 12.x: onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
  • DirectML: onnxruntime-directml
  • OpenVINO: onnxruntime-openvino

Note that if this is your first time using ComfyUI, please test whether it runs on your device before proceeding. Then:

  1. Add the chosen onnxruntime package to requirements.txt
  2. Run install.bat or the pip command mentioned in the installation section
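
After installing, you can check whether your onnxruntime build actually sees the GPU by listing its available execution providers (standard onnxruntime API):

import onnxruntime as ort

# A GPU-capable build reports its provider here, e.g. "CUDAExecutionProvider"
# (NVIDIA), "ROCMExecutionProvider" (AMD) or "DmlExecutionProvider" (DirectML).
# "CPUExecutionProvider" alone means inference will fall back to the CPU.
print(ort.get_available_providers())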

ControlNet Aux Preprocessor Resource Files

For more resource files, please refer to the GitHub Repository.