ComfyUI ControlNet Aux Preprocessors
Plug-and-play ComfyUI ControlNet Aux node set for creating ControlNet prompt images
ControlNet Aux Overview
What are ControlNet Aux Preprocessors?
This is a plug-and-play set of ControlNet Aux nodes for ComfyUI that create ControlNet prompt images. The code is copied from the corresponding folders of the ControlNet project and linked to the 🤗 Hub.
All credit and copyright belong to lllyasviel - https://github.com/lllyasviel

"Anime style, street protest, cyberpunk city, a woman with pink hair and golden eyes (looking at the viewer) holding a sign with neon pink bold text 'ComfyUI ControlNet Aux'"
ControlNet Aux Updates
Visit the updates page to see the latest ControlNet Aux updates.
Updates Page →
ControlNet Aux Installation Guide
Install using ComfyUI Manager (Recommended)
Install ComfyUI Manager and follow the steps described therein to install this repository.
ComfyUI Manager →
Alternative Method
If you are running on Linux or using a non-admin account on Windows, you need to ensure that /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions.
Note: There is now an install.bat file that you can run to install into the portable version (if detected). Otherwise, it will default to a system-wide installation, assuming you followed ComfyUI's manual installation steps.
If you cannot run install.bat (e.g., you are a Linux user)
Open CMD/Shell and do the following:
- Navigate to your /ComfyUI/custom_nodes/ folder
- Run git clone https://github.com/Fannovel16/comfyui_controlnet_aux/
- Navigate to your comfyui_controlnet_aux folder
- Portable/venv:
path/to/ComfyUI/python_embeded/python.exe -s -m pip install -r requirements.txt
- Using system Python:
pip install -r requirements.txt
- Start ComfyUI
ControlNet Aux Nodes
Please note that this ControlNet Aux repository only supports preprocessors for creating prompt images (e.g., stick figures, Canny edges, etc.).
Note: Except for Inpaint, all ControlNet Aux preprocessors are integrated into the AIO Aux Preprocessor node. This node lets you quickly select a preprocessor, but it does not expose each preprocessor's own threshold parameters; to adjust thresholds, use the corresponding dedicated node directly.
Line Extractors
Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
---|---|---|
Binary Lines | binary | control_scribble |
Canny Edge | canny | control_v11p_sd15_canny control_canny t2iadapter_canny |
HED Soft-Edge Lines | hed | control_v11p_sd15_softedge control_hed |
Standard Lineart | standard_lineart | control_v11p_sd15_lineart |
Realistic Lineart | lineart (or `lineart_coarse` if `coarse` is enabled) | control_v11p_sd15_lineart |
Anime Lineart | lineart_anime | control_v11p_sd15s2_lineart_anime |
Manga Lineart | lineart_anime_denoise | control_v11p_sd15s2_lineart_anime |
M-LSD Lines | mlsd | control_v11p_sd15_mlsd control_mlsd |
PiDiNet Soft-Edge Lines | pidinet | control_v11p_sd15_softedge control_scribble |
Scribble Lines | scribble | control_v11p_sd15_scribble control_scribble |
Scribble XDoG Lines | scribble_xdog | control_v11p_sd15_scribble control_scribble |
Fake Scribble Lines | scribble_hed | control_v11p_sd15_scribble control_scribble |
TEED Soft-Edge Lines | teed | controlnet-sd-xl-1.0-softedge-dexined control_v11p_sd15_softedge (theoretically) |
Scribble PiDiNet Lines | scribble_pidinet | control_v11p_sd15_scribble control_scribble |
AnyLine Lineart | | mistoLine_fp16.safetensors mistoLine_rank256 control_v11p_sd15s2_lineart_anime control_v11p_sd15_lineart |
Normal and Depth Estimators
Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
---|---|---|
MiDaS Depth Map | (normal) depth | control_v11f1p_sd15_depth control_depth t2iadapter_depth |
LeReS Depth Map | depth_leres | control_v11f1p_sd15_depth control_depth t2iadapter_depth |
Zoe Depth Map | depth_zoe | control_v11f1p_sd15_depth control_depth t2iadapter_depth |
MiDaS Normal Map | normal_map | control_normal |
BAE Normal Map | normal_bae | control_v11p_sd15_normalbae |
MeshGraphormer Hand Refiner | depth_hand_refiner | control_sd15_inpaint_depth_hand_fp16 |
Depth Anything | depth_anything | Depth-Anything |
Zoe Depth Anything | depth_anything | Depth-Anything |
Normal DSINE | | control_normal/control_v11p_sd15_normalbae |
Metric3D Depth | | control_v11f1p_sd15_depth control_depth t2iadapter_depth |
Metric3D Normal | | control_v11p_sd15_normalbae |
Depth Anything V2 | | Depth-Anything |
Face and Pose Estimators
Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
---|---|---|
DWPose Estimator | dw_openpose_full | control_v11p_sd15_openpose control_openpose t2iadapter_openpose |
OpenPose Estimator | openpose (detect_body) openpose_hand (detect_body + detect_hand) openpose_faceonly (detect_face) openpose_full (detect_hand + detect_body + detect_face) | control_v11p_sd15_openpose control_openpose t2iadapter_openpose |
MediaPipe Face Mesh | mediapipe_face | controlnet_sd21_laion_face_v2 |
Animal Estimator | animal_openpose | control_sd15_animal_openpose_fp16 |
Optical Flow Estimators
Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
---|---|---|
Unimatch Optical Flow | | DragNUWA |
Semantic Segmentation
Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
---|---|---|
OneFormer ADE20K Segmentor | oneformer_ade20k | control_v11p_sd15_seg |
OneFormer COCO Segmentor | oneformer_coco | control_v11p_sd15_seg |
UniFormer Segmentor | segmentation | control_sd15_seg control_v11p_sd15_seg |
T2IAdapter Only
Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
---|---|---|
Color Pallete | color | t2iadapter_color |
Content Shuffle | shuffle | t2iadapter_style |
Recoloring
Preprocessor Node | sd-webui-controlnet/other | ControlNet/T2I-Adapter |
---|---|---|
Image Luminance | recolor_luminance | ioclab_sd15_recolor sai_xl_recolor_256lora bdsqlsz_controlllite_xl_recolor_luminance |
Image Intensity | recolor_intensity | May be the same as above |
How to get OpenPose format JSON?
User Side
This workflow saves its output to ComfyUI's output folder (the same location as output images). If you can't find the "Save Pose Keypoints" node, please update this extension.

Developer Side
The OpenPose format JSON array corresponding to each frame in the IMAGE batch can be obtained from DWPose and OpenPose using the UI's app.nodeOutputs or /history API endpoint. AnimalPose's JSON output uses a format similar to OpenPose JSON:
[
  {
    "version": "ap10k",
    "animals": [
      [[x1, y1, 1], [x2, y2, 1], ..., [x17, y17, 1]],
      [[x1, y1, 1], [x2, y2, 1], ..., [x17, y17, 1]],
      ...
    ],
    "canvas_height": 512,
    "canvas_width": 768
  },
  ...
]
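For illustration, here is a minimal Python sketch of how such an array could be consumed; the file name animal_pose.json is hypothetical and only stands in for wherever you obtained the JSON (app.nodeOutputs or the /history endpoint):
import json

# Hypothetical file holding an AnimalPose-style array like the one shown above
with open("animal_pose.json") as f:
    frames = json.load(f)

for frame_index, frame in enumerate(frames):
    animals = frame["animals"]
    print(f"frame {frame_index}: {frame['canvas_width']}x{frame['canvas_height']}, "
          f"{len(animals)} animal(s), format {frame['version']}")
    for animal in animals:
        # Each animal is a list of 17 [x, y, confidence] keypoints (AP-10K skeleton)
        for kp_index, (x, y, confidence) in enumerate(animal):
            print(f"  keypoint {kp_index}: ({x}, {y}) confidence={confidence}")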
For extension developers (e.g., Openpose editor):
const poseNodes = app.graph._nodes.filter(node =>
  ["OpenposePreprocessor", "DWPreprocessor", "AnimalPosePreprocessor"].includes(node.type)
)
for (const poseNode of poseNodes) {
  const openposeResults = JSON.parse(app.nodeOutputs[poseNode.id].openpose_json[0])
  console.log(openposeResults) // Array containing the OpenPose JSON for each frame
}
For API users:
Javascript
import fetch from "node-fetch" // Remember to add "type": "module" to package.json

async function main() {
  const promptId = "792c1905-ecfe-41f4-8114-83e6a4a09a9f" // Too lazy, don't want to POST /queue
  let history = await fetch(`http://127.0.0.1:8188/history/${promptId}`).then(re => re.json())
  history = history[promptId]
  const nodeOutputs = Object.values(history.outputs).filter(output => output.openpose_json)
  for (const nodeOutput of nodeOutputs) {
    const openposeResults = JSON.parse(nodeOutput.openpose_json[0])
    console.log(openposeResults) // Array containing the OpenPose JSON for each frame
  }
}
main()
Python
import json, urllib.request

server_address = "127.0.0.1:8188"
prompt_id = ''  # Too lazy, don't want to POST /queue

def get_history(prompt_id):
    with urllib.request.urlopen("http://{}/history/{}".format(server_address, prompt_id)) as response:
        return json.loads(response.read())

history = get_history(prompt_id)[prompt_id]
for node_id in history['outputs']:
    node_output = history['outputs'][node_id]
    if 'openpose_json' in node_output:
        print(json.loads(node_output['openpose_json'][0]))  # List containing the OpenPose JSON for each frame
ControlNet Aux Examples
ControlNet Aux - A picture is worth a thousand words


ControlNet Aux FAQ
Why do some nodes not appear after installing the ControlNet Aux repository?
This ControlNet Aux repository has a new mechanism that skips any custom node that fails to import. If you run into this, please open an issue on the Issues tab and attach the log from the command line.
DWPose/AnimalPose only uses the CPU, so it's slow. How can I make it use the GPU?
There are two methods to speed up DWPose: using TorchScript checkpoints (.torchscript.pt) or ONNXRuntime (.onnx). The TorchScript method is slightly slower than ONNXRuntime but doesn't require any additional libraries and is still much faster than CPU.
A TorchScript bounding box detector is compatible with an ONNX pose estimator, and vice versa.
TorchScript
Set bbox_detector and pose_estimator according to this image. If the input image is ideal, you can try other bounding box detectors ending with .torchscript.pt to reduce bounding box detection time.

ONNXRuntime
If onnxruntime is installed successfully and the checkpoint used ends with .onnx, it will replace the default cv2 backend and run on the GPU. Note that if you are using an NVIDIA card, this method currently only works with CUDA 11.8 (ComfyUI_windows_portable_nvidia_cu118_or_cpu.7z) unless you compile onnxruntime yourself.

Understand your onnxruntime build:
- NVIDIA CUDA 11.x or below / AMD GPU: onnxruntime-gpu
- NVIDIA CUDA 12.x: onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
- DirectML: onnxruntime-directml
- OpenVINO: onnxruntime-openvino
- Note: if this is your first time using ComfyUI, please test whether it can run on your device before proceeding to the next step.
- Add the matching onnxruntime package to requirements.txt.
- Run install.bat or the pip command mentioned in the installation section.
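A quick way to check which execution providers your installed onnxruntime build actually exposes (and therefore whether the GPU path can be used) is a small sketch like the following:
# Sanity check: list the execution providers available to onnxruntime.
# A working onnxruntime-gpu build should list "CUDAExecutionProvider";
# onnxruntime-directml exposes "DmlExecutionProvider" instead.
import onnxruntime as ort

print(ort.__version__)
print(ort.get_available_providers())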
ControlNet Aux Preprocessor Resource Files
- anime_face_segment
- dwpose
  - bbox_detector
- animal_pose (ap10k)
  - bbox_detector
- face_yolox
- hand_yolox
- leres
- lineart
- oneformer
- openpose
For more resource files, please refer to the GitHub Repository →