# Core Nodes

## Image

1. **Pad Image for Outpainting**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2Fwp3CktgwAj7UrQu6HhbN%2F8ecd50e0-b087-472f-924a-61e7696e8676.png?alt=media&#x26;token=173eb626-970a-43db-9df4-f18bf1736e9b" alt="Core Nodes - Pad image for outpainting" width="563"><figcaption></figcaption></figure>

> **Pad and extend the image for outpainting. The canvas is first enlarged, and the newly added area is drawn as a mask. It is recommended to use VAE Encode (for Inpainting) downstream so that the original image remains unchanged.**

Parameters:

left, top, right, bottom: the amount of padding to add on each side

feathering: the degree of feathering applied along the original image's edges
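The padding-and-mask step can be sketched with NumPy; the function name and the linear feathering falloff below are illustrative, not ComfyUI's actual implementation:

```python
import numpy as np

def pad_for_outpaint(img, left, top, right, bottom, feathering=0):
    """Illustrative sketch: pad an H x W x C float image with gray pixels and
    build the outpainting mask (1 = new area to paint, 0 = original image)."""
    h, w, c = img.shape
    padded = np.full((h + top + bottom, w + left + right, c), 0.5, dtype=np.float32)
    padded[top:top + h, left:left + w] = img

    mask = np.ones(padded.shape[:2], dtype=np.float32)
    inner = np.zeros((h, w), dtype=np.float32)
    if feathering > 0:
        # Linear ramp inside the original image toward each padded edge
        ys, xs = np.mgrid[0:h, 0:w]
        d = np.full((h, w), np.inf)
        if top > 0:
            d = np.minimum(d, ys)
        if bottom > 0:
            d = np.minimum(d, h - 1 - ys)
        if left > 0:
            d = np.minimum(d, xs)
        if right > 0:
            d = np.minimum(d, w - 1 - xs)
        inner = np.clip(1.0 - d / feathering, 0.0, 1.0)
    mask[top:top + h, left:left + w] = inner
    return padded, mask
```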

2. **Save Image**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FXLgVDpyFgPFftqitftux%2Fsave_image.png?alt=media&#x26;token=5bca3c55-36b7-45e8-9bbb-45c4196582d7" alt="Core Nodes - save image"><figcaption></figcaption></figure>

3. **Load Image**
4. **ImageBlur**

> **Add a Blur Effect to the Image**

Parameters:

sigma: The smaller the value, the more concentrated the blur is around the center pixel.
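The effect of sigma can be seen in the 1-D Gaussian kernel a blur is built from; a sketch, not the node's exact code:

```python
import numpy as np

def gaussian_kernel1d(radius, sigma):
    # Smaller sigma concentrates the weight on the center tap,
    # so less of the neighborhood bleeds into each pixel.
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```

Convolving the rows and then the columns of the image with this kernel produces the 2-D blur.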

5. **Image Blend**

> **You can blend two images together using transparency.**
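In its default "normal" mode the blend is a plain linear interpolation; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def image_blend(image1, image2, blend_factor):
    # blend_factor 0.0 keeps image1; 1.0 keeps image2.
    return image1 * (1.0 - blend_factor) + image2 * blend_factor
```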

6. **Image Quantize**

> **Reduce the number of colors in the image**

Parameters:

**colors:** Quantize the number of colors in the image. When set to 1, the image will have only one color.

**dither:** Whether to use dithering to make the quantized image appear smoother.
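A simplified picture of what quantization does, using uniform grayscale levels (the node itself builds an optimized palette and can additionally dither; this sketch does neither):

```python
import numpy as np

def quantize_uniform(img, colors):
    # Snap each value in [0, 1] to the nearest of `colors` evenly spaced levels.
    if colors <= 1:
        return np.zeros_like(img)  # a single color collapses the image
    levels = np.round(img * (colors - 1))
    return levels / (colors - 1)
```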

7. **Image Sharpen**

Parameters:

sigma: The smaller the value, the more concentrated the sharpening is around the center pixel.
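Sharpening is typically unsharp masking: subtract a blurred copy and add the difference back. A 1-D sketch with a 3-tap box blur standing in for the Gaussian (a smaller sigma would localize the effect further):

```python
import numpy as np

def unsharp_1d(signal, alpha=1.0):
    # Blur, take the detail (signal minus blur), and add it back scaled by alpha.
    blurred = np.convolve(signal, np.ones(3) / 3.0, mode="same")
    return signal + alpha * (signal - blurred)
```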

8. **Invert Image**

> **Invert the colors of the image**

9. **Upscaling**

9.1 **Upscale Image (Using Model)**

9.2 **Upscale Image**

> **The Upscale Image node can be used to resize pixel images.**

Parameters:

upscale\_method: The interpolation method used to fill in new pixels.

width: The adjusted width of the image

height: The adjusted height of the image

crop: Whether to crop the image
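Nearest-neighbor, the simplest of the upscale\_method options, can be sketched as index replication:

```python
import numpy as np

def upscale_nearest(img, width, height):
    # Map each output pixel back to its nearest source pixel.
    h, w = img.shape[:2]
    ys = np.arange(height) * h // height
    xs = np.arange(width) * w // width
    return img[ys][:, xs]
```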

10. **Preview Image**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2F84pDfmdsEMkPSmEttpuT%2F%25E9%25A2%2584%25E8%25A7%2588%25E5%259B%25BE%25E7%2589%2587.png?alt=media&#x26;token=754497b5-e29f-4be5-b10e-56937ee3842b" alt="Core Nodes - Preview image"><figcaption></figcaption></figure>

## Loaders

1. **Load CLIP Vision**

> **Load a CLIP Vision model, which encodes images into embeddings that can be converted into conditioning inputs for the sampler, so that new, similar images can be generated from them. Multiple nodes can be used together. Well suited to transferring concepts and abstract qualities; used in combination with CLIP Vision Encode.**

2. **Load CLIP**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FiY8B2KRmDGQPpPlv1YE2%2Fload_clip1.png?alt=media&#x26;token=bff776f1-0ea3-42be-9f55-e7bb16cb5a1f" alt="Loaders - Load CLIP"><figcaption></figcaption></figure>

> **The Load CLIP node can be used to load a specific CLIP model. CLIP models are used to encode the text prompts that guide the diffusion process.**

<mark style="color:red;">\*</mark>Conditional diffusion models are trained with a specific CLIP model; using a different model from the one it was trained with is unlikely to produce good images. The Load Checkpoint node automatically loads the correct CLIP model.

3. **unCLIP Checkpoint Loader**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FutgKhOuM8HlTOjQtt89Z%2Funclip.png?alt=media&#x26;token=c50440f5-0009-4b64-8232-ccc89ca70ddb" alt="Loaders - unCLIP Checkpoint Loader"><figcaption></figcaption></figure>

> **The unCLIP Checkpoint Loader node can be used to load a diffusion model specifically made to work with unCLIP. unCLIP diffusion models denoise latents conditioned not only on the provided text prompt, but also on provided images. This node also provides the appropriate VAE, CLIP, and CLIP Vision models.**

<mark style="color:red;">\*</mark>Even though this node can be used to load any diffusion model, not all diffusion models are compatible with unCLIP.

4. **Load ControlNet Model**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2F8SziiAXo6ZwOSgjMhbNa%2F%25E6%258E%25A7%25E5%2588%25B6%25E7%25BD%2591%25E6%25A8%25A1%25E5%259E%258B.png?alt=media&#x26;token=823e6029-4dbf-4839-8032-0e61e43454e9" alt="Loaders - Load ControInet Model" width="563"><figcaption></figcaption></figure>

> **The Load ControlNet Model node can be used to load a ControlNet model; it is used in conjunction with Apply ControlNet.**

5. **Load LoRA**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FW1TYjlhj7zIiEeiR1K5s%2Fload_lora.png?alt=media&#x26;token=6520a677-5eb9-4947-aefd-87d77533e744" alt="Loaders - Load LoRA"><figcaption></figcaption></figure>

6. **Load VAE**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FyulQlPO2SZz0S9RDa8uv%2Fae.png?alt=media&#x26;token=25c0be89-c0c1-4acc-b99d-ae3845c391de" alt="Loaders - Load VAE"><figcaption></figcaption></figure>

7. **Load Upscale Model**
8. **Load Checkpoint**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FXObKqBQLp75sGchVIgXq%2F%25E5%258A%25A0%25E8%25BD%25BD%25E5%25A4%25A7%25E6%25A8%25A1%25E5%259E%258B.png?alt=media&#x26;token=9daa4d6c-de9e-449a-8941-ffe999425112" alt="Loaders - Load Checkpoint"><figcaption></figcaption></figure>

9. **Load Style Model**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FdXFGfPg7G3EE6gjqmdDE%2F%25E5%258A%25A0%25E8%25BD%25BD%25E9%25A3%258E%25E6%25A0%25BC%25E6%25A8%25A1%25E5%259E%258B.png?alt=media&#x26;token=0d23a7f8-b307-4fed-8cac-26750e5fed69" alt="Loaders - Load Style Model"><figcaption></figcaption></figure>

> **The Load Style Model node can be used to load a Style model. Style models provide a diffusion model with a visual hint as to the style the denoised latent should take.**

<mark style="color:red;">\*</mark>Only T2IAdaptor style models are currently supported

10. **Hypernetwork Loader**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FOBDa1BOJJaCpYRocjTPb%2F%25E7%25BD%2591%25E7%25BB%259C%25E5%258A%25A0%25E8%25BD%25BD%25E8%258A%2582%25E7%2582%25B9.png?alt=media&#x26;token=b62812e9-7e53-49ee-9a27-c97a62bbda5e" alt="Loaders - Hypernetwork Loader"><figcaption></figcaption></figure>

> **The Hypernetwork Loader node can be used to load a hypernetwork. Similar to LoRAs, hypernetworks modify the diffusion model, altering the way in which latents are denoised. Typical use cases include adding the ability to generate in certain styles, or to better generate certain subjects or actions. Multiple hypernetworks can even be chained together to further modify the model.**

## **Conditioning**

1. **Apply ControlNet**

> **Apply a ControlNet model to the conditioning; multiple Apply ControlNet nodes can be chained together.**

Parameters:

strength: The higher the value, the stronger the constraint on the image.

<mark style="background-color:red;">\*The image fed to ControlNet must be the matching preprocessed image: a Canny ControlNet, for example, expects a Canny edge map. The corresponding preprocessor node therefore needs to be added between the original image and the ControlNet.</mark>

2. **CLIP Text Encode (Prompt)**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2F0WZg6QF1v0QlMzqPEBrz%2F%25E6%2596%2587%25E6%259C%25AC%25E7%25BC%2596%25E8%25BE%2591.png?alt=media&#x26;token=3b5f7f4a-26df-4746-b433-c8c4ae06f440" alt="Conditioning - Input text prompts" width="563"><figcaption></figcaption></figure>

> **Input text prompts, including positive and negative prompts.**

3. **CLIP Vision Encode**

> **Encode an image with a CLIP Vision model into an embedding that can be converted into conditioning inputs for the sampler, so that new, similar images can be generated from it. Multiple nodes can be used together. Well suited to transferring concepts and abstract qualities; used in conjunction with Load CLIP Vision.**

4. **CLIP Set Last Layer**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FEIuT11jnficpV5b9nIRZ%2F%25E8%25B7%25B3%25E8%25BF%2587%25E5%25B1%2582%25E7%25BA%25A7.png?alt=media&#x26;token=9cf366ba-caa2-4525-96f7-e6c37da18fcb" alt="Conditioning - CLIP Set Last Layer" width="468"><figcaption></figcaption></figure>

> **CLIP Skip; it is generally set to -2.**

5. **GLIGEN Textbox Apply**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FnUIgAp3ROswPxFmRaIwv%2F%25E6%2596%2587%25E6%259C%25AC%25E6%25A1%2586%25E5%25BA%2594%25E7%2594%25A8%25E8%258A%2582%25E7%2582%25B9.png?alt=media&#x26;token=52f0d9a0-60ee-4306-beac-6787c556c1b6" alt="Conditioning - GLIGEN Textbox Apply"><figcaption></figcaption></figure>

Guides a prompt to generate within a specified region of the image.

<mark style="background-color:red;">\*The origin of the coordinate system in ComfyUI is located at the top left corner.</mark>

6. **unCLIP Conditioning**

> **The images encoded through the CLIP vision model provide additional visual guidance for the unCLIP model. This node can be chained to provide multiple images as guidance.**

7. **Conditioning Average**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2Ffum0xSO7VM1x73j09ODa%2F%25E5%25B9%25B3%25E5%259D%2587.png?alt=media&#x26;token=b5aa4486-215f-494e-84f5-df41285c95c9" alt="Conditioning - Conditioning Average"><figcaption></figcaption></figure>

> **Blend two pieces of information based on their strengths. When conditioning\_to\_strength is set to 1, diffusion will only be influenced by conditioning\_to. When conditioning\_to\_strength is set to 0, image diffusion will only be influenced by conditioning\_from.**
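The described behavior is a linear interpolation between the two conditioning tensors; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def conditioning_average(cond_to, cond_from, conditioning_to_strength):
    # strength 1.0 -> only cond_to; strength 0.0 -> only cond_from.
    s = conditioning_to_strength
    return cond_to * s + cond_from * (1.0 - s)
```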

8. **Apply Style Model**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FRBWOhQxZ1UGCeoFa8e7e%2F%25E5%25BA%2594%25E7%2594%25A8%25E6%25A0%25B7%25E5%25BC%258F%25E6%25A8%25A1%25E5%259E%258B%25E8%258A%2582%25E7%2582%25B9.png?alt=media&#x26;token=8f8d7eb6-1cfc-42a5-b862-b77358a5c420" alt="Conditioning - Apply Style Model"><figcaption></figcaption></figure>

> **Can be used to provide additional visual guidance for the diffusion model, especially regarding the style of the generated images**

9. **Conditioning (Combine)**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FIY4oM2gLBSjzVvf9ODgl%2F%25E7%25BB%2593%25E5%2590%2588.png?alt=media&#x26;token=ed8dee35-1e8a-431a-a049-6c9e5ee707b0" alt="Conditioning - Combine"><figcaption></figcaption></figure>

> **Combine two pieces of conditioning information.**

10. **Conditioning (Set Area)**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FnmoDduhiZS7AnBFn7puR%2F%25E5%2588%2586%25E5%258C%25BA%25E5%259F%259F.png?alt=media&#x26;token=de396ecd-d32a-465e-8a47-66cee1616ab7" alt="Conditioning - Set Area"><figcaption></figcaption></figure>

> **Conditioning (Set Area) can be used to confine the conditioning's effect to a specified area of the image. Used together with Conditioning (Combine), it allows better control over the composition of the final image.**

Parameters:

width: The width of the control region

height: The height of the control region

x: The x-coordinate of the origin of the control region

y: The y-coordinate of the origin of the control region

strength: The strength of the conditional information

\*<mark style="background-color:red;">The origin of the coordinate system in ComfyUI is located at the top left corner.</mark>

> **As shown in the figure: set the left side to "cat" and the right side to "dog".**
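Conceptually, the node attaches the region and strength to the conditioning; coordinates are stored in latent units (one latent cell covers 8 pixels), with the origin at the top left. A hedged sketch of that bookkeeping (the dict layout is illustrative, not ComfyUI's exact internal format):

```python
def set_area(cond, width, height, x, y, strength):
    # All pixel values are divided by 8 to convert them to latent units.
    cond = dict(cond)
    cond["area"] = (height // 8, width // 8, y // 8, x // 8)
    cond["strength"] = strength
    return cond
```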

11. **Conditioning (Set Mask)**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FAvBAZvDAOc3ZBPoOsJBS%2F%25E8%25AE%25BE%25E7%25BD%25AE%25E9%2581%25AE%25E7%25BD%25A9%25E8%258A%2582%25E7%2582%25B9.png?alt=media&#x26;token=e59c6b34-7641-4a92-87e5-2fa5fef502d0" alt="Conditioning - Set Mask"><figcaption></figcaption></figure>

> **Conditioning (Set Mask) can be used to confine an adjustment within a specified mask. Used together with the Conditioning (Combine) node, it allows for better control over the composition of the final image.**

## Latent

1. **VAE Encode (for Inpainting)**

> **Used for inpainting (partial repainting); right-click the loaded image and choose Open in MaskEditor to paint the area to be repainted.**

2. **Set Latent Noise Mask**

> **The second method for partial repainting involves first encoding the image through a VAE encoder to transform it into content recognizable in latent space. Then, regenerate the masked part in the latent space.**

> **Compared to the VAE Encode (for Inpainting) method, this approach better understands the content that needs to be regenerated, because it still references the original image under the mask, so the probability of generating incorrect images is lower.**

3. **Rotate Latent**

> **Rotate the latent image clockwise.**

4. **Flip Latent**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2F4dnNmEJ6ivLe3cWxhWB7%2Fflip_latent.png?alt=media&#x26;token=22c08d63-554f-419b-8763-5641e40b785d" alt="Latent - Flip Latent"><figcaption></figcaption></figure>

> **Flip the image horizontally or vertically.**

5. **Crop Latent**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FKN7MjC6qrUuPVSFpi7Rc%2F%25E8%25A3%2581%25E5%2589%25AA%25E5%259B%25BE%25E7%2589%2587.png?alt=media&#x26;token=3a38594d-04b1-4396-88da-79d9fe0678e9" alt="Latent - Crop Latent"><figcaption></figcaption></figure>

> **Used to crop the image into a new shape.**

6. **VAE Encode**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FhHBdPdw2Ewywp82U63AP%2F%25E7%25BC%2596%25E7%25A0%2581.png?alt=media&#x26;token=6e45e438-8041-41b8-b2d8-459c4be958cb" alt="Latent - VAE Encode"><figcaption></figcaption></figure>

7. **VAE Decode**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FMYmW9zPcbPqb3UsJ5V1o%2F%25E8%25A7%25A3%25E7%25A0%2581.png?alt=media&#x26;token=cebb0d0b-3225-4ca7-81f8-ad3fec9b82f1" alt="Latent - VAE Decode"><figcaption></figcaption></figure>

8. **Latent From Batch**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2Fe9xusxbm0EyUF0wiZEva%2F%25E6%2589%25B9%25E6%25AC%25A1%25E4%25B8%25AD%25E6%258F%2590%25E5%258F%2596.png?alt=media&#x26;token=d9a13239-4092-43e9-ad11-54229d226e9f" alt="Latent - Latent From Batch"><figcaption></figcaption></figure>

> **The Latent From Batch node can be used to select a latent image, or a contiguous range of latent images, from a batch. This is useful in workflows that need to isolate specific latent images.**

Parameters:

batch\_index: The index of the first latent image to be selected.

length: The number of latent images to retrieve.
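In effect this is a slice along the batch dimension; a minimal sketch:

```python
import numpy as np

def latent_from_batch(samples, batch_index, length):
    # Take `length` latents starting at `batch_index`.
    return samples[batch_index:batch_index + length]
```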

9. **Repeat Latent Batch**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FO73vFe4ceM1uBSUtMB5e%2F%25E9%2587%258D%25E5%25A4%258D%25E5%2588%2586%25E6%2589%25B9%25E5%259B%25BE%25E5%2583%258F.png?alt=media&#x26;token=fcf219c2-c817-4ef6-9b5c-41cf7541ba8d" alt="Latent - Repeat Latent Batch"><figcaption></figcaption></figure>

> **Repeat a batch of images, useful for creating multiple variations of an image in an IMG2IMG workflow.**

Parameters:

amount: The number of repetitions.
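A sketch of the repetition along the batch dimension:

```python
import numpy as np

def repeat_latent_batch(samples, amount):
    # Concatenate `amount` copies of the batch.
    return np.concatenate([samples] * amount, axis=0)
```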

10. **Rebatch Latents**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FNpHaUWQWMt3Q1YUvoBom%2F%25E9%2587%258D%25E6%2596%25B0%25E6%2589%25B9%25E6%25AC%25A1.png?alt=media&#x26;token=ec696ca7-b0f2-4371-88bb-45d080e6d14b" alt="Latent - Rebatch Latents"><figcaption></figcaption></figure>

> **Can be used to split or merge batches of latent space images.**

11. **Upscale Latent**

> **Resize latent-space images, filling in new values by interpolation.**

Parameters:

upscale\_method: The method of pixel filling.

width: The width of the adjusted latent space image.

height: The height of the adjusted latent space image.

crop: Indicates whether the image is to be cropped.

<mark style="background-color:red;">\*Upscaling in latent space may degrade the image when it is decoded through the VAE. A second sampling pass with KSampler can be used to repair it.</mark>

12. **Latent Composite**

> **Overlay one image onto another.**

Parameters:

x: The x-coordinate of the overlay position of the upper layer.

y: The y-coordinate of the overlay position of the upper layer.

feather: Indicates the degree of feathering at the edges.

<mark style="background-color:red;">\*The image needs to be encoded (VAE Encode) into latent space.</mark>
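A sketch of the overlay, with pixel coordinates converted to latent units (8 pixels per latent cell); the function name is illustrative:

```python
import numpy as np

def latent_composite(destination, source, x, y):
    # Paste `source` over a copy of `destination` at (x, y), clipped to bounds.
    out = destination.copy()
    x, y = x // 8, y // 8
    h, w = source.shape[-2:]
    out[..., y:y + h, x:x + w] = source[..., :out.shape[-2] - y, :out.shape[-1] - x]
    return out
```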

13. **Latent Composite Masked**

> **Overlay an image with a mask onto another, only overlaying the masked part.**

input:

destination: The underlying latent space image.

source: The overlaying latent space image.

Parameters:

x: The x-coordinate of the overlay region.

y: The y-coordinate of the overlay region.

resize\_source: Indicates whether to resize the source to match the destination.
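The masked overlay blends the two latents per cell using the mask as a weight; a sketch (coordinates here are already latent units):

```python
import numpy as np

def composite_masked(destination, source, x, y, mask):
    # Where mask == 1 the source replaces the destination; where mask == 0
    # the destination shows through unchanged.
    out = destination.copy()
    h, w = source.shape[-2:]
    region = out[..., y:y + h, x:x + w]
    out[..., y:y + h, x:x + w] = source * mask + region * (1.0 - mask)
    return out
```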

14. **Empty Latent Image**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FWNNCEKLUl4gNHMD2PaAP%2F%25E7%25A9%25BA%25E6%25BD%259C%25E5%259C%25A8%25E5%259B%25BE%25E5%2583%258F.png?alt=media&#x26;token=69625876-8105-4ed8-964d-380039efcd21" alt="Latent - Empty Latent Image"><figcaption></figcaption></figure>

> **The Empty Latent Image can be used to create a set of new empty latent images. These latent images can then be used in workflows such as Text2Img by adding noise and denoising to them using sampling nodes.**
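For Stable Diffusion the latent has 4 channels and is 8x smaller than the pixel image on each side, so the node essentially allocates a block of zeros:

```python
import numpy as np

def empty_latent(width, height, batch_size=1):
    # batch x channels x latent-height x latent-width, all zeros.
    return np.zeros((batch_size, 4, height // 8, width // 8), dtype=np.float32)
```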

## Mask

1. **Load Image As Mask**
2. **Invert Mask**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2Fc9Ez9IbGbv5Pgr7jZCnq%2F%25E5%258F%258D%25E8%25BD%25AC%25E9%2581%25AE%25E7%25BD%25A9.png?alt=media&#x26;token=4edd4146-fc0c-4c1c-8c38-73c6c45ca7f9" alt="Mask - Invert Mask"><figcaption></figcaption></figure>

3. **Solid Mask**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FhwyD45tiRt79O9nUP9yV%2F%25E9%2581%25AE%25E7%25BD%25A9%201.png?alt=media&#x26;token=3e3a5265-0e01-4ad0-8358-6f0acec3d91f" alt="Mask - Solid Mask" width="563"><figcaption></figcaption></figure>

> **Creates a uniform mask with a single value; it acts as a canvas for generating images and can be combined with Mask Composite.**

4. **Convert Mask To Image**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FqcNdzSMpZxAH70YLf2rC%2F%25E9%2581%25AE%25E7%25BD%25A9%25E8%25BD%25AC%25E5%259B%25BE%25E5%2583%258F.png?alt=media&#x26;token=aa137e53-6ab2-472d-b51f-6a1d63221346" alt="Mask - Convert Mask To Image"><figcaption></figcaption></figure>

5. **Convert Image To Mask**

> **Convert a channel of the image into a mask.**

6. **Feather Mask**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FusJt0HqG6pDkSvdnTGm9%2F%25E7%25BE%25BD%25E5%258C%2596%25E8%2592%2599%25E7%2589%2588.png?alt=media&#x26;token=15b79f14-effc-4675-89db-bd144627fc7c" alt="Mask - Feather Mask"><figcaption></figcaption></figure>

> **Apply feathering to the mask.**

7. **Crop Mask**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2Fqs6UlNWhNRaz8yaqxjmP%2F%25E8%25A3%2581%25E5%2589%25AA%25E9%2581%25AE%25E7%25BD%25A9.png?alt=media&#x26;token=181fb484-02a6-404c-8247-206f0eef0fa5" alt="Mask - Crop Mask"><figcaption></figcaption></figure>

> **Crop the mask to a new shape.**

8. **Mask Composite**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2F0exglmpzUkRT87qdiSGq%2F%25E9%2581%25AE%25E7%25BD%25A9.png?alt=media&#x26;token=afccdabf-efed-4323-991a-9471522b6f28" alt="Mask - Mask Composite" width="563"><figcaption></figcaption></figure>

> **Paste one mask into another; it is typically connected to Solid Mask nodes. A value of 0 represents black and is not drawn, while a value of 1 represents white and is drawn. The two connected Solid Masks must have different values, otherwise the composite has no visible effect.**

input:

destination (value 1): The mask to paste into; its size determines the final dimensions.

source (value 0): The mask to be pasted.

Parameters:

X,Y: Adjust the position of the source.

operation: When the source is 0, use multiply; when it is 1, use add.
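The multiply/add behavior described above can be sketched directly; a 0-valued source multiplied in erases the region, while a 1-valued source added in fills it:

```python
import numpy as np

def mask_composite(destination, source, x, y, operation):
    # Paste `source` into a copy of `destination` at (x, y) with the chosen op.
    out = destination.copy()
    h, w = source.shape
    region = out[y:y + h, x:x + w]
    if operation == "multiply":
        region = region * source
    elif operation == "add":
        region = np.clip(region + source, 0.0, 1.0)
    out[y:y + h, x:x + w] = region
    return out
```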

## Sampler

1. **KSampler**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FFt7q3deuxHgw9Mqiwuo6%2F%25E9%2587%2587%25E6%25A0%25B7%25E5%2599%25A8.png?alt=media&#x26;token=1e9e35b8-106d-4a36-b839-57bec793d6db" alt="Sampler - KSampler"><figcaption></figcaption></figure>

input:

latent\_image: The latent image to be denoised.

output:

LATENT: The latent image after denoising.

2. **KSampler Advanced**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2FCpio0DZcH0G4xGYFmbl5%2F%25E9%25AB%2598%25E7%25BA%25A7%25E9%2587%2587%25E6%25A0%25B7%25E5%2599%25A8.png?alt=media&#x26;token=79d52b4f-2ba2-4197-8ec5-e607b6a1f463" alt="Sampler - KSampler Advanced"><figcaption></figcaption></figure>

> **Allows manual control over the noise: whether it is added, the steps at which sampling starts and ends, and whether leftover noise is returned.**

## Advanced

1. **Load Checkpoint With Config**

<figure><img src="https://2219884424-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FigAtVLBrlaI8jVruJfC8%2Fuploads%2Fz5251O00m5rl3L5SG3mI%2F%25E5%25B8%25A6%25E9%2585%258D%25E7%25BD%25AE%25E5%25A4%25A7%25E6%25A8%25A1%25E5%259E%258B.png?alt=media&#x26;token=f5d573a2-6889-4e15-855e-5c87616ac658" alt="Advanced - Load Checkpoint With Config"><figcaption></figcaption></figure>

> **Load the diffusion model based on the provided configuration file.**

## Other nodes (Updating)

1. **AIO Aux Preprocessor**

> **Select different preprocessors to generate corresponding images.**
