# Inpainting

### **Change hair color, hairstyle, chest, abs, clothing, etc.**

Before starting to build nodes, it's helpful to establish a workflow.

For example, if you want to change hair color, you'll need these two steps:

**Step 1:** First, identify the hair. You can use manual painting or automatic recognition. **Step 2:** Partial repaint: modify the recognized area.

<figure><img src="/files/8NA7MMxX8mtYFrZoIM6Z" alt=" ComfyUI Redraw - Change Hair Color Process" width="504"><figcaption></figcaption></figure>

Now, let's start building nodes based on these steps.

<mark style="background-color:red;">**Method 1: segment anything**</mark>

**Step 1: Identify the hair**

I. After uploading the image, add a 'segment anything' node. Drag out from the node to load the two corresponding models, connect the image to the node, and enter the area you want recognized (here, the hair).

> **Add nodes:**&#x20;
>
> **GroundingDinoSAMSegment (segment anything)**: SAMLoader (Impact), GroundingDinoModelLoader (segment anything)&#x20;
>
> **Parameters:**&#x20;
>
> device\_mode: Prefer GPU

II. To improve the accuracy of mask recognition, add another 'segment anything' node to recognize the face, using the same models as before. Then subtract the two masks (mask1 - mask2) to obtain just the hair.

**Add nodes:**

**Bitwise(MASK - MASK)**

<mark style="background-color:yellow;">\*When an area cannot be recognized directly, you can use this mask-subtraction method.</mark>
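Conceptually, the mask subtraction performed by Bitwise(MASK - MASK) is a per-pixel clamped difference. A minimal Python sketch of the idea (using plain nested lists rather than ComfyUI's actual mask tensors; the function name is illustrative, not a ComfyUI API):

```python
def subtract_masks(mask1, mask2):
    """Per-pixel subtraction of two masks, clamped at 0.

    mask1, mask2: 2D lists of floats in [0, 1], same shape.
    Returns mask1 - mask2 with negative results clamped to 0,
    e.g. (whole-head mask) - (face mask) -> hair-only mask.
    """
    return [
        [max(a - b, 0.0) for a, b in zip(row1, row2)]
        for row1, row2 in zip(mask1, mask2)
    ]

# Toy example: a 1x4 "head" strip minus the "face" pixels leaves the "hair" pixel.
head = [[1.0, 1.0, 1.0, 0.0]]
face = [[0.0, 1.0, 1.0, 0.0]]
hair = subtract_masks(head, face)  # [[1.0, 0.0, 0.0, 0.0]]
```

Clamping at 0 matters: anywhere mask2 covers pixels that mask1 does not, a plain difference would go negative, which is not a valid mask value.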

III. After obtaining the mask, you can process it further. For example, here we added 'GrowMask' and 'FeatheredMask' to expand the mask and feather its edges, making the final inpainted image look more natural.

**Add nodes:**

**GrowMask**

**FeatheredMask**
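The two operations above are standard image-morphology steps: growing dilates the mask outward, and feathering blurs its edges so the inpainted region blends smoothly. A simplified sketch of both, assuming binary/float masks as nested lists (illustrative only, not the nodes' actual implementations):

```python
def grow_mask(mask, expand=1):
    """Dilate a binary mask by `expand` pixels (the idea behind GrowMask)."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for _ in range(expand):
        prev = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                # Turn on any off pixel that touches an on 4-neighbour.
                if prev[y][x] == 0 and any(
                        prev[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w):
                    out[y][x] = 1
    return out

def feather_mask(mask, radius=1):
    """Soften mask edges with a box blur (the idea behind FeatheredMask)."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [mask[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)  # average over the neighbourhood
    return out
```

Growing first and feathering second is the usual order: the hard edge is pushed past the object boundary, then softened, so the transition falls outside the area being repainted.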

IV. Finally, add mask preview nodes to check the resulting mask.

<figure><img src="/files/mVSk2jDuVIBPfT8sqbgd" alt="ComfyUI Redraw - Segment Anything - Mask Preview Node" width="422"><figcaption></figcaption></figure>

**Add nodes:**

**Convert Mask to Image**

**Preview Image**
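The preview step works because a mask is just a single-channel image: values in [0, 1] map to grayscale pixels. A one-line sketch of the conversion (illustrative, not the node's actual code):

```python
def mask_to_grayscale(mask):
    """Map mask values in [0, 1] to 8-bit grayscale pixels (0-255),
    roughly what 'Convert Mask to Image' does before previewing."""
    return [[round(v * 255) for v in row] for row in mask]

preview = mask_to_grayscale([[0.0, 0.5, 1.0]])  # [[0, 128, 255]]
```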

V. For easier viewing, we can organize this group of nodes together.

**Step 2: Partial Repaint**

I. For the inpainting, we'll use the previously mentioned 'Set Latent Noise Mask', which references the original image during inpainting. Connect the mask nodes to it.

<figure><img src="/files/WLDKM8OnKFSr2oaLM1i1" alt="ComfyUI Redraw - Partial Repaint - Set Latent Noise Mask" width="513"><figcaption></figcaption></figure>

**Add nodes:**

**Set Latent Noise Mask**
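The idea behind masked latent inpainting is that only the masked region is allowed to change: outside the mask, the original image's latent is kept. A heavily simplified sketch of that blend (the real node attaches the mask so the sampler applies this at every denoising step; the function name and list-based latents here are purely illustrative):

```python
def apply_latent_mask(original_latent, denoised_latent, mask):
    """Per element: keep the original latent where mask=0,
    take the newly denoised latent where mask=1.
    A simplification of how a latent noise mask confines
    inpainting to the masked region."""
    return [
        [m * new + (1.0 - m) * old for old, new, m in zip(orow, nrow, mrow)]
        for orow, nrow, mrow in zip(original_latent, denoised_latent, mask)
    ]

# Only the second element (mask=1) is replaced by the denoised value.
blended = apply_latent_mask([[1.0, 1.0]], [[5.0, 5.0]], [[0.0, 1.0]])  # [[1.0, 5.0]]
```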

II. Next, add the model group. Following the Img2Img workflow, add the model, prompts, and sampler, and finally VAE Decode. Then connect the mask group to the model group.

<mark style="background-color:yellow;">\*The image needs to be encoded into latent space before inpainting.</mark>

III. With everything connected, enter prompts according to your needs and set the relevant parameters (Denoising Strength has a maximum of 1).

Note that when inpainting the face, it's advisable to add the FaceDetailer node, which helps to enhance facial details. Connect the input end to the corresponding node.

**Add nodes:**

**FaceDetailer:** UltralyticsDetectorProvider, SAMLoader (Impact)

<figure><img src="/files/vK59tWUY5u0DIRYypoLH" alt="ComfyUI Redraw - FaceDetailer"><figcaption></figcaption></figure>

Additionally, since here we only need to repaint the face, the mask-subtraction step isn't needed: select the corresponding node, press Ctrl+B to bypass it, and then reconnect the remaining nodes.

This type of inpainting will redraw the entire face, essentially making it look like a different person. Regarding how to achieve different expressions for the same person, a more detailed tutorial will be released later. Finally, we can save this workflow for future use.

Using this method, we can build many one-click inpainting workflows: one-click breast enhancement, one-click muscle gain, one-click facial modification, one-click changes of clothing or hairstyle, and so on.

<mark style="background-color:red;">**Method 2: Yoloworld ESAM**</mark>

This method can quickly extract a specific object, though its recognition is less precise than segment anything's. Some parts (such as the belly) may not be detectable; it can be used in conjunction with mask subtraction.

I. First, follow the same steps as with 'segment anything', using Yoloworld ESAM's automatic detection to identify the areas to be redrawn. Taking 'one-click muscle gain' as an example: since the node can't directly recognize the areas needing muscle enhancement, the mask-subtraction approach is used to obtain those regions. Then process the masked areas as before.

<mark style="background-color:yellow;">\*You can upscale the original image before starting.</mark>

> **Important parameters:**&#x20;
>
> **confidence\_threshold:** Minimum confidence required to keep a detection. A higher value keeps only the most certain detections. **iou\_threshold:** Maximum allowed overlap between bounding boxes. A smaller value suppresses more overlapping boxes.&#x20;
>
> **mask\_combined:** Whether to overlay masks. If "true," the mask will be combined and output on a single image. If "false," the mask will be output separately.
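To make these two thresholds concrete, here is a minimal, self-contained Python sketch of confidence filtering plus IoU-based suppression, the general technique these parameters control (an illustration, not the node's actual implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def filter_detections(dets, confidence_threshold=0.3, iou_threshold=0.5):
    """Keep detections above the confidence threshold, then drop any box
    that overlaps an already-kept box by more than iou_threshold
    (greedy non-maximum suppression, highest confidence first)."""
    kept = []
    for box, conf in sorted(dets, key=lambda d: -d[1]):
        if conf < confidence_threshold:
            continue  # too uncertain: filtered by confidence_threshold
        if all(iou(box, k) <= iou_threshold for k, _ in kept):
            kept.append((box, conf))  # sufficiently distinct box
    return kept
```

Raising `confidence_threshold` discards weak detections; lowering `iou_threshold` merges near-duplicate boxes more aggressively.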

II. Feed the mask into the 'InpaintModelConditioning' node for inpainting; the prompts only take effect on the masked areas. Following the Img2Img mode, add the model, prompts, sampler, VAE Decode, and Preview Image.

<figure><img src="/files/Tpuc56fYj8N04vUJ5AGG" alt="ComfyUI Redraw - InpaintModelConditioning"><figcaption></figcaption></figure>

III. Adjust the relevant parameters, then click "Generate."

Here are more [examples of ComfyUI Workflow](https://docs.seaart.ai/seaart-comfyui-wiki/comfyui-workflow-example) for your inspiration.

