SeaArt Guide
How to SeaArt AI - Office Tutorial

Inpainting

Learn inpainting and modifying images in ComfyUI! This guide covers hair, clothing, features, and more using "segment anything" for precise edits.



Change hair color, hairstyle, chest, abs, clothing, etc.

Before starting to build nodes, it's helpful to establish a workflow.

For example, if you want to change hair color, you'll need these two steps:

Step 1: Identify the hair. You can paint the mask manually or use automatic recognition.

Step 2: Partial repaint: modify the recognized area.

Now, let's start building nodes based on these steps.

Method 1: segment anything

Step 1: Identify the hair

I. After uploading the image, add a 'segment anything' node. Drag out from the node to load the two corresponding models, connect the image to the node, and enter the area you want recognized (e.g. the hair) as the text prompt.

Add nodes:

GroundingDinoSAMSegment (segment anything): SAMLoader (Impact), GroundingDinoModelLoader (segment anything)

Parameters:

device_mode: Prefer GPU

II. To enhance the accuracy of mask recognition, add another 'segment anything' node to identify the face, reusing the same models as input. Finally, subtract the face mask from the first mask (mask1 - mask2) to obtain just the hair.

Add nodes:

Bitwise(MASK - MASK)

*When encountering areas that cannot be recognized, you can use this method of subtracting masks.
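The mask subtraction above can be sketched in NumPy (a conceptual illustration of what Bitwise(MASK - MASK) does, not the node's actual code; masks are assumed to be float arrays in [0, 1]):

```python
import numpy as np

def subtract_masks(mask1: np.ndarray, mask2: np.ndarray) -> np.ndarray:
    """Remove mask2's area from mask1 (mask1 - mask2), clamped to [0, 1].
    E.g. (hair + face) - face = hair."""
    return np.clip(mask1 - mask2, 0.0, 1.0)

# Toy example: a 1x4 strip; the full mask covers every pixel, the face
# mask covers the last two, so the result keeps only the first two.
full = np.array([[1.0, 1.0, 1.0, 1.0]])
face = np.array([[0.0, 0.0, 1.0, 1.0]])
print(subtract_masks(full, face))  # [[1. 1. 0. 0.]]
```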

III. After obtaining the mask, you can perform some processing on the mask. For example, here we added 'GrowMask' and 'feathered mask' to expand the mask and feather the edges, making the final inpainting image more natural.

Add nodes:

GrowMask

FeatheredMask
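To see what GrowMask and FeatheredMask accomplish, here is a plain-NumPy approximation (a sketch under simplifying assumptions: box dilation and a repeated box blur, not the nodes' real algorithms):

```python
import numpy as np

def grow_mask(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Expand a binary mask by `pixels` in every direction (4-neighbour dilation)."""
    grown = mask.astype(float)
    for _ in range(pixels):
        padded = np.pad(grown, 1)
        # A pixel turns on if it or any 4-neighbour was on.
        grown = np.clip(
            padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
            + padded[1:-1, :-2] + padded[1:-1, 2:], 0.0, 1.0)
    return grown

def feather_mask(mask: np.ndarray, passes: int = 2) -> np.ndarray:
    """Soften mask edges with repeated 3x3 box blurs."""
    soft = mask.astype(float)
    for _ in range(passes):
        padded = np.pad(soft, 1, mode="edge")
        soft = sum(padded[i:i + soft.shape[0], j:j + soft.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    return soft
```

Expanding the mask a few pixels and softening its edge lets the repaint blend into the surrounding pixels instead of leaving a hard seam.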

IV. Finally, you can add a mask preview node to view the final mask effect.

Add nodes:

Convert Mask to Image

Preview Image

V. For easier viewing, we can organize this group of nodes together.

Step 2: Partial Repaint

I. For the inpainting, we'll use the previously mentioned 'Set Latent Noise Mask' node, which references the original image during inpainting. Connect the mask nodes to it.

Add nodes:

Set Latent Noise Mask
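Conceptually, a latent noise mask limits denoising to the masked region: at each step, the unmasked part is restored from the original latent. A simplified illustration (not ComfyUI's actual sampler code):

```python
import numpy as np

def apply_noise_mask(denoised: np.ndarray, original: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Keep the model's output only where mask == 1; elsewhere restore the
    original latent, so only the masked region is repainted."""
    return mask * denoised + (1.0 - mask) * original

# The hair region (mask == 1) takes the new values; the rest is untouched.
original = np.zeros((1, 4))
denoised = np.ones((1, 4))
mask = np.array([[1.0, 1.0, 0.0, 0.0]])
print(apply_noise_mask(denoised, original, mask))  # [[1. 1. 0. 0.]]
```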

II. Next, add the model group. Following the Img2Img workflow, add the model, prompts, and sampler, and finish with VAE Decode. Then connect the mask group to the model group.

*The image must be VAE-encoded into latent space before inpainting.

III. Once all the lines are connected, input prompts according to your needs and set the relevant parameters. The maximum Denoising Strength is 1.

Note that when inpainting the face, it's advisable to add the FaceDetailer node, which helps to enhance facial details. Connect the input end to the corresponding node.

Add nodes:

FaceDetailer: UltralyticsDetectorProvider, SAMLoader (Impact)

Additionally, since we only need to repaint the face and don't need to subtract two masks, you can select the mask-subtraction node, press Ctrl+B to bypass it, and then reconnect the corresponding nodes.

This type of inpainting will redraw the entire face, essentially making it look like a different person. Regarding how to achieve different expressions for the same person, a more detailed tutorial will be released later. Finally, we can save this workflow for future use.

Using this method, we can achieve many one-click inpaintings.

Such as one-click breast enhancement, one-click muscle gain, one-click facial modification, one-click change of clothing or hairstyle, etc.

Method 2: Yoloworld ESAM

Capable of quickly extracting a specific object, though its recognition is less precise than segment anything. Some parts (such as the belly) may not be detectable, so it can be used in conjunction with mask subtraction (mask-mask).

I. First, follow the same steps as "segment anything" and integrate the Yoloworld ESAM automatic detection feature to identify areas that need to be redrawn. Taking 'One-click muscle gain' as an example, since it's unable to directly recognize the areas needing muscle enhancement, the mask-mask approach is used to obtain the regions for enhancement. Then, process the masked areas accordingly.

*You can upscale the original image before starting.

Important parameters:

confidence_threshold: Minimum detection confidence. Lower values keep more detections (including false positives); higher values are stricter.

iou_threshold: Overlap threshold for bounding boxes. Lower values discard more overlapping boxes.

mask_combined: Whether to merge masks. If "true," all masks are combined and output on a single image. If "false," each mask is output separately.
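The effect of mask_combined can be illustrated like this (a sketch of the behavior described above, not the node's source):

```python
import numpy as np

def collect_masks(masks, mask_combined):
    """If mask_combined is true, merge all detection masks into one
    (pixelwise union); otherwise keep them as separate masks."""
    if mask_combined:
        combined = masks[0]
        for m in masks[1:]:
            combined = np.maximum(combined, m)
        return [combined]
    return list(masks)

# Toy detections: two masks covering different pixels.
left = np.array([[1.0, 0.0]])
right = np.array([[0.0, 1.0]])
print(len(collect_masks([left, right], True)))   # 1
print(len(collect_masks([left, right], False)))  # 2
```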

II. Feed the mask into the "InpaintModelConditioning" node for mask-based inpainting; the prompts only affect the masked areas. Following the Img2Img mode, add the model, prompts, sampler, VAE Decode, and preview image.

III. Adjust the relevant parameters, then click "Generate."

Here are more examples for your inspiration.

✨ Examples of ComfyUI Workflow (image captions):
  • ComfyUI Redraw - Before and After
  • ComfyUI Redraw - Change Hair Color Process
  • ComfyUI Redraw - Segment Anything - Identify the Hair
  • ComfyUI Redraw - Segment Anything - Enhance the Accuracy of Mask Recognition
  • ComfyUI Redraw - Segment Anything - GrowMask and Feathered Mask
  • ComfyUI Redraw - Segment Anything - Mask Preview Node
  • ComfyUI Redraw - Segment Anything - Organize Nodes Together
  • ComfyUI Redraw - Partial Repaint - Set Latent Noise Mask
  • ComfyUI Redraw - Partial Repaint - Add the Model Group
  • ComfyUI Redraw - Partial Repaint - Set Relevant Parameters
  • ComfyUI Redraw - Change Hair Color - Before and After
  • ComfyUI Redraw - FaceDetailer
  • ComfyUI Redraw - Save Workflow
  • ComfyUI Redraw - Change Hairstyle - Before and After
  • ComfyUI Redraw - Yoloworld ESAM
  • ComfyUI Redraw - InpaintModelConditioning
  • ComfyUI Redraw - Muscle Enhancement - Before and After