Text to Image Workflow
Explore the text-to-image workflow in SeaArt's ComfyUI, from adding nodes like KSampler and LoRA to setting parameters and generating stunning images based on your text prompts.
Last updated
The workflow in ComfyUI is similar to that in Web UI:
select a model → enter prompt → set parameters → generate image
Parameters
control_after_generate: controls how the seed changes after each generation
fixed: keeps the seed unchanged
increment: adds 1 to the current seed
decrement: subtracts 1 from the current seed
randomize: picks a new random seed
scheduler: usually choose between normal or karras
denoise: the strength of denoising applied during generation; the higher the value, the greater the impact and change on the image
size recommendation:
SD1.5: 512*512
SDXL: 1024*1024
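The parameter semantics above can be sketched with two illustrative helpers (not ComfyUI's actual implementation): one showing how control_after_generate updates the seed between runs, and one recording the recommended base resolution per model family.

```python
# Illustrative helpers for the parameters above; names are assumptions,
# not part of ComfyUI's API.
import random

# Recommended base resolutions per model family
RECOMMENDED_SIZE = {"SD1.5": (512, 512), "SDXL": (1024, 1024)}

def next_seed(seed: int, mode: str) -> int:
    """Return the seed for the next generation under control_after_generate."""
    if mode == "fixed":
        return seed            # reuse the same seed
    if mode == "increment":
        return seed + 1        # add 1 to the current seed
    if mode == "decrement":
        return seed - 1        # subtract 1 from the current seed
    if mode == "randomize":
        return random.randint(0, 2**64 - 1)  # fresh random seed
    raise ValueError(f"unknown mode: {mode!r}")
```

A fixed seed with identical settings reproduces the same image, which is useful when you only want to tweak one parameter at a time.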
First, add a core node: the KSampler
Right-click: Add Node → sampling → KSampler
model→CheckpointLoaderSimple
positive→CLIPTextEncode
negative→CLIPTextEncode
latent_image→EmptyLatentImage
LATENT→VAEDecode
IMAGE→SaveImage
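The wiring above can be sketched in ComfyUI's API JSON format, where each node lists its class and inputs, and a link is a `[node_id, output_index]` pair. This is a hedged illustration: the node ids, checkpoint filename, prompts, and sampler values are placeholders, not SeaArt defaults.

```python
# Sketch of the core text-to-image graph in ComfyUI API JSON format.
# Links are [source_node_id, output_index]; all concrete values are examples.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",          # positive prompt
          "inputs": {"text": "a scenic mountain lake", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",          # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 40, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
```

Note how the checkpoint loader's three outputs fan out: MODEL (index 0) to the KSampler, CLIP (index 1) to both text encoders, and VAE (index 2) to the decoder.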
Loaders: mainly used for loading diffusion models, including the main model (Checkpoint) and LoRA.
Add Node→loaders→Load LoRA
Conditioning: guides the diffusion model to generate specific outputs, including Prompt, ControlNet, Clip Skip, etc.
It is recommended to add Clip Skip to control how many CLIP layers are skipped, which helps adjust the final image details.
Add Node→conditioning→CLIP Set Last Layer
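In API JSON form, the Clip Skip node might look like the sketch below. This is a hedged illustration: the upstream node id is a placeholder, and -2 (the common "Clip Skip 2" setting) is just an example value.

```python
# Illustrative CLIP Set Last Layer node in ComfyUI API JSON format.
# The upstream node id "2" (e.g. a Load LoRA node's CLIP output) is assumed.
clip_skip = {
    "class_type": "CLIPSetLastLayer",
    "inputs": {
        "clip": ["2", 1],          # CLIP output of the upstream loader
        "stop_at_clip_layer": -2,  # negative index: stop 2 layers from the end
    },
}
```

Its CLIP output would then feed both CLIPTextEncode prompt nodes in place of the loader's direct CLIP connection.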
After adding nodes, many of them may not be connected yet. Connect them in sequence, matching ports by color.
Checkpoint→LoRA→Clip Skip→Prompt→KSampler, Empty Latent Image→VAE Decode→Save Image
Since the SDXL model is chosen here, set the sampling steps to around 40 and the image size to 1024*1024.
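For completeness, a connected graph in API JSON form could in principle be queued against a self-hosted ComfyUI instance through its HTTP API. This sketch assumes a local server on the default port 8188 and is purely illustrative, since SeaArt's ComfyUI runs hosted in the browser.

```python
# Hedged sketch: queueing an API-format graph against a local ComfyUI
# server via its /prompt endpoint. Function names here are illustrative.
import json
import urllib.request

def build_prompt_payload(workflow: dict) -> bytes:
    """Wrap the graph the way the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> None:
    """POST the graph to a running ComfyUI instance."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # response body contains the queued prompt id
```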