GLIGEN Text Box Apply
Class name: GLIGENTextBoxApply
Category: conditioning/gligen
Output node: False
The GLIGENTextBoxApply node integrates text-based spatial conditioning into a generative model’s input: it encodes a text prompt with a CLIP model and attaches it, together with the position and size of a text box, to existing conditioning. This enriches the conditioning with both spatial and textual information, enabling more precise, region-aware generation.
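Conceptually, the node encodes the text with CLIP, converts the pixel-space box (x, y, width, height) into latent-space coordinates, and appends the result to each conditioning entry alongside the GLIGEN model. The following is a simplified Python sketch of that flow, not the exact ComfyUI source; the divide-by-8 latent scaling and the conditioning layout (a list of tensor/options pairs with a "gligen" key) are assumptions based on how ComfyUI typically structures conditioning.

```python
# Simplified sketch of what GLIGENTextBoxApply does (illustrative, not verbatim ComfyUI code).
# Assumptions: `clip` exposes tokenize()/encode_from_tokens(), conditioning is a list of
# (tensor, options_dict) pairs, and latent resolution is 1/8 of pixel resolution.

def apply_gligen_textbox(conditioning_to, clip, gligen_textbox_model,
                         text, width, height, x, y):
    # Encode the text; GLIGEN consumes a pooled text embedding per box.
    tokens = clip.tokenize(text)
    _, cond_pooled = clip.encode_from_tokens(tokens, return_pooled="unprojected")

    # Box parameters are stored in latent coordinates (pixel values // 8 for SD models).
    position_params = [(cond_pooled, height // 8, width // 8, y // 8, x // 8)]

    out = []
    for cond_tensor, options in conditioning_to:
        options = options.copy()
        # Accumulate boxes so several GLIGENTextBoxApply nodes can be chained.
        prev = options["gligen"][2] if "gligen" in options else []
        options["gligen"] = ("position", gligen_textbox_model, prev + position_params)
        out.append([cond_tensor, options])
    return (out,)
```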
Inputs

conditioning_to (CONDITIONING)
The existing conditioning to which the text box parameters and encoded text are appended. The GLIGEN data is merged into every entry of this conditioning.

clip (CLIP)
The CLIP model used to encode the provided text into an embedding the GLIGEN model can consume.

gligen_textbox_model (GLIGEN)
The loaded GLIGEN model that applies the text box guidance during sampling.

text (STRING)
The text to encode for this box; it describes what should appear inside the region.

width (INT)
The width of the text box in pixels.

height (INT)
The height of the text box in pixels.

x (INT)
The x-coordinate, in pixels, of the text box’s top-left corner within the generated image.

y (INT)
The y-coordinate, in pixels, of the text box’s top-left corner within the generated image.
Outputs

conditioning (CONDITIONING)
The enriched conditioning: the original entries with the text box parameters and encoded text appended, guiding the generative model to place the described content in the specified region.
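Because the output conditioning has the same format as conditioning_to, several GLIGENTextBoxApply nodes can be chained to place multiple text boxes in one image. A hypothetical usage of the sketch above (the names feeding it are placeholders for upstream loader and text-encode nodes):

```python
# Hypothetical chaining example using the sketch above; `clip`, `gligen_model`, and
# `base_conditioning` stand in for outputs of upstream CLIP/GLIGEN loader and
# CLIP Text Encode nodes in a ComfyUI workflow.

(cond,) = apply_gligen_textbox(base_conditioning, clip, gligen_model,
                               text="a red apple", width=256, height=256, x=64, y=128)
(cond,) = apply_gligen_textbox(cond, clip, gligen_model,
                               text="a glass of water", width=192, height=320, x=384, y=96)
# `cond` now carries both boxes and is wired into the sampler's positive conditioning.
```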