Nodes Reference

This page documents all available node types in Node Banana. Each node serves a specific purpose in your workflow.

Overview

| Node | Purpose | Inputs | Outputs |
|---|---|---|---|
| Image Input | Load images | | Image |
| Audio Input | Load audio files | | Audio |
| Prompt | Text prompts | Text (optional) | Text |
| Prompt Constructor | Template-based prompts | Text (multiple) | Text |
| Generate Image | AI image generation | Image, Text | Image |
| Generate Video | AI video generation | Image, Text | Video |
| Generate 3D | AI 3D model generation | Image, Text | 3D Model |
| 3D Viewer | View and capture 3D models | 3D Model | Image |
| Video Stitch | Combine videos | Video (multiple), Audio | Video |
| Ease Curve | Apply speed curves | Video, Ease Curve | Video, Ease Curve |
| LLM Generate | AI text generation | Text, Image | Text |
| Annotation | Draw on images | Image | Image |
| Split Grid | Split into grid | Image | Reference |
| Output | Display results | Image, Video | |
| Output Gallery | View image collections | Image (multiple) | |
| Image Compare | Compare two images | Image (2) | |

Image Input

The Image Input node loads images into your workflow from your local filesystem.

Outputs

  • Image — The loaded image as base64 data

Features

  • Drag and drop images directly onto the node
  • Click to open file picker
  • Supports PNG, JPG, and WebP formats
  • Maximum file size: 10 MB
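
The Image output is described above as base64 data. As a rough sketch of what that means, an image file's bytes can be packaged as a base64 data URL; the helper name is hypothetical, and the use of Node's `Buffer` is an assumption (a browser build would use `FileReader.readAsDataURL` instead):

```typescript
// Hypothetical helper: package raw image bytes as a base64 data URL,
// the form in which the Image Input node's output is described.
function toDataUrl(bytes: Uint8Array, mime: string): string {
  // Buffer is Node-specific; in the browser, FileReader.readAsDataURL
  // produces the same "data:<mime>;base64,<data>" shape.
  const base64 = Buffer.from(bytes).toString("base64");
  return `data:${mime};base64,${base64}`;
}
```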

Usage

  1. Add an Image Input node to the canvas
  2. Click the node or drag an image file onto it
  3. The image appears in the node preview
  4. Connect the output to downstream nodes

Image Paste Support: Press Cmd+V (Mac) or Ctrl+V (Windows/Linux) to paste images directly from your clipboard. If an Image Input node is selected, it will update with the pasted image. Otherwise, a new Image Input node is created at the viewport center with the pasted image.


Audio Input

The Audio Input node loads audio files into your workflow for use in video generation.

Outputs

  • Audio — The loaded audio file

Features

  • Drag and drop audio files directly onto the node
  • Click to open file picker
  • Supports MP3, WAV, and other browser-supported audio formats
  • Waveform visualization displays audio content at a glance
  • Built-in audio playback with play/pause controls
  • Duration and file size display

Usage

  1. Add an Audio Input node to the canvas
  2. Click the node or drag an audio file onto it
  3. The waveform visualization appears showing the audio content
  4. Use the playback controls to preview the audio
  5. Connect the audio output to a Video Stitch node to add audio to videos

Audio files are automatically trimmed or extended to match the final video duration when used with the Video Stitch node.


Prompt

The Prompt node provides text input for your workflow. Use it to write prompts for image or text generation, or to receive text from other nodes like LLM Generate.

Inputs

  • Text (optional) — Incoming text from another node (e.g., LLM output)

Outputs

  • Text — The prompt text string

Features

  • Inline text editing
  • Expand button for larger editor (modal)
  • Full-screen editing mode for complex prompts with persistent font size preference
  • Text input connection: Receive text from LLM Generate or other text-producing nodes
    • When connected, the incoming text pre-fills the prompt but remains fully editable
    • Placeholder text reads "Text from connected node (editable)..." when connected
    • The node only updates when the upstream node re-runs and its output actually changes
    • Enables automated prompt workflows and LLM chaining with manual refinement
  • Variable naming: Assign a variable name to the prompt for use in Prompt Constructor templates
    • Click the variable name button in the header to set a name (e.g., style, subject)
    • Variable names can contain letters, numbers, and underscores (max 30 characters)
    • Named prompts can be referenced as @variableName in Prompt Constructor templates
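
The variable-name rule above (letters, numbers, underscores, at most 30 characters) can be sketched as a one-line validator. The function name is illustrative, not the app's actual API:

```typescript
// Hypothetical validator matching the documented rule:
// letters, digits, and underscores only, 1-30 characters.
const isValidVariableName = (name: string): boolean =>
  /^[A-Za-z0-9_]{1,30}$/.test(name);
```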

Usage

Manual entry:

  1. Add a Prompt node
  2. Type your prompt in the text area
  3. Click the expand icon for a larger editor
  4. Connect to Generate Image or LLM Generate nodes

Receiving text from other nodes:

  1. Add a Prompt node
  2. Connect an LLM Generate (or other text output) to the Prompt's text input handle
  3. The Prompt automatically receives and displays the connected text
  4. Use the Prompt output to feed into Generate Image nodes or other text consumers

LLM-to-Prompt Chaining: Connect an LLM Generate output to a Prompt input to create automated prompt enhancement workflows. For example: [Initial Prompt] → [LLM: "enhance this prompt"] → [Enhanced Prompt] → [Generate Image]

Writing Effective Prompts

For image generation:

  • Be specific about subject, style, and composition
  • Include lighting and mood descriptions
  • Mention camera angle or perspective

Example:

A professional headshot of a business executive,
studio lighting, neutral gray background,
sharp focus, high resolution

Prompt Constructor

The Prompt Constructor node builds prompts from templates with variable placeholders. Use it to create reusable prompt templates that can be populated with values from multiple Prompt nodes.

Inputs

  • Text (multiple) — Prompt nodes with variable names that supply values for @variable placeholders

Outputs

  • Text — The resolved prompt with all variables substituted

Features

  • Template-based prompts: Define reusable templates with @variable placeholders
  • @variable autocomplete: Start typing @ to see available variables from connected Prompt nodes
    • Keyboard navigation (arrow keys, Enter, Escape)
    • Shows variable name and current value in autocomplete dropdown
  • Expand modal: Click the expand button to open a full-screen editor
    • Adjustable font size
    • Clickable variable pills in toolbar for quick insertion
    • Live "Resolved Preview" panel showing final prompt with substituted variables
    • Unsaved changes confirmation dialog when closing with edits
  • Unresolved variable warnings: Amber badge displays any @variables that don't match a connected Prompt node
  • Real-time preview: Hover over the node to see the resolved prompt as a tooltip

Usage

Basic template workflow:

  1. Add Prompt nodes and assign them variable names (e.g., style, subject, mood)
  2. Add a Prompt Constructor node
  3. Connect the Prompt nodes to the Prompt Constructor's text input
  4. In the Prompt Constructor, type your template using @variableName syntax:
    A @style photograph of @subject with @mood lighting
  5. The template automatically resolves to the final prompt
  6. Connect the Prompt Constructor output to Generate Image or other nodes

Using the expand modal:

  1. Click the expand button on a Prompt Constructor node
  2. Type your template, using @ to trigger autocomplete and insert variables
  3. Click variable pills in the toolbar to insert them at cursor position
  4. View the "Resolved Preview" panel to see the final prompt in real-time
  5. Click "Save Template" to apply changes (or "Cancel" to discard)

Reusable Templates: Create complex prompt templates with multiple variables for consistent generation. For example: A @style photo of @subject, @pose, @lighting, @background can be reused with different combinations of style, subject, pose, lighting, and background values.

⚠️ If you see an amber "Unresolved" badge, one or more @variables in your template don't match any connected Prompt node's variable name. Check that your Prompt nodes have matching variable names assigned.

Example Workflows

Product photography variations:

Template: "Product photography of @product, @angle angle, @lighting lighting, @background background"

Connected prompts:
- product: "luxury watch"
- angle: "45-degree"
- lighting: "soft studio"
- background: "white seamless"

Resolved: "Product photography of luxury watch, 45-degree angle, soft studio lighting, white seamless background"

Character portraits:

Template: "@character_name, @art_style style, @expression expression, @setting"

Connected prompts:
- character_name: "elven warrior"
- art_style: "watercolor"
- expression: "determined"
- setting: "misty forest background"

Resolved: "elven warrior, watercolor style, determined expression, misty forest background"
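
The substitution shown in these examples can be sketched as a small resolver. The function name and the rule of leaving unknown variables untouched (which would trigger the "Unresolved" badge described above) are assumptions; the app's actual resolution logic may differ:

```typescript
// Hypothetical resolver: replace each @variableName with its connected
// Prompt value. Unknown variables are left as-is.
function resolveTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/@([A-Za-z0-9_]+)/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}
```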

Generate Image

The Generate Image node creates images using AI models from multiple providers including Gemini, Replicate, and fal.ai.

Inputs

  • Image (optional, multiple) — Reference images for the generation (supports image-to-image)
  • Text — The prompt describing what to generate
  • Dynamic inputs — Additional inputs based on selected model's schema

Outputs

  • Image — The generated image

Settings

| Setting | Description |
|---|---|
| Provider | Choose from Gemini, Replicate, or fal.ai |
| Model | Select from available models (use search dialog) |
| Custom Parameters | Model-specific parameters appear dynamically |

Provider Configuration

Configure API keys for each provider in Project Settings → Providers tab:

  • Gemini — Google AI API key
  • Replicate — Replicate API token
  • fal.ai — fal.ai API key

Model Discovery

Click the model selector to open the Model Search dialog:

  • Browse models from all configured providers
  • Filter by provider using icon buttons
  • View recently used models for quick access
  • See capability badges (image/video) and model IDs
  • External links to model documentation

Dynamic Parameters

Each model exposes its own parameters:

  • Parameters update automatically when changing models
  • Input handles appear/disappear based on schema
  • Parameter validation prevents invalid configurations
  • Custom UI for model-specific settings

Usage

  1. Add a Generate Image node
  2. Select a provider and model
  3. Connect a Prompt node to the text input
  4. Optionally connect Image Input nodes for image-to-image
  5. Configure model-specific parameters
  6. Run the workflow

Image-to-image generation works across all providers. Large images are automatically converted to temporary URLs for provider compatibility.

Image Carousel

After generating, use the carousel to:

  • Browse previous generations (arrow buttons)
  • See generation history for this node
  • Select a previous result as the current output

Legacy Workflows

Workflows using the old NanoBananaNode automatically migrate to GenerateImageNode on load.


Generate Video

The Generate Video node creates videos using AI models from providers that support video generation.

Inputs

  • Image (optional, multiple) — Reference images or starting frames
  • Text — The prompt describing the video to generate
  • Dynamic inputs — Additional inputs based on selected model's schema

Outputs

  • Video — The generated video

Settings

| Setting | Description |
|---|---|
| Provider | Choose from providers with video capabilities |
| Model | Select from available video models |
| Custom Parameters | Model-specific parameters (duration, fps, etc.) |

Video Generation Features

  • Extended timeout — 10-minute timeout for longer video processing
  • Video playback — In-node video player with controls
  • Format detection — Automatic handling of various video formats
  • Generation queue — Manages video generation tasks

Usage

  1. Add a Generate Video node
  2. Select a provider and video-capable model
  3. Connect a Prompt node describing the video
  4. Optionally connect Image Input for reference frames
  5. Configure video parameters (duration, style, etc.)
  6. Run the workflow
⚠️ Video generation typically takes longer than image generation and may have higher costs. Check provider pricing before running.

Video Carousel

After generating, use the carousel to:

  • Browse previous video generations
  • Play/pause videos directly in the node
  • Navigate through video generation history
  • Select a previous result as the current output

Output Display

Connect Generate Video to an Output node to:

  • Display videos in a larger preview area
  • Access download controls
  • View video metadata (duration, resolution)

Generate 3D

The Generate 3D node creates 3D models using AI models from providers that support 3D generation. This node is separate from image generation and outputs GLB 3D model files.

Inputs

  • Image (optional, multiple) — Reference images for image-to-3d generation
  • Text — The prompt describing the 3D model to generate
  • Dynamic inputs — Additional inputs based on selected model's schema

Outputs

  • 3D Model — The generated 3D model in GLB format

Settings

| Setting | Description |
|---|---|
| Provider | Choose from providers with 3D capabilities (Replicate, fal.ai, WaveSpeed) |
| Model | Select from available 3D models (text-to-3d, image-to-3d) |
| Custom Parameters | Model-specific parameters |

3D Generation Features

  • Dedicated 3D pipeline — Separate from image generation with its own executor and validation
  • Orange handles — 3D connections use distinct orange handles to differentiate from image (green) and text (blue)
  • Connection validation — 3D outputs can only connect to 3D inputs (e.g., 3D Viewer node)
  • Model search badges — 3D-capable models display 3D capability badges in the Model Search dialog
  • Multi-provider support — Works with Replicate, fal.ai, and WaveSpeed providers

How to Create Generate 3D Nodes

There are two ways to create a Generate 3D node:

  1. From the Generate dropdown:

    • Click the Generate dropdown in the floating action bar
    • Select 3D from the menu
    • A Generate 3D node appears on the canvas
  2. From Model Search:

    • Open the Model Search dialog
    • Browse models with 3D capabilities (look for 3D badges)
    • Select a 3D-capable model
    • A Generate 3D node is automatically created with that model

Usage

  1. Add a Generate 3D node using one of the methods above
  2. Select a provider and 3D-capable model
  3. Connect a Prompt node describing the 3D object
  4. Optionally connect Image Input for image-to-3d generation
  5. Configure model-specific parameters
  6. Run the workflow
  7. Connect the output to a 3D Viewer node to visualize the result

3D model generation uses orange connection handles. You'll need to connect the output to a 3D Viewer node to see and interact with the generated model.

Supported Providers

  • Replicate — Various 3D generation models
  • fal.ai — 3D-capable models
  • WaveSpeed — 3D generation support

Configure API keys for these providers in Project Settings → Providers tab.


3D Viewer

The 3D Viewer node displays and interacts with 3D models in GLB format. It renders an interactive 3D viewport where you can rotate, zoom, and capture snapshots of the model.

Inputs

  • 3D Model — A GLB file from a Generate 3D node or file upload

Outputs

  • Image — Captured snapshot of the 3D viewport as PNG

Features

  • Interactive viewport — Orbit controls let you rotate, zoom, and pan the 3D model
  • Drag-and-drop — Drop GLB files directly onto the node to load them
  • Auto-normalization — Models are automatically centered and scaled to fit the viewport
  • Lighting — Ambient and spot lighting for proper model visualization
  • Capture button — Snapshots the current viewport view as a PNG image
  • Lazy loading — Three.js library loads only when 3D nodes are used (no bundle cost for users who don't use 3D)
  • Resource cleanup — Proper disposal of 3D resources and blob URLs

How to Create 3D Viewer Nodes

There are two ways to create a 3D Viewer node:

  1. Auto-create from connection:

    • Drag a connection from a Generate 3D node's output
    • The connection drop menu appears
    • Select 3D Viewer and it's automatically created and connected
  2. Manual creation:

    • Add from the node menu or floating action bar
    • Drop a .glb file onto the node to load it
    • Or connect to a Generate 3D node output

Usage

  1. Connect a Generate 3D node's output to a 3D Viewer input (or drop a GLB file)
  2. The 3D model renders in the interactive viewport
  3. Use mouse to orbit, zoom, and pan:
    • Left drag — Rotate the model
    • Right drag — Pan the camera
    • Scroll — Zoom in/out
  4. Click the Capture button to snapshot the current view
  5. Connect the image output to downstream nodes (e.g., Output node, Generate Image)

The Capture feature lets you use 3D models as reference images for further image generation. Generate a 3D model, rotate it to the desired angle, capture it, and feed the snapshot into a Generate Image node.

Technical Details

  • Format — GLB (binary GLTF) only
  • Renderer — Three.js WebGL renderer
  • Controls — OrbitControls for camera manipulation
  • Lighting — Ambient light + directional spot light
  • Normalization — Automatic bounding box calculation and scaling

Video Stitch

The Video Stitch node combines multiple video clips into a single continuous video. It's useful for creating sequences, montages, or stitching together generated video clips.

Inputs

  • Video (multiple, minimum 2) — Video clips to concatenate
  • Audio (optional) — Audio track to add to the final video

Outputs

  • Video — The stitched output video

Settings

| Setting | Description |
|---|---|
| Loop | Repeat the entire clip sequence 1x, 2x, or 3x |

Features

  • Filmstrip UI — Thumbnail previews for each connected video clip
  • Drag-and-drop reordering — Rearrange clips by dragging thumbnails within the filmstrip
  • Dynamic handle creation — New video input handles appear automatically as you connect clips
  • Batch multi-connect — Connect multiple video sources at once
  • Audio mixing — Optional audio input is automatically trimmed to match final video duration
  • Duration tracking — Shows duration for each clip and total duration
  • Hardware encoding support — Uses hardware acceleration when available
  • Rotation handling — Respects video rotation metadata (0°, 90°, 180°, 270°)
  • Progress indicator — Shows stitching progress percentage during processing
  • Output preview — Play the stitched video directly in the node

Loop Feature

The Loop selector allows you to duplicate the entire clip sequence:

  • 1x (default) — Single playthrough of all clips
  • 2x — Entire sequence plays twice back-to-back
  • 3x — Entire sequence plays three times

For example, if you have 3 video clips of 2 seconds each:

  • 1x loop = 6 seconds total
  • 2x loop = 12 seconds total (clips play: 1, 2, 3, 1, 2, 3)
  • 3x loop = 18 seconds total (clips play: 1, 2, 3, 1, 2, 3, 1, 2, 3)
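
The arithmetic above is simply the clip durations summed, multiplied by the loop count; a minimal sketch (the function name is illustrative):

```typescript
// Total stitched duration: the sum of all clip durations,
// repeated `loops` times (the Loop setting: 1x, 2x, or 3x).
const stitchedDuration = (clipSeconds: number[], loops: 1 | 2 | 3): number =>
  loops * clipSeconds.reduce((sum, d) => sum + d, 0);
```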

Usage

  1. Add a Video Stitch node to the canvas
  2. Connect at least 2 video sources (e.g., from Generate Video nodes)
  3. (Optional) Connect an Audio Input node to add background music
  4. Drag thumbnails in the filmstrip to reorder clips
  5. (Optional) Set the Loop control to 2x or 3x to repeat the sequence
  6. Click Stitch or run the workflow to combine the videos
  7. The output video appears in the preview and flows to connected nodes

All video clips must have consistent dimensions. If clips have different resolutions, the Video Stitch node will display an error.

Use the loop feature to create seamless repeating animations or extend short video sequences without manually duplicating clips. Audio from the Audio Input is automatically adjusted to match the final video length.

Technical Details

  • Format: Outputs MP4 with H.264 encoding
  • Frame rate: Up to 60fps (matches source clips)
  • Bitrate: Automatically selected based on source quality
  • Audio format: AAC audio codec

Ease Curve

The Ease Curve node applies speed ramping and easing effects to videos, creating smooth acceleration and deceleration. Use it to add cinematic slow-motion, time-lapse effects, or custom speed variations.

Inputs

  • Video — The video to apply the ease curve to
  • Ease Curve (optional) — Inherit easing configuration from another Ease Curve node

Outputs

  • Video — The speed-adjusted video
  • Ease Curve — The easing configuration for passing to other Ease Curve nodes

Features

  • Interactive bezier curve editor — Drag control points to create custom easing curves
  • 30+ preset easing functions — Including sine, quad, cubic, expo, and asymmetric variations
  • Real-time curve visualization — See the speed multiplier over time
  • Easing inheritance — Chain multiple Ease Curve nodes to build complex effects
  • Duration control — Set output video duration (default: 1.5s)
  • Input/output duration display — Track how the easing affects video length
  • Hardware encoding support — Uses hardware acceleration when available
  • Video preview — Preview the eased video directly in the node

Settings

| Setting | Description |
|---|---|
| Preset | Select from 30+ built-in easing functions |
| Custom Curve | Define cubic bezier control points (x1, y1, x2, y2) |
| Output Duration | Target duration for the output video (default: 1.5s) |

Available Easing Presets

Basic:

  • linear — Constant speed (no easing)
  • easeIn, easeOut, easeInOut — Standard easing curves

Sine: easeInSine, easeOutSine, easeInOutSine

Quadratic: easeInQuad, easeOutQuad, easeInOutQuad

Cubic: easeInCubic, easeOutCubic, easeInOutCubic

Quartic: easeInQuart, easeOutQuart, easeInOutQuart

Quintic: easeInQuint, easeOutQuint, easeInOutQuint

Exponential: easeInExpo, easeOutExpo, easeInOutExpo

Asymmetric: easeInExpoOutCubic, easeInQuartOutQuad, and more

Usage

Basic usage:

  1. Add an Ease Curve node to the canvas
  2. Connect a video source to the video input
  3. Select a preset easing function or create a custom curve
  4. Adjust the output duration if needed
  5. Run the workflow to generate the eased video

Chaining ease curves:

  1. Add multiple Ease Curve nodes
  2. Connect the easeCurve output of one node to the easeCurve input of the next
  3. The second node inherits the curve from the first and can further modify it
  4. Build complex speed variations by stacking multiple effects

Use easeInExpo for dramatic slow-motion starts, easeOutExpo for sudden stops, or easeInOutSine for smooth, natural-looking speed variations.

How It Works

The Ease Curve node resamples video frames according to the easing curve:

  • Steep curve sections = faster playback (time-lapse effect)
  • Flat curve sections = slower playback (slow-motion effect)
  • The curve's Y-axis represents time in the source video
  • The X-axis represents time in the output video
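
Under this axis convention (X = output time, Y = source time), the remapping can be sketched as numerically evaluating the cubic bezier: for each output-time fraction, solve for the curve parameter, then read off the source-time fraction. The function names and the bisection approach are assumptions, not the node's actual implementation:

```typescript
// One coordinate of a cubic bezier with fixed endpoints 0 and 1
// and control values c1, c2 (the x1/x2 or y1/y2 of the curve editor).
const coord = (t: number, c1: number, c2: number): number =>
  3 * c1 * t * (1 - t) ** 2 + 3 * c2 * t ** 2 * (1 - t) + t ** 3;

// Map an output-time fraction x to a source-time fraction y:
// find the parameter t where the curve's x-coordinate equals x
// (bisection works because x(t) is monotonic for 0 <= x1, x2 <= 1),
// then evaluate the y-coordinate at that t.
function sourceTime(x: number, x1: number, y1: number, x2: number, y2: number): number {
  let lo = 0, hi = 1;
  for (let i = 0; i < 40; i++) {
    const mid = (lo + hi) / 2;
    if (coord(mid, x1, x2) < x) lo = mid; else hi = mid;
  }
  return coord((lo + hi) / 2, y1, y2);
}
```

With a linear curve the mapping is the identity; with an ease-in curve, the first half of the output draws on less than half of the source, which is what produces the slow start.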

Technical Details

  • Frame resampling: Creates new frames by selecting from source based on curve
  • Smooth interpolation: Uses cubic bezier curves for natural motion
  • Format: Outputs MP4 with H.264 encoding
  • Frame rate: Matches source frame rate (up to 60fps)

LLM Generate

The LLM Generate node creates text using large language models. Use it for prompt enhancement, descriptions, or any text generation task.

Inputs

  • Text — Input prompt or context
  • Image (optional, multiple) — Images for multimodal generation

Outputs

  • Text — The generated text

Settings

| Setting | Description |
|---|---|
| Model | Select from Gemini or OpenAI models |
| Temperature | Controls randomness (0-2), adjustable in the collapsible Parameters section |
| Max Tokens | Maximum output length (256-16384), adjustable in the collapsible Parameters section |

Parameters

The Parameters section is collapsible and contains:

  • Temperature slider (0-2) — Controls output randomness
  • Max Tokens slider (256-16384) — Controls maximum output length

Features

  • Copy to clipboard — Click the copy button on generated text output. A green checkmark confirms the copy.

Available Models

Google Gemini:

  • gemini-2.5-flash (fast, capable)
  • gemini-3-flash-preview (latest flash)
  • gemini-3-pro-preview (most capable)

OpenAI:

  • gpt-4.1-mini (balanced)
  • gpt-4.1-nano (fast)
⚠️ OpenAI models require a separate OPENAI_API_KEY in your environment.

Usage

  1. Add an LLM Generate node
  2. Connect a Prompt node with your instructions
  3. Optionally connect images for multimodal input
  4. Configure model and parameters
  5. Run to generate text

Example: Prompt Enhancement

Connect nodes like this:

[Prompt: "enhance this prompt for image generation: cat on roof"]
    → [LLM Generate]
    → [Generate Image]

The LLM can expand simple prompts into detailed generation instructions.


Annotation

The Annotation node opens a full-screen drawing editor where you can draw on images.

Inputs

  • Image — The image to annotate

Outputs

  • Image — The annotated image

Drawing Tools

| Tool | Description |
|---|---|
| Rectangle | Draw rectangular shapes |
| Circle | Draw circular shapes |
| Arrow | Draw arrows for highlighting |
| Freehand | Free drawing with mouse/pen |
| Text | Add text labels |

Features

  • 8 color presets
  • 3 stroke width options
  • Undo/redo support
  • Shape selection and transformation
  • Save or cancel changes

Usage

  1. Connect an image source to the Annotation input
  2. Click the Edit button on the node
  3. Use drawing tools to annotate
  4. Click Save to apply changes
  5. The annotated image flows to connected nodes

Use annotations to mask areas, add reference marks, or highlight regions for AI generation. The AI will see and respond to your annotations.


Split Grid

The Split Grid node divides an image into a grid of smaller images. This is useful for contact sheets or batch processing.

Inputs

  • Image — The image to split

Outputs

  • Reference (multiple) — Visual references to grid cells

Grid Options

The Split Grid settings modal offers 7 distinct grid layouts with visual previews and RxC (rows × columns) labels:

| Layout | Grid Size | Cells |
|---|---|---|
| 2×2 | 2 rows, 2 columns | 4 cells |
| 1×5 | 1 row, 5 columns | 5 cells |
| 2×3 | 2 rows, 3 columns | 6 cells |
| 3×2 | 3 rows, 2 columns | 6 cells (portrait) |
| 2×4 | 2 rows, 4 columns | 8 cells |
| 3×3 | 3 rows, 3 columns | 9 cells |
| 2×5 | 2 rows, 5 columns | 10 cells |

The 3×2 layout is useful for portrait-oriented grids. Note that both 2×3 and 3×2 produce 6 images, but with different aspect ratios.

Usage

  1. Connect an image (like a contact sheet) to Split Grid
  2. Select your grid configuration
  3. The node generates output references for each cell
  4. Connect references to organize downstream processing

How It Works

Split Grid is primarily for visual organization. It:

  • Divides the source image into equal cells
  • Creates reference outputs for each cell
  • Helps you visually track which part of an image flows where
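
The equal-cell division can be sketched as simple rectangle math; the helper and cell shape are hypothetical, and the real node may handle rounding of non-divisible dimensions differently:

```typescript
interface Cell { x: number; y: number; width: number; height: number }

// Divide an image of the given size into rows × cols equal cells,
// returned in row-major order (left-to-right, top-to-bottom).
function gridCells(width: number, height: number, rows: number, cols: number): Cell[] {
  const cellW = width / cols;
  const cellH = height / rows;
  const cells: Cell[] = [];
  for (let r = 0; r < rows; r++)
    for (let c = 0; c < cols; c++)
      cells.push({ x: c * cellW, y: r * cellH, width: cellW, height: cellH });
  return cells;
}
```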

Output

The Output node displays the final result of your workflow. Use it as the endpoint for generated images and videos.

Inputs

  • Image — Images to display
  • Video — Videos to display (connects directly to video outputs from Generate Video, Video Stitch, or Ease Curve nodes)

Settings

| Setting | Description |
|---|---|
| outputFilename | Custom filename for saved outputs (without extension). If empty, uses timestamp-based naming. |

Features

  • Large preview area
  • Click to open lightbox viewer
  • Download button for saving results
  • Shows image dimensions or video metadata
  • Video playback controls with format detection
  • Carousel for browsing media history
  • Auto-execute on connect: Automatically runs and displays results when you connect an edge to the Output node — no need to run the full workflow first
  • Run button: A play icon in the node header lets you manually re-fetch and refresh the output at any time
  • Auto-save to outputs folder: When your workflow has a project path configured, Output nodes automatically save results to an /outputs directory
  • Custom filenames: Set a custom outputFilename parameter to control the output filename (special characters are sanitized)
  • Auto-directory creation: The /outputs directory is automatically created if it doesn't exist

File Naming

When saving outputs:

  • With custom filename: {customFilename}_{hash}.{extension}
  • Without custom filename: generated-{timestamp}.{extension}
  • Special characters in custom filenames are replaced with underscores
  • Multiple consecutive underscores are collapsed to a single underscore
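
The naming rules above can be sketched as a sanitizer plus a filename builder. The exact character set treated as "special" and the function names are assumptions:

```typescript
// Replace special characters with underscores, then collapse runs of
// underscores, per the documented rules (allowed set is an assumption).
const sanitizeFilename = (name: string): string =>
  name.replace(/[^A-Za-z0-9_-]/g, "_").replace(/_+/g, "_");

// Hypothetical assembly of the saved filename:
// {customFilename}_{hash}.{extension} or generated-{timestamp}.{extension}.
const outputName = (custom: string | undefined, hash: string, ext: string): string =>
  custom ? `${sanitizeFilename(custom)}_${hash}.${ext}` : `generated-${Date.now()}.${ext}`;
```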

Usage

  1. Add an Output node at the end of your workflow
  2. Connect the final image or video source
  3. (Optional) Set a custom outputFilename in the node settings
  4. Run the workflow
  5. View and download results from the Output node
  6. If your workflow has a project path, outputs are automatically saved to the /outputs folder

While you can view images and videos in any node, Output nodes provide a cleaner display area and make it clear where your workflow ends. When a project path is configured, they also handle automatic saving to the /outputs directory.


Output Gallery

The Output Gallery node collects and displays multiple images in a scrollable thumbnail grid with a full-size lightbox viewer. Use it to inspect and compare multiple generations or image collections.

Inputs

  • Image (multiple) — Connect multiple image sources to view them in a grid

Features

  • Thumbnail grid: 3-column grid layout with scrollable viewing
  • Lightbox viewer: Click any thumbnail to open full-size viewer
    • Close button (X) in top-right corner
    • Download button in top-left corner
    • Previous/Next arrow navigation
    • Keyboard navigation (Left/Right arrows, Escape to close)
  • Real-time display: Shows images from connected nodes immediately, not just after execution
  • Automatic collection: Gathers images from all connected image-producing nodes (Image Input, Generate Image, Annotation, etc.)

Usage

  1. Add an Output Gallery node
  2. Connect multiple image sources to its input handle
    • You can connect multiple Generate Image nodes
    • Or connect nodes that output multiple images
  3. The gallery automatically displays all connected images as thumbnails
  4. Click any thumbnail to view full-size
  5. Use keyboard shortcuts or navigation buttons to browse
  6. Click the download button in lightbox to save individual images

Generation Comparison: Connect multiple Generate Image nodes with different parameters to the Output Gallery to compare results side-by-side. Perfect for evaluating different models, prompts, or settings.

Lightbox Controls

| Control | Action |
|---|---|
| Click thumbnail | Open lightbox at that image |
| Left arrow / ← | Previous image |
| Right arrow / → | Next image |
| Escape / X button | Close lightbox |
| Download button | Save current image |

Image Compare

The Image Compare node provides a side-by-side comparison view with a draggable slider for comparing two images. Useful for before/after comparisons or evaluating generation variations.

Inputs

  • Image A — First image to compare (top handle labeled "A")
  • Image B — Second image to compare (bottom handle labeled "B")

Features

  • Draggable slider: Interactive slider to reveal/hide portions of each image
  • Real-time comparison: Works with live connections, updates as source nodes change
  • Labeled inputs: Handles are labeled "A" and "B" for clarity
  • Corner labels: Images are labeled in the comparison view
  • Automatic ordering: First connected image becomes A, second becomes B

Usage

  1. Add an Image Compare node
  2. Connect two image sources:
    • Connect the first image to the top handle (A)
    • Connect the second image to the bottom handle (B)
  3. Drag the slider left or right to compare the images
  4. The node displays A on the left side and B on the right side of the slider

Before/After Workflows: Create powerful before/after demonstrations by connecting an original Image Input to handle A and a processed/generated result to handle B. Perfect for showcasing edits, style transfers, or AI enhancements.

Use Cases

  • Generation comparison: Compare two different AI generations of the same prompt
  • Model comparison: Test the same prompt with different models
  • Before/after: Show original vs. processed/annotated images
  • Parameter tuning: Compare results with different generation parameters
  • Style variations: Compare different style applications to the same subject

Example Workflow

[Image Input: Original Photo] → [Image Compare A]

[Generate Image: Enhanced]    → [Image Compare B]

This workflow lets you compare the original photo (A) with an AI-enhanced version (B) using the interactive slider.


Groups

Groups aren't nodes, but they're an important organizational feature.

Creating Groups

  1. Select multiple nodes
  2. Right-click → "Create Group"
  3. Name your group

Group Features

  • Color coding — Groups have colored backgrounds
  • Collective movement — Drag to move all contained nodes
  • Lock/unlock — Locked groups skip execution

Use Cases

  • Organize related nodes visually
  • Disable workflow sections without deleting
  • Create reusable workflow "modules"

Common Node Features

All nodes share these capabilities:

Title Editing

Click the title to rename any node. Custom names help organize complex workflows.

Comments

Add comments to nodes for documentation. Hover to see the full comment.

Comment Navigation

Use the comment navigation system to move between nodes with comments:

  • Header icon: Shows unviewed comment count badge in the header
  • Previous/Next controls: Navigate between comments using arrow buttons in comment tooltips
  • Comment preview: Hover over nodes to see comment previews in tooltips
  • View tracking: Unread comments are highlighted, and viewed comments are tracked during your session
  • Auto-centering: The viewport automatically centers on the target comment node when navigating
  • Comment order: Comments are sorted by position (top-to-bottom, left-to-right)

This feature helps you review feedback and annotations in complex workflows without manually searching for commented nodes.
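
The top-to-bottom, left-to-right ordering described above amounts to sorting by vertical position with horizontal position as a tiebreaker; a minimal sketch (the node shape is illustrative):

```typescript
interface CommentedNode { id: string; x: number; y: number }

// Sort commented nodes top-to-bottom, then left-to-right,
// matching the documented comment navigation order.
const commentOrder = (nodes: CommentedNode[]): CommentedNode[] =>
  [...nodes].sort((a, b) => a.y - b.y || a.x - b.x);
```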

Resizing

Drag the bottom-right corner to resize nodes. For Generate Image and Generate Video nodes, manually set heights are preserved when the content aspect ratio changes — only the width adjusts automatically.

Execution Controls

  • Play button — Run from this node
  • Regenerate — Re-run with current inputs

Error States

When a node encounters an error:

  • Red border appears
  • Error message displays
  • Check the browser console for details