Nodes Reference
This page documents all available node types in Node Banana. Each node serves a specific purpose in your workflow.
Overview
| Node | Purpose | Inputs | Outputs |
|---|---|---|---|
| Image Input | Load images | — | Image |
| Audio Input | Load audio files | — | Audio |
| Prompt | Text prompts | Text (optional) | Text |
| Prompt Constructor | Template-based prompts | Text (multiple) | Text |
| Generate Image | AI image generation | Image, Text | Image |
| Generate Video | AI video generation | Image, Text | Video |
| Generate 3D | AI 3D model generation | Image, Text | 3D Model |
| 3D Viewer | View and capture 3D models | 3D Model | Image |
| Video Stitch | Combine videos | Video (multiple), Audio | Video |
| Ease Curve | Apply speed curves | Video, Ease Curve | Video, Ease Curve |
| LLM Generate | AI text generation | Text, Image | Text |
| Annotation | Draw on images | Image | Image |
| Split Grid | Split into grid | Image | Reference |
| Output | Display results | Image, Video | — |
| Output Gallery | View image collections | Image (multiple) | — |
| Image Compare | Compare two images | Image (2) | — |
Image Input
The Image Input node loads images into your workflow from your local filesystem.
Outputs
- Image — The loaded image as base64 data
Features
- Drag and drop images directly onto the node
- Click to open file picker
- Supports PNG, JPG, and WebP formats
- Maximum file size: 10 MB
Usage
- Add an Image Input node to the canvas
- Click the node or drag an image file onto it
- The image appears in the node preview
- Connect the output to downstream nodes
Image Paste Support: Press Cmd+V (Mac) or Ctrl+V (Windows/Linux) to paste images directly from your clipboard. If an Image Input node is selected, it will update with the pasted image. Otherwise, a new Image Input node is created at the viewport center with the pasted image.
Audio Input
The Audio Input node loads audio files into your workflow for use in video generation.
Outputs
- Audio — The loaded audio file
Features
- Drag and drop audio files directly onto the node
- Click to open file picker
- Supports MP3, WAV, and other browser-supported audio formats
- Waveform visualization displays audio content at a glance
- Built-in audio playback with play/pause controls
- Duration and file size display
Usage
- Add an Audio Input node to the canvas
- Click the node or drag an audio file onto it
- The waveform visualization appears showing the audio content
- Use the playback controls to preview the audio
- Connect the audio output to a Video Stitch node to add audio to videos
Audio files are automatically trimmed or extended to match the final video duration when used with the Video Stitch node.
Prompt
The Prompt node provides text input for your workflow. Use it to write prompts for image or text generation, or to receive text from other nodes like LLM Generate.
Inputs
- Text (optional) — Incoming text from another node (e.g., LLM output)
Outputs
- Text — The prompt text string
Features
- Inline text editing
- Expand button for larger editor (modal)
- Full-screen editing mode for complex prompts with persistent font size preference
- Text input connection: Receive text from LLM Generate or other text-producing nodes
- When connected, the incoming text pre-fills the prompt but remains fully editable
- Placeholder text reads "Text from connected node (editable)..." when connected
- The node only updates when the upstream connection output actually changes (re-runs)
- Enables automated prompt workflows and LLM chaining with manual refinement
- Variable naming: Assign a variable name to the prompt for use in Prompt Constructor templates
- Click the variable name button in the header to set a name (e.g., `style`, `subject`)
- Variable names can contain letters, numbers, and underscores (max 30 characters)
- Named prompts can be referenced as `@variableName` in Prompt Constructor templates
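The naming rule can be sketched as a simple check. This is a hypothetical validator mirroring the documented constraints, not Node Banana's actual code:

```typescript
// Hypothetical validator for Prompt variable names:
// letters, numbers, underscores, max 30 characters.
function isValidVariableName(name: string): boolean {
  return /^[A-Za-z0-9_]{1,30}$/.test(name);
}
```

For example, `isValidVariableName("style_2")` passes, while `"has space"` or a 31-character name fails.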
Usage
Manual entry:
- Add a Prompt node
- Type your prompt in the text area
- Click the expand icon for a larger editor
- Connect to Generate Image or LLM Generate nodes
Receiving text from other nodes:
- Add a Prompt node
- Connect an LLM Generate (or other text output) to the Prompt's text input handle
- The Prompt automatically receives and displays the connected text
- Use the Prompt output to feed into Generate Image nodes or other text consumers
LLM-to-Prompt Chaining: Connect an LLM Generate output to a Prompt input to create automated prompt enhancement workflows. For example: [Initial Prompt] → [LLM: "enhance this prompt"] → [Enhanced Prompt] → [Generate Image]
Writing Effective Prompts
For image generation:
- Be specific about subject, style, and composition
- Include lighting and mood descriptions
- Mention camera angle or perspective
```
A professional headshot of a business executive,
studio lighting, neutral gray background,
sharp focus, high resolution
```
Prompt Constructor
The Prompt Constructor node builds prompts from templates with variable placeholders. Use it to create reusable prompt templates that can be populated with values from multiple Prompt nodes.
Inputs
- Text (multiple) — Prompt nodes with variable names that supply values for `@variable` placeholders
Outputs
- Text — The resolved prompt with all variables substituted
Features
- Template-based prompts: Define reusable templates with `@variable` placeholders
- @variable autocomplete: Start typing `@` to see available variables from connected Prompt nodes
  - Keyboard navigation (arrow keys, Enter, Escape)
  - Shows variable name and current value in autocomplete dropdown
- Expand modal: Click the expand button to open a full-screen editor
  - Adjustable font size
  - Clickable variable pills in toolbar for quick insertion
  - Live "Resolved Preview" panel showing final prompt with substituted variables
  - Unsaved changes confirmation dialog when closing with edits
- Unresolved variable warnings: Amber badge displays any `@variables` that don't match a connected Prompt node
- Real-time preview: Hover over the node to see the resolved prompt as a tooltip
Usage
Basic template workflow:
- Add Prompt nodes and assign them variable names (e.g., `style`, `subject`, `mood`)
- Add a Prompt Constructor node
- Connect the Prompt nodes to the Prompt Constructor's text input
- In the Prompt Constructor, type your template using `@variableName` syntax: `A @style photograph of @subject with @mood lighting`
- The template automatically resolves to the final prompt
- Connect the Prompt Constructor output to Generate Image or other nodes
Using the expand modal:
- Click the expand button on a Prompt Constructor node
- Type your template, using `@` to trigger autocomplete and insert variables
- Click variable pills in the toolbar to insert them at cursor position
- View the "Resolved Preview" panel to see the final prompt in real-time
- Click "Save Template" to apply changes (or "Cancel" to discard)
Reusable Templates: Create complex prompt templates with multiple variables for consistent generation. For example: A @style photo of @subject, @pose, @lighting, @background can be reused with different combinations of style, subject, pose, lighting, and background values.
If you see an amber "Unresolved" badge, it means one or more @variables in your template don't match any connected Prompt node's variable name. Check that your Prompt nodes have matching variable names assigned.
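The substitution behavior described above can be sketched as follows. `resolveTemplate` is a hypothetical helper that illustrates the documented rules (matched variables are substituted; unknown `@variables` are left in place and reported, which is what drives the amber badge), not the app's actual implementation:

```typescript
type Variables = Record<string, string>;

// Substitute @variable placeholders with values from connected Prompt nodes.
// Unknown variables are left as-is and collected for the "Unresolved" badge.
function resolveTemplate(
  template: string,
  vars: Variables
): { resolved: string; unresolved: string[] } {
  const unresolved: string[] = [];
  const resolved = template.replace(
    /@([A-Za-z0-9_]{1,30})/g, // variable names: letters, digits, underscores, max 30
    (match, name: string) => {
      if (Object.prototype.hasOwnProperty.call(vars, name)) return vars[name];
      unresolved.push(match);
      return match;
    }
  );
  return { resolved, unresolved };
}
```

For example, `resolveTemplate("A @style photo of @subject", { style: "watercolor", subject: "a fox" })` resolves to `"A watercolor photo of a fox"` with no unresolved variables.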
Example Workflows
Product photography variations:
Template: "Product photography of @product, @angle angle, @lighting lighting, @background background"
Connected prompts:
- product: "luxury watch"
- angle: "45-degree"
- lighting: "soft studio"
- background: "white seamless"
Resolved: "Product photography of luxury watch, 45-degree angle, soft studio lighting, white seamless background"
Character portraits:
Template: "@character_name, @art_style style, @expression expression, @setting"
Connected prompts:
- character_name: "elven warrior"
- art_style: "watercolor"
- expression: "determined"
- setting: "misty forest background"
Resolved: "elven warrior, watercolor style, determined expression, misty forest background"
Generate Image
The Generate Image node creates images using AI models from multiple providers including Gemini, Replicate, and fal.ai.
Inputs
- Image (optional, multiple) — Reference images for the generation (supports image-to-image)
- Text — The prompt describing what to generate
- Dynamic inputs — Additional inputs based on selected model's schema
Outputs
- Image — The generated image
Settings
| Setting | Description |
|---|---|
| Provider | Choose from Gemini, Replicate, or fal.ai |
| Model | Select from available models (use search dialog) |
| Custom Parameters | Model-specific parameters appear dynamically |
Provider Configuration
Configure API keys for each provider in Project Settings → Providers tab:
- Gemini — Google AI API key
- Replicate — Replicate API token
- fal.ai — fal.ai API key
Model Discovery
Click the model selector to open the Model Search dialog:
- Browse models from all configured providers
- Filter by provider using icon buttons
- View recently used models for quick access
- See capability badges (image/video) and model IDs
- External links to model documentation
Dynamic Parameters
Each model exposes its own parameters:
- Parameters update automatically when changing models
- Input handles appear/disappear based on schema
- Parameter validation prevents invalid configurations
- Custom UI for model-specific settings
Usage
- Add a Generate Image node
- Select a provider and model
- Connect a Prompt node to the text input
- Optionally connect Image Input nodes for image-to-image
- Configure model-specific parameters
- Run the workflow
Image-to-image generation works across all providers. Large images are automatically converted to temporary URLs for provider compatibility.
Image Carousel
After generating, use the carousel to:
- Browse previous generations (arrow buttons)
- See generation history for this node
- Select a previous result as the current output
Legacy Workflows
Workflows using the old NanoBananaNode automatically migrate to GenerateImageNode on load.
Generate Video
The Generate Video node creates videos using AI models from providers that support video generation.
Inputs
- Image (optional, multiple) — Reference images or starting frames
- Text — The prompt describing the video to generate
- Dynamic inputs — Additional inputs based on selected model's schema
Outputs
- Video — The generated video
Settings
| Setting | Description |
|---|---|
| Provider | Choose from providers with video capabilities |
| Model | Select from available video models |
| Custom Parameters | Model-specific parameters (duration, fps, etc.) |
Video Generation Features
- Extended timeout — 10-minute timeout for longer video processing
- Video playback — In-node video player with controls
- Format detection — Automatic handling of various video formats
- Generation queue — Manages video generation tasks
Usage
- Add a Generate Video node
- Select a provider and video-capable model
- Connect a Prompt node describing the video
- Optionally connect Image Input for reference frames
- Configure video parameters (duration, style, etc.)
- Run the workflow
Video generation typically takes longer than image generation and may have higher costs. Check provider pricing before running.
Video Carousel
After generating, use the carousel to:
- Browse previous video generations
- Play/pause videos directly in the node
- Navigate through video generation history
- Select a previous result as the current output
Output Display
Connect Generate Video to an Output node to:
- Display videos in a larger preview area
- Access download controls
- View video metadata (duration, resolution)
Generate 3D
The Generate 3D node creates 3D models using AI models from providers that support 3D generation. This node is separate from image generation and outputs GLB 3D model files.
Inputs
- Image (optional, multiple) — Reference images for image-to-3d generation
- Text — The prompt describing the 3D model to generate
- Dynamic inputs — Additional inputs based on selected model's schema
Outputs
- 3D Model — The generated 3D model in GLB format
Settings
| Setting | Description |
|---|---|
| Provider | Choose from providers with 3D capabilities (Replicate, fal.ai, WaveSpeed) |
| Model | Select from available 3D models (text-to-3d, image-to-3d) |
| Custom Parameters | Model-specific parameters |
3D Generation Features
- Dedicated 3D pipeline — Separate from image generation with its own executor and validation
- Orange handles — 3D connections use distinct orange handles to differentiate from image (green) and text (blue)
- Connection validation — 3D outputs can only connect to 3D inputs (e.g., 3D Viewer node)
- Model search badges — 3D-capable models display 3D capability badges in the Model Search dialog
- Multi-provider support — Works with Replicate, fal.ai, and WaveSpeed providers
How to Create Generate 3D Nodes
There are two ways to create a Generate 3D node:
- From the Generate dropdown:
- Click the Generate dropdown in the floating action bar
- Select 3D from the menu
- A Generate 3D node appears on the canvas
- From Model Search:
- Open the Model Search dialog
- Browse models with 3D capabilities (look for 3D badges)
- Select a 3D-capable model
- A Generate 3D node is automatically created with that model
Usage
- Add a Generate 3D node using one of the methods above
- Select a provider and 3D-capable model
- Connect a Prompt node describing the 3D object
- Optionally connect Image Input for image-to-3d generation
- Configure model-specific parameters
- Run the workflow
- Connect the output to a 3D Viewer node to visualize the result
3D model generation uses orange connection handles. You'll need to connect the output to a 3D Viewer node to see and interact with the generated model.
Supported Providers
- Replicate — Various 3D generation models
- fal.ai — 3D-capable models
- WaveSpeed — 3D generation support
Configure API keys for these providers in Project Settings → Providers tab.
3D Viewer
The 3D Viewer node displays and interacts with 3D models in GLB format. It renders an interactive 3D viewport where you can rotate, zoom, and capture snapshots of the model.
Inputs
- 3D Model — A GLB file from a Generate 3D node or file upload
Outputs
- Image — Captured snapshot of the 3D viewport as PNG
Features
- Interactive viewport — Orbit controls let you rotate, zoom, and pan the 3D model
- Drag-and-drop — Drop GLB files directly onto the node to load them
- Auto-normalization — Models are automatically centered and scaled to fit the viewport
- Lighting — Ambient and spot lighting for proper model visualization
- Capture button — Snapshots the current viewport view as a PNG image
- Lazy loading — Three.js library loads only when 3D nodes are used (no bundle cost for users who don't use 3D)
- Resource cleanup — Proper disposal of 3D resources and blob URLs
How to Create 3D Viewer Nodes
There are two ways to create a 3D Viewer node:
- Auto-create from connection:
- Drag a connection from a Generate 3D node's output
- The connection drop menu appears
- Select 3D Viewer and it's automatically created and connected
- Manual creation:
- Add from the node menu or floating action bar
- Drop a `.glb` file onto the node to load it
- Or connect to a Generate 3D node output
Usage
- Connect a Generate 3D node's output to a 3D Viewer input (or drop a GLB file)
- The 3D model renders in the interactive viewport
- Use mouse to orbit, zoom, and pan:
- Left drag — Rotate the model
- Right drag — Pan the camera
- Scroll — Zoom in/out
- Click the Capture button to snapshot the current view
- Connect the image output to downstream nodes (e.g., Output node, Generate Image)
The Capture feature lets you use 3D models as reference images for further image generation. Generate a 3D model, rotate it to the desired angle, capture it, and feed the snapshot into a Generate Image node.
Technical Details
- Format — GLB (binary GLTF) only
- Renderer — Three.js WebGL renderer
- Controls — OrbitControls for camera manipulation
- Lighting — Ambient light + directional spot light
- Normalization — Automatic bounding box calculation and scaling
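The centering-and-scaling step can be sketched as pure math. This is an assumed approach (the viewer's actual Three.js code may differ): given a model's bounding box, compute a uniform scale so the largest dimension fits a target size, plus the offset that moves the model to the origin.

```typescript
interface Vec3 { x: number; y: number; z: number; }

// Hypothetical normalization: uniform scale to fit `targetSize`,
// and the bounding-box center to subtract before scaling.
function normalizeTransform(min: Vec3, max: Vec3, targetSize = 2) {
  const size = { x: max.x - min.x, y: max.y - min.y, z: max.z - min.z };
  const maxDim = Math.max(size.x, size.y, size.z);
  const scale = maxDim > 0 ? targetSize / maxDim : 1;
  const center = {
    x: (min.x + max.x) / 2,
    y: (min.y + max.y) / 2,
    z: (min.z + max.z) / 2,
  };
  // Apply as: translate by -center, then scale uniformly by `scale`.
  return { scale, center };
}
```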
Video Stitch
The Video Stitch node combines multiple video clips into a single continuous video. It's useful for creating sequences, montages, or stitching together generated video clips.
Inputs
- Video (multiple, minimum 2) — Video clips to concatenate
- Audio (optional) — Audio track to add to the final video
Outputs
- Video — The stitched output video
Settings
| Setting | Description |
|---|---|
| Loop | Repeat the entire clip sequence 1x, 2x, or 3x |
Features
- Filmstrip UI — Thumbnail previews for each connected video clip
- Drag-and-drop reordering — Rearrange clips by dragging thumbnails within the filmstrip
- Dynamic handle creation — New video input handles appear automatically as you connect clips
- Batch multi-connect — Connect multiple video sources at once
- Audio mixing — Optional audio input is automatically trimmed to match final video duration
- Duration tracking — Shows duration for each clip and total duration
- Hardware encoding support — Uses hardware acceleration when available
- Rotation handling — Respects video rotation metadata (0°, 90°, 180°, 270°)
- Progress indicator — Shows stitching progress percentage during processing
- Output preview — Play the stitched video directly in the node
Loop Feature
The Loop selector allows you to duplicate the entire clip sequence:
- 1x (default) — Single playthrough of all clips
- 2x — Entire sequence plays twice back-to-back
- 3x — Entire sequence plays three times
For example, if you have 3 video clips of 2 seconds each:
- 1x loop = 6 seconds total
- 2x loop = 12 seconds total (clips play: 1, 2, 3, 1, 2, 3)
- 3x loop = 18 seconds total (clips play: 1, 2, 3, 1, 2, 3, 1, 2, 3)
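The arithmetic above is simply the concatenated sequence length times the loop count. A minimal sketch, with a hypothetical helper and clip durations in seconds:

```typescript
// Hypothetical helper: total output duration for the Loop setting.
function stitchedDuration(clipDurations: number[], loop: 1 | 2 | 3): number {
  const singlePass = clipDurations.reduce((sum, d) => sum + d, 0);
  return singlePass * loop; // the whole sequence repeats back-to-back
}
```

For the example above, `stitchedDuration([2, 2, 2], 2)` returns `12`.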
Usage
- Add a Video Stitch node to the canvas
- Connect at least 2 video sources (e.g., from Generate Video nodes)
- (Optional) Connect an Audio Input node to add background music
- Drag thumbnails in the filmstrip to reorder clips
- (Optional) Set the Loop control to 2x or 3x to repeat the sequence
- Click Stitch or run the workflow to combine the videos
- The output video appears in the preview and flows to connected nodes
All video clips must have consistent dimensions. If clips have different resolutions, the Video Stitch node will display an error.
Use the loop feature to create seamless repeating animations or extend short video sequences without manually duplicating clips. Audio from the Audio Input is automatically adjusted to match the final video length.
Technical Details
- Format: Outputs MP4 with H.264 encoding
- Frame rate: Up to 60fps (matches source clips)
- Bitrate: Automatically selected based on source quality
- Audio format: AAC audio codec
Ease Curve
The Ease Curve node applies speed ramping and easing effects to videos, creating smooth acceleration and deceleration. Use it to add cinematic slow-motion, time-lapse effects, or custom speed variations.
Inputs
- Video — The video to apply the ease curve to
- Ease Curve (optional) — Inherit easing configuration from another Ease Curve node
Outputs
- Video — The speed-adjusted video
- Ease Curve — The easing configuration for passing to other Ease Curve nodes
Features
- Interactive bezier curve editor — Drag control points to create custom easing curves
- 30+ preset easing functions — Including sine, quad, cubic, expo, and asymmetric variations
- Real-time curve visualization — See the speed multiplier over time
- Easing inheritance — Chain multiple Ease Curve nodes to build complex effects
- Duration control — Set output video duration (default: 1.5s)
- Input/output duration display — Track how the easing affects video length
- Hardware encoding support — Uses hardware acceleration when available
- Video preview — Preview the eased video directly in the node
Settings
| Setting | Description |
|---|---|
| Preset | Select from 30+ built-in easing functions |
| Custom Curve | Define cubic bezier control points (x1, y1, x2, y2) |
| Output Duration | Target duration for the output video (default: 1.5s) |
Available Easing Presets
Basic:
- `linear` — Constant speed (no easing)
- `easeIn`, `easeOut`, `easeInOut` — Standard easing curves
Sine: easeInSine, easeOutSine, easeInOutSine
Quadratic: easeInQuad, easeOutQuad, easeInOutQuad
Cubic: easeInCubic, easeOutCubic, easeInOutCubic
Quartic: easeInQuart, easeOutQuart, easeInOutQuart
Quintic: easeInQuint, easeOutQuint, easeInOutQuint
Exponential: easeInExpo, easeOutExpo, easeInOutExpo
Asymmetric: easeInExpoOutCubic, easeInQuartOutQuad, and more
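Several of these presets follow standard easing formulas. The sketch below shows textbook definitions for a few of them (these may not match the app's exact evaluation):

```typescript
// Textbook easing functions mapping progress t in [0, 1] to eased progress.
const easings: Record<string, (t: number) => number> = {
  linear: (t) => t,
  easeInQuad: (t) => t * t,                               // slow start
  easeOutQuad: (t) => t * (2 - t),                        // slow end
  easeInOutSine: (t) => -(Math.cos(Math.PI * t) - 1) / 2, // smooth both ends
};
```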
Usage
Basic usage:
- Add an Ease Curve node to the canvas
- Connect a video source to the video input
- Select a preset easing function or create a custom curve
- Adjust the output duration if needed
- Run the workflow to generate the eased video
Chaining ease curves:
- Add multiple Ease Curve nodes
- Connect the easeCurve output of one node to the easeCurve input of the next
- The second node inherits the curve from the first and can further modify it
- Build complex speed variations by stacking multiple effects
Use easeInExpo for dramatic slow-motion starts, easeOutExpo for sudden stops, or easeInOutSine for smooth, natural-looking speed variations.
How It Works
The Ease Curve node resamples video frames according to the easing curve:
- Steep curve sections = faster playback (time-lapse effect)
- Flat curve sections = slower playback (slow-motion effect)
- The curve's Y-axis represents time in the source video
- The X-axis represents time in the output video
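The resampling described above can be sketched as follows, assuming the curve is a cubic bezier from (0, 0) to (1, 1) mapping output time (x) to source time (y). `easeSample` and `resampleFrames` are hypothetical names, not the app's API:

```typescript
// Evaluate one axis of a cubic bezier with endpoints 0 and 1
// and inner control values p1, p2, at parameter u in [0, 1].
function cubicBezier(u: number, p1: number, p2: number): number {
  const v = 1 - u;
  return 3 * v * v * u * p1 + 3 * v * u * u * p2 + u * u * u;
}

// Map output progress t to source progress: solve x(u) = t by
// bisection (x is monotonic for valid easing curves), then return y(u).
function easeSample(t: number, x1: number, y1: number, x2: number, y2: number): number {
  let lo = 0, hi = 1;
  for (let i = 0; i < 40; i++) {
    const mid = (lo + hi) / 2;
    if (cubicBezier(mid, x1, x2) < t) lo = mid; else hi = mid;
  }
  return cubicBezier((lo + hi) / 2, y1, y2);
}

// For each output frame, pick the nearest source frame along the curve.
function resampleFrames(
  sourceFrameCount: number, outputFrameCount: number,
  x1: number, y1: number, x2: number, y2: number
): number[] {
  const frames: number[] = [];
  for (let i = 0; i < outputFrameCount; i++) {
    const t = outputFrameCount === 1 ? 0 : i / (outputFrameCount - 1);
    const s = easeSample(t, x1, y1, x2, y2);
    frames.push(Math.min(sourceFrameCount - 1, Math.round(s * (sourceFrameCount - 1))));
  }
  return frames;
}
```

With control points on the diagonal (1/3, 1/3) and (2/3, 2/3) the curve is the identity, so every output frame maps to the same-index source frame; an ease-in curve instead clusters early output frames on early source frames (slow motion at the start).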
Technical Details
- Frame resampling: Creates new frames by selecting from source based on curve
- Smooth interpolation: Uses cubic bezier curves for natural motion
- Format: Outputs MP4 with H.264 encoding
- Frame rate: Matches source frame rate (up to 60fps)
LLM Generate
The LLM Generate node creates text using large language models. Use it for prompt enhancement, descriptions, or any text generation task.
Inputs
- Text — Input prompt or context
- Image (optional, multiple) — Images for multimodal generation
Outputs
- Text — The generated text
Settings
| Setting | Description |
|---|---|
| Model | Select from Gemini or OpenAI models |
| Temperature | Controls randomness (0-2) — adjustable in collapsible Parameters section |
| Max Tokens | Maximum output length (256-16384) — adjustable in collapsible Parameters section |
Parameters
The Parameters section is collapsible and contains:
- Temperature slider (0-2) — Controls output randomness
- Max Tokens slider (256-16384) — Controls maximum output length
Features
- Copy to clipboard — Click the copy button on generated text output. A green checkmark confirms the copy.
Available Models
Google Gemini:
- gemini-2.5-flash (fast, capable)
- gemini-3-flash-preview (latest flash)
- gemini-3-pro-preview (most capable)
OpenAI:
- gpt-4.1-mini (balanced)
- gpt-4.1-nano (fast)
OpenAI models require a separate OPENAI_API_KEY in your environment.
Usage
- Add an LLM Generate node
- Connect a Prompt node with your instructions
- Optionally connect images for multimodal input
- Configure model and parameters
- Run to generate text
Example: Prompt Enhancement
Connect nodes like this:
```
[Prompt: "enhance this prompt for image generation: cat on roof"]
→ [LLM Generate]
→ [Generate Image]
```
The LLM can expand simple prompts into detailed generation instructions.
Annotation
The Annotation node opens a full-screen drawing editor where you can draw on images.
Inputs
- Image — The image to annotate
Outputs
- Image — The annotated image
Drawing Tools
| Tool | Description |
|---|---|
| Rectangle | Draw rectangular shapes |
| Circle | Draw circular shapes |
| Arrow | Draw arrows for highlighting |
| Freehand | Free drawing with mouse/pen |
| Text | Add text labels |
Features
- 8 color presets
- 3 stroke width options
- Undo/redo support
- Shape selection and transformation
- Save or cancel changes
Usage
- Connect an image source to the Annotation input
- Click the Edit button on the node
- Use drawing tools to annotate
- Click Save to apply changes
- The annotated image flows to connected nodes
Use annotations to mask areas, add reference marks, or highlight regions for AI generation. The AI will see and respond to your annotations.
Split Grid
The Split Grid node divides an image into a grid of smaller images. This is useful for contact sheets or batch processing.
Inputs
- Image — The image to split
Outputs
- Reference (multiple) — Visual references to grid cells
Grid Options
The Split Grid settings modal offers 7 distinct grid layouts with visual previews and RxC (rows × columns) labels:
| Layout | Grid Size | Cells |
|---|---|---|
| 2×2 | 2 rows, 2 columns | 4 cells |
| 1×5 | 1 row, 5 columns | 5 cells |
| 2×3 | 2 rows, 3 columns | 6 cells |
| 3×2 | 3 rows, 2 columns | 6 cells (portrait) |
| 2×4 | 2 rows, 4 columns | 8 cells |
| 3×3 | 3 rows, 3 columns | 9 cells |
| 2×5 | 2 rows, 5 columns | 10 cells |
The 3×2 layout is useful for portrait-oriented grids. Note that both 2×3 and 3×2 produce 6 images, but with different aspect ratios.
Usage
- Connect an image (like a contact sheet) to Split Grid
- Select your grid configuration
- The node generates output references for each cell
- Connect references to organize downstream processing
How It Works
Split Grid is primarily for visual organization. It:
- Divides the source image into equal cells
- Creates reference outputs for each cell
- Helps you visually track which part of an image flows where
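Dividing an image into equal cells is straightforward geometry. A sketch of the computation, assuming reading order (top-to-bottom, left-to-right) and integer pixel sizes; this is illustrative, not the app's actual code:

```typescript
interface Cell { x: number; y: number; width: number; height: number; }

// Split an image into rows x cols equal cells. Any remainder pixels at
// the right/bottom edges are dropped in this sketch.
function splitGrid(imageWidth: number, imageHeight: number, rows: number, cols: number): Cell[] {
  const cellW = Math.floor(imageWidth / cols);
  const cellH = Math.floor(imageHeight / rows);
  const cells: Cell[] = [];
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      cells.push({ x: c * cellW, y: r * cellH, width: cellW, height: cellH });
    }
  }
  return cells;
}
```

For a 900×600 contact sheet with the 2×3 layout, this yields six 300×300 cells.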
Output
The Output node displays the final result of your workflow. Use it as the endpoint for generated images and videos.
Inputs
- Image — Images to display
- Video — Videos to display (connects directly to video outputs from Generate Video, Video Stitch, or Ease Curve nodes)
Settings
| Setting | Description |
|---|---|
| outputFilename | Custom filename for saved outputs (without extension). If empty, uses timestamp-based naming. |
Features
- Large preview area
- Click to open lightbox viewer
- Download button for saving results
- Shows image dimensions or video metadata
- Video playback controls with format detection
- Carousel for browsing media history
- Auto-execute on connect: Automatically runs and displays results when you connect an edge to the Output node — no need to run the full workflow first
- Run button: A play icon in the node header lets you manually re-fetch and refresh the output at any time
- Auto-save to outputs folder: When your workflow has a project path configured, Output nodes automatically save results to an `/outputs` directory
- Custom filenames: Set a custom `outputFilename` parameter to control the output filename (special characters are sanitized)
- Auto-directory creation: The `/outputs` directory is automatically created if it doesn't exist
File Naming
When saving outputs:
- With custom filename: `{customFilename}_{hash}.{extension}`
- Without custom filename: `generated-{timestamp}.{extension}`
- Special characters in custom filenames are replaced with underscores
- Multiple consecutive underscores are collapsed to a single underscore
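The sanitization rules above can be sketched in a couple of lines. The allowed character set here (letters, digits, underscore, hyphen) is an assumption; the actual implementation may permit a different set:

```typescript
// Hypothetical sketch of the documented filename sanitization:
// replace special characters with underscores, then collapse runs.
function sanitizeFilename(name: string): string {
  return name
    .replace(/[^A-Za-z0-9_-]/g, "_") // special characters become underscores
    .replace(/_+/g, "_");            // collapse consecutive underscores
}
```

For example, `sanitizeFilename("my file!!name")` yields `"my_file_name"`.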
Usage
- Add an Output node at the end of your workflow
- Connect the final image or video source
- (Optional) Set a custom `outputFilename` in the node settings
- Run the workflow
- View and download results from the Output node
- If your workflow has a project path, outputs are automatically saved to the `/outputs` folder
While you can view images and videos in any node, Output nodes provide a cleaner display area and make it clear where your workflow ends. When a project path is configured, they also handle automatic saving to the /outputs directory.
Output Gallery
The Output Gallery node collects and displays multiple images in a scrollable thumbnail grid with a full-size lightbox viewer. Use it to inspect and compare multiple generations or image collections.
Inputs
- Image (multiple) — Connect multiple image sources to view them in a grid
Features
- Thumbnail grid: 3-column grid layout with scrollable viewing
- Lightbox viewer: Click any thumbnail to open full-size viewer
- Close button (X) in top-right corner
- Download button in top-left corner
- Previous/Next arrow navigation
- Keyboard navigation (Left/Right arrows, Escape to close)
- Real-time display: Shows images from connected nodes immediately, not just after execution
- Automatic collection: Gathers images from all connected image-producing nodes (Image Input, Generate Image, Annotation, etc.)
Usage
- Add an Output Gallery node
- Connect multiple image sources to its input handle
- You can connect multiple Generate Image nodes
- Or connect nodes that output multiple images
- The gallery automatically displays all connected images as thumbnails
- Click any thumbnail to view full-size
- Use keyboard shortcuts or navigation buttons to browse
- Click the download button in lightbox to save individual images
Generation Comparison: Connect multiple Generate Image nodes with different parameters to the Output Gallery to compare results side-by-side. Perfect for evaluating different models, prompts, or settings.
Lightbox Controls
| Control | Action |
|---|---|
| Click thumbnail | Open lightbox at that image |
| Left arrow / ← | Previous image |
| Right arrow / → | Next image |
| Escape / X button | Close lightbox |
| Download button | Save current image |
Image Compare
The Image Compare node provides a side-by-side comparison view with a draggable slider for comparing two images. Useful for before/after comparisons or evaluating generation variations.
Inputs
- Image A — First image to compare (top handle labeled "A")
- Image B — Second image to compare (bottom handle labeled "B")
Features
- Draggable slider: Interactive slider to reveal/hide portions of each image
- Real-time comparison: Works with live connections, updates as source nodes change
- Labeled inputs: Handles are labeled "A" and "B" for clarity
- Corner labels: Images are labeled in the comparison view
- Automatic ordering: First connected image becomes A, second becomes B
Usage
- Add an Image Compare node
- Connect two image sources:
- Connect the first image to the top handle (A)
- Connect the second image to the bottom handle (B)
- Drag the slider left or right to compare the images
- The node displays A on the left side and B on the right side of the slider
Before/After Workflows: Create powerful before/after demonstrations by connecting an original Image Input to handle A and a processed/generated result to handle B. Perfect for showcasing edits, style transfers, or AI enhancements.
Use Cases
- Generation comparison: Compare two different AI generations of the same prompt
- Model comparison: Test the same prompt with different models
- Before/after: Show original vs. processed/annotated images
- Parameter tuning: Compare results with different generation parameters
- Style variations: Compare different style applications to the same subject
Example Workflow
```
[Image Input: Original Photo] → [Image Compare A]
[Generate Image: Enhanced]    → [Image Compare B]
```
This workflow lets you compare the original photo (A) with an AI-enhanced version (B) using the interactive slider.
Groups
Groups aren't nodes, but they're an important organizational feature.
Creating Groups
- Select multiple nodes
- Right-click → "Create Group"
- Name your group
Group Features
- Color coding — Groups have colored backgrounds
- Collective movement — Drag to move all contained nodes
- Lock/unlock — Locked groups skip execution
Use Cases
- Organize related nodes visually
- Disable workflow sections without deleting
- Create reusable workflow "modules"
Common Node Features
All nodes share these capabilities:
Title Editing
Click the title to rename any node. Custom names help organize complex workflows.
Comments
Add comments to nodes for documentation. Hover to see the full comment.
Comment Navigation
Use the comment navigation system to move between nodes with comments:
- Header icon: Shows unviewed comment count badge in the header
- Previous/Next controls: Navigate between comments using arrow buttons in comment tooltips
- Comment preview: Hover over nodes to see comment previews in tooltips
- View tracking: Unread comments are highlighted, and viewed comments are tracked during your session
- Auto-centering: The viewport automatically centers on the target comment node when navigating
- Comment order: Comments are sorted by position (top-to-bottom, left-to-right)
This feature helps you review feedback and annotations in complex workflows without manually searching for commented nodes.
Resizing
Drag the bottom-right corner to resize nodes. For Generate Image and Generate Video nodes, manually set heights are preserved when the content aspect ratio changes — only the width adjusts automatically.
Execution Controls
- Play button — Run from this node
- Regenerate — Re-run with current inputs
Error States
When a node encounters an error:
- Red border appears
- Error message displays
- Check the browser console for details