Split In Batches Node

The Split In Batches Node divides arrays into fixed-size batches with optional range extraction and output limiting for controlled data processing. It supports flexible indexing with negative indices and dynamic batch sizing through variable interpolation. The consistent array-of-arrays output format simplifies downstream processing.

How It Works

The node divides arrays through a three-stage process: range extraction, batch creation, and optional output limiting. First, if a Start Index or End Index is specified, the node extracts that range using Python-style slicing, with support for negative indices (-1 for the last item, -2 for the second-to-last). Second, the extracted range is divided into batches of the configured size, with the last batch containing fewer items if the array length isn't evenly divisible. Third, if a Batch Limit is configured, only the first N batches are returned.
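
The semantics can be approximated in a few lines of plain Python. The split_in_batches helper below is a sketch of the three stages for illustration, not the node's actual implementation:

```python
def split_in_batches(items, batch_size, start=0, end=None, batch_limit=None):
    """Hypothetical sketch of the node's three-stage behavior,
    not the platform's actual implementation."""
    # Stage 1: range extraction via Python-style slicing
    # (negative indices count back from the end of the array).
    sliced = items[start:end] if end is not None else items[start:]
    # Stage 2: divide the range into fixed-size batches; the final
    # batch may be smaller if the length isn't evenly divisible.
    batches = [sliced[i:i + batch_size]
               for i in range(0, len(sliced), batch_size)]
    # Stage 3: optionally keep only the first N batches.
    return batches[:batch_limit] if batch_limit is not None else batches
```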

The node always returns an array of arrays: empty input produces [], a single batch produces [[1,2,3,4,5]], and multiple batches produce [[1,2,3], [4,5,6]]. This consistent structure simplifies downstream processing, since you always work with the same data format.
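
Using the sketch above, each output shape is easy to verify:

```python
print(split_in_batches([], 3))                  # []
print(split_in_batches([1, 2, 3, 4, 5], 5))     # [[1, 2, 3, 4, 5]]
print(split_in_batches([1, 2, 3, 4, 5, 6], 3))  # [[1, 2, 3], [4, 5, 6]]
```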

All numeric parameters support variable interpolation, enabling dynamic batch sizing and range extraction based on workflow state.

Configuration Parameters

Input field

Input Field (Text, Required): Workflow variable containing the array to split.

The input must be a valid array type. Other data types (strings, numbers, objects) cause validation errors. The array can contain any element type (numbers, strings, objects, nested arrays), and element types are preserved in output batches.

Output field

Output Field (Text, Required): Workflow variable where batched data is stored.

The output is an array of arrays containing the batched data. Zero batches return [], single batch returns [[...]], multiple batches return [[...], [...], ...].

Common naming patterns: batched_items, document_batches, chunked_data.

Batch size

Batch Size (Number, Required): Number of items per batch (2-10,000).

The last batch may contain fewer items if the array isn't evenly divisible. Example: 25 items with Batch Size 10 produces three batches of 10, 10, and 5 items. Choose sizes based on downstream needs: 10-50 for API rate limits, 50-200 for parallel processing, 200-1,000 for bulk operations. Variable interpolation is supported.
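
The 25-item example can be checked with plain Python slicing (illustrative only):

```python
items = list(range(25))
batches = [items[i:i + 10] for i in range(0, len(items), 10)]
print([len(b) for b in batches])  # [10, 10, 5]
```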

Start index

Start Index (Number, Default: 0): Starting position using 0-based indexing.

0 = first item, 1 = second item. Supports negative indices: -1 = last item, -2 = second-to-last. Use for skipping initial items, processing subsets, or pagination. Variable interpolation is supported. Range: -10,000 to 10,000.
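
Since the indexing follows standard Python slice semantics, the behavior can be previewed directly (illustrative values):

```python
items = ["a", "b", "c", "d", "e"]
print(items[1:])   # Start Index 1 skips the first item: ['b', 'c', 'd', 'e']
print(items[-2:])  # Start Index -2 keeps the last two:  ['d', 'e']
```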

Limit mode

Limit Mode (Dropdown, Default: No Limit): How to control output size.

  • No Limit (Process All): Processes the entire array from Start Index, creating all batches needed. Use when processing complete datasets with no size constraints.
  • Limit by End Index: Extracts the range from Start Index to End Index, then creates batches. Use when you need a precise array slice or are working with known data ranges.
  • Limit by Batch Count: Creates batches from Start Index but returns only the first N batches. Use when controlling downstream load or implementing pagination.
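
In terms of the split_in_batches sketch from How It Works (a hypothetical helper, values for illustration), the three modes map onto its parameters roughly like this:

```python
items = list(range(1, 13))  # 12 items

# No Limit: slice from Start Index, create every batch.
split_in_batches(items, batch_size=5, start=2)
# -> [[3, 4, 5, 6, 7], [8, 9, 10, 11, 12]]

# Limit by End Index: slice [start:end), then batch.
split_in_batches(items, batch_size=5, start=0, end=10)
# -> [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]

# Limit by Batch Count: batch everything, keep the first N.
split_in_batches(items, batch_size=3, batch_limit=2)
# -> [[1, 2, 3], [4, 5, 6]]
```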

End index

End Index (Number, Conditional): Ending position using 0-based indexing (exclusive).

Required when Limit Mode is "Limit by End Index". The item at this index is not included. Supports negative indices. Example: End Index 10 extracts items 0-9. Variable interpolation is supported. Range: -10,000 to 10,000.
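
Because the bound is exclusive, the extraction behaves like a Python slice (illustrative values):

```python
items = list(range(12))
print(items[0:10])  # End Index 10 is exclusive: items 0 through 9
print(items[0:-1])  # End Index -1 stops before the last item
```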

Batch limit

Batch Limit (Number, Conditional): Maximum batches to return (1-10,000).

Required when Limit Mode is "Limit by Batch Count". Example: if input produces 20 batches but Batch Limit is 5, only the first 5 are returned. Use for controlling downstream load, implementing pagination, or testing with data subsets. Variable interpolation is supported.
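
A quick sanity check of that example in plain Python (illustrative values):

```python
items = list(range(100))
batches = [items[i:i + 5] for i in range(0, len(items), 5)]  # 20 batches of 5
limited = batches[:5]  # Batch Limit 5 keeps only the first five batches
print(len(batches), len(limited))  # 20 5
```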

Common parameters

This node supports common parameters shared across workflow nodes, including Stream Output Response and Streaming Messages. For detailed information, see Common Parameters.

Best practices

  • Choose batch sizes based on downstream needs: 10-50 items for API rate limits, 50-200 for parallel processing, 200-1000 for bulk operations
  • For large arrays, use Start Index and End Index to process segments incrementally rather than loading everything at once
  • Use Batch Limit to validate logic with small subsets before processing complete datasets
  • Leverage negative indices when working with array endings without knowing exact lengths
  • For pagination across executions, combine Start Index with Batch Limit to process controlled chunks (see the sketch after this list)
  • Store batch metadata (total count, current index, items per batch) in workflow variables to track progress
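
A minimal sketch of that pagination pattern. The loop and variable names are illustrative; in practice each pass corresponds to a separate workflow execution, with start_index persisted in a workflow variable between runs:

```python
items = list(range(57))  # stand-in for the stored array
batch_size = 10
batch_limit = 2          # batches handled per execution
step = batch_size * batch_limit

start_index = 0          # would be read from a workflow variable
while start_index < len(items):
    window = items[start_index:start_index + step]
    batches = [window[i:i + batch_size]
               for i in range(0, len(window), batch_size)]
    # ... downstream nodes process `batches` here ...
    start_index += step  # would be written back for the next run
```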

Limitations

  • Array type requirement: Input must be an array/list type. Other data types cause validation errors.
  • Index validation: Start Index must be less than End Index when both are positive values. Negative indices are validated against array length at runtime.
  • Batch size minimum: Batch Size must be at least 2. Single-item batches are not supported.
  • Empty results: If range extraction produces an empty array (e.g., Start Index beyond array length), the node returns an empty array without error.
  • Last batch size: The final batch may contain fewer items than Batch Size if the array length isn't evenly divisible.
  • Output format: Always returns array of arrays for consistency, even for single batches.