Data Operations Node
The Data Operations Node performs multiple data manipulation operations sequentially on workflow variables to transform, filter, and clean data. It supports operations including Set, Filter, Rename Keys, Remove, Clear, and Drop Duplicates. The sequential execution model allows chaining transformations within a single node, reducing workflow complexity.
How It Works
When the node executes, it processes each configured operation in sequence, modifying workflow variables in place. Operations support different data types and provide type-appropriate comparison and manipulation capabilities. The node handles both simple values and complex nested structures, automatically navigating data hierarchies.
The sequential execution model means later operations see results of earlier operations, enabling complex transformations. Each operation validates its inputs and provides clear error messages if data types don't match expectations.
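The execution model described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the node's actual implementation: each operation is modeled as a function that mutates a shared variables dict in place, so later operations see earlier results.

```python
# Hypothetical sketch of the sequential execution model: each operation
# mutates the shared workflow-variables dict in place, in configured order.
def run_operations(variables, operations):
    """Apply each configured operation to the workflow variables in sequence."""
    for op in operations:
        op(variables)  # in-place modification; later ops see earlier results
    return variables

vars_ = {"items": [1, 1, 2, 3]}
pipeline = [
    lambda v: v.update(items=sorted(set(v["items"]))),  # drop duplicates first
    lambda v: v.update(count=len(v["items"])),          # then set a count field
]
run_operations(vars_, pipeline)
# vars_ is now {"items": [1, 2, 3], "count": 3}
```

Because the count is computed after deduplication, it reflects the filtered list, which is exactly why operation order matters.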
Configuration Parameters
Operations
Operations (Array, Required): List of operations to execute sequentially.
Operations execute in configured order. Each operation modifies workflow variables directly. Available operation types: Set, Filter, Rename Keys, Remove, Clear, Drop Duplicates.
Set
Creates or updates fields in workflow variables.
- Key: Field path to set
- Value: Data to store (supports JSON and variable interpolation with ${variable_name})
Use for setting default values, copying data between fields, or creating new fields.
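A minimal sketch of Set-style field creation, assuming a dot-separated path syntax (the node's real path grammar may differ); intermediate dicts are created as needed:

```python
def set_field(variables, key, value):
    """Create or update a field given a hypothetical dot-separated path."""
    parts = key.split(".")
    target = variables
    for part in parts[:-1]:
        target = target.setdefault(part, {})  # create missing levels on the way down
    target[parts[-1]] = value

data = {"user": {"name": "Ada"}}
set_field(data, "user.role", "admin")
# data == {"user": {"name": "Ada", "role": "admin"}}
```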
Filter
Keeps only list items matching specified conditions.
- Input Field: List to filter
- Conditions: Criteria for filtering, each with:
- Data Field: Field to evaluate (leave empty for primitive arrays)
- Data Type: String, Number, Boolean, Date, Array, or Object
- Operator: Comparison type (varies by data type)
- Value: Value to compare against
- Logical Operator: AND (all must match) or OR (any must match)
Available operators include standard comparisons (Equals, Greater Than), text matching (Contains, Starts With, Regex), and empty checks.
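The condition/logical-operator combination can be sketched as follows. The operator names and condition-dict shape here are illustrative assumptions, not the node's exact schema:

```python
# Illustrative subset of comparison operators (names are assumptions).
OPERATORS = {
    "equals": lambda a, b: a == b,
    "greater_than": lambda a, b: a > b,
    "contains": lambda a, b: b in a,
}

def filter_items(items, conditions, logical_operator="AND"):
    """Keep items matching the conditions; an empty data_field compares the item itself."""
    combine = all if logical_operator == "AND" else any
    def matches(item):
        return combine(
            OPERATORS[c["operator"]](
                item[c["data_field"]] if c.get("data_field") else item,
                c["value"],
            )
            for c in conditions
        )
    return [item for item in items if matches(item)]

rows = [{"name": "a", "score": 5}, {"name": "b", "score": 9}]
filter_items(rows, [{"data_field": "score", "operator": "greater_than", "value": 7}])
# → [{"name": "b", "score": 9}]
```

With AND, every condition must pass for an item to be kept; with OR, one passing condition is enough.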
Rename Keys
Renames fields in workflow variables.
- Old Key: Original field path
- New Key: New field path
Operation fails if new key already exists, preventing accidental overwrites. Use for standardizing field names or adapting data structure.
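The collision behavior can be sketched for top-level keys (a simplified model, not the node's implementation):

```python
def rename_key(variables, old_key, new_key):
    """Rename a top-level field; fail if the new key already exists."""
    if new_key in variables:
        raise ValueError(f"Key '{new_key}' already exists")  # prevent overwrite
    variables[new_key] = variables.pop(old_key)

record = {"usr_nm": "Ada"}
rename_key(record, "usr_nm", "username")
# record == {"username": "Ada"}
```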
Remove
Deletes specified fields from workflow variables.
- Keys: Field paths to delete
Supports nested paths and array indexing. Use for cleaning up unnecessary fields or removing sensitive data.
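A minimal sketch of nested-path removal, again assuming a dot-separated path syntax for illustration:

```python
def remove_field(variables, key):
    """Delete a field given a hypothetical dot-separated path."""
    parts = key.split(".")
    target = variables
    for part in parts[:-1]:
        target = target[part]  # walk down to the parent container
    target.pop(parts[-1], None)  # delete the leaf key if present

data = {"user": {"name": "Ada", "password": "secret"}}
remove_field(data, "user.password")
# data == {"user": {"name": "Ada"}}
```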
Clear
Resets fields to their type's empty or default value without deleting them.
- Fields: Field paths to clear
Resets based on type: String → "", Number → 0, Boolean → False, List → [], Dict → {}. Use for resetting fields while preserving data structure.
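The type-to-empty-value mapping above can be modeled like this (a sketch; the node's actual type detection may differ):

```python
EMPTY_TYPES = (str, int, float, bool, list, dict)

def clear_field(variables, key):
    """Reset a field to its type's empty value without deleting the key."""
    current = variables.get(key)
    if isinstance(current, EMPTY_TYPES):
        variables[key] = type(current)()  # str() == "", list() == [], bool() == False
    else:
        variables[key] = None  # ambiguous type falls back to None

data = {"tags": ["a", "b"], "count": 7, "active": True}
clear_field(data, "tags")
clear_field(data, "count")
clear_field(data, "active")
# data == {"tags": [], "count": 0, "active": False}
```

Unlike Remove, the keys survive, so downstream operations that reference them still find them.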
Drop Duplicates
Removes duplicate items from a list.
- Input Field: List to deduplicate
- Deduplicate Key: Field for comparison (leave empty to compare whole items)
Use for deduplicating search results or merged data.
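A first-occurrence-wins dedup can be sketched as follows; using repr() for whole-item comparison is a simplification for unhashable items like dicts:

```python
def drop_duplicates(items, key=None):
    """Remove duplicates, keeping the first occurrence; compare whole items if key is None."""
    seen = set()
    result = []
    for item in items:
        marker = item[key] if key else repr(item)  # repr: simplification for dicts
        if marker not in seen:
            seen.add(marker)
            result.append(item)
    return result

results = [{"url": "a.com"}, {"url": "b.com"}, {"url": "a.com"}]
drop_duplicates(results, key="url")
# → [{"url": "a.com"}, {"url": "b.com"}]
```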
Common parameters
This node supports common parameters shared across workflow nodes, including Stream Output Response, Streaming Messages, and Logging Mode. For detailed information, see Common Parameters.
Best practices
- Order operations strategically since they execute sequentially; filter and deduplicate early to reduce data volume for subsequent operations
- Use Filter with AND logical operator when all conditions must be met, OR when any condition is sufficient
- Test field paths with a small dataset first to ensure they correctly target intended fields in nested data
- For Set operations, use variable interpolation ${field_name} to copy values between fields rather than hardcoding
- Enable case-sensitive comparison for string filters only when exact case matching is required
- Consider whether to preserve field structure (use Clear) or completely remove the field (use Remove)
Limitations
- Sequential execution only: Operations execute in configured order and cannot be parallelized. Later operations see results of earlier ones.
- In-place modification: All operations modify workflow variables directly. Original data is not preserved unless copied first.
- Filter input validation: Filter operation requires list input and fails on non-list data types.
- Rename collision: Rename operation fails if the new key already exists, preventing accidental overwrites.
- Array index access: When filtering arrays, accessing nested fields requires array indexing (e.g., items[0].name), not dot notation.
- Type detection for Clear: Clear operation detects the current type to determine the empty value. If the type is ambiguous, it defaults to None.