
Python Code Node

The Python Code Node executes custom Python code in isolated Docker containers with secure resource management and state variable integration. It supports both shared containers for performance and ephemeral containers for maximum isolation, with automatic idle timeout monitoring. The underscore notation system provides seamless integration with workflow state for reading and writing variables.

How It Works

The Python Code node executes user-defined Python code in isolated Docker containers, providing a secure sandbox environment for custom logic execution. Two deployment approaches are available: shared containers that persist across executions for better performance, or ephemeral containers that are created fresh for each execution and destroyed afterward for maximum security isolation. Shared containers include automatic idle timeout monitoring that self-destructs the container after a period of inactivity, optimizing resource usage while maintaining security.

When the node executes, it prepares the execution environment by creating or reusing a Docker container based on configuration, installs any required Python packages that aren't already present, and then executes the code with access to workflow state variables. The code runs with resource limits on CPU, memory, and process count to prevent resource exhaustion. Code interacts with workflow state through underscore notation, where variables prefixed with underscore automatically read from and write to the workflow state. After execution completes, the node extracts all underscore-prefixed variables and their values, merging them back into the workflow state for downstream nodes.

The node supports both simple package names and version-specific package requirements. Packages without version specifiers are only installed if missing, while packages with version constraints are always installed or upgraded to ensure the correct version. All print statements in the code are captured and stored in a special workflow variable, allowing debugging and logging of execution progress. Container cleanup strategies ensure that shared containers remain clean between executions, with options to clear temporary directories or run custom cleanup scripts.
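For example, a node's code might transform a value and hand the result to downstream nodes through an underscore-prefixed variable. The sketch below is illustrative; the variable names are not predefined by the platform:

```python
# Minimal sketch of node code; the variable names here are illustrative.
words = "hello from the workflow".split()  # ordinary locals are not persisted

_word_count = len(words)               # underscore prefix: saved to state.data.word_count
print(f"Counted {_word_count} words")  # captured in the python_std_out variable
```

After execution, downstream nodes can read word_count from the workflow state, and the captured print output is available for debugging.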

Configuration Parameters

Python code

Python Code (Code Editor, Required): Python code with underscore notation for state variables.

Write Python code using underscore notation to interact with workflow state: all state reads and writes require an underscore prefix. A single underscore (_) reads from and writes to workflow state (_result = 'value' creates state.data.result), while a double underscore (__) creates underscore-prefixed fields (__temp = 42 creates state.data._temp). All underscore variables must be JSON-serializable (dict, list, str, int, float, bool, None).

When modifying objects in loops, changes update the original state variable, but objects preserve only their originally defined fields; dynamically added fields are ignored, so create a new state variable for extended data structures. Print statements are automatically captured in the python_std_out workflow variable for debugging. The default template includes comprehensive comments explaining underscore notation rules, object handling, and serialization requirements.
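The sketch below illustrates these rules together; the variable names and object fields are illustrative:

```python
# Illustrative sketch of the underscore rules described above.
_result = "done"   # single underscore: creates state.data.result
__temp = 42        # double underscore: creates state.data._temp

# Assume _user represents an object whose originally defined fields
# are name and role.
_user = {"name": "Ada", "role": "admin"}
for key in list(_user):
    _user[key] = _user[key].upper()  # updates to existing fields are preserved

# A dynamically added field would be ignored on write-back, so extended
# data belongs in a new state variable instead:
_user_extended = dict(_user, nickname="A")

print("processing finished")  # captured in python_std_out
```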

Docker image

Docker Image (Dropdown, Default: python:3.12-slim): Docker image for code execution.

The dropdown provides a curated list of allowed Python images with different variants optimized for size, features, and compatibility. Images are validated against an allowed list configured at the platform level for security.

| Variant | Example | Description | Best for |
| --- | --- | --- | --- |
| Slim | python:3.12-slim | Minimal Debian-based images with a smaller footprint | Most use cases, general-purpose execution |
| Bookworm | python:3.12-bookworm | Full Debian-based images with more system packages pre-installed | Packages requiring system dependencies |
| Alpine | python:3.12-alpine | Lightweight Alpine Linux-based images with the smallest size | Minimal resource usage when package compatibility is verified |

Required packages

Required Packages (Array, Optional): Python packages to install before code execution.

The node intelligently handles package installation by checking what's already present and only installing missing or version-specific packages.

| Format | Example | Installation behavior |
| --- | --- | --- |
| Simple name | numpy, pandas, requests | Installed only if not already present |
| Version equality | numpy==1.24.0 | Always installed to ensure the exact version |
| Version constraints | pandas>=2.0.0, scipy<=1.10.0 | Always installed/upgraded to meet the constraint |
| Extras | package[extra] | Installs the package with its optional dependencies |

Packages without version specifiers are checked against installed packages and skipped if already present, reducing execution time. Packages with version specifiers are always installed or upgraded using pip, ensuring code runs with the correct versions. Installation uses --no-cache-dir to avoid filling container disk space.
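The decision logic is roughly equivalent to the following sketch. This is a simplified illustration of the behavior described above, not the platform's actual implementation:

```python
# Simplified illustration of the install-if-missing behavior; not the
# node's actual implementation.
import re
import subprocess
import sys
from importlib import metadata

def ensure_packages(requirements):
    for req in requirements:
        # Bare names are skipped when a distribution is already installed;
        # anything with a version specifier is always (re)installed.
        if not re.search(r"[<>=!~]", req):
            try:
                metadata.version(req)
                continue
            except metadata.PackageNotFoundError:
                pass
        # --no-cache-dir avoids filling the container's disk quota.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--no-cache-dir", req]
        )

ensure_packages(["requests", "numpy==1.24.0", "pandas>=2.0.0"])
```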

Common packages: numpy, pandas, requests, beautifulsoup4, pillow, opencv-python, scikit-learn

Shared container

Shared Container (Toggle, Default: true): Controls whether the Docker container persists across executions or is created fresh each time.

| Mode | Behavior | Performance | Security | Use when |
| --- | --- | --- | --- | --- |
| True (default) | Container persists across executions with user isolation; includes idle timeout monitoring that self-destructs the container after inactivity | Faster: no container startup overhead | Good: user-level isolation with cleanup strategies | Frequent executions where faster response times matter and user-level isolation is acceptable |
| False | New container created for each execution and destroyed afterward | Slower: container creation overhead on every execution | Maximum: complete isolation per execution | Maximum security requirements, infrequent executions, untrusted code |

When using shared containers, the node creates user-specific and node-specific working directories within the container to isolate execution environments. Cleanup strategies run between executions to maintain a clean state. The container automatically monitors idle time and self-destructs after the configured timeout period (default 3600 seconds), freeing resources when not in use.

Enable network

Enable Network (Toggle, Default: true): Controls whether the Docker container has network access during code execution.

When enabled, code can make HTTP requests, download files, and access external APIs. When disabled, the container is completely isolated from the network. Disabling network access provides an additional security layer for code that doesn't require external connectivity, preventing potential data exfiltration or unauthorized external communication.
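For example, with network access enabled, node code can fetch external data and persist it to state. In this hedged sketch, the URL and variable names are illustrative, and requests is assumed to be listed in Required Packages:

```python
# Requires Enable Network = true; the URL and variable names are illustrative.
import requests

response = requests.get("https://api.example.com/status", timeout=10)
response.raise_for_status()

_api_status = response.json()  # JSON payloads map cleanly to state variables
print(f"Fetched status with HTTP {response.status_code}")
```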

Cleanup strategies

Cleanup Strategies (Array, Default: ["clear_tmp"]): Strategies to apply when reusing shared containers.

| Strategy | Action | Use when |
| --- | --- | --- |
| Clear Node Directory (default) | Removes all files from this node's working directory within the user's isolated space | Standard cleanup for most use cases; removes temporary files and execution artifacts |
| Custom Cleanup Script | Executes custom Python cleanup code | Specific cleanup logic is needed, such as removing cache files, resetting global state, or cleaning particular directories |

Cleanup strategies execute in the order specified. The Clear Node Directory strategy only affects the node-specific working directory (/tmp/node_{node_id}/session_{session_id}), not the entire container filesystem, maintaining isolation between nodes and sessions.

Custom cleanup code

Custom Cleanup Code (Code Editor, Optional): Python code to execute as part of the cleanup process.

This field is only used when "Custom Cleanup Script" is selected in Cleanup Strategies. The code runs in the container's working directory after main code execution completes. Custom cleanup handles scenarios like removing specific cache files, clearing global variables, deleting temporary databases, or resetting application state. The cleanup code has access to the container's filesystem and can use standard Python libraries.

Example: import shutil; shutil.rmtree('/tmp/cache', ignore_errors=True)
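A slightly fuller, idempotent variant might look like the sketch below; the paths are illustrative, not platform defaults:

```python
# Illustrative idempotent cleanup; the paths are examples, not defaults.
import shutil
from pathlib import Path

# ignore_errors/missing_ok make repeated runs safe even when a previous
# run already removed these artifacts.
shutil.rmtree("/tmp/cache", ignore_errors=True)
Path("scratch.db").unlink(missing_ok=True)

# Remove leftover intermediate files from the working directory.
for leftover in Path(".").glob("*.partial"):
    leftover.unlink(missing_ok=True)
```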

Common parameters

This node supports common parameters shared across workflow nodes. For detailed information, see Common Parameters.

This node uses "Stream Progress" instead of "Stream Output Response": it streams execution progress messages (container initialization, package installation, code execution, cleanup) rather than code output.

Best practices

  • Use shared containers for workflows with frequent executions to minimize container startup overhead, but ensure appropriate cleanup strategies are configured to prevent state leakage
  • Specify exact package versions in Required Packages when reproducibility is critical, as this ensures consistent behavior across executions and environments
  • Keep Python code modular and focused on single responsibilities, using multiple Python Code nodes for complex workflows rather than cramming all logic into one node
  • Leverage print statements liberally for debugging as they're captured in the python_std_out variable and available for inspection
  • For production workflows handling sensitive data, consider using ephemeral containers (Shared Container = false) to ensure complete isolation
  • Test code with network disabled first to identify external dependencies, then enable network only if required
  • When using custom cleanup code, ensure it's idempotent and handles errors gracefully to prevent cleanup failures from affecting subsequent executions
  • Monitor resource usage when working with large datasets or memory-intensive operations

Limitations

  • Timeout enforcement: Code execution is limited by the timeout configuration (default 30 seconds, maximum 300 seconds). Long-running operations may be terminated.
  • Resource limits: Containers run with CPU, memory, and process limits configured at the platform level. Resource-intensive operations may hit these limits.
  • Package installation time: Installing packages at runtime adds execution overhead. Pre-build custom Docker images with required packages for production use.
  • JSON serialization: All underscore variables must be JSON-serializable. Complex objects like file handles, database connections, or thread objects cannot be stored in workflow state; convert such values to JSON-safe forms first (see the sketch after this list).
  • Object field preservation: When modifying objects, only their originally defined fields are preserved in state. Dynamically added fields are ignored.
  • Unicode characters: Avoid special Unicode characters in code that may cause encoding issues; use escape sequences when needed.
  • Container disk space: Each node has a disk quota (default 100MB) for temporary files. Large file operations may exceed this limit.
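As an illustration of the serialization limitation, convert non-serializable values to JSON-safe forms before assigning them to underscore variables. The sketch below uses a datetime as the example; the variable names are illustrative:

```python
# datetime objects are not JSON-serializable, so store an ISO-8601 string.
from datetime import datetime, timezone

started_at = datetime.now(timezone.utc)

_run_metadata = {
    "started_at": started_at.isoformat(),  # JSON-safe string
    "status": "ok",
}
```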