By default, if a tool encounters an error, the pipeline will stop execution immediately and report the error. You can configure this behavior in the tool's settings. By setting the "On Failure" option to "Continue," the pipeline will ignore the error, discard the failed tool's output, and simply pass the input it received from the previous step on to the next one.
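The "Continue" semantics can be sketched in a few lines. This is a hypothetical model, not the platform's implementation: a failed tool's output is discarded and its input flows through unchanged.

```python
def run_step(tool, data):
    """Run one tool with "On Failure: Continue" semantics (sketch).

    If the tool raises, its output is discarded and the input it
    received is passed through unchanged to the next step.
    """
    try:
        return tool(data)
    except Exception:
        return data  # ignore the error, forward the previous step's output


def run_pipeline(tools, data):
    for tool in tools:
        data = run_step(tool, data)
    return data


def broken(_):
    raise RuntimeError("tool failed")

# The second tool fails, so its input ("  HELLO  ") flows straight
# to the third step.
print(run_pipeline([str.upper, broken, str.strip], "  hello  "))
```

With the default ("stop on failure") behavior, the same run would halt at the second step and report the error instead.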
Navigate to the pipeline you are interested in and click on the "Runs" tab. This will show you a complete, timestamped history of every execution, whether it was triggered manually or via the API. You can click on any run to inspect the full input, the final output, and a detailed breakdown of each step, including its duration and data size.
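Because each run exposes per-step duration and data size, run records are easy to analyze programmatically. The record shape below is an assumption for illustration, not the platform's actual schema:

```python
# Hypothetical run record; the field names are assumptions.
run = {
    "trigger": "api",
    "steps": [
        {"name": "fetch", "duration_ms": 120, "output_bytes": 2048},
        {"name": "summarize", "duration_ms": 900, "output_bytes": 512},
        {"name": "translate", "duration_ms": 450, "output_bytes": 640},
    ],
}

# Aggregate the per-step details shown in the run view.
total_ms = sum(s["duration_ms"] for s in run["steps"])
slowest = max(run["steps"], key=lambda s: s["duration_ms"])
print(f"total: {total_ms} ms, slowest step: {slowest['name']}")
```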
Yes. Pipelines are designed to be modular and reusable. You can add a "Pipeline" node to your workflow, which allows you to execute an entirely different pipeline as a single step. The output of the nested pipeline will then become the input for the next step in the parent pipeline. This is a powerful feature for reusing common logic and building complex, multi-stage workflows.
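Conceptually, nesting works because a whole pipeline has the same shape as a single step: input in, output out. A minimal sketch of that idea (helper names are hypothetical):

```python
def make_pipeline(steps):
    """Build a pipeline from a list of steps; the result is itself
    callable, so it can be used as one step of another pipeline."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run


# A reusable "clean text" pipeline...
clean = make_pipeline([str.strip, str.lower])

# ...executed as a single step inside a parent pipeline. The nested
# pipeline's output becomes the input of the next step.
parent = make_pipeline([clean, lambda s: s.title()])
print(parent("  HELLO WORLD  "))
```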
The platform integrates a variety of powerful, serverless AI models for tasks such as text summarization, text and image generation, translation, sentiment analysis, text-to-speech, speech recognition, and object detection and classification. These models are managed and scaled for you, so you can focus on building your pipeline logic.
Yes. Your pipeline definitions and run histories are stored securely. API access requires a unique, per-pipeline API key, and all communication is handled over HTTPS.
While the platform is designed to be robust, individual AI models and tools may have their own input size limitations. For very large data processing tasks, it is best to structure your pipeline to process data in manageable chunks. The run history provides data size information for each step, which can help you optimize your pipelines.
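The chunking strategy above can be sketched as follows. The `summarize` stand-in is an assumption representing any size-limited tool; the point is splitting the input so each call stays within a tool's limit:

```python
def chunk(items, size):
    """Yield fixed-size slices so each step stays within a tool's
    input size limit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def summarize(text):
    # Stand-in for a size-limited AI tool (assumption for illustration).
    return text[:10]


# Process a large document in manageable pieces, then recombine.
document = "x" * 2500
parts = [summarize(c) for c in chunk(document, 1000)]
result = "".join(parts)
print(len(parts), len(result))
```

A 2,500-character document split at 1,000 characters yields three chunks, so three tool calls instead of one oversized request.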