Self-Service Pipelines democratize the creation and usage of data pipelines
TetraScience provides many pre-built pipelines that can help you create queryable, harmonized Tetra Data, and then enrich and push that data to downstream systems. To extend these capabilities of the Tetra Data Platform (TDP), Self-Service Pipelines (SSPs) let all customers create custom task-scripts and protocols in TDP v3.6. SSPs add simplicity and flexibility to the user experience, enabling customers to independently adapt processes to address an evolving landscape of analytical tools and data systems.
Your data, your way
Whether transforming raw instrument data, adding data labels, enriching data with external sources, or transferring processed data to another application, the power to create custom solutions is now in your hands. Task-scripts are the building blocks of every data journey: assembled by a protocol, each task-script contains Python code that performs a specific function (e.g., parsing an instrument file, transforming data, or pushing files to third-party software). With TDP v3.6, all customers can create new building blocks for custom pipelines, allowing scientists to dynamically integrate elements of the modern laboratory.
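To make the idea concrete, here is a minimal sketch of what a custom task-script function might look like. The entry-point name and `(input, context)` signature are illustrative assumptions, not the authoritative task-script interface; the example simply parses a small CSV instrument export into a harmonized dictionary.

```python
# Hypothetical task-script: parse a simple CSV instrument export into a
# harmonized dictionary. Field names and the main(input, context) signature
# are illustrative assumptions only.
import csv
import io


def parse_instrument_csv(raw_text: str) -> dict:
    """Convert raw CSV rows into a harmonized list of measurements."""
    reader = csv.DictReader(io.StringIO(raw_text))
    measurements = [
        {"sample_id": row["sample"], "value": float(row["reading"])}
        for row in reader
    ]
    return {"measurements": measurements}


def main(input: dict, context=None) -> dict:
    # In a real pipeline, "input" would typically reference a file in the
    # platform; here we assume the raw text is passed directly.
    return parse_instrument_csv(input["raw_text"])
```

A protocol would then reference a function like this as one step in a pipeline.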
Simple design for complex processes
Building at your convenience
With busy schedules and pressing deadlines, self-sufficiency rules the day. TetraScience provides a Context Application Programming Interface (API) and a Software Development Kit (SDK) that enable customers to build and tailor the software to their needs or create connectors to their services. Using the TetraScience SDK, you can deploy new functionality to the TDP on your schedule. The SDK is used to push custom task-scripts and protocols to the TDP and has been updated to v2.0 to support the new YAML protocol format.
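As a rough illustration of the YAML protocol format, a single-step protocol might look something like the following. All field names and values here are assumptions for illustration, not the authoritative SDK v2.0 schema:

```yaml
# Hypothetical protocol definition in a YAML format.
# Field names below are illustrative, not the official schema.
name: Parse Instrument Data
description: Convert a raw instrument file into harmonized Tetra Data
steps:
  - id: parse
    task:
      slug: my-org/parse-instrument-file
      function: main
```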
Whether you wish to create new parsers for scientific instrument data, add your own labels to data to make it searchable, enrich Tetra Data with metadata from third-party sources, automate data reprocessing, send processed data to other applications, or even build a multi-step pipeline that contextualizes, harmonizes, enriches, and pushes data to third-party applications within a single protocol, the power to innovate is in your hands!
A closer look
See how creating custom pipelines allows engineers to transform raw scientific data into AI-ready datasets.
For full details on Self-Service Pipelines, read the release notes.