What's New
2024-07-15

Runway 1.2

Dedicated MLflow servers for streamlined experiment management 

Each project can connect to its own MLflow server, enhancing collaboration by allowing team members to log runs, explore results, and deploy models directly from the service console. This eliminates the need for separate configurations and ensures centralized access to model artifacts, providing a smoother workflow with your existing tools.


Added support for custom inference services 

You can now deploy machine learning models to external production environments, enabling real-time decision-making, reducing latency, and improving productivity, all while maintaining data security. This enhancement supports continuous integration and deployment (CI/CD) and continuous training (CT), even in external clustered environments with limited network connectivity.
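As a rough illustration of calling such an externally deployed model, the sketch below POSTs a JSON payload to an inference endpoint using only the Python standard library. The endpoint URL, payload schema, and helper names are hypothetical examples, not part of the product's documented API.

```python
import json
import urllib.request

# Hypothetical endpoint of a custom inference service running in an
# external cluster; replace with your deployment's actual URL.
ENDPOINT = "http://inference.example.com/v1/predict"

def build_payload(features):
    """Wrap raw feature values in an assumed JSON request body."""
    return {"instances": [features]}

def predict(features, endpoint=ENDPOINT):
    """POST features to the inference service and return its JSON response."""
    body = json.dumps(build_payload(features)).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())
```

A request for one sample would be built as `build_payload([1.0, 2.0])`, producing `{"instances": [[1.0, 2.0]]}` for the service to score.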


For a detailed overview, watch our feature highlights video.
