The open source MLOps framework accelerates the creation and deployment of ML microservices within a unified interface.
Union.ai, provider of the open-source workflow orchestration platform Flyte and its hosted version, Union Cloud, announced the release of UnionML at MLOps World 2022.
The open-source MLOps framework for building web-native machine learning applications provides a unified interface for bundling Python functions into machine learning (ML) microservices. It is designed to manage both data science workflows and production lifecycle tasks in a single library, making it easier to build new AI applications from scratch or to run existing Python code at scale.
UnionML aims to unify the ever-evolving ecosystem of machine learning and data tools into a single interface for expressing microservices as Python functions. Data scientists can build UnionML applications by defining a few basic methods that are automatically bundled into ML microservices, starting with model training and offline/online prediction.
“Building machine learning applications should be easy, frictionless and simple, but today it really isn’t,” said Union.ai CEO Ketan Umare. “The cost and complexity of choosing tools, combining them into a cohesive ML stack, and keeping them running in production requires a whole team of people who often use different programming languages and follow disparate practices. UnionML greatly simplifies creating and deploying machine learning applications.”
UnionML applications are built around two objects: Dataset and Model. Together, they expose function decorators that serve as entry points, the building blocks of a machine learning application. By focusing on these building blocks rather than on how they fit together, data scientists reduce the cognitive load of iterating on models and deploying them to production. Under the hood, UnionML uses Flyte to run training and prediction workflows locally or on production Kubernetes clusters, relieving MLOps engineers of the overhead of provisioning compute resources for their stakeholders. ML models and applications can be served via FastAPI or AWS Lambda, with more serving options planned.
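The decorator-entry-point pattern described above can be sketched in plain Python. The `Dataset` and `Model` class names below mirror the objects named in the announcement, but this is a minimal illustrative stand-in, not UnionML's actual implementation or API: the data scientist registers a few plain functions, and the framework objects bundle them into train and predict calls.

```python
# Conceptual sketch of decorator entry points (not UnionML's real source).
# A Dataset collects a data-loading function; a Model collects training
# and prediction functions and bundles them into a small workflow.

class Dataset:
    def __init__(self, name: str):
        self.name = name
        self._reader = None

    def reader(self, fn):
        # Register the function that loads the raw data.
        self._reader = fn
        return fn

    def load(self):
        return self._reader()


class Model:
    def __init__(self, name: str, dataset: Dataset):
        self.name = name
        self.dataset = dataset
        self._trainer = None
        self._predictor = None
        self.artifact = None  # the trained model object

    def trainer(self, fn):
        self._trainer = fn
        return fn

    def predictor(self, fn):
        self._predictor = fn
        return fn

    def train(self):
        # Bundle the registered functions: load data, then train on it.
        self.artifact = self._trainer(self.dataset.load())
        return self.artifact

    def predict(self, features):
        return self._predictor(self.artifact, features)


dataset = Dataset(name="toy_dataset")
model = Model(name="mean_model", dataset=dataset)

@dataset.reader
def read_data() -> list:
    return [1.0, 2.0, 3.0, 4.0]

@model.trainer
def train_mean(data: list) -> float:
    # "Training" here is just computing the mean of the data.
    return sum(data) / len(data)

@model.predictor
def predict_offset(artifact: float, features: list) -> list:
    # Predict by offsetting each input by the trained mean.
    return [artifact + x for x in features]

# The data scientist wrote only three functions; the framework
# objects turn them into a train/predict lifecycle.
model.train()
print(model.predict([0.0, 1.0]))  # → [2.5, 3.5]
```

In a real framework, the same registered functions could also be exposed behind an HTTP endpoint, which is how a handful of Python functions become an ML microservice.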