VESSL AI Secures $12 Million in Funding to Streamline MLOps and Cut GPU Costs by Up to 80%

As the integration of artificial intelligence into business operations accelerates, the demand for efficient tools to create, test, and deploy machine learning models has surged. This need has given rise to a bustling sector known as machine learning operations (MLOps), populated by startups like InfuseAI, Comet, and Arize, along with established players like Google Cloud, Azure, and AWS.

South Korean startup VESSL AI is carving out its own niche in this competitive landscape by focusing on optimizing GPU costs. The company recently announced a successful $12 million Series A funding round, which will help advance its hybrid infrastructure designed for businesses looking to develop custom large language models (LLMs) and specialized AI solutions.

Meeting Growing Demand

VESSL AI already serves 50 enterprise customers, including major names like Hyundai and LIG Nex1, a South Korean aerospace and defense firm. Other notable clients include TMAP Mobility, a joint venture between Uber and SK Telecom, and tech startups like Yanolja and Upstage. With strategic partnerships in place with Oracle and Google Cloud, VESSL AI has garnered over 2,000 users, as co-founder and CEO Jaeman Kuss An shared with TechCrunch.

Founded in 2020 by An alongside Jihwan Jay Chun (CTO), Intae Ryoo (CPO), and Yongseon Sean Lee (tech lead)—all of whom have experience at Google and in the gaming and AI sectors—VESSL AI emerged to tackle the challenges of developing machine learning models more efficiently.

A Cost-Effective Approach

The team identified that a hybrid infrastructure model could dramatically reduce both the complexity and cost of machine learning tooling. By adopting a multi-cloud strategy that combines on-premise and cloud resources, VESSL AI says it can cut GPU expenses by up to 80%. This approach not only helps customers weather ongoing GPU shortages but also streamlines the entire lifecycle of AI model training, deployment, and operations.

“VESSL AI’s multi-cloud strategy allows us to leverage GPUs from various cloud providers, including AWS and Google Cloud,” An explained. “This system automatically selects the most cost-effective and efficient resources, significantly lowering costs for our customers.”
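The idea An describes, picking the cheapest adequate GPU resource across providers, can be sketched in a few lines. The following is a toy illustration only, not VESSL AI's actual scheduler; the provider names, prices, and the `cheapest_gpu` helper are all hypothetical examples.

```python
# Toy illustration of cost-based GPU selection across providers.
# All provider names and hourly prices below are made-up examples,
# not VESSL AI's real data or API.

def cheapest_gpu(offers, gpu_type, gpus_needed):
    """Return the lowest-cost offer that can satisfy the request."""
    candidates = [
        o for o in offers
        if o["gpu"] == gpu_type and o["available"] >= gpus_needed
    ]
    if not candidates:
        raise ValueError(f"no provider can supply {gpus_needed}x {gpu_type}")
    return min(candidates, key=lambda o: o["price_per_gpu_hour"])

offers = [
    {"provider": "cloud-a", "gpu": "A100", "available": 8, "price_per_gpu_hour": 3.20},
    {"provider": "cloud-b", "gpu": "A100", "available": 4, "price_per_gpu_hour": 2.75},
    {"provider": "on-prem", "gpu": "A100", "available": 2, "price_per_gpu_hour": 0.90},
]

best = cheapest_gpu(offers, "A100", 4)
print(best["provider"])  # cheapest offer with enough capacity -> "cloud-b"
```

Note that the on-premise slot is cheapest per hour but is skipped because it cannot supply four GPUs; a real system would also weigh availability, locality, and interconnect, which is where most of the engineering effort lies.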

Key Features of VESSL AI’s Platform

VESSL AI offers four primary features to optimize the MLOps workflow:

  1. VESSL Run: Automates the training of AI models.
  2. VESSL Serve: Supports real-time model deployment.
  3. VESSL Pipelines: Integrates model training with data preprocessing for streamlined workflows.
  4. VESSL Cluster: Optimizes GPU resource usage in a cluster setting.

The recent funding round brings VESSL AI’s total capital raised to $16.8 million, with notable investors including A Ventures, Mirae Asset Securities, and Wooshin Venture Investment. The startup employs 35 staff members in South Korea and at its San Mateo office in the U.S.

Conclusion

With its innovative approach to MLOps and a keen focus on cost reduction, VESSL AI is well-positioned to meet the evolving needs of businesses looking to harness the power of AI. As it continues to expand its capabilities and customer base, VESSL AI stands out as a promising player in the crowded MLOps landscape.