Inspur Information Releases AIStation Inference Platform for Compute Power Scheduling in Enterprise AI Production Environments

 

 Highlights:

  • Inspur’s AIStation AI Inference Service Platform is compute power scheduling software designed specifically for enterprise-level AI production environments.
  • The AIStation inference platform increases resource utilization from 40% to 80% by enabling agile deployment of inference service resources, and reduces model deployment time from two to three days to a few minutes by supporting unified scheduling of multi-source models.
  • AIStation fully supports the two major scenarios of training and inference, enabling efficient one-stop delivery of the full process from model development to training, deployment, testing, release and service.

SAN JOSE, Calif.–(BUSINESS WIRE)–#AI–Inspur Information has released the new AIStation artificial intelligence inference service platform, compute power scheduling software designed specifically for enterprise-level AI production environments. The platform enables agile deployment of inference service resources and reduces model deployment time from two to three days to a few minutes by supporting unified scheduling of multi-source models, helping enterprises easily deploy AI inference services and greatly improving AI delivery and production efficiency.

At present, AI models face multiple difficulties and challenges at the production deployment stage. Before an AI model can be deployed, a great deal of debugging and testing is needed, which usually takes two to three days. Compute resources for online AI services are mostly fixed, resulting in slow responses to urgent demands and making business expansion difficult. Unified management is also hard to achieve because AI models come from different sources. Enterprises want to seamlessly link AI model development and training with inference deployment, schedule resources and manage models efficiently, and shorten the time to bring services online.

Inspur’s newly released AIStation inference platform helps enterprises make effective use of AI computing resources and quickly deploy AI models through key technical innovations such as a flexible, scalable architecture, low-latency lightweight design, A/B testing and multi-model weighted evaluation. Features such as one-click deployment, log monitoring, resource management and control, and data processing make the inference platform a comprehensive and powerful AI resource platform.

By supporting both on-premise and cloud deployment, the inference platform enables quick, automated operation of AI models throughout the complex process from development to production deployment, reducing model deployment time from two to three days to a few minutes.

In terms of computing resource scheduling, the inference platform allocates resources for model services. Thanks to an innovative flexible and scalable architecture, resource allocation can be adjusted promptly as inference service resource demands change, reducing instance deployment time from hours to minutes in response to unexpected demand. A/B testing before the release of new models is also supported to validate models in actual business scenarios, ensuring the safety and reliability of inference services while avoiding the cluster load pressure caused by switching traffic all at once.
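The release does not describe AIStation's A/B testing interface, so the sketch below is a minimal, hypothetical illustration of the traffic-splitting idea behind such tests: a small, configurable fraction of inference requests is routed to a candidate model while the stable model continues to serve the rest. All names are illustrative stand-ins, not AIStation APIs.

    import random

    # Hypothetical sketch only: AIStation's actual A/B testing interface is
    # not described in this release. The idea: send a configurable share of
    # inference traffic to a candidate model so it can be validated in
    # production without a full traffic switch.

    def make_ab_router(stable_model, candidate_model, candidate_share=0.05):
        """Return a predict function that routes `candidate_share` of
        requests to the candidate model and the rest to the stable one."""
        def predict(request):
            use_candidate = random.random() < candidate_share
            model = candidate_model if use_candidate else stable_model
            return model(request)
        return predict

    # Stand-in models (placeholders for deployed inference services):
    stable = lambda x: ("stable", x)
    candidate = lambda x: ("candidate", x)
    router = make_ab_router(stable, candidate, candidate_share=0.1)
    print(router("sample-input"))

Because only a chosen share of traffic reaches the new model, a regression affects a bounded slice of requests, and the share can be raised gradually instead of cutting all traffic over at once.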

In terms of model management, the inference platform implements unified scheduling of multi-source models. Inference services for multi-source, multi-scenario models are managed through a unified platform, enabling real-time control of global resources as well as comprehensive scheduling and dynamic deployment of model services. Multiple model services can share the same resource pool, increasing resource utilization from 40% to 80%. Multi-model weighted evaluation is also supported: weights can be set for different models, effectively improving the reliability of predictions in actual business scenarios, building a robust and reliable intelligent system, and reducing the error rate.
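The mechanics of AIStation's weighted evaluation are not spelled out in this release. Under that assumption, the sketch below shows one common form of the technique: blending per-class scores from several models according to the weights assigned to each. All names are hypothetical, not AIStation APIs.

    # Hypothetical sketch only: the release does not specify how AIStation
    # combines model outputs. A common form of multi-model weighted
    # evaluation averages per-class scores from several models using the
    # weight assigned to each model.

    def weighted_evaluate(predictions, weights):
        """Blend per-class score dicts from multiple models.

        predictions: one {label: score} dict per model
        weights: one float per model (normalized below)
        """
        total = sum(weights)
        combined = {}
        for scores, weight in zip(predictions, weights):
            for label, score in scores.items():
                combined[label] = combined.get(label, 0.0) + score * weight / total
        best = max(combined, key=combined.get)
        return best, combined

    # Three models score the same input; higher-weighted models dominate.
    preds = [
        {"cat": 0.7, "dog": 0.3},
        {"cat": 0.4, "dog": 0.6},
        {"cat": 0.9, "dog": 0.1},
    ]
    print(weighted_evaluate(preds, weights=[0.5, 0.2, 0.3]))
    # -> ('cat', {'cat': 0.70, 'dog': 0.30})

Weighting lets a proven model dominate the combined prediction while newer or weaker models still contribute, which is one way such an ensemble can reduce the error rate in production.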

Inspur Information previously launched the widely used AIStation training platform, which employs mechanisms such as fine-grained scheduling of computing resources, cache acceleration of training data, and automatic scheduling of distributed training tasks to raise the utilization rate of AI computing resources to more than 90%, greatly shortening the model development cycle. With the launch of the AIStation inference platform, AIStation now fully supports the two major scenarios of training and inference, enabling efficient one-stop delivery of the full AI development process from model development to training, deployment, testing, release and service.

Inspur Information is a leading provider of artificial intelligence computing solutions. With full-stack product capabilities across the three major platforms of AI computing, resources and algorithms, Inspur Information helps customers significantly enhance application performance in voice, semantics, image, video, search, network and other AI arenas, and accelerates the deployment of AI applications across industries.

About Inspur

Inspur Electronic Information Industry Co., Ltd. is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world’s top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.

Contacts

Fiona Liu

Liuxuan01@inspur.com