Product Overview
AI Cache Service

Provides storage cache acceleration, supports POSIX and CSI interfaces, caches hot-spot files from backend storage, and delivers high-performance, high-throughput data access.

Product Advantages
Provides high-IO, high-throughput data acceleration for AI training scenarios.
  • High performance
  • High availability
  • Support for multiple training scenarios
High performance

Training-oriented: bandwidth scales linearly with capacity to deliver million-level IOPS and throughput, speeding up model training.

High availability

Supports availability of up to 99.9% to keep performance uninterrupted during training acceleration.

Support for multiple training scenarios

Supports SDK, POSIX, and CSI interfaces to cover different training scenarios (see the sketch below).
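
For illustration, here is a minimal Python sketch of what POSIX access looks like from a training job, assuming the cache is mounted at a hypothetical path such as /mnt/ai-cache; the mount point and dataset layout are assumptions, not product defaults.

    import os

    # Hypothetical mount point where the cache service exposes backend storage
    # through its POSIX interface; the path and dataset layout are assumptions.
    CACHE_MOUNT = "/mnt/ai-cache"
    DATASET_DIR = os.path.join(CACHE_MOUNT, "datasets", "train")

    def iter_samples(dataset_dir):
        """Yield (name, bytes) pairs through ordinary POSIX open/read calls.

        Hot files are served from the cache; cold files fall through to the
        backend object or file storage and are cached on first access.
        """
        for name in sorted(os.listdir(dataset_dir)):
            path = os.path.join(dataset_dir, name)
            with open(path, "rb") as f:
                yield name, f.read()

    if __name__ == "__main__":
        for name, data in iter_samples(DATASET_DIR):
            print(name, len(data), "bytes")
            break  # read a single sample as a smoke test

Because access goes through the standard file system interface, existing data loaders can point at the cache mount without code changes.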

Product Features
Support for multiple computing and training scenarios.
  • Multiple modes of acceleration

    Supports cache acceleration for both object and file storage, and provides POSIX and CSI interfaces to meet the performance requirements of different training scenarios.

  • Support for capacity scale-up and scale-down

    Allows users to scale capacity up to meet performance requirements as business demand grows, and to scale it down to reduce costs.

  • Support for warm-up

    Before training, allows users to warm up the dataset into the cache to improve training efficiency, as sketched after this list.
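
One possible warm-up pass, again assuming the hypothetical /mnt/ai-cache mount from the earlier sketch: sequentially reading every file once pulls it from backend storage into the cache, and a small thread pool speeds up the pass. The paths and worker count are illustrative, not product parameters.

    import os
    from concurrent.futures import ThreadPoolExecutor

    CACHE_MOUNT = "/mnt/ai-cache"          # assumed mount point, as above
    DATASET_DIR = os.path.join(CACHE_MOUNT, "datasets", "train")

    def touch_file(path, chunk_size=8 << 20):
        """Read a file end to end so the cache pulls it in from backend storage."""
        total = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                total += len(chunk)
        return total

    def warm_up(dataset_dir, workers=16):
        """Walk the dataset and read every file once before training starts."""
        paths = [
            os.path.join(root, name)
            for root, _, names in os.walk(dataset_dir)
            for name in names
        ]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            warmed = sum(pool.map(touch_file, paths))
        print(f"warmed {len(paths)} files, {warmed / 2**30:.1f} GiB")

    if __name__ == "__main__":
        warm_up(DATASET_DIR)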

Application Scenarios
Provides acceleration for different AI training scenarios.
  • Model training scenario
  • Multi-node and multi-GPU training scenarios
Model training scenario
Store training data in large-scale, low-cost S3 object storage and train models through cache acceleration. Two major data formats are supported: a small-file format used for video, image, and voice training tasks, and a large-file format used for NLP, recommendation, and multimodal scenarios (see the sketch below).

Provides cache services for S3 object storage.

Connects to various computing clusters to deliver ultra-high throughput and ultra-high IOPS.
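
To make the two data formats concrete, here is a hedged sketch of the corresponding read patterns against the assumed /mnt/ai-cache mount: many small files read whole (image, voice, and video samples) versus a few large packed files streamed in sequential chunks (NLP, recommendation, and multimodal corpora). File names and chunk sizes are illustrative only.

    import os

    CACHE_MOUNT = "/mnt/ai-cache"          # assumed mount point, as above

    def read_small_files(sample_dir):
        """Small-file format: each sample (image, audio clip, frame) is one file."""
        for name in sorted(os.listdir(sample_dir)):
            with open(os.path.join(sample_dir, name), "rb") as f:
                yield f.read()             # many whole-file reads

    def read_large_file(path, chunk_size=64 << 20):
        """Large-file format: one packed corpus streamed in sequential chunks."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk                # few files, large sequential reads

    if __name__ == "__main__":
        image_dir = os.path.join(CACHE_MOUNT, "datasets", "images")
        corpus_path = os.path.join(CACHE_MOUNT, "datasets", "corpus.bin")
        n_samples = sum(1 for _ in read_small_files(image_dir))
        n_chunks = sum(1 for _ in read_large_file(corpus_path))
        print(n_samples, "small-file samples;", n_chunks, "large-file chunks")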

Multi-node and multi-GPU training scenarios
Covers single-node multi-GPU and multi-node multi-GPU scenarios with thousands of GPUs and datasets ranging from tens of thousands to hundreds of billions of files.

The cache service is deeply optimized to support large-model training scenarios.

