
Toyota cuts AI training time with Cloud Storage FUSE file cache


Did you know? A2 and A3 machine types come bundled with up to 6 TB of Local SSD, which can be used for the Cloud Storage FUSE file cache.

Cache capacity

max-size-mb is the maximum size in MiB that the file cache can use. This is useful if you want to limit the total capacity that the Cloud Storage FUSE cache can consume within its mounted directory.

By default, the max-size-mb field is set to ‘-1’, which allows the cached data to grow until it occupies all the available capacity in the directory you specify as cache-dir. Eviction of cached metadata and data uses a least recently used (LRU) algorithm, which begins once the space threshold configured by max-size-mb is reached.
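As an illustration, these settings live in the gcsfuse configuration file; the cache directory path and size cap below are example values, not recommendations:

```yaml
# Example gcsfuse config file (illustrative paths and sizes).
cache-dir: /mnt/localssd/gcsfuse-cache   # directory backing the file cache, e.g. on Local SSD
file-cache:
  max-size-mb: 100000    # cap the cache at ~100 GiB; -1 (the default) grows until cache-dir is full
```

You would pass this file to the mount with gcsfuse's --config-file option.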

Time to live (TTL) settings

Cache invalidation is controlled by the ttl-secs flag. When using the file cache, we recommend setting ttl-secs as high as your workload allows, based on how often and how significantly the data changes. For example, in an ML training job where the same data is typically read across multiple epochs, you can set this to the total training duration you expect across all epochs.

Apart from specifying a value in seconds to invalidate the cache, the Cloud Storage FUSE file cache also supports more advanced settings:

  • A value of ‘0’ guarantees consistency: every read makes at least one call to Cloud Storage to check that the object generation in the cache matches the one in Cloud Storage, guaranteeing that the most up-to-date file is read.

  • On the other hand, a value of ‘-1’ tells the cache to always serve the file from cache if the file is available in the cache, without checking the object generation. This provides the best performance, but may serve inconsistent data. You should consider these advanced options carefully based on your training needs.
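To make these options concrete, here is how the TTL might be set in the gcsfuse configuration file for a training run expected to last about 24 hours (an illustrative value):

```yaml
metadata-cache:
  ttl-secs: 86400   # 24 hours; 0 = validate on every read, -1 = never validate (serve from cache)
```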

Random and partial reads

When a file is first read sequentially from the beginning (offset 0), the Cloud Storage FUSE file cache asynchronously ingests the entire file into the cache, even if you are only reading a small subset of it. This allows subsequent random/partial reads of the same object, from any offset, to be served directly from the cache once it’s populated.

However, if Cloud Storage FUSE isn’t reading a file from the beginning (offset 0), it does not trigger an asynchronous full-file fetch by default. To change this behavior, you can pass cache-file-for-range-read: true. We recommend enabling this if many different random/partial reads are made against the same object. For example, an Avro or Parquet file representing a structured dataset receives many random reads at many different ranges/offsets of the same file, and benefits from this setting.
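In the gcsfuse configuration file, that option sits under the file-cache section, for example:

```yaml
file-cache:
  max-size-mb: -1
  cache-file-for-range-read: true   # fetch the whole object even when the first read starts mid-file
```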

Stat cache and type cache

Cloud Storage FUSE stat and type caches reduce the number of serial calls to Cloud Storage on repeat reads of the same file. These are enabled by default and can be used without the file cache. Properly configuring them can have a significant impact on performance. We recommend the following limits for each cache type:

  • stat-cache-max-size-mb: use the default value of 32 if your workload involves up to 20,000 files. If your workload is larger, increase the stat-cache-max-size-mb value by 10 for every additional 6,000 files; the stat cache uses around 1,500 bytes per file.

  • type-cache-max-size-mb: use the default value of 4 if no single directory in the bucket you’re mounting contains more than 20,000 files. If a single directory contains more than 20,000 files, increase the type-cache-max-size-mb value by 1 for every additional 5,000 files; the type cache uses around 200 bytes per file.

Alternatively, you can set each value to -1 to let the type and stat caches use as much memory as needed. Do this only if you have enough available memory for the size of the bucket you are mounting, per the guidance above; otherwise, it can lead to memory starvation for other running processes or out-of-memory errors.
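The sizing rules above can be expressed as a small, hypothetical helper (the function names are ours; the thresholds and increments come from the guidance in this post):

```python
import math

def recommended_stat_cache_mb(total_files: int) -> int:
    """Default 32 MB covers up to 20,000 files; add 10 MB per additional 6,000 files."""
    if total_files <= 20_000:
        return 32
    return 32 + 10 * math.ceil((total_files - 20_000) / 6_000)

def recommended_type_cache_mb(max_files_per_dir: int) -> int:
    """Default 4 MB covers up to 20,000 files per directory; add 1 MB per additional 5,000 files."""
    if max_files_per_dir <= 20_000:
        return 4
    return 4 + math.ceil((max_files_per_dir - 20_000) / 5_000)

# A workload of 50,000 files needs 32 + 10 * ceil(30,000 / 6,000) = 82 MB of stat cache.
print(recommended_stat_cache_mb(50_000))  # 82
# A directory of 30,000 files needs 4 + ceil(10,000 / 5,000) = 6 MB of type cache.
print(recommended_type_cache_mb(30_000))  # 6
```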

Cloud Storage FUSE and GKE

GKE has become a critical infrastructure component for running AI/ML workloads, including for Woven by Toyota. GKE users rely on Cloud Storage FUSE, via the Cloud Storage FUSE CSI driver, for simple file access to their objects through familiar Kubernetes APIs. And now, you can enable the Cloud Storage FUSE file cache in GKE to accelerate performance as well. To enable the file cache, create a Pod that passes a value to fileCacheCapacity, which controls the maximum size of the Cloud Storage FUSE file cache. Additionally, configure the volume attributes metadataStatCacheCapacity, metadataTypeCacheCapacity, and metadataCacheTtlSeconds to tune the cache settings for your workload.
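A sketch of such a Pod spec, assuming the Cloud Storage FUSE CSI driver is enabled on the cluster; the Pod name, image, bucket name, and capacity values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-cache-example        # illustrative name
  annotations:
    gke-gcsfuse/volumes: "true"      # request the Cloud Storage FUSE sidecar
spec:
  containers:
  - name: trainer
    image: busybox                   # placeholder image
    volumeMounts:
    - name: training-data
      mountPath: /data
  volumes:
  - name: training-data
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: my-training-bucket      # illustrative bucket name
        fileCacheCapacity: "50Gi"           # maximum file cache size
        metadataStatCacheCapacity: "64Mi"
        metadataTypeCacheCapacity: "8Mi"
        metadataCacheTtlSeconds: "86400"
```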

Note that you don’t need to explicitly specify the cache location, because it is automatically detected and configured by GKE. If the node has Local SSD enabled, the file cache medium will use Local SSD automatically; otherwise, by default, the file cache medium is the node boot disk.
