Scaling ML Inference on Databricks: Liquid or Partitioned? Salted or Not?
https://towardsdatascience.com/liquid-or-partitioned-salted-or-not-scaling-ml-inference-on-databricks/ (towardsdatascience.com)

This case study optimizes a machine learning inference pipeline on Databricks to improve scalability and cluster utilization. The initial pipeline performed poorly: data skew across products concentrated the work into a small number of oversized partitions, and a full run took nearly 10 hours. The analysis compares four data-preparation scenarios, contrasting traditional partitioning with liquid clustering, each with and without a dynamic "salting" technique (sketched below). Salting spreads skewed keys across more evenly sized partitions, which maximizes parallelism during inference and removes the bottleneck caused by a few large, uneven data chunks. The results show how strongly the choice of data layout affects runtime and resource utilization.
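A minimal PySpark sketch of the salting idea described above, assuming a table keyed by a skewed `product_id` column; the table name, column name, and salt count are illustrative assumptions, not details from the article:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

NUM_SALTS = 32  # assumed bucket count; tune to the cluster's parallelism

# Hypothetical source table with a skewed product_id distribution.
df = spark.read.table("inference_input")

# Append a random salt so one hot product key is spread across many
# smaller, similarly sized partitions instead of a single oversized one.
salted = df.withColumn("salt", (F.rand() * NUM_SALTS).cast("int"))

# Repartition on the composite (product_id, salt) key so each task
# receives a comparable amount of data during the inference stage.
balanced = salted.repartition("product_id", "salt")
```

Repartitioning on the composite key trades a small extra shuffle for near-uniform task sizes, which is what keeps all executors busy during inference.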