A decision-tree framework to select optimal box-sizes for product
shipments
- URL: http://arxiv.org/abs/2202.04277v1
- Date: Wed, 9 Feb 2022 04:46:55 GMT
- Title: A decision-tree framework to select optimal box-sizes for product
shipments
- Authors: Karthik S. Gurumoorthy, Abhiraj Hinge
- Abstract summary: In package-handling facilities, boxes of varying sizes are used to ship products. Improperly sized boxes, with box dimensions much larger than the product dimensions, create wastage and unduly increase shipping costs.
We propose a solution for the single-count shipment containing one product per box in two steps: (i) reduce it to a clustering problem in the $3$-dimensional space of length, width and height, where each cluster corresponds to the group of products that will be shipped in a particular size variant, and (ii) present an efficient forward-backward decision-tree-based clustering method with low computational complexity in $N$ and $K$.
- Score: 0.700545830845487
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In package-handling facilities, boxes of varying sizes are used to ship
products. Improperly sized boxes with box dimensions much larger than the
product dimensions create wastage and unduly increase the shipping costs. Since
it is infeasible to make unique, tailor-made boxes for each of the $N$
products, the fundamental question that confronts e-commerce companies is: How
many $K \ll N$ cuboidal boxes need to be manufactured, and what should their
dimensions be? In this paper, we propose a solution for the single-count shipment
containing one product per box in two steps: (i) reduce it to a clustering
problem in the $3$-dimensional space of length, width and height, where each
cluster corresponds to the group of products that will be shipped in a
particular size variant, and (ii) present an efficient forward-backward
decision-tree-based clustering method with low computational complexity in $N$
and $K$ to obtain these $K$ clusters and the corresponding box dimensions. Our
algorithm has multiple constituent parts, each specifically designed to achieve
a high-quality clustering solution. As our method generates clusters in an
incremental fashion without discarding the present solution, adding or deleting
a size variant is as simple as stopping the backward pass early or executing it
for one more iteration. We tested the efficacy of our approach by simulating,
with the proposed box dimensions, the actual single-count shipments that Amazon
transported over a month. Even by just modifying the existing box dimensions and
not adding a new size variant, we achieved a reduction of $4.4\%$ in the
shipment volume, contributing to a $2.2\%$ decrease in non-utilized air-volume
space. The reductions in shipment volume and air volume improved significantly
to $10.3\%$ and $6.1\%$, respectively, when we introduced $4$ additional boxes.
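The paper does not include code, but the step-(i) reduction described above can be made concrete: each product is a point $(\ell, w, h)$, the box of a cluster must be the coordinate-wise maximum of its member products (so that every product assigned to it fits), and the goal is to pick $K$ clusters that minimize the total shipped box volume. The Python sketch below is a minimal illustration of that objective under these assumptions; the quantile initialization, the reassignment rule, and the function names are illustrative choices and do not reproduce the authors' forward-backward decision-tree algorithm.

```python
import numpy as np

def box_of(cluster_pts):
    # Smallest box that fits every product in the cluster: coordinate-wise max.
    return cluster_pts.max(axis=0)

def assign(points, boxes):
    # Assign each product to the box whose volume, after growing (if needed) to
    # fit the product, is smallest. If the product already fits, that is just
    # the box's own volume.
    grown = np.maximum(points[:, None, :], boxes[None, :, :])   # (n, k, 3)
    return grown.prod(axis=2).argmin(axis=1)

def fit_box_sizes(points, k, iters=50):
    # Alternating heuristic (illustrative stand-in, NOT the paper's
    # forward-backward decision-tree method). Assumes len(points) >= k.
    order = np.argsort(points.prod(axis=1))          # sort products by volume
    boxes = np.stack([box_of(points[s]) for s in np.array_split(order, k)])
    for _ in range(iters):
        labels = assign(points, boxes)
        new_boxes = np.stack([
            box_of(points[labels == c]) if np.any(labels == c) else boxes[c]
            for c in range(k)
        ])
        if np.allclose(new_boxes, boxes):
            break
        boxes = new_boxes
    return boxes, assign(points, boxes)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dims = rng.uniform(5.0, 60.0, size=(1000, 3))    # synthetic L, W, H in cm
    boxes, labels = fit_box_sizes(dims, k=5)
    shipped = boxes[labels].prod(axis=1).sum()       # total shipped box volume
    print(np.round(boxes, 1))
    print("total shipped volume (cm^3):", round(shipped))
```

Under this naive formulation, adding or deleting a size variant would mean rerunning the whole fit with $K+1$ or $K-1$ clusters; the incremental forward-backward construction in the paper is designed precisely so that such a change only requires stopping the backward pass early or running it for one more iteration.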
Related papers
- Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss [59.835032408496545]
We propose a tile-based strategy that partitions the contrastive loss calculation into arbitrarily small blocks.
We also introduce a multi-level tiling strategy to leverage the hierarchical structure of distributed systems.
Compared to SOTA memory-efficient solutions, it achieves a two-order-of-magnitude reduction in memory while maintaining comparable speed.
arXiv Detail & Related papers (2024-10-22T17:59:30Z)
- Container pre-marshalling problem minimizing CV@R under uncertainty of ship arrival times [2.9061423802698565]
The container pre-marshalling problem involves relocating containers in the storage area so that they can be efficiently loaded onto ships without reshuffles.
We derive a mixed-integer linear optimization model to find an optimal container layout.
We devise an exact algorithm based on the cutting-plane method to handle large-scale problems.
arXiv Detail & Related papers (2024-05-27T18:19:09Z)
- Head-wise Shareable Attention for Large Language Models [56.92068213969036]
Large Language Models (LLMs) suffer from a huge number of parameters, which restricts their deployment on edge devices.
Weight sharing is one promising solution that encourages weight reuse, effectively reducing memory usage with little performance drop.
We present a perspective on head-wise shareable attention for large language models.
arXiv Detail & Related papers (2024-02-19T04:19:36Z)
- BoxSnake: Polygonal Instance Segmentation with Box Supervision [34.487089567665556]
We propose a new end-to-end training technique, termed BoxSnake, to achieve effective polygonal instance segmentation using only box annotations for the first time.
Compared with the mask-based weakly-supervised methods, BoxSnake further reduces the performance gap between the predicted segmentation and the bounding box, and shows significant superiority on the Cityscapes dataset.
arXiv Detail & Related papers (2023-03-21T06:54:18Z)
- Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z)
- Randomized Dimensionality Reduction for Facility Location and Single-Linkage Clustering [13.208510864854894]
Random dimensionality reduction is a versatile tool for speeding up algorithms for high-dimensional problems.
We study its application to two clustering problems: the facility location problem, and the single-linkage hierarchical clustering problem.
arXiv Detail & Related papers (2021-07-05T05:55:26Z)
- 1$\times$N Block Pattern for Network Sparsity [90.43191747596491]
We propose a novel concept of a $1\times N$ block sparsity pattern (block pruning) to break this limitation.
Our pattern obtains about a 3.0% improvement over filter pruning in the top-1 accuracy of MobileNet-V2.
It also obtains 56.04 ms of inference savings on a Cortex-A7 CPU over weight pruning.
arXiv Detail & Related papers (2021-05-31T05:50:33Z)
- Linear Optimal Transport Embedding: Provable Wasserstein classification for certain rigid transformations and perturbations [79.23797234241471]
Discriminating between distributions is an important problem in a number of scientific fields.
The Linear Optimal Transportation (LOT) embedding maps the space of distributions into an $L^2$-space.
We demonstrate the benefits of LOT on a number of distribution classification problems.
arXiv Detail & Related papers (2020-08-20T19:09:33Z)
- FANOK: Knockoffs in Linear Time [73.5154025911318]
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large-scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z)
- Think out of the package: Recommending package types for e-commerce shipments [2.741530713365541]
Multiple product attributes determine the package type used by e-commerce companies to ship products.
Sub-optimal package types lead to damaged shipments, incurring huge damage-related costs.
We propose a multi-stage approach that trades off shipment and damage costs for each product.
arXiv Detail & Related papers (2020-06-05T05:27:51Z)
- An anytime tree search algorithm for two-dimensional two- and three-staged guillotine packing problems [0.0]
Our algorithm was ranked first among 64 participants.
We generalize it and show that it is not only effective for the specific problem it was originally designed for, but is also very competitive.
The algorithm is implemented in a new software package called Packingr.
arXiv Detail & Related papers (2020-04-02T13:41:07Z)