Abstract: Vertical Federated Learning (VFL) refers to the collaborative training of a
model on a dataset where the features of the dataset are split among multiple
data owners, while label information is owned by a single data owner. In this
paper, we propose a novel method, Multi Vertical Federated Learning
(Multi-VFL), to train VFL models when there are multiple data and label owners.
Our approach is the first to consider the setting in which both $D$ data owners
(across which features are distributed) and $K$ label owners (across which
labels are distributed) exist. This configuration allows the participating
entities to collaboratively train optimal models without sharing their data. Our
framework makes use of split learning and adaptive federated optimizers to
solve this problem. For empirical evaluation, we run experiments on the MNIST
and FashionMNIST datasets. Our results show that using adaptive optimizers for
model aggregation accelerates convergence and improves accuracy.
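The abstract names adaptive federated optimizers as the mechanism for model aggregation. As a minimal sketch of what such a server-side update could look like (the function name, hyperparameter values, and flat-array representation are our own illustrative assumptions, not the paper's specification), a FedAdam-style aggregation step over the participants' model updates is:

```python
import numpy as np

def fedadam_aggregate(global_w, client_ws, m, v,
                      lr=0.1, beta1=0.9, beta2=0.99, tau=1e-3):
    """One FedAdam-style server round: aggregate client models with
    Adam-like adaptive moments. Weights are flat numpy arrays for simplicity."""
    # Pseudo-gradient: average drift of the client models from the global model.
    delta = np.mean([w - global_w for w in client_ws], axis=0)
    # Server-side first and second moment estimates (Adam-style).
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    # Adaptive update of the global model; tau prevents division by zero.
    new_w = global_w + lr * m / (np.sqrt(v) + tau)
    return new_w, m, v

# Example: two clients pull a zero-initialized global model toward ones.
g = np.zeros(3)
clients = [np.ones(3), np.ones(3)]
g, m, v = fedadam_aggregate(g, clients, np.zeros(3), np.zeros(3))
```

The adaptive moments give the server per-parameter step sizes, which is the property the abstract credits with faster convergence relative to plain averaging.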