Who "Controls" Where Work Shall be Done? State-of-Practice in Post-Pandemic Remote Work Regulation
- URL: http://arxiv.org/abs/2505.15743v1
- Date: Wed, 21 May 2025 16:50:09 GMT
- Title: Who "Controls" Where Work Shall be Done? State-of-Practice in Post-Pandemic Remote Work Regulation
- Authors: Darja Smite, Nils Brede Moe, Maria Teresa Baldassarre, Fabio Calefato, Guilherme Horta Travassos, Marcin Floryan, Marcos Kalinowski, Daniel Mendez, Graziela Basilio Pereira, Margaret-Anne Storey, Rafael Prikladnicki
- Abstract summary: This study examines how companies employing software engineers and supporting roles regulate work location. We collected data on remote work regulation from corporate HR and/or management representatives from 68 corporate entities. Although no companies have increased flexibility, only four companies are returning to full-time office work.
- Score: 9.891156901595595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The COVID-19 pandemic has permanently altered workplace structures, making remote work a widespread practice. While many employees advocate for flexibility, many employers reconsider their attitude toward remote work and opt for structured return-to-office mandates. Media headlines repeatedly emphasize that the corporate world is returning to full-time office work. This study examines how companies employing software engineers and supporting roles regulate work location, whether corporate policies have evolved in the last five years, and, if so, how, and why. We collected data on remote work regulation from corporate HR and/or management representatives from 68 corporate entities that vary in size, location, and orientation towards remote or office work. Our findings reveal that although many companies prioritize office-centred working (50%), most companies in our sample permit hybrid working to varying degrees (85%). Remote work regulation does not reveal any particular new "best practice" as policies differ greatly, but the single most popular arrangement was three in-office days per week. More than half of the companies (51%) encourage or mandate office days, and more than a quarter (28%) have changed regulations, gradually increasing the mandatory office presence or implementing differentiated conditions. Although no companies have increased flexibility, only four companies are returning to full-time office work. Our key recommendation for office-oriented companies is to consider a trust-based alternative to strict office presence mandates, while for companies oriented toward remote working, we warn about the points of no (or hard) return. Finally, the current state of policies is clearly not final, as companies continue to experiment and adjust their work regulation.
Related papers
- CHANCERY: Evaluating Corporate Governance Reasoning Capabilities in Language Models [30.288227578616905]
We introduce a corporate governance reasoning benchmark (CHANCERY) to test a model's ability to reason about whether executive/board/shareholder's proposed actions are consistent with corporate governance charters.
The benchmark consists of a corporate charter (a set of governing covenants) and a proposal for executive action.
Evaluations on state-of-the-art (SOTA) reasoning models confirm the difficulty of the benchmark, with models such as Claude 3.7 Sonnet and GPT-4o achieving 64.5% and 75.2% accuracy respectively.
arXiv Detail & Related papers (2025-06-05T05:13:32Z) - What Attracts Employees to Work Onsite in Times of Increased Remote Working? [6.179340247070282]
In this paper, we offer insights into the role of the office, corporate policies and actions regarding remote work in eight companies.
We found that companies indeed struggle with office presence and a large share of corporate space (35-67%) is underutilized.
We summarize actionable advice to promote onsite work, which is likely to help many other companies to rejuvenate life in their offices.
arXiv Detail & Related papers (2023-10-06T09:34:48Z) - Policy Dispersion in Non-Markovian Environment [53.05904889617441]
This paper tries to learn the diverse policies from the history of state-action pairs under a non-Markovian environment.
We first adopt a transformer-based method to learn policy embeddings.
Then, we stack the policy embeddings to construct a dispersion matrix to induce a set of diverse policies.
arXiv Detail & Related papers (2023-02-28T11:58:39Z) - Impacts and Integration of Remote-First Working Environments [0.0]
"Remote first" working environments exist within companies where most employees work remotely.
This paper takes a deep dive into the remote-first mentality.
It investigates its effects on employees at varying stages in their careers, day-to-day productivity, and working relationships with team members.
arXiv Detail & Related papers (2022-09-09T16:32:51Z) - A State-Distribution Matching Approach to Non-Episodic Reinforcement Learning [61.406020873047794]
A major hurdle to real-world application arises from the development of algorithms in an episodic setting.
We propose a new method, MEDAL, that trains the backward policy to match the state distribution in the provided demonstrations.
Our experiments show that MEDAL matches or outperforms prior methods on three sparse-reward continuous control tasks.
arXiv Detail & Related papers (2022-05-11T00:06:29Z) - Work-From-Home is Here to Stay: Call for Flexibility in Post-Pandemic Work Policies [4.409836695738518]
The COVID-19 pandemic forced employees in tech companies worldwide to abruptly transition from working in offices to working from their homes.
Many companies are currently experimenting with new work policies that balance employee and manager expectations.
arXiv Detail & Related papers (2022-03-21T17:11:20Z) - Influencing Long-Term Behavior in Multiagent Reinforcement Learning [59.98329270954098]
We propose a principled framework for considering the limiting policies of other agents as the time approaches infinity.
Specifically, we develop a new optimization objective that maximizes each agent's average reward by directly accounting for the impact of its behavior on the limiting set of policies that other agents will take on.
Thanks to our farsighted evaluation, we demonstrate better long-term performance than state-of-the-art baselines in various domains.
arXiv Detail & Related papers (2022-03-07T17:32:35Z) - Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z) - Understanding Developers Well-Being and Productivity: a 2-year Longitudinal Analysis during the COVID-19 Pandemic [20.958668676181947]
We explore changes in well-being, productivity, social contacts, and needs of software engineers during the COVID-19 pandemic.
For example, well-being and quality of social contacts increased while emotional loneliness decreased as lockdown measures were relaxed.
A preliminary investigation into the future of work at the end of the pandemic revealed a consensus among developers for a preference of hybrid work arrangements.
arXiv Detail & Related papers (2021-11-19T18:07:21Z) - Self-Supervised Policy Adaptation during Deployment [98.25486842109936]
Self-supervision allows the policy to continue training after deployment without using any rewards.
Empirical evaluations are performed on diverse simulation environments from DeepMind Control suite and ViZDoom.
Our method improves generalization in 31 out of 36 environments across various tasks and outperforms domain randomization on a majority of environments.
arXiv Detail & Related papers (2020-07-08T17:56:27Z) - BRPO: Batch Residual Policy Optimization [79.53696635382592]
In batch reinforcement learning, one often constrains a learned policy to be close to the behavior (data-generating) policy.
We propose residual policies, where the allowable deviation of the learned policy is state-action-dependent.
We derive a new RL method, BRPO, which learns both the policy and allowable deviation that jointly maximize a lower bound on policy performance.
arXiv Detail & Related papers (2020-02-08T01:59:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.