Optimal To-Do List Gamification for Long Term Planning
- URL: http://arxiv.org/abs/2109.06505v2
- Date: Wed, 15 Sep 2021 05:05:46 GMT
- Title: Optimal To-Do List Gamification for Long Term Planning
- Authors: Saksham Consul, Jugoslav Stojcheski, Valkyrie Felso, Falk Lieder
- Abstract summary: We extend the previous version of our optimal gamification method with added services for helping people decide which tasks should and should not be done when there is not enough time to do everything.
We test the accuracy of the incentivized to-do list by comparing the performance of the strategy with the points computed exactly using Value Iteration for a variety of case studies.
To demonstrate its functionality, we released an API that makes it easy to deploy our method in Web and app services.
- Score: 0.6882042556551609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most people struggle with prioritizing work. While inexact heuristics have
been developed over time, there is still no tractable principled algorithm for
deciding which of the many possible tasks one should tackle in any given day,
week, month, or year. Additionally, some people suffer from cognitive biases
such as the present bias, leading to prioritization of their immediate
experience over long-term consequences, which manifests itself as
procrastination and inefficient task prioritization. Our method utilizes
optimal gamification to help people overcome these problems by incentivizing
each task by a number of points that convey how valuable it is in the long-run.
We extend the previous version of our optimal gamification method with added
services for helping people decide which tasks should and should not be done
when there is not enough time to do everything. To improve the efficiency and
scalability of the to-do list solver, we designed a hierarchical procedure that
tackles the problem from the top-level goals to fine-grained tasks. We test the
accuracy of the incentivized to-do list by comparing the performance of the
strategy with the points computed exactly using Value Iteration for a variety
of case studies. These case studies were specifically designed to cover the
corner cases and give an accurate gauge of performance. Our method yielded the
same performance as the exact method for all case studies. To demonstrate its
functionality, we released an API that makes it easy to deploy our method in
Web and app services. We assessed the scalability of our method by applying it
to to-do lists with increasingly large numbers of goals, sub-goals per goal,
and hierarchically nested levels of sub-goals. We found that the method provided
through our API is able to tackle fairly large to-do lists with 576 tasks.
This indicates that our method is suitable for real-world applications.
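As a concrete illustration of the computation the abstract refers to, the sketch below (mine, not the authors' released solver or API) runs exact backward induction, equivalent to value iteration on a small, acyclic to-do list MDP. Goals carry values and deadlines, tasks carry durations, and each available task is shown points equal to its optimal Q-value; all task names, numbers, and the Q-value-as-points convention are illustrative assumptions rather than details taken from the paper.

```python
from functools import lru_cache

# Toy to-do list: goal -> (value, deadline); task -> (goal it serves, duration).
GOALS = {"thesis": (100.0, 10), "taxes": (40.0, 6)}
TASKS = {
    "write_intro": ("thesis", 4),
    "run_experiments": ("thesis", 5),
    "collect_receipts": ("taxes", 2),
    "file_return": ("taxes", 3),
}

def elapsed(done):
    """Time spent so far: total duration of the completed tasks."""
    return sum(TASKS[t][1] for t in done)

def reward(done_before, task):
    """Value of goals whose final task is `task`, finished before their deadline."""
    done_after = done_before | {task}
    finish = elapsed(done_after)
    gained = 0.0
    for goal, (value, deadline) in GOALS.items():
        members = {t for t, (g, _) in TASKS.items() if g == goal}
        newly_done = members <= done_after and not members <= done_before
        if newly_done and finish <= deadline:
            gained += value
    return gained

@lru_cache(maxsize=None)
def solve(done=frozenset()):
    """Backward induction (exact value iteration on this finite, acyclic MDP).
    Returns (optimal long-run value, {candidate next task: Q-value})."""
    q = {}
    for task in TASKS:
        if task not in done:
            q[task] = reward(done, task) + solve(done | {task})[0]
    return (max(q.values()) if q else 0.0, q)

value, points = solve()
print("optimal long-run value:", value)
for task, pts in sorted(points.items(), key=lambda kv: -kv[1]):
    print(f"{task}: {pts:.1f} points")  # points = optimal Q-value of doing this task next
```

On this toy list the thesis tasks earn more points than the quicker tax tasks because finishing the thesis first is the better long-run plan, which is the kind of signal intended to counteract present bias; the paper's hierarchical decomposition, which keeps such computations tractable for large lists, is not reproduced here.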
Related papers
- Learning Dual-arm Object Rearrangement for Cartesian Robots [28.329845378085054]
This work focuses on the dual-arm object rearrangement problem abstracted from a realistic industrial scenario of Cartesian robots.
The goal of this problem is to transfer all the objects from sources to targets with the minimum total completion time.
We develop an effective object-to-arm task assignment strategy for minimizing the cumulative task execution time and maximizing the dual-arm cooperation efficiency.
arXiv Detail & Related papers (2024-02-21T09:13:08Z)
- Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
arXiv Detail & Related papers (2022-12-30T12:32:43Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named as, Compactness Score (CSUFS) to select desired features.
Our proposed algorithm seems to be more accurate and efficient compared with existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks [133.40619754674066]
Goal-conditioned reinforcement learning can solve tasks in a wide range of domains, including navigation and manipulation.
We propose to solve distant goal-reaching tasks by using search at training time to automatically generate intermediate states.
E-step corresponds to planning an optimal sequence of waypoints using graph search, while the M-step aims to learn a goal-conditioned policy to reach those waypoints.
arXiv Detail & Related papers (2021-10-22T22:05:31Z)
- Auxiliary Task Update Decomposition: The Good, The Bad and The Neutral [18.387162887917164]
We formulate a model-agnostic framework that performs fine-grained manipulation of the auxiliary task gradients.
We propose to decompose auxiliary updates into directions which help, damage, or leave the primary task loss unchanged (a rough illustrative sketch of this decomposition appears after this list).
Our approach consistently outperforms strong and widely used baselines when leveraging out-of-distribution data for Text and Image classification tasks.
arXiv Detail & Related papers (2021-08-25T17:09:48Z)
- Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification [133.20816939521941]
In the standard Markov decision process formalism, users specify tasks by writing down a reward function.
In many scenarios, the user is unable to describe the task in words or numbers, but can readily provide examples of what the world would look like if the task were solved.
Motivated by this observation, we derive a control algorithm that aims to visit states that have a high probability of leading to successful outcomes, given only examples of successful outcome states.
arXiv Detail & Related papers (2021-03-23T16:19:55Z)
- Optimal to-do list gamification [4.8986598953553555]
We introduce and evaluate a scalable method for identifying which tasks are most important in the long run and incentivizing each task according to its long-term value.
Our method makes it possible to create to-do list gamification apps that can handle the size and complexity of people's to-do lists in the real world.
arXiv Detail & Related papers (2020-08-12T10:59:13Z)
- A Multiperiod Workforce Scheduling and Routing Problem with Dependent Tasks [0.0]
We study a new Workforce Scheduling and Routing Problem.
In this problem, customers request services from a company.
Tasks belonging to a service may be executed by different teams, and customers may be visited more than once a day.
arXiv Detail & Related papers (2020-08-06T19:31:55Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
- Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints [52.58352707495122]
We present a multi-robot allocation algorithm that decouples the key computational challenges of sequential decision-making under uncertainty and multi-agent coordination.
We validate our results over a wide range of simulations on two distinct domains: multi-arm conveyor belt pick-and-place and multi-drone delivery dispatch in a city.
arXiv Detail & Related papers (2020-05-27T01:10:41Z)
- Conditional Channel Gated Networks for Task-Aware Continual Learning [44.894710899300435]
Convolutional Neural Networks experience catastrophic forgetting when optimized on a sequence of learning problems.
We introduce a novel framework to tackle this problem with conditional computation.
We validate our proposal on four continual learning datasets.
arXiv Detail & Related papers (2020-03-31T19:35:07Z)
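As noted in the Auxiliary Task Update Decomposition entry above, here is a rough sketch of that decomposition idea under my own assumptions: the auxiliary-task gradient is split into its projection onto the primary-task gradient (helpful when their dot product is positive, damaging when negative) and an orthogonal component that leaves the primary loss unchanged to first order. The paper's actual criterion and update rule may differ from this projection-based version.

```python
import numpy as np

def decompose_aux_update(primary_grad: np.ndarray, aux_grad: np.ndarray):
    """Split aux_grad into (helpful, damaging, neutral) parts w.r.t. primary_grad."""
    p = primary_grad.ravel()
    a = aux_grad.ravel()
    coef = (a @ p) / max(p @ p, 1e-12)
    parallel = coef * p                 # component along the primary gradient
    neutral = a - parallel              # orthogonal: no first-order effect on the primary loss
    helpful = parallel if coef > 0 else np.zeros_like(parallel)
    damaging = parallel - helpful
    return (helpful.reshape(aux_grad.shape),
            damaging.reshape(aux_grad.shape),
            neutral.reshape(aux_grad.shape))

# Example: keep only the helpful and neutral parts of the auxiliary update.
g_primary = np.array([1.0, 0.0])
g_aux = np.array([-0.5, 2.0])
helpful, damaging, neutral = decompose_aux_update(g_primary, g_aux)
update = g_primary + helpful + neutral  # drop the damaging direction
print(update)                           # -> [1. 2.]
```

Dropping only the damaging component, rather than the whole auxiliary gradient, preserves the auxiliary signal that is neutral or aligned with the primary task, which is the intuition the entry above describes.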
This list is automatically generated from the titles and abstracts of the papers in this site.