Abstract: Intent modifies an actor's culpability for many types of wrongdoing. Autonomous
Algorithmic Agents are capable of causing harm, and whilst their
current lack of legal personhood precludes them from committing crimes, it is
useful for a number of parties to understand under what type of intentional
mode an algorithm might transgress. A creator or owner would like to ensure
that their algorithms never intend to cause harm by doing things that would
otherwise be labelled criminal if committed by a
legal person. Prosecutors might have an interest in understanding whether the
actions of an algorithm were internally intended according to a transparent
definition of the concept. The presence or absence of intention in the
algorithmic agent might inform the court as to the complicity of its owner.
This article introduces definitions for direct, oblique (or indirect) and
ulterior intent which can be used to test for intent in an algorithmic actor.