Cognitive psychology presupposes an explanatory role for representations. The assumption is that at least some mental states are representational and thus have intentionality, i.e., they are about things. Few cognitive psychologists take intentionality to be fundamental, so it must be explained in other terms. The question is whether any reductive approach can capture the intentionality attributed to cognitive agents. Can a naturalised intentionality grounded in computational neuroscience productively explain the psychology of cognitive agents? Relatedly, what kind of intentionality would an artificial system need in order to count as a cognitive agent? I take these questions to be two sides of the same coin: one top-down, the other bottom-up.