Abstract
While teaching a data analytics unit situated within the HASS-STEM nexus, a shifting praxis uncovered an emerging apex priority: better assessment design to ‘AI-proof’, and therefore ‘cheat-proof’, what students do. Addressing the warp-speed evolution of AI must be more than a tokenistic box-ticking exercise or a bolt-on to curriculum; it must certainly be more than a reactionary practice that holds the practitioner to ransom under the insidious notion that every student is now guilty until proven innocent. Our responsibility in teaching is surely to equip students with the values, the knowledge and the tools they can use to foster a genuine sense of agency and social responsibility in the world. Our challenge in teaching is not in knowing what we need to prevent within the complexity of an emerging ontological framework that is unlike any other in human history; our challenge is in knowing how we can best prepare our students to become responsible and humane planetary citizens. In considering the implicit presence and problematics of the unauthorised use of AI by students to complete work for assessment, and the corresponding systems-wide absence of ‘sure-fire’ mechanisms of detection, this paper asks how we might do better for students than simply police the borders and presume guilt by default.