Sanctioned AI as a Pedagogical Tool: a Quasi-Experiment on the Formation of a New Academic Subjectivity

Introduction. With the spread of generative AI (GenAI) in education, discussions of the "crisis of authorship" have intensified. While most institutional responses focus on prohibition, this paper examines how student subjectivity changes when AI use is legitimized. It explores how interdictive (prohibition) and prescriptive (mandatory use with verification) approaches modulate behavioral strategies, academic ethics, and responsibility.

Methodology and sources. A controlled quasi-experiment (n = 10) was conducted within bachelor's thesis projects. Students were divided into two didactically isolated groups: one with mandatory, conscious AI use and one under an interdictive framework. Data collection included document analysis, reflective surveys, pedagogical observation, and verification metrics. Special attention was paid to correlating formal indicators with subjective interpretations.

Results and discussion. The data demonstrate an association between AI legitimization and process-oriented ethics. The prescriptive group declared a stronger sense of authorship, reported reduced ethical discomfort, and developed critical verification practices. Conversely, the interdictive group showed uncritical borrowing: prohibition failed to stimulate autonomous ethical reflection. Legitimized AI catalyzed cognitive activity, transforming student subjectivity from task performer to designer of the epistemic environment.

Conclusion. Prohibiting GenAI fails to strengthen ethical responsibility and may instead promote passive trust. Normative AI integration transforms academic subjectivity: the student becomes a "prompt designer", "model operator", and "arbiter of knowledge". This requires rethinking educational practices as well as the fundamental categories of authorship, responsibility, and competence.

Author: Vladimir E. Drach

Subject area: Sociology

Keywords: higher education, generative artificial intelligence, sociological research, thesis defense, machine learning, ethical aspects
