AI in Management Education

Finn Gustenberg · 5 min read

Artificial Intelligence (AI) is rapidly transforming industries, and higher education and research are no exception. Yet, within universities and the broader research community, clear standards for how AI should be integrated into academic work are still evolving. Students increasingly rely on AI tools for literature reviews, paper writing, and other research-related tasks. While the technology offers new learning opportunities, the lack of clear, universal guidelines for its implementation raises questions about academic integrity and educational standards across universities.

Recognizing the relevance of this issue, students from the Master in Strategy and International Management (SIM) program at the University of St. Gallen launched a research initiative to evaluate best practices for the use of AI in management education. The project began with three research posters analyzing existing guidelines, followed by a presentation built around a central research question derived from the posters' insights. In a final step, an academic paper will synthesize the findings gathered across the different phases. To make these findings accessible to a broader academic audience, a designated communication team is sharing outcomes through multiple channels. This blog post describes the current state of the project and shares its most important insights and implications.

Background

The research posters emphasize both the potential benefits and the critical challenges of integrating AI into management education. On the one hand, AI tools can enhance students’ productivity, personalized learning experiences, and cognitive skill development (Çela et al., 2024; Wang et al., 2024). On the other, concerns exist that overreliance on AI leads to dependency, diminished creativity, and weakened problem-solving abilities (Zhang et al., 2024; Ahmad et al., 2023).

To navigate and balance these benefits and risks, the student researchers emphasize the need for a strategic and balanced approach. AI should complement – not replace – the role of educators. Gradual implementation, supported by guidance and flexibility, was found to be essential to leverage AI’s benefits while mitigating its risks. As a guiding principle, one group recommends applying the SMART framework for responsible AI integration in management education. The framework encourages educators to:

- Start small and scale wisely (Vashita et al., 2023)
- Maintain human-centred learning (Nanyang Technological University, 2024)
- Assess AI outputs (ESMT Berlin, 2024)
- Refine course content (IMD Business School, 2024)
- Train & build AI literacy (MIT Sloan School of Management, 2024)

With AI literacy emerging as an educational priority, universities are developing courses and training programs to help students and researchers navigate AI tools more effectively (McDonald et al., 2025). However, the extent of AI implementation differs widely across Swiss universities. While some institutions have established strong, centralized policies, others are still in the development phase. To explore this disparity further and understand how policy structures support or hinder responsible AI adoption by educators, the presentation group developed a central research question: “How does the degree of policy centralization affect the AI enablement of university educators?”

Current State & Analysis

To explore this central research question, the students developed a conceptual framework called the AI enablement matrix. The tool is designed to assess the alignment between institutional AI policies and educator capabilities, helping to bridge the gap between high-level policy design and front-line educator behaviour.

The matrix maps educational institutions along two dimensions. The x-axis captures the degree of centralization of the AI policy. At one end, full centralization implies a single, university-wide policy governing the use of AI tools. At the other end, a decentralized approach gives individual faculty members the freedom to determine their own AI usage, guidelines, and teaching practices. The y-axis maps educators’ AI capabilities. High capability indicates a good understanding of how to leverage AI tools responsibly and integrate them effectively into teaching. Low capability suggests a lack of awareness or skill, implying that educators are likely to misuse or underuse AI tools. Combining the two dimensions yields four distinct quadrants of implementation strategies.

Top-down Enablement describes centralized policies with highly capable educators. This strategy allows highly skilled individuals to develop a uniform best practice to boost organizational efficiency. However, the increased centralization can hinder organizational learning, potentially leading to frustration among faculty due to limited flexibility.

Policy-driven Innovation combines highly centralized policies with educators who have limited AI capabilities. While this approach limits the risk of misuse and non-compliance, higher information costs across the institution can be expected in this quadrant.

Empowered Innovation describes decentralized policies with highly capable educators. This setup reduces administrative overhead and promotes autonomy among educators, but increases the risk of unlawful or unethical conduct in relation to AI.

Unstructured Uncertainty involves decentralized policies with low-capability educators. Although this strategy promotes organizational learning while using limited administrative resources, the risk of unlawful or unethical conduct is increased compared to the Empowered Innovation quadrant.
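The mapping from the two dimensions onto the four quadrants can be sketched as a simple classification rule — a hypothetical encoding for illustration only, treating each axis as binary (the function name and signature are assumptions, not part of the original framework):

```python
def ai_enablement_quadrant(centralized_policy: bool, high_capability: bool) -> str:
    """Classify an institution into a quadrant of the AI enablement matrix.

    x-axis: degree of policy centralization (binary here for simplicity)
    y-axis: educators' AI capabilities (binary here for simplicity)
    """
    if centralized_policy and high_capability:
        return "Top-down Enablement"
    if centralized_policy and not high_capability:
        return "Policy-driven Innovation"
    if not centralized_policy and high_capability:
        return "Empowered Innovation"
    return "Unstructured Uncertainty"


# Example: a decentralized institution with low-capability educators
print(ai_enablement_quadrant(centralized_policy=False, high_capability=False))
# → Unstructured Uncertainty
```

In practice both dimensions are continuous rather than binary, so an institution may sit near a quadrant boundary; the sketch only makes the quadrant logic explicit.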

Based on these observed benefits and risks across the quadrants, the research group formulated two hypotheses about future directions for AI governance in educational institutions.

Hypotheses

1. Decentralized policies can only be effective when educators possess high individual AI capabilities.

This hypothesis implies a need for regular training of educators, as well as oversight mechanisms to identify best practices and ensure accountability. Beyond that, AI experts in different departments would need to be identified who can act as sparring partners for educators with lower AI capabilities.

2. When educator AI capabilities are low, centralization becomes essential to avoid misuse and ensure ethical compliance.

This strategy would imply enforcing a central policy with clear do’s and don’ts covering ethics, academic integrity, and legal compliance. Beyond that, educators should be encouraged to suggest improvements to the central policy guidelines, and training should be offered within the scope of the central policy.

In the next phase of the research project, another group of students will evaluate and test the two hypotheses, analysing both their current application in practice and their implications for policy-making in educational institutions.

Featured in this Article

Studyond connects students with companies through thesis collaborations, enabling real-world learning experiences and early talent engagement.