Call for the Special Issue on "The Age of Artificial Intelligence: Higher Education, Research, and Social Transformation"
Guest editors
Maria Potes Barbas1, Andreia Teles Vieira1, and Cassio Santos1
Deadline: May 15th
The integration of Generative Artificial Intelligence (GenAI) into higher education represents a structural transformation that transcends mere technological adoption, fundamentally reconfiguring the dynamics of teaching, research governance, and social inclusion. As higher education institutions grapple with these rapid advancements (Sánchez-Caballé & Santos, 2025), they are compelled to move beyond reactive measures towards strategic frameworks that align scientific excellence with ethical responsibility. This Special Issue of Calitatea Vieții provides an important platform for exploring how the academic community can facilitate a responsible AI transition that enhances quality of life and supports equitable social outcomes.
The governance of AI in research has become a paramount concern, necessitating a rigorous examination of how integrity is maintained amid the proliferation of automated content generation. Major scientific publishers have responded to this challenge by establishing guidelines that delineate the boundaries of human and machine contributions. Elsevier, for instance, explicitly states that AI tools may be used only to support researchers by improving efficiency or readability, never as a substitute for human critical thinking (Elsevier, 2026). Its policy mandates that authors retain full accountability for the accuracy and impartiality of their work and strictly prohibits listing AI tools as authors, since such tools cannot assume responsibility for the content. Springer Nature has restricted the use of generative AI for image creation, citing unresolved copyright and integrity issues, and requires detailed documentation of AI use in the methods section of manuscripts (Springer, 2026). The journal Nature has adopted strict policies under which Large Language Models (LLMs) cannot satisfy authorship criteria, emphasising that authorship implies an accountability that AI tools cannot possess (Nature, 2025).
Despite the nuances of these editorial and journal policies, a consensus is clear: AI lacks the requisite agency for authorship, human oversight is non-negotiable, and transparency in the use of these tools is essential to preserve the integrity of the scientific record (Elsevier, 2026; European Commission, 2025). This volume invites contributions that examine how higher education institutions can operationalise these global standards within local governance frameworks, moving from prohibition to a culture of 'AI literacy' and ethical compliance (Rughinis et al., 2025).
Beyond the regulatory landscape of authorship, Generative AI is already being operationalised as a methodological tool within the scientific workflow. Recent empirical studies have validated the utility of LLMs in qualitative research, demonstrating that tools such as NotebookLM and customised GPTs can conduct thematic analyses with congruence comparable to human coding (Bennis & Mouwafaq, 2025; Cevik & Abu-Zidan, 2025). In the quantitative domain, the Data Analyst GPT has demonstrated reliability comparable to standard statistical software packages such as SPSS and JAMOVI, offering a versatile interface for educational research (Santos, 2024). Furthermore, the application of AI extends to evidence synthesis, where tools like NotebookLM are being evaluated for their capacity to conduct 'source-grounded' literature reviews that mitigate hallucination risks, whilst other LLMs assist in refining complex search strategies for systematic reviews (Gitman et al., 2025; Shor et al., 2025).
In parallel, the pedagogical landscape is shifting towards models of 'Human-AI Collaboration'. The traditional transmission of knowledge is being replaced by experiential learning cycles in which AI serves as a cognitive partner rather than a replacement for student effort. This shift requires a re-evaluation of teacher competencies, moving towards frameworks such as the 'GenAI-TPACK', which integrates technological proficiency with pedagogical and ethical content knowledge (Belkina et al., 2025). By fostering environments where students co-produce analysis with AI while retaining judgement, educators can mitigate the risks of cognitive dependency and promote critical thinking (Zhou & Fang, 2025). This upskilling is vital not only for academic success but also for preparing graduates for a labour market increasingly shaped by algorithmic decision-making.
Furthermore, the deployment of AI in education must be scrutinised through the lens of social inclusion and risk mitigation. While AI offers potential for personalised learning, it carries significant risks of exacerbating existing inequalities if not designed with an intersectional approach to justice. The uncritical use of algorithms can reproduce systemic biases related to gender, race, and ability, penalising non-standard language or misinterpreting the needs of students with disabilities (Dumitru et al., 2025; Peña-García et al., 2026; UNESCO, 2023b). Therefore, this volume seeks investigations into how 'dialogic reflection' and inclusive prompt design can serve as pathways to algorithmic justice, ensuring that AI technologies bridge rather than widen the digital divide (Peña-García et al., 2026).
By addressing these intersecting dimensions (ethical governance, pedagogical innovation, and social inclusion), this Special Issue aligns directly with the mission of Calitatea Vieții to advance research on social policy and quality of life. We contribute by providing evidence-based strategies for navigating the 'Age of AI' and by demonstrating how higher education can lead the development of technology that serves the common good, dignifies human labour, and promotes sustainable social development.
Navigating this complex landscape effectively requires a holistic approach that transcends the dichotomy of utopian enthusiasm and dystopian fear to address the systemic implications of AI deployment. It is imperative to examine how institutional strategies can move beyond mere compliance to foster a culture of critical engagement, in which AI adoption is driven by knowledge acquisition rather than by the avoidance of perceived threats (Al-Emran et al., 2025a). As universities transition from reactive measures to strategic governance, there is a pressing need to operationalise these challenges into actionable insights that safeguard educational value and social equity (Rughinis et al., 2025). Consequently, this Special Issue invites empirical and theoretical contributions that interrogate the intersections of policy, pedagogy, and social justice, demonstrating how higher education can mitigate risks while promoting the public good. To this end, submissions should align with, though need not be limited to, the following three thematic axes:
- Ethical Governance and Responsible AI in Higher Education: This thematic axis addresses the urgent need for higher education institutions to move from reactive measures to strategic governance frameworks. Literature suggests that institutional policies must evolve from a logic of prohibition to the definition of new 'AI competencies' and integrity, with technology treated as an 'integrity test' that requires greater transparency (Rughinis et al., 2025). Submissions should analyse how universities can operationalise principles of trust and human oversight, aligning with guidelines that emphasise the researcher's ultimate responsibility for any AI-generated content (European Commission, 2025). It is essential to examine the preservation of academic integrity in a landscape where major publishers, including Elsevier, Springer Nature, Wiley, Taylor & Francis, and SAGE Publishing, have established that AI lacks the agency required for scientific authorship. We invite studies that evaluate the efficacy and ethical limitations of AI detection tools, which remain vulnerable to errors and manipulation (Pellegrina & Helmy, 2025). Furthermore, research is encouraged on the development of AI ecosystems that integrate fairness, accountability, and transparency (the FAT approach) to mitigate privacy risks and ensure that data governance protects vulnerable populations from algorithmic surveillance or exclusion (Dumitru et al., 2025; UNESCO, 2023b).
- Pedagogical Innovation and Technological Upskilling: This thematic axis addresses the imperative to reconfigure curricula and assessment methodologies in response to the ubiquity of GenAI. We invite empirical and theoretical submissions that investigate the transition from traditional knowledge transmission to dynamic models of 'Human-AI Collaboration', in which technology serves as a cognitive partner to foster critical thinking and problem-solving rather than a substitute for student effort (Zhou & Fang, 2025). Contributions should examine the application of theoretical frameworks, such as the 'GenAI-TPACK' (Belkina et al., 2025) and the Supplement to the DigCompEDU Framework (Bekiaridis, 2024), to evaluate how educators can effectively integrate AI to redesign learning experiences. Furthermore, this section seeks research on the role of higher education in bridging the digital divide and enhancing employability through upskilling. Authors are encouraged to analyse how pedagogical strategies, such as flipped classrooms or active learning, can be revitalised by AI to develop transversal competencies essential for the future workforce (Panakaje et al., 2024). Crucially, we welcome studies that address the reformulation of assessment integrity, moving beyond policing plagiarism towards designing robust evaluation methods that validate the acquisition of future-proof skills and ensure equitable educational outcomes (Batista et al., 2024).
- Social Inclusion, Digital Competence, and Risk Mitigation: This thematic axis critically examines the intersection of AI deployment with social justice, emphasising the imperative to mitigate algorithmic bias and protect vulnerable populations. We invite submissions that investigate how higher education can counteract the risks of 'data poverty' and the 'digital divide', ensuring that AI technologies bridge rather than exacerbate existing inequalities in access and attainment (Al-Emran et al., 2025a; UNESCO, 2023a). Contributions should analyse the efficacy of AI-driven assistive technologies for students with disabilities and critically assess the risks of surveillance and automated exclusion inherent in some proctoring and predictive systems (Dumitru et al., 2025). Furthermore, we seek research on the operationalisation of GDPR-compliant data governance strategies that prioritise user privacy and consent in educational settings (European Commission, 2022; UNESCO, 2023b). Authors are encouraged to explore pedagogical approaches that foster 'algorithmic justice' and critical digital competence, empowering students and educators to identify and challenge the socio-technical biases embedded in Generative AI models (Peña-García et al., 2026; UNESCO, 2024).
Al-Emran, M., Al-Qaysi, N., Al-Sharafi, M. A., Khoshkam, M., Foroughi, B., & Ghobakhloo, M. (2025a). Role of perceived threats and knowledge management in shaping generative AI use in education and its impact on social sustainability. International Journal of Management Education, 23(1). https://doi.org/10.1016/j.ijme.2024.101105
Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and higher education: Trends, challenges, and future directions from a systematic literature review. Information, 15(11), 676. https://doi.org/10.3390/info15110676
Bekiaridis, G. (2024). Supplement to the DigCompEDU Framework: Outlining the skills and competences of educators related to AI in education. https://aipioneers.org/supplement-to-the-digcompedu-framework/
Belkina, M., Daniel, S., Nikolic, S., Haque, R., Lyden, S., Neal, P., Grundy, S., & Hassan, G. M. (2025). Implementing generative AI (GenAI) in higher education: A systematic review of case studies. Computers and Education: Artificial Intelligence, 8, 100407. https://doi.org/10.1016/j.caeai.2025.100407
Bennis, I., & Mouwafaq, S. (2025). Advancing AI-driven thematic analysis in qualitative research: a comparative study of nine generative models on Cutaneous Leishmaniasis data. BMC Medical Informatics and Decision Making, 25(1). https://doi.org/10.1186/s12911-025-02961-5
Cevik, A. A., & Abu-Zidan, F. M. (2025). Utilizing AI-Powered Thematic Analysis: Methodology, Implementation, and Lessons Learned. Cureus. https://doi.org/10.7759/cureus.85338
Dumitru, C., Muttashar Abdulsahib, G., Ibrahim Khalaf, O., & Bennour, A. (2025). Integrating artificial intelligence in supporting students with disabilities in higher education: An integrative review. Technology and Disability. https://doi.org/10.1177/10554181251355428
Elsevier. (2026). Generative AI policies for journals. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
European Commission. (2022). Guidelines for teachers and educators on tackling disinformation and promoting digital literacy through education and training. https://doi.org/10.2766/27820
European Commission. (2025). Living guidelines on the responsible use of generative AI in research. http://data.europa.eu/eli/dec/2011/833/oj
Gitman, V., Maxwell, C., & Gamble, J. M. (2025). Enhancing search strategies for systematic reviews on drug Harms: An evaluation of the utility of ChatGPT in error detection and keyword generation. Computers in Biology and Medicine, 193. https://doi.org/10.1016/j.compbiomed.2025.110464
Panakaje, N., Ur Rahiman, H., Parvin, S. M. R., Shareena, P., Madhura, K., Yatheen, M., & Irfana, S. (2024). Revolutionizing pedagogy: navigating the integration of technology in higher education for teacher learning and performance enhancement. Cogent Education, 11(1). https://doi.org/10.1080/2331186X.2024.2308430
Pellegrina, D., & Helmy, M. (2025). AI for scientific integrity: detecting ethical breaches, errors, and misconduct in manuscripts. Frontiers in Artificial Intelligence, 8. https://doi.org/10.3389/frai.2025.1644098
Peña-García, P., Jaime-de-Aza, M., & Feltrero, R. (2026). Dialogic Reflection and Algorithmic Bias: Pathways Toward Inclusive AI in Education. Trends in Higher Education, 5(1), 9. https://doi.org/10.3390/higheredu5010009
Rughinis, C., Matei, S., & Corcaci, A. (2025). Generative Content Analysis for Policy Research: Comparing LLM Reliability in Analyzing Institutional AI Discourse. Proceedings - 2025 25th International Conference on Control Systems and Computer Science, CSCS 2025, 596–603. https://doi.org/10.1109/CSCS66924.2025.00094
Sánchez-Caballé, A., & Santos, C. (2025). Perspectives of Higher Education in Spanish and Portuguese Institutions on Artificial Intelligence: A Content Analysis. Edutec, Revista Electrónica de Tecnología Educativa, (92), 253–269. https://doi.org/10.21556/edutec.2025.92.3879
Santos, C. (2024). Inteligência Artificial na Análise de Dados Quantitativos de Pesquisa Educacional [Artificial intelligence in the analysis of quantitative educational research data]. Nuances: Estudos Sobre Educação, 35, e024013. https://doi.org/10.32930/nuances.v35i00.10682
Shor, R., Greene, E. A., Sumberg, L., & Weingrad, A. B. (2025). AI Tools in Academia: Evaluating NotebookLM as a Tool for Conducting Literature Reviews. Psychiatry (New York). https://doi.org/10.1080/00332747.2025.2541531
UNESCO. (2023a). Guidance for generative AI in education and research. UNESCO. https://doi.org/10.54675/ewzm9535
UNESCO. (2023b). Harnessing the Era of Artificial Intelligence in Higher Education: A Primer for Higher Education Stakeholders. UNESCO. http://en.unesco.org/open-access/terms-use-ccbysa-en
UNESCO. (2024). AI competency framework for teachers. UNESCO. https://doi.org/10.54675/zjte2084
Zhou, X., & Fang, L. (2025). From Collaboration to Critique: Engaging With GenAI to Foster Critical Thinking in Business Analytics. Management Teaching Review. https://doi.org/10.1177/23792981251389862
[1] Instituto Politécnico de Santarém (IPSantarém), Polo de Literacia Digital e Inclusão Digital (PLDIS), Centro de Investigação em Artes e Comunicação (CIAC).