Abstract

This case study investigates the adoption patterns and ethical implications of AI-powered tools among university
students for academic writing, aiming to bridge the gap between technological integration and pedagogical
responsibility. We conduct a qualitative survey-based analysis involving undergraduate, postgraduate, and research
scholars across disciplines at NGM College, focusing on their familiarity, usage frequency, and attitudes toward AI
tools such as ChatGPT, Gemini, and Perplexity. The findings reveal a high level of familiarity with AI tools: 48.6% of participants report being "very familiar" and 47.2% "somewhat familiar," while ChatGPT emerges as the
dominant tool (83.3%). Students primarily employ AI for writing assistance (81.9%), idea generation (75.0%), and
research (72.2%), yet ethical concerns persist, as only 2.7% accept AI-generated content directly without modification. The study identifies a tension between efficiency gains and risks to academic integrity: 51.4% of respondents use AI suggestions as inspiration but rewrite the content independently. Moreover, the data highlight
disciplinary variations in tool preferences and task-specific applications, underscoring the need for tailored
pedagogical strategies. The research contributes to the growing discourse on AI in education by providing
empirical evidence of student behaviors and proposing actionable recommendations for educators, such as
redesigning assessments to emphasize critical thinking and integrating transparency mechanisms. These insights
are particularly significant given the rapid proliferation of AI tools in academia, where balancing technological
assistance with the preservation of original thought remains a pressing challenge. The study ultimately calls for a
nuanced approach to AI integration, one that fosters responsible use while maintaining academic rigor and ethical
standards.