Feb 6 at 13:58 • IT & Gadgets

GenAI in Qualitative Data Analysis: Analyzing or Just Chatting

Researchers are excited about using GenAI (like ChatGPT) for qualitative data analysis, with promises of automated coding, thematic analysis at scale, time savings, and even a “virtual research assistant” that interprets meaning and finds patterns.
The authors ask: What is GenAI really? And is it suitable for qualitative work?
Technically, GenAI is a sophisticated chatbot based on transformer autoregressive LLMs.
It predicts the next word probabilistically from huge amounts of training data, producing text that looks human-like but involves no real understanding, reasoning, or truth-seeking.
It hallucinates facts, fabricates content, is inconsistent across runs, full of biases, non-transparent (black box), and raises ethical issues like data leakage and high energy use. Add-ons like RAG help a bit but don’t fix core limits.
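The autoregressive mechanism described above can be sketched as a toy next-token sampler (the vocabulary and logits here are hypothetical, not from any real model): the model scores each candidate next word given the context, converts scores to probabilities, and samples one. Text emerges word by word from likelihood alone, which is why fluent output can still be factually wrong.

```python
import math
import random

def softmax(scores):
    """Turn raw model scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for the context "The sky is".
vocab = ["blue", "falling", "green", "limitless"]
logits = [4.0, 1.0, 0.5, 2.0]
probs = softmax(logits)

# The most probable token is "blue", but sampling is stochastic:
# a less likely token can be emitted on any given run, which is one
# reason outputs vary between runs and can drift from the truth.
random.seed(0)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```

There is no fact-checking step anywhere in this loop: the sampler optimizes plausibility, not truth, which is the core limitation the authors point to.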
They tested it with real data (historical tech documents) using various prompts on ChatGPT-4.0, compared the results with other studies, and checked the new “Qual-AI” features in NVivo/ATLAS.ti. Results: frequent made-up quotes, random or inaccurate codes, shallow themes, wild variability between runs, endless fixing loops, no stable insight, poor summaries, missing key ideas, and no clear audit trail.
Core problem: Qualitative analysis is about interpreting social meaning, an inter-subjective, human process requiring context, reflexivity, and scholarly validation. GenAI mimics the form but can’t do the essence; it risks misrepresentation, false results, and epistemic damage if adopted uncritically.
Conclusion: Current GenAI (transformer-based) is unsuited for qualitative data analysis and poses serious risks to research credibility. Scholars should validate it rigorously before normalizing its use; human interpretation remains irreplaceable.
#GenAI #QualitativeResearch #AIinResearch #EpistemicRisks #LLMs #ResearchMethods #AIHype