DATA CRITIQUE

Students were asked to rate their agreement with a series of statements, with 1 indicating that they strongly agree and 4 indicating that they strongly disagree. The statements fell into five categories: one on their expectations of the model's performance, one on the challenges they feel in using the model, one on their attitude toward the adoption of AI, one on their perception of AI adoption in the future, and a final one on their motivation for using these systems.

The survey design reinforces that ideology. Responses are forced into a four-point Likert scale with no neutral option, which pressures students to take a side and makes ambivalence harder to represent. The "types of AI" variable also predefines what counts as AI, while a large "Other" bucket hides the specific tools and practices of many respondents. Demographics are simplified in ways that shape what becomes "thinkable" in analysis (e.g., binary gender categories, limited education levels, and a single-country context).
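To make the forced-choice problem concrete, here is a minimal sketch using hypothetical response counts (not the actual survey data): on a four-point scale, tallying responses necessarily sorts every student into "agree" or "disagree", so any ambivalence in the population is invisible in the resulting counts.

```python
from collections import Counter

# Hypothetical responses on the survey's 4-point scale
# (1 = strongly agree ... 4 = strongly disagree); illustrative only.
responses = [1, 2, 2, 3, 2, 3, 1, 4, 2, 3]

counts = Counter(responses)

# With no midpoint, every respondent is forced onto a side:
agree = counts[1] + counts[2]
disagree = counts[3] + counts[4]
neutral = 0  # the scale offers no way to record ambivalence

print(f"agree={agree}, disagree={disagree}, neutral={neutral}")
```

A five-point scale with a midpoint would let the analysis report a genuine neutral share instead of forcing it to zero by construction.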

If our dataset is the only source, the binary gender framing may have limited our sample population, excluding members of queer communities and people with other gender identities. The survey also omits factors such as instructor pressure, surveillance and discipline, socioeconomic access, language barriers, disability accommodations, and actual learning outcomes. Because the dataset comes from online surveys with selective sampling, it likely overrepresents students who have access to technology, are interested in new technology and the development of AI, and feel comfortable taking online surveys framed by the researchers' ideology. Students who lack internet access, or who are unaccustomed to online surveys, may never have answered these questions, and students who did not fit the survey's categories of binary gender, education level (associate and undergraduate), or type of institution (private and public) may have given up on the survey altogether.