- Introduction and How the Data was Generated
As a group, we were most interested in determining the attitudes of students towards the adoption of artificial intelligence systems in school and university settings. Therefore, we chose a dataset titled “AI Adoption Usage, Expectation, Attitudes, Perceptions, and Motivations for Learning in Higher Education.” This dataset was put together by Universitas Negeri Padang, an Indonesian university, and comprises data from 535 students enrolled in either public or private universities. The students were pursuing either undergraduate or associate degrees and come from several provinces of Indonesia. Other demographic information about the survey participants includes their field of study and the artificial intelligence model they are answering about.
Students were asked to rate their agreement with a series of statements, with 1 indicating that they strongly agree and 4 indicating that they strongly disagree. The statements fell into five categories: one revolving around their expectations of the model’s performance, one regarding the challenges they feel in using the model, one regarding their attitude towards the adoption of AI, one regarding their perception of AI adoption in the future, and a final one regarding their motivation for using these systems.
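Because the scale is inverted (1 = strongly agree, 4 = strongly disagree), analyses that treat higher scores as stronger agreement need the responses reverse-coded first. The sketch below shows one way to do this in Python; the item names are purely illustrative and are not the dataset's actual column headers.

```python
def reverse_code(response: int, scale_max: int = 4) -> int:
    """Mirror a 1..scale_max Likert response.

    With scale_max = 4: 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1,
    so higher scores afterwards indicate stronger agreement.
    """
    if not 1 <= response <= scale_max:
        raise ValueError(f"response must be in 1..{scale_max}, got {response}")
    return scale_max + 1 - response

# Hypothetical example: one student's answers to three illustrative items.
raw = {"performance_expectancy_1": 1, "challenges_1": 3, "attitude_1": 2}
recoded = {item: reverse_code(value) for item, value in raw.items()}
```

Reverse-coding with `scale_max + 1 - response` preserves the spacing between scale points, so means and correlations computed afterwards simply flip sign/direction rather than change magnitude.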
- What the Original Sources Are
Our data is sourced from an online questionnaire on Google Forms, provided by the data repository Mendeley Data. The data consists of the responses of 535 university students from across 152 universities in 20 Indonesian provinces. These collected responses were stored in a Microsoft Excel file and imported into IBM SPSS Statistics version 29 for further cleaning and analysis.
- Who Funded the Creation of this dataset
The main source of funding was Padang State University (Universitas Negeri Padang), Indonesia, whose support made the creation and publication of this dataset possible. No other external funding sources were mentioned in the article.
- What information is left out of the spreadsheet
A notable share of participants, 10.8% (58 people), selected “Other” for which AI tools they use, so the specific tools those students rely on are not recorded. There is also uneven gender representation in the dataset, which consists of 69% female and 31% male respondents; no gender identities outside the binary are included. The dataset also uses limiting categories for level of education (associate and undergraduate) and type of institution (private and public). Finally, the data was collected in Indonesia, which means the dataset cannot necessarily be generalized to global student perceptions of AI.
- Ideological effects of the dataset
This dataset turns a messy, socially embedded set of experiences, “using AI in school,” into a neat set of measurable variables. The ontology is built around adoption/acceptance frameworks: the article explicitly frames interpretation through models like TAM, SCT, and HCI, and adapts survey indicators from Viswanath Venkatesh’s work, so the data naturally emphasizes whether students agree or disagree with statements about expectations, challenges, attitudes, perceptions, and motivation. That framing can subtly push readers toward a “technology management” story, in which AI adoption becomes mainly a matter of perceptions and readiness rather than a story about power, policy, inequality, or institutional constraints.
The survey design reinforces that ideology. Responses are forced into a 4-point Likert scale with no neutral option, which pressures students to take a side and makes ambivalence harder to represent. The “types of AI” variable also predefines what counts as AI, while a large “Other” bucket hides the specific tools and practices of many respondents. Demographics are simplified in ways that shape what becomes “thinkable” in analysis (e.g., binary gender categories, limited education levels, and a single-country context).
If our dataset is the only source, our sample population is limited by its binary gender framing, which excludes members of queer communities and people with other gender identities, and the survey omits factors such as instructor pressure, surveillance and discipline, socioeconomic access, language barriers, disability accommodations, and actual learning outcomes. Since the dataset relies on online surveys with selective sampling, it may only include students who have access to technology, are interested in new technology and the development of AI, and feel comfortable taking online surveys framed by the researchers’ assumptions. Students without internet access, or those unaccustomed to online surveys, may have missed out on answering these questions, and students who did not fit the offered categories of binary gender, level of education (associate and undergraduate), or type of institution (private and public) may have given up on completing the survey.
