Methodology

How the AIS university rankings are constructed

Built from structured OpenAlex metadata to keep the workflow consistent, auditable, and easy to replicate.

01

Research Design

This study uses a publication-based bibliometric approach to rank universities active in Accounting Information Systems (AIS) research. The goal is to measure institutional research productivity using article counts from a selected group of AIS-related journals. The approach is intended to be transparent, consistent, and easy to replicate because it relies on structured bibliographic data rather than manual collection.

02

Journal Selection

The analysis focuses on six AIS-related journals: AIS Educator Journal (AISEJ), International Journal of Accounting Information Systems (IJAIS), International Journal of Digital Accounting Research (IJDAR), Intelligent Systems in Accounting, Finance and Management (ISAFM), Journal of Emerging Technologies in Accounting (JETA), and Journal of Information Systems (JIS). These journals define the publication set used in the ranking framework.
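The journal basket above can be captured as a simple mapping from acronym to full title, which the rest of the pipeline can use for filtering and display. The acronyms and titles come directly from the list above; holding them in one structure keeps the basket definition in a single place.

```python
# The six AIS journals in the basket, keyed by the acronyms used in the text.
AIS_JOURNALS = {
    "AISEJ": "AIS Educator Journal",
    "IJAIS": "International Journal of Accounting Information Systems",
    "IJDAR": "International Journal of Digital Accounting Research",
    "ISAFM": "Intelligent Systems in Accounting, Finance and Management",
    "JETA": "Journal of Emerging Technologies in Accounting",
    "JIS": "Journal of Information Systems",
}
```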

03

Data Source and Collection

The dataset for this study is built using the OpenAlex API as the primary bibliographic source. Each journal is identified through its OpenAlex source ID, and article records are collected for analysis. For each paper, the extracted metadata includes the title, abstract, publication year, publication date, journal name, DOI, landing page URL, author information, and institutional affiliations. These data are then stored in a structured dataset for further analysis.
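The collection step above can be sketched as follows. This is a minimal illustration, not the study's actual code: it pages through the OpenAlex works endpoint for one journal source ID using cursor pagination, and flattens each record into the fields listed above. OpenAlex returns abstracts as an inverted index, so a small helper rebuilds the plain text. The function and field names here are illustrative.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

OPENALEX_WORKS = "https://api.openalex.org/works"

def fetch_journal_works(source_id):
    """Yield every work published in one OpenAlex source (journal),
    following the API's cursor-based pagination."""
    cursor = "*"
    while cursor:
        query = urlencode({
            "filter": f"primary_location.source.id:{source_id}",
            "per-page": 200,
            "cursor": cursor,
        })
        with urlopen(f"{OPENALEX_WORKS}?{query}") as resp:
            page = json.load(resp)
        yield from page["results"]
        cursor = page["meta"].get("next_cursor")  # None on the last page

def rebuild_abstract(inverted_index):
    """OpenAlex stores abstracts as a word -> positions index; rebuild text."""
    if not inverted_index:
        return None
    positions = sorted((pos, word)
                       for word, poses in inverted_index.items()
                       for pos in poses)
    return " ".join(word for _, word in positions)

def parse_work(work):
    """Flatten one OpenAlex work record into the fields used in this study."""
    loc = work.get("primary_location") or {}
    return {
        "title": work.get("title"),
        "abstract": rebuild_abstract(work.get("abstract_inverted_index")),
        "publication_year": work.get("publication_year"),
        "publication_date": work.get("publication_date"),
        "journal": (loc.get("source") or {}).get("display_name"),
        "doi": work.get("doi"),
        "landing_page_url": loc.get("landing_page_url"),
        "authorships": work.get("authorships", []),  # authors + affiliations
    }
```

Each parsed record can then be appended to the structured dataset used in the later steps.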

04

Unit of Analysis

The primary unit of analysis is the individual published paper. Each article contributes to the institutional ranking through the affiliations listed in its authorship metadata. As a result, the analysis focuses on institutional presence in published research output rather than citation impact or author order.

05

Institutional Ranking Procedure

Institutional rankings are generated using a publication-count method. For each article, all distinct U.S. institutions listed in the authorship affiliations are identified. Each institution receives one publication credit for that paper, regardless of how many co-authors from the same institution are included on the article. This prevents duplicate counting within a single paper while still giving credit to all institutions involved in collaborative research. The publication credits are then aggregated across the full dataset, and institutions are ranked in descending order based on their total publication counts. The top 50 institutions are retained for reporting.
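The counting logic above can be sketched in a few lines. The sketch assumes OpenAlex-style `authorships` records and uses the `country_code` field to identify U.S. institutions (an assumption about how the U.S. filter is applied); a per-paper set enforces the one-credit-per-institution rule.

```python
from collections import Counter

def rank_institutions(papers, top_n=50):
    """Assign one publication credit per distinct U.S. institution per paper,
    then rank institutions by total credits in descending order."""
    credits = Counter()
    for paper in papers:
        institutions = set()  # dedupe within a single paper
        for authorship in paper.get("authorships", []):
            for inst in authorship.get("institutions", []):
                if inst.get("country_code") == "US":
                    institutions.add(inst.get("display_name"))
        credits.update(institutions)  # at most one credit per institution
    return credits.most_common(top_n)
```

Two co-authors from the same university on one paper therefore yield a single credit, while each distinct collaborating institution still receives its own.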

06

Author-Level Dataset Construction

In addition to the institutional dataset, a parallel author-level dataset is created to support author rankings and institution-specific views. This dataset preserves author names and affiliation information for each paper. Each author's current affiliation is taken from that author's most recent paper available in the dataset. These records are also used to support focused views, such as the Rutgers-affiliated publication subset.
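The most-recent-paper rule can be sketched as a single pass over flattened author records. The `(name, date, institution)` record shape is illustrative; ISO `YYYY-MM-DD` date strings compare correctly as plain strings, which the comparison below relies on.

```python
def latest_affiliations(author_records):
    """For each author, keep the affiliation from their most recent paper.
    Each record is a (author_name, publication_date, institution) tuple,
    with publication_date as an ISO 'YYYY-MM-DD' string."""
    latest = {}
    for name, pub_date, institution in author_records:
        prev = latest.get(name)
        if prev is None or pub_date > prev[0]:  # lexicographic == chronological
            latest[name] = (pub_date, institution)
    return {name: inst for name, (_, inst) in latest.items()}
```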

07

Research Category Classification

To go beyond simply counting publications, each paper is also assigned to one of five research themes: Accounting and Financial AI, Business Intelligence and Decision Support, Information Systems and Applied Analytics, Engineering and Industrial AI, and Core AI and Data Science Methods. This classification is based on the paper’s title and abstract, which are used to identify the main research focus of the study. An OpenAI-based labeling process is then used to assign each paper to the most appropriate category. These labels are combined with the publication dataset so that the analysis can show not only how much an institution publishes, but also the areas in which its research is concentrated.
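The labeling step can be sketched as prompt construction plus validation of the model's reply against the five themes. The prompt wording below is illustrative, not the study's actual prompt, and the call to the OpenAI API itself is omitted; the validation step matters because a free-text reply must be mapped back onto exactly one of the fixed categories (or flagged for review).

```python
# The five research themes listed above.
CATEGORIES = [
    "Accounting and Financial AI",
    "Business Intelligence and Decision Support",
    "Information Systems and Applied Analytics",
    "Engineering and Industrial AI",
    "Core AI and Data Science Methods",
]

def build_prompt(title, abstract):
    """Compose a classification request from a paper's title and abstract.
    Illustrative wording only."""
    options = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "Assign this paper to exactly one research theme.\n"
        f"Title: {title}\nAbstract: {abstract}\n"
        f"Choose one of:\n{options}\n"
        "Reply with the theme name only."
    )

def normalize_label(reply):
    """Map a raw model reply onto one of the five themes, or None if it
    matches none of them (such papers would need manual review)."""
    cleaned = reply.strip().rstrip(".")
    for category in CATEGORIES:
        if cleaned.lower() == category.lower():
            return category
    return None
```

The validated label is then joined back onto the publication record so category-level aggregation uses only the five canonical theme names.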

08

Basis for Publication Credit Assignment

The counting logic used in this study is consistent with established accounting research ranking methods, where publication credit is assigned based on authorship and institutional affiliation within a selected set of journals. Prior accounting ranking systems similarly define a journal basket and allocate publication credit to the institutions represented by article authors. This study follows the same general principle, but adapts it specifically to the AIS domain using OpenAlex-based data collection and AI-assisted thematic classification.

09

Output Generation

The final output of the methodology is a ranked list of universities based on total publication counts across the selected AIS journals. In addition to the overall ranking, the framework also supports category-wise analysis, author-level views, journal-specific filtering, and institution-specific subsets. This allows the system to provide both a broad measure of institutional research productivity and a more detailed view of research specialization within AIS.
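The category-wise decomposition mentioned above can be sketched as a cross-tabulation of credits by institution and theme. The record shape here (a flattened paper with an `institutions` list and a `category` label) is an assumed intermediate format, not the study's actual schema; the same per-paper deduplication rule from the ranking step applies.

```python
from collections import defaultdict

def category_breakdown(papers):
    """Cross-tabulate publication credits by (institution, category) so the
    overall ranking can be decomposed into research-theme profiles.
    Each paper is a dict with an 'institutions' list and a 'category' label."""
    table = defaultdict(lambda: defaultdict(int))
    for paper in papers:
        for institution in set(paper["institutions"]):  # one credit per paper
            table[institution][paper["category"]] += 1
    return {inst: dict(cats) for inst, cats in table.items()}
```

Journal-specific and institution-specific views follow the same pattern, filtering the paper set before aggregating.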

10

Summary

Overall, this methodology provides a structured and reproducible framework for evaluating AIS research productivity at the university level. By combining article retrieval, institution-level publication counting, author-level enrichment, and thematic classification, the study offers both an overall ranking mechanism and a deeper understanding of the research profile of institutions within the AIS field.