Publications

Echo State and Band-pass Networks with aqueous memristors: leaky reservoir computing with a leaky substrate

DOI: 10.1063/5.0273574

Authors: T. M. Kamsma, J. J. Teijema, R. van Roij, C. Spitoni

Chaos: An Interdisciplinary Journal of Nonlinear Science • 2025/9/12

Recurrent Neural Networks (RNNs) are extensively employed for processing sequential data such as time series. Reservoir computing (RC) has drawn attention as an RNN framework due to its fixed network that does not require training, making it attractive for hardware-based machine learning. We establish an explicit correspondence between the well-established mathematical RC implementations of Echo State Networks and Band-pass Networks with Leaky Integrator nodes on the one hand and a physical circuit containing simple volatile iontronic memristors on the other. These aqueous iontronic devices employ ion transport through water as the signal carrier and feature a voltage-dependent (memory) conductance. The activation function and the dynamics of the Leaky Integrator nodes naturally materialise as the (dynamic) conductance properties of iontronic memristors, while a simple fixed local current-to-voltage update rule at the memristor terminals facilitates the relevant matrix coupling between nodes. We process various time series, including pressure data from simulated airways during breathing, which can be fed directly into the network owing to the intrinsic responsiveness of iontronic devices to applied pressures, all while using established physical equations of motion of iontronic memristors for the internal dynamics of the circuit.
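As a point of reference for the mapping described above, the standard leaky-integrator Echo State Network state update can be sketched in a few lines of plain Python. The weights, input signal, and leak rate below are arbitrary illustrative values, not the paper's memristor parameters or equations of motion:

```python
import math
import random

def esn_step(x, u, w_res, w_in, leak=0.3):
    """One leaky-integrator ESN update (textbook form, not the paper's
    memristor dynamics): x' = (1 - a) * x + a * tanh(W x + W_in u)."""
    n = len(x)
    pre = [sum(w_res[i][j] * x[j] for j in range(n)) + w_in[i] * u
           for i in range(n)]
    return [(1 - leak) * x[i] + leak * math.tanh(pre[i]) for i in range(n)]

# Small random reservoir driven by a sinusoidal input signal.
rng = random.Random(0)
n = 4
w_res = [[rng.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]
w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]
x = [0.0] * n
for t in range(5):
    x = esn_step(x, math.sin(0.3 * t), w_res, w_in)
print([round(v, 3) for v in x])
```

Because the nonlinearity is bounded by tanh and the leak term mixes in the previous state, the node states stay in [-1, 1]; in the paper this bounded, leaky node dynamics is what the memristor conductance physically realises.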

Makita—A workflow generator for large-scale and reproducible simulation studies mimicking text labeling

DOI: 10.1016/j.simpa.2024.100663

Authors: J. J. Teijema, R. van de Schoot, G. Ferdinands, P. Lombaers, J. de Bruin

Software Impacts • 2024/9/1

This paper introduces ASReview Makita, a tool designed to enhance the efficiency and reproducibility of simulation studies in systematic reviews. Makita streamlines the setup of large-scale simulation studies by automating workflow generation, repository preparation, and script execution. It employs Jinja and Python templates to create a structured, reproducible environment, aiding both novice and expert researchers. Makita’s flexibility allows for customization to specific research needs, ensuring a repeatable research process. This tool represents an advancement in the field of systematic review automation, offering a practical solution to the challenges of managing complex simulation studies.

SYNERGY: Open machine learning dataset on study selection in systematic reviews

DOI: 10.34894/HE6NAQ

Authors: J. de Bruin, Y. Ma, G. Ferdinands, J. J. Teijema, R. van de Schoot

DataverseNL • 2023/4/24

SYNERGY is a free and open dataset on study selection in systematic reviews, comprising 169,288 academic works from 26 systematic reviews. Only 2,834 (1.67%) of the academic works in the binary classified dataset are included in the systematic reviews. This makes SYNERGY a unique resource for the development of information retrieval algorithms, especially for sparse labels. Due to the many variables available per record (i.e., titles, abstracts, authors, references, topics), this dataset is useful for researchers in NLP, machine learning, network analysis, and more. In total, the dataset contains 82,668,134 trainable data points. The easiest way to get the SYNERGY dataset is via the synergy-dataset Python package. See https://github.com/asreview/synergy-dataset for all information.

Large-scale simulation study of active learning models for systematic reviews

DOI: 10.1007/s41060-025-00777-0

Authors: J. J. Teijema, J. de Bruin, A. Bagheri, R. van de Schoot

International Journal of Data Science and Analytics • 2025/5/2

Despite progress in active learning, evaluation remains limited by constraints in simulation size, infrastructure, and dataset availability. This study advocates for large-scale simulations as the gold standard for evaluating active learning models in systematic review screening. Two large-scale simulations, totaling over 29 thousand runs, assessed active learning solutions. The first study evaluated 13 combinations of classification models and feature extraction techniques using high-quality datasets from the SYNERGY dataset. The second expanded this to 92 model combinations with additional classifiers and feature extractors. In every scenario tested, active learning outperformed random screening. The performance gain varied across datasets, models, and screening progression, ranging from considerable to near-flawless results. The findings demonstrate that active learning consistently outperforms random screening in systematic review tasks, offering significant efficiency gains. While the extent of improvement varies depending on the dataset, model choice, and screening stage, the overall advantage is clear. Since model performance differs, active learning systems should remain adaptable to accommodate new classifiers and feature extraction techniques. The publicly available results underscore the importance of open benchmarking to ensure reproducibility and the development of robust, generalizable active learning strategies.
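The core comparison in these simulations, active learning versus random screening under a fixed screening budget, can be illustrated with a toy stand-in. This is not the actual study code: the synthetic scores below merely mimic a trained classifier that tends to rank relevant records higher, and all sizes and thresholds are illustrative:

```python
import random

def simulate_screening(n_records=1000, n_relevant=20, budget=200, seed=42):
    """Toy comparison of active learning vs. random screening.

    Each record gets a noisy 'classifier score' correlated with relevance;
    active learning screens the highest-scored records first, random
    screening screens in shuffled order. Returns (recall_al, recall_random)
    after `budget` records have been screened.
    """
    rng = random.Random(seed)
    records = []
    for i in range(n_records):
        relevant = i < n_relevant
        # Relevant records score higher on average (stand-in for a model).
        score = rng.gauss(1.0 if relevant else 0.0, 0.5)
        records.append((score, relevant))

    # Active learning: screen in descending score order.
    by_score = sorted(records, key=lambda r: r[0], reverse=True)
    al_found = sum(rel for _, rel in by_score[:budget])

    # Random screening: screen in shuffled order.
    shuffled = records[:]
    rng.shuffle(shuffled)
    rand_found = sum(rel for _, rel in shuffled[:budget])

    return al_found / n_relevant, rand_found / n_relevant

al_recall, rand_recall = simulate_screening()
print(f"recall after budget: active={al_recall:.2f}, random={rand_recall:.2f}")
```

Even this crude sketch reproduces the qualitative finding: ranking records by a score that correlates with relevance recovers far more relevant records within the same screening budget than random order does.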

Simulation-based Active Learning for Systematic Reviews: A Scoping Review of Literature

DOI: 10.31234/osf.io/67zmt

Authors: J. J. Teijema, S. Seuren, D. Anadria, A. Bagheri, R. van de Schoot

Journal of Information Science • 2023/6/29

Background: Active learning is a proposed method for accelerating the screening phase of systematic reviews. While extensively studied, evidence remains scattered across a fragmented body of literature. Objective: This scoping review investigates whether active learning is recommended for systematic review screening and identifies areas needing further research. Design: We screened 1887 records published since 2006 using ASReview, an active learning tool, and included 60 relevant studies. We also analyzed 238 of 336 collected datasets for study design, dataset usage, and implementation. Results: All 60 studies recommended active learning as a means to improve screening efficiency. Despite some methodological heterogeneity, consistent endorsement was found across the literature. Conclusions: Active learning shows strong potential to support systematic review screening. Standardizing evaluation metrics, encouraging open data practices, and diversifying model configurations are key priorities for advancing this field.

Active learning-based systematic reviewing using switching classification models: the case of the onset, maintenance, and relapse of depressive disorders

DOI: 10.3389/frma.2023.1178181

Authors: J. J. Teijema, L. Hofstee, M. Brouwer, J. de Bruin, G. Ferdinands, J. de Boer, P. Vizan Siso, S. van den Brand, C. Bockting, R. van de Schoot, A. Bagheri

Frontiers in Research Metrics and Analytics • 2023/5/16

Introduction: This study examines the performance of active learning-aided systematic reviews using a deep learning-based model compared to traditional machine learning approaches, and explores the potential benefits of model-switching strategies. Methods: Comprising four parts, the study: 1) analyzes the performance and stability of active learning-aided systematic review; 2) implements a convolutional neural network classifier; 3) compares classifier and feature extractor performance; and 4) investigates the impact of model-switching strategies on review performance. Results: Lighter models perform well in early simulation stages, while heavier models show increased performance in later stages. Model-switching strategies generally improve performance compared to using the default classification model alone. Discussion: The study's findings support the use of model-switching strategies in active learning-based systematic review workflows. It is advised to begin the review with a light model, such as Naïve Bayes or logistic regression, and switch to a heavier classification model based on a heuristic rule when needed.
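The recommended strategy can be sketched as a simple switching rule. The threshold values and model names below are illustrative assumptions, not the paper's exact heuristic:

```python
def choose_model(n_labeled, n_relevant_found, switch_after=50, min_relevant=5):
    """Illustrative model-switching heuristic: start with a light
    classifier during the cold-start phase and switch to a heavier one
    once enough labeled records exist to train it reliably. The
    thresholds (50 labeled, 5 relevant) are assumed for this sketch."""
    if n_labeled < switch_after or n_relevant_found < min_relevant:
        return "naive_bayes"  # light model, robust with little data
    return "cnn"              # heavier model once training data suffices

# Model choice at a few checkpoints of a simulated screening session,
# given (records labeled so far, relevant records found so far).
history = [(10, 2), (40, 4), (60, 8), (200, 25)]
print([choose_model(n, k) for n, k in history])
# → ['naive_bayes', 'naive_bayes', 'cnn', 'cnn']
```

In a real workflow the rule would sit inside the active learning loop, re-evaluated after each labeling step, so the screener transparently moves from the light to the heavy classifier as the review progresses.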