Main research areas

Human-AI Collaboration
Human-AI Collaboration constitutes a multidisciplinary research domain focused on designing, developing, and rigorously evaluating artificial intelligence systems intended for integrated human-AI partnerships. Grounded in theoretical frameworks from cognitive science, human-computer interaction, human-centered computing, and human-centered data science – with strong ties to Information Science – this field prioritizes the creation of adaptive and intuitive interfaces designed to augment human cognitive capabilities and optimize information processing workflows. A key principle within Human-AI Collaboration is the ethical deployment of AI, emphasizing transparency, fairness, and accountability to cultivate trust and realize the potential for transformative solutions across diverse application domains, ultimately contributing to advancements in human well-being and societal progress.
Keywords:
Human-AI Collaboration, Human-Centered Design, Trust and Transparency, Cognitive Augmentation, Ethical AI
Relevant Projects:
- Future Cognitive Agents to Support Aeronautical Actors, a research project with Airbus AI research team, Airbus Central Research & Technology
- User Study of Data Discovery, Health Studies Australian National Data Asset (HeSANDA) User Interview, research project with ARDC (Australian Research Data Commons)
- AI and the Transformation of Metadata Research and Practices – Global and Regional Perspectives, research project with DCMI (Dublin Core Metadata Initiative) Education Committee
Publications:
- Liu, Y.-H., Arnold, A., Dupont, G., Kobus, C., Lancelot, F., Granger, G., … Matton, N. (2023). User evaluation of conversational agents for aerospace domain. International Journal of Human–Computer Interaction, 40(19), 5549–5568. https://doi.org/10.1080/10447318.2023.2239544
- Liu, Y.-H., Wu, M., Power, M., & Burton, A. (2023). Elicitation of contexts for discovering clinical trials and related health data: An interview study (V1.0). Zenodo. https://doi.org/10.5281/zenodo.7839282
- Liu, Y.-H., Zeng, M. L., & MacDonald, A. (Eds.). (Forthcoming). AI and the Transformation of Metadata Research and Practices – Global and Regional Perspectives. Cambridge University Press & Assessment.
Research Topics:
Design and Development of AI Systems for Integrated Human-AI Partnerships: This involves creating adaptive and intuitive interfaces for human-AI interaction, including AI systems built for integrated partnerships such as conversational agents and conversational search systems. Research also focuses on search interface design and evaluation to enhance user interaction with AI-driven search systems.
Augmenting Human Capabilities and Optimizing Workflows: A key aim is to design systems that augment human cognitive capabilities and optimize information processing workflows. This is exemplified by studies of professionals in complex, safety-critical environments, such as pilots in the cockpit, and of marketing professionals and data analysts using AI-based persona systems in the workplace.
Impact of AI on Professional Roles and Information Practices: This research investigates how AI influences task completion, necessitates skill development (e.g., understanding AI tools and their limitations, critically evaluating AI-generated outputs, collaborating with AI developers), raises concerns about job security, and reshapes professional identities, particularly in areas like metadata creation and management.

Human-Centered AI (HCAI)
Human-Centered AI (HCAI) is an interdisciplinary approach that integrates principles from Human-Computer Interaction (HCI), human-centered computing (HCC), user-centered information retrieval, and human information behavior to develop AI systems that are equitable, usable, and aligned with human information needs. It emphasizes designing technologies that adapt to user behaviors, prioritize accessibility and inclusivity, focus on how people seek and interpret information, and apply system design principles to ensure systems are intuitive and contextually relevant. By centering human values, ethical considerations, and participatory design practices, this approach ensures AI enhances user agency, facilitates trust, and addresses diverse societal needs while maintaining transparency and accountability throughout the design, deployment, and evaluation process.
Keywords:
Human-Centered AI, Intelligent User Interfaces, Adaptive Systems, Cognitive Modeling, User Modeling
Relevant Projects:
- Intentional Forgetting through Cognitive-Computer Science Methods of Prioritization, Compression and Contraction of Knowledge, DFG-Project in the SPP 1921
- Computational Intelligence for Complex Structured Data, Australian Research Council (ARC), Linkage Projects scheme
- Future Cognitive Agents to Support Aeronautical Actors, a research project with Airbus AI research team, Airbus Central Research & Technology
Publications:
- Liu, Y.-H., Nürnberger, A., Rettstatt, J., & Ragni, M. (2024). Saccadic eye movements and search task difficulty as basis of modelling user knowledge in information seeking. Proceedings of the Annual Meeting of the Cognitive Science Society, 46. Retrieved from https://escholarship.org/uc/item/3ws2g8qm
- Spiller, M., Liu, Y.-H., Hossain, M. Z., Gedeon, T., Geissler, J., & Nürnberger, A. (2021). Predicting visual search task success from eye gaze data as a basis for user-adaptive information visualization systems. ACM Transactions on Interactive Intelligent Systems, 11(2), 1–25. https://doi.org/10.1145/3446638
- Liu, Y.-H., Spiller, M., Ma, J., Gedeon, T., Hossain, M. Z., Islam, A., & Bierig, R. (2020). User engagement with driving simulators: An analysis of physiological signals. In C. Stephanidis, V. G. Duffy, N. Streitz, S. Konomi, & H. Krömker (Eds.), HCI International 2020 – Late Breaking Papers: Digital Human Modeling and Ergonomics, Mobility and Intelligent Environments (pp. 130–149). Springer International Publishing. https://doi.org/10.1007/978-3-030-59987-4_10
- Salminen, J., Liu, Y.-H., Şengün, S., Santos, J. M., Jung, S., & Jansen, B. J. (2020). The effect of numerical and textual information on visual engagement and perceptions of AI-driven persona interfaces. Proceedings of the International Conference on Intelligent User Interfaces (IUI), 357–368. https://doi.org/10.1145/3377325.3377492
Research Topics:
- Enhancing Human Cognitive Capabilities with Adaptive HCAI Systems: This research focuses on developing AI systems that adapt to individual users through multi-modal data (e.g., eye tracking, EEG, and physiological signals) to infer cognitive states, attention, and intent in real time, enabling interfaces that dynamically adjust to user needs. It also explores how Human-Centered AI can support complex, exploratory tasks beyond simple query-response interactions, such as aiding problem recognition, planning, and sensemaking (processes that occur before and beyond traditional search interfaces) by integrating AI-driven insights into learning and information understanding (see the sketch after this list).
- HCAI for Augmenting Professional Roles and Reshaping Information Practices: This research examines how Human-Centered AI (HCAI) can enhance professional roles in information-intensive fields by optimizing workflows, augmenting cognitive capabilities, and supporting strategic tasks (e.g., in aerospace or information services). It emphasizes the critical role of human oversight and rigorous evaluation of AI outputs to ensure accuracy, ethical alignment, and accountability in AI-assisted decision-making.
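As a rough illustration of the first research topic above, the following sketch trains a simple classifier that predicts search task difficulty from aggregated eye-gaze features, the kind of signal an adaptive interface could react to. The feature set, data values, and model choice are illustrative assumptions, not the pipeline of our published studies.

```python
# Hypothetical sketch: inferring search task difficulty from gaze features so
# that an interface can adapt. Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row: aggregated gaze features for one search session
# [mean fixation duration (ms), saccade rate (1/s), pupil diameter change (mm)]
X = np.array([
    [210.0, 3.1, 0.02],   # easy task
    [198.0, 3.4, 0.01],
    [340.0, 2.2, 0.11],   # hard task
    [362.0, 2.0, 0.13],
    [225.0, 3.0, 0.03],
    [355.0, 2.1, 0.12],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = easy, 1 = hard (labels from a user study)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())  # rough estimate of predictive accuracy

# An adaptive interface could simplify its result presentation whenever the
# model predicts a "hard" session for the current user.
clf.fit(X, y)
print(clf.predict([[330.0, 2.3, 0.10]])[0])
```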

Reasoning & Decision-Making
Accurately predicting individual reasoning and decision-making behavior ranks among the most demanding goals of cognitive science.
While classical approaches in cognitive psychology have yielded valuable insights into average behavior, they face fundamental limitations when it comes to modeling and forecasting specific individual cognition and its considerable variability. Bridging this gap requires novel methodological approaches. Our research program addresses this challenge by developing integrative methods that systematically combine established cognitive theories of human thinking and judgment with modern machine learning techniques. In this context, adaptive recommender systems play a key role due to their capacity for personalization and continuous adjustment to individual data streams.
Our focus lies in constructing models that are not only descriptively accurate but also predictive. These models are designed to meet three central requirements:
- Adaptivity – the ability to dynamically adjust to an individual's learning progress, changing preferences, or evolving abilities;
- Interpretability – transparency of the underlying cognitive processes and decision rules, enabling not just predictions but also explanations;
- Robustness – reliability even when individual data are sparse or noisy.
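As a minimal sketch of these three requirements, assume a binary-choice reasoning task and a small set of interpretable candidate strategies; an individual-level model can then weight these strategies, update the weights online as responses arrive, and fall back on a uniform prior when data are sparse. The strategies, update rule, and data below are illustrative placeholders, not one of our published models.

```python
# Illustrative sketch (not a published model): predicting an individual's
# binary responses as a weighted mixture of interpretable candidate strategies.
from dataclasses import dataclass, field

# Candidate strategies: each maps a task item to a predicted response.
# These names are placeholders for theory-derived heuristics.
STRATEGIES = {
    "follow_logic":  lambda item: item["logically_valid"],
    "believability": lambda item: item["believable"],
    "always_accept": lambda item: True,
}

@dataclass
class IndividualModel:
    # Uniform prior over strategies keeps early predictions robust
    # when only a few responses from this person are available.
    weights: dict = field(default_factory=lambda: {s: 1.0 for s in STRATEGIES})

    def predict(self, item):
        vote = sum(w for s, w in self.weights.items() if STRATEGIES[s](item))
        return vote >= 0.5 * sum(self.weights.values())

    def update(self, item, response, penalty=0.5):
        # Multiplicative update: strategies that matched the observed
        # response keep their weight, the others are down-weighted (adaptivity).
        for s in self.weights:
            matched = STRATEGIES[s](item) == response
            self.weights[s] *= (1.0 if matched else penalty)

model = IndividualModel()
item = {"logically_valid": False, "believable": True}
print(model.predict(item))          # prior-based prediction
model.update(item, response=True)   # observed: the person accepted the conclusion
print(model.weights)                # interpretable: which strategy explains this person
```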
The overarching goal is to gain a deep understanding of the mechanisms underlying individual thought while also enabling practical applications, such as personalized cognitive support or the improvement of learning environments.
This research perspective also opens up important theoretical questions: How can universal cognitive principles be reconciled with modeling individual deviations? What role do factors such as cognitive styles, prior knowledge, or motivation play in predictive accuracy? And how can the often observed discrepancy between normative decision models and actual human behavior be better explained through predictive, data-driven models? Addressing these questions is essential for developing truly comprehensive and realistic models of the human mind.
Selected Publications:
Johnson-Laird, P. N., & Ragni, M. (2025). Reasoning about possibilities: Modal logics, possible worlds, and mental models. Psychonomic Bulletin & Review, 32, 52–79. https://doi.org/10.3758/s13423-024-02518-z
Borukhson, D., Lorenz-Spreen, P., & Ragni, M. (2022). When Does an Individual Accept Misinformation? An Extended Investigation Through Cognitive Modeling. Computational Brain & Behavior, 5. https://doi.org/10.1007/s42113-022-00136-3
Riesterer, N., Brand, D., & Ragni, M. (2020). Predictive Modeling of Individual Human Cognition: Upper Bounds and a New Perspective on Performance. Topics in Cognitive Science, 12, 960–974. https://doi.org/10.1111/tops.12501

Human Problem Solving
The ability to solve problems effectively and purposefully is one of the defining features of human intelligence. Understanding the mechanisms and processes involved in problem solving is therefore a central concern for psychology, cognitive science, and artificial intelligence. At PVA, we study human problem solving across a broad spectrum—from fundamental reasoning processes to complex problem solving and algorithmic thinking.
Human Reasoning
In this area of human problem solving, we investigate the fundamental inference mechanisms involved in syllogistic, conditional, and spatial reasoning. One key motivation is a bottom-up perspective: according to this view, more complex forms of thinking ultimately build upon these fundamental inference processes, which thus represent the core building blocks of human (rational) problem solving.
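As a toy illustration of a model-based inference mechanism, the sketch below integrates spatial premises of the form "X is left of Y" into a single mental model and verifies a conclusion against it. It is a deliberately simplified illustration, not one of our published spatial reasoning models.

```python
# Toy sketch of model-based spatial reasoning: integrate "left-of" premises
# into one linear arrangement (a mental model) and verify a conclusion on it.
def build_model(premises):
    """premises: list of (a, 'left_of', b). Returns one consistent ordering."""
    order = []
    for a, _, b in premises:
        for token in (a, b):
            if token not in order:
                order.append(token)
        # Move 'a' directly before 'b' if the current arrangement violates the premise.
        if order.index(a) > order.index(b):
            order.remove(a)
            order.insert(order.index(b), a)
    return order

def follows(model, conclusion):
    a, _, b = conclusion
    return model.index(a) < model.index(b)

premises = [("A", "left_of", "B"), ("B", "left_of", "C")]
model = build_model(premises)                  # ['A', 'B', 'C']
print(follows(model, ("A", "left_of", "C")))   # True: the transitive conclusion holds
```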
Selected Publications:
Brand, D., & Ragni, M. (2025). Using Cross-Domain Data to Predict Syllogistic Reasoning Behavior. In Press.
Brand, D., Todorovikj, S., & Ragni, M. (2024). Necessity, Possibility and Likelihood in Syllogistic Reasoning. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (pp. 2776–2782).
Ragni, M., Brand, D., & Riesterer, N. (2021). The Predictive Power of Spatial Relational Reasoning Models: A New Evaluation Approach. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.626292
Complex Problem Solving
This area of research focuses on the human ability to understand complex systems and derive goal-directed actions accordingly. On one side, complex problem solving includes planning problems—such as many (sliding) puzzles or mathematical problems that require multiple steps to reach a solution. In these cases, the primary difficulty lies in the planning depth, which demands mental simulation of potential actions.
On the other side, complex problem solving also encompasses problems where the core challenge lies in understanding the underlying system itself. This includes complex dynamics, feedback loops, and the comprehension of mathematical relationships.
A particularly important special case within this domain is algorithmic thinking. This refers to tasks involving algorithms, where either an existing algorithm must be understood (e.g., predicting its output), or—as is essential in programming—an algorithm must be purposefully developed to solve a given problem. In both cases, mental simulation and a deep understanding of the system are crucial.
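To make the notion of planning depth concrete, the following sketch computes the minimal number of moves required to solve an 8-puzzle state via breadth-first search; the resulting search depth is one crude proxy for how much mental simulation a planning problem demands. The measure and the example state are purely illustrative.

```python
# Illustrative sketch: minimal solution length of an 8-puzzle state via BFS.
# The required search depth is one crude proxy for planning demand.
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank

def neighbours(state):
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]      # slide a tile into the blank
            yield tuple(s)

def solution_depth(start):
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == GOAL:
            return depth
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None  # unsolvable configuration

# A state two moves away from the goal: low planning depth.
print(solution_depth((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # -> 2
```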
Selected Publications:
Brand, D., Todorovikj, S., & Ragni, M. (2024). Predicting complex problem solving performance in the tailorshop scenario. In C. Sibert (Ed.), Proceedings of the 22nd International Conference on Cognitive Modeling (pp. 30–36). University Park, PA: Applied Cognitive Science Lab, Penn State.
Todorovikj, S., Brand, D., & Ragni, M. (2022). Predicting Algorithmic Complexity for Individuals. In T. C. Stewart (Ed.), Proceedings of the 20th International Conference on Cognitive Modeling (pp. 240–246). University Park, PA: Applied Cognitive Science Lab, Penn State.
Kettner, F., Heinrich, E., Brand, D., & Ragni, M. (2022). Reverse-Engineering of Boolean Concepts: A Benchmark Analysis. In T. C. Stewart (Ed.), Proceedings of the 20th International Conference on Cognitive Modeling (pp. 164–169). University Park, PA: Applied Cognitive Science Lab, Penn State.

Modeling in Information Processing
Modeling represents an indispensable class of methods in scientific research.
Our focus lies in applying diverse modeling approaches to understand human information processing, as well as in rigorous model evaluation and selection through benchmarking. The specific methodology is guided by the research question, and we employ the following modeling techniques:
- Computational cognitive modeling
- Cognitive process models
- Statistical models
- Machine learning and data-driven modeling
Our primary aim in modeling is to gain insight: computational cognitive process models are particularly well-suited for capturing human processes, as they allow theoretical assumptions to be tested directly. Working with such models thus enables inferences about the underlying cognitive theories. In contrast, more powerful machine learning approaches are ideal for maximizing predictive accuracy, depending on the application domain.
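This trade-off can be made concrete with a toy example: an interpretable process model whose single slope parameter can be read as the time cost of one mental operation, fitted alongside a generic data-driven regressor on the same observations. Both models and the data are purely illustrative.

```python
# Illustrative contrast (toy data): an interpretable cognitive process model
# versus a generic data-driven model fitted to the same observations.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import GradientBoostingRegressor

# Toy data: number of required inference steps per item and observed response times (s).
steps = np.array([1, 2, 3, 4, 5, 6], dtype=float)
rt    = np.array([1.1, 1.8, 2.3, 3.2, 3.6, 4.5])

# Process model: RT = base_time + time_per_step * steps.
# Its parameters carry a cognitive interpretation (cost of one mental operation).
def process_model(s, base_time, time_per_step):
    return base_time + time_per_step * s

params, _ = curve_fit(process_model, steps, rt)
print("time per inference step ~ %.2f s" % params[1])

# Data-driven baseline: usually more flexible, but its internals do not
# correspond to assumptions of a cognitive theory.
ml = GradientBoostingRegressor(n_estimators=100).fit(steps.reshape(-1, 1), rt)
print(ml.predict(np.array([[3.5]])))  # prediction for an unseen item
```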
Modeling human information processing is a methodological foundation that runs through much of our research on human thinking and behavior—and is not confined to any single research field.
Selected Publications:
Todorovikj, S., Brand, D., & Ragni, M. (2024). Model verification and preferred mental models in syllogistic reasoning. In C. Sibert (Ed.), Proceedings of the 22nd International Conference on Cognitive Modeling (pp. 185–191). University Park, PA: Applied Cognitive Science Lab, Penn State.
Brand, D., Riesterer, N., & Ragni, M. (2023). Uncovering Iconic Patterns of Syllogistic Reasoning: A Clustering Analysis. In C. Sibert (Ed.), Proceedings of the 21st International Conference on Cognitive Modeling (pp. 57–63). University Park, PA: Applied Cognitive Science Lab, Penn State.
Brand, D., Riesterer, N., & Ragni, M. (2022). Model-Based Explanation of Feedback Effects in Syllogistic Reasoning. Topics in Cognitive Science, 14(4), 828–844. https://doi.org/10.1111/tops.12624

Knowledge Representation and Intentional Forgetting
The exponential growth of stored data and knowledge structures within organizations over recent decades poses a fundamental challenge.
Although the volume of organizational knowledge is expanding at an impressive rate, systematic reduction or curation often does not take place. As a result, the effort required to identify and eliminate outdated, irrelevant, or rarely used information continues to increase—becoming an almost unmanageable task in the face of large-scale data and highly complex knowledge networks. To address this information overload effectively, we investigate the targeted approach of intentional forgetting. This approach focuses on the active discarding of irrelevant, redundant, or contradictory knowledge in order to keep the organizational knowledge base manageable and agile. Notably, forgetting is already recognized in cognitive science as an essential mechanism in human memory, supporting efficient learning and adaptability. Our research aims to transfer and formally operationalize these cognitive principles within the organizational context.
Central to this transfer is the question of knowledge representation: How can organizational knowledge be structured in such a way that forgetting is not seen as mere loss, but as intelligent selection? Here, we draw on cognitively inspired formal models that describe knowledge states through epistemic ranking functions. By applying the principle of conditional preservation as a general axiom for change, we implement specific forgetting operations within ranking functions.
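A minimal sketch of the representational idea: a ranking function assigns each possible world a degree of implausibility (rank 0 = maximally plausible), and one simple forgetting operation removes a variable by keeping, for each reduced world, the best rank over that variable's values. The variables and ranks below are made up, and the operation shown is a generic variable forgetting, not the specific conditional-preservation-based operators developed in our research.

```python
# Toy ranking function (ordinal conditional function) over worlds described by
# three propositional variables, and a simple "forget one variable" operation.
from itertools import product

VARS = ("outdated", "redundant", "used_recently")

# kappa maps each world (truth-value assignment) to a rank: 0 = most plausible,
# higher = less plausible. Values here are made up for illustration.
kappa = {}
for world in product((True, False), repeat=len(VARS)):
    w = dict(zip(VARS, world))
    kappa[world] = (2 if w["outdated"] and w["used_recently"] else 0) + (1 if w["redundant"] else 0)

def forget(kappa, var_index):
    """Forget one variable: each reduced world keeps the minimal rank
    over both values of the forgotten variable (its most plausible completion)."""
    reduced = {}
    for world, rank in kappa.items():
        key = world[:var_index] + world[var_index + 1:]
        reduced[key] = min(rank, reduced.get(key, rank))
    return reduced

kappa_after = forget(kappa, VARS.index("redundant"))
print(kappa_after)   # ranking over the remaining variables only
```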
The overarching goal of our research in this area is the scalable transfer of individual forgetting processes to organizational knowledge systems through novel modeling approaches. In addition, assessing the relevance of knowledge in dynamic environments requires identifying robust heuristics that systematically integrate context shifts and usage patterns. Furthermore, the development of a formal framework is essential—one that balances the stability of essential knowledge cores with the flexibility needed for continuous forgetting and recalibration. Tackling these challenges is a fundamental prerequisite for the development of intelligent, self-optimizing knowledge management systems of the next generation.
Selected Publications:
Sauerwald, K., Ismail-Tsaous, E., Ragni, M., Kern-Isberner, G., & Beierle, C. (2025). Sequential merging and construction of rankings as cognitive logic. International Journal of Approximate Reasoning, 176, 109321. https://doi.org/10.1016/j.ijar.2024.109321
Dames, H., Brand, D., & Ragni, M. (2022). Evidence for Multiple Mechanisms Underlying List-Method Directed Forgetting. Proceedings of the Annual Meeting of the Cognitive Science Society, 44. Retrieved from https://escholarship.org/uc/item/46921378.
Beierle, C., Kern-Isberner, G., Sauerwald, K., Bock, T., & Ragni, M. (2019). Towards a General Framework for Kinds of Forgetting in Common-Sense Belief Management. KI - Künstliche Intelligenz, 33(1). https://doi.org/10.1007/s13218-018-0567-3