Attention & Perception Talk Series

Links to each speaker's website and talk information will be added as the semester goes along. Talks are online (Zoom link) unless otherwise specified. In-person talks are held in Room 819. As scheduling is ongoing, some of the "free" slots may still be filled.

Spring 2026

  • Yifan Ding (Illinois) — Understanding Inattentional Blindness — The Effects of Individual Differences and Load

    January 27th, 2026 (in-person)

Inattentional Blindness (IB) occurs when observers fail to notice unexpected objects while engaged in an attention-demanding task. This talk synthesizes completed work and future proposals regarding the predictors of IB. First, I will discuss a large-scale registered report investigating whether stable individual differences, such as cognitive capacity or personality traits, can predict noticing of an unexpected object. I will then present a series of experiments examining the interplay between perceptual and cognitive load to determine how these two factors interact to affect noticing. Finally, I will propose a new line of research investigating the role of active suppression in multiple object tracking (MOT) and the influence of visual transients on attention capture.

  • Nicholas Gaspelin (University of Missouri) — The Signal Suppression Account: The Role of Inhibition in Avoiding Salient Distractions

    February 3rd, 2026 (online)

Our attentional systems are constantly bombarded by salient stimuli that have been designed to attract our attention. From brightly colored advertisements on the roadside to pop-up notifications on our cell phones, our attentional systems must make split-second decisions to determine which stimuli in our environments are relevant to our immediate goals and which are just distractions. In psychology, there has been a longstanding debate about whether salient stimuli have the power to involuntarily capture attention, even when task irrelevant. The present talk will discuss evidence for a theory that aims to reconcile this debate, called the signal suppression account. According to this account, salient stimuli generate an "attend-to-me" signal that automatically attracts attention. However, this salience signal can be actively suppressed to prevent distraction. The account has been widely supported by converging evidence from psychophysics, eye movements, and event-related potentials (ERPs). This talk will review some of that evidence and also explain new directions in terms of learned control of attention.

  • Kirsten Adam (Rice) — Dynamics of attention and working memory

    February 10th, 2026 (online)

    We can attend to and remember just a subset of information in busy visual environments. Thus, to understand how we coherently navigate the world, we need to understand the factors that guide the allocation of limited cognitive resources. Typically, we measure attention and working memory by averaging across trials, but in this talk, I will show how trial-by-trial dynamics are critically important for characterizing cognition. The first part of the talk will address the relationship between fluctuations of ongoing attentional state and working memory. My work has shown how working memory performance fluctuates from trial to trial, and differences in consistency, rather than capacity per se, better explain individual differences in working memory ability. The second part of the talk will address how attentional selection is impacted by our constantly changing environment. For example, recent experiences with stimuli in our environment shape the neural "priority maps" that guide attention. Together, these findings illustrate the importance of both internal and external sources of trial-to-trial variation for understanding fundamental properties of attention and working memory.

  • Don Moore (Berkeley Haas) — Overconfidence in People and Machines

    February 17th, 2026 (online)

    Overconfidence is one of the most pervasive biases in human judgment. I present a theory proposing that overconfidence arises from fallibility, especially when we don't know we are wrong. This theory predicts that any fallible reasoning agent—human or artificial—will exhibit systematic overconfidence under specifiable conditions. Testing this prediction, I examine various AI systems across different tasks, including tests of logic, reasoning, knowledge, and probability estimation. The results show that they exhibit overconfidence patterns similar to those observed in humans, including a tendency toward being too confident they are right, strongly moderated by task difficulty. Self-critical reflection can help improve confidence calibration. These results suggest that overconfidence stems from fallibility and error neglect rather than uniquely human cognitive limitations.

  • No meeting on February 24th, 2026 (speaker moved to April 7th)
  • Steve Haroz (Google) — Precision and bias when decision-making based on data visualizations

    March 3rd, 2026 (online)

    Graphs and charts are often used to show comparisons and to help make decisions. But how accurately can we differentiate information visually? And what are potential sources of bias when making those decisions? To answer those questions, I will discuss some past visualization research that examined precision for single-item comparisons. I will also walk through a series of experiments that extend that past work into the comparison of sets. Moreover, I will introduce possible new sources of bias when performing perceptual decision-making for data visualizations.

  • Gary Lupyan (UW Madison) — Do LLMs distim the doshes? From next-token prediction to general(?) intelligence.

    March 10th, 2026 (in-person)

What does it mean that a few hundred lines of code executed on lots of text can produce human-like performance on an ever-increasing number of tasks? Answers have ranged from "nothing to see here" to "welcome our new AGI overlords". I will argue that the successes (and failures) of large language models (LLMs) hold deep lessons for cognitive science. Among these are: (1) The surprising power of learning via self-supervised prediction; (2) The specific role of predicting *language* for instantiating many core aspects of human cognition; (3) The success of "mere" pattern matching and what it means for the algorithms that support human thinking. I will conclude by showing you what pattern matching at scale looks like using a demonstration of LLMs performing a seemingly impossible task.

  • 03/14-03/22 — Spring Break
  • 03/24 — Viola Stoermer (Dartmouth)
  • 03/31 — Necdet Gurkan (U. of Missouri—St. Louis)
  • Alice O'Toole (UT Dallas) — Face, Body, and Person Recognition in Real-world Viewing Conditions

    April 7th, 2026 (online)

    People recognize others in multiple ways. When we see a person up close, we rely on the face for recognition, because it provides nearly unique information about identity. In natural viewing conditions, however, when the face is unusable or inaccessible, people rely on the body to constrain identity decisions. Face recognition algorithms based on deep learning have been available for over a decade and are now as accurate as humans. Body/person recognition networks are, by comparison, toddlers. That said, in just the last few years, body/person recognition models, based on deep networks, have been developed with impressive speed. These networks show promise for grounding psychological experiments that can move the field forward from face perception to person perception. In this talk, I will begin with what is known about the face representations generated by deep neural networks and will discuss how these representations relate to findings in human face perception. Next, I will overview the approaches taken recently in developing "person recognition" algorithms—with the caveat that these algorithms have yet to be compared to human perception. Finally, we will consider the complex challenge of integrating face and body information to achieve more accurate person identification. I will draw on lessons learned from the human visual system, which accomplishes this integration with remarkable flexibility and adaptability, modulating its reliance on the face vs. body depending on the viewing conditions.

  • Stephen Hupp (SIUE) — Secrets Revealed About Psychology, Fringe Science, and Skeptical Inquirer Magazine

    April 14th, 2026 (online)

    Learn the psychology behind some of fringe science's greatest secrets. Get a sneak peek into what the future holds for Skeptical Inquirer, the magazine for science and reason. Be dazzled by the latest announcements from the Committee for Skeptical Inquiry during the year-long celebration of our 50th anniversary. Stephen Hupp, PhD, is the executive director of the Committee for Skeptical Inquiry and the editor-in-chief of Skeptical Inquirer. He is also a psychology professor at Southern Illinois University Edwardsville. His latest book is Science-Based Therapy: Raising the Bar for Empirically Supported Treatments.

  • 04/21 — Kushin Mukherjee (Stanford)
  • 04/28 — meeting pending
  • 05/05 — Alon Hafri (U. of Delaware)