Attention & Perception Talk Series

Links to the speakers' websites and talk information will be added as the semester goes along. Talks are online unless otherwise specified.

Spring 2025

  • Brady Roberts — Memorable by design: The intrinsic properties of effective symbols

    January 21st, 2025 (in-person)

    Recent work has begun to evaluate the memorability of everyday visual symbols as a new way to understand how abstract concepts are processed in memory. Symbols were previously found to be highly memorable, especially relative to words, but it remained unclear what was driving memorability. Here, we bridged this gap by exploring the visual and conceptual attributes driving the high memorability observed for symbols. Participants were tested on their memory for conventional symbols (e.g., !@#$%) before sorting them based on visual or conceptual features. Principal component analyses performed on the sorting data then revealed which of these features predict symbol memorability. An artificial image generator was then used to form novel symbols while accentuating or downplaying predictive features to create a set of memorable and forgettable symbols, respectively. Both recognition and cued-recall memory performance were substantially improved for symbols that were designed to be memorable. This work suggests that specific visual attributes drive image memorability and offers initial evidence that memorability can be engineered.

  • Thomas Langlois — Efficient Computations and Representations for Perceptual Inference and Communication

    February 4th, 2025 (in-person)

To keep pace with a complex and ever-changing visual environment, the visual system must combine moment-to-moment sensory evidence with prior expectations that reflect predictable regularities in the environment. Although priors (and other subjective probability distributions) are key to visual perception, they are notoriously difficult to estimate because perception is an inherently private (subjective) experience. In this talk, I will highlight work using large-scale serial reproduction experiments to obtain stable estimates of subjective probability distributions in visual memory. I will also discuss recent work elucidating how neural population activity in the prefrontal cortex (PFC) integrates prior expectations with sensory signals during visual perception in macaque monkeys. Time permitting, I will highlight my current work investigating the relationship between perceptual representations and emergent communication using the Information Bottleneck (IB) Principle.

  • Qi Lin — Individual differences in prefrontal coding of visual features

    February 11th, 2025 (online)

Each of us perceives the world differently. What may underlie such individual differences in perception? In this talk, I will focus on characterizing the role of the lateral prefrontal cortex (LPFC) in vision, with an emphasis on individual differences. Using a 7T fMRI dataset, I first show that encoding models relating visual features extracted from a deep neural network to brain responses to natural images robustly predict responses in patches of LPFC. Intriguingly, there are more substantial individual differences in the coding schemes of LPFC than in those of visual regions. I will then present computational work showing how such amplification of individual differences could result from a neural architecture involving random reciprocal connections between sensory and high-level regions. Lastly, I will discuss ongoing work exploring the behavioral consequences of such individual differences in LPFC coding. Together, this work demonstrates the under-appreciated role of LPFC in visual processing and suggests that LPFC may underlie the idiosyncrasies in how different individuals experience the visual world.

  • Harini Sankar — Modeling the influence of semantic context in spoken word recognition

    February 18th, 2025 (in-person)

Spoken word recognition is a context-dependent process. Studies have shown that the semantic association between words not only influences behavioral responses to ambiguous speech sounds but also influences how we encode the sounds themselves. The influence of semantic context has also been shown to persist over longer spans of time. While earlier computational models of spoken word recognition have captured various aspects of speech perception, they have yet to integrate the role of long-distance semantic dependencies. A model that learns these semantic associations in a self-supervised manner could also demonstrate how humans learn and use such associations between words in everyday speech. In this project, I created two models, a simple recurrent network (SRN) and a long short-term memory (LSTM) network, trained on word pairs that varied in their degree of semantic association. Both models learned the semantic associations between word pairs in a self-supervised manner. I will also discuss how the models could use these learned associations to influence their encoding of ambiguous phonemes, mirroring results from behavioral and electrophysiological data.
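
    Purely as an illustration of the kind of setup the abstract describes (not the speaker's actual model, data, or code), a minimal sketch of an LSTM trained in a self-supervised way on word pairs might look like the following; the vocabulary, pairs, and hyperparameters are invented for this example.

    # Illustrative sketch only: a tiny LSTM that learns word-pair associations
    # self-supervisedly (predict the second word of a pair from the first).
    # All vocabulary items, pairs, and hyperparameters are hypothetical.
    import torch
    import torch.nn as nn

    vocab = ["doctor", "nurse", "bread", "butter", "tree", "car"]
    word_to_id = {w: i for i, w in enumerate(vocab)}
    # Hypothetical training pairs; strongly associated pairs occur more often.
    pairs = [("doctor", "nurse")] * 8 + [("bread", "butter")] * 8 + [("tree", "car")] * 1

    class PairLSTM(nn.Module):
        def __init__(self, vocab_size, emb_dim=16, hidden_dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, first_word_ids):
            # first_word_ids: (batch,) -> logits over the predicted next word
            emb = self.embed(first_word_ids).unsqueeze(1)   # (batch, 1, emb_dim)
            _, (h, _) = self.lstm(emb)                      # h: (1, batch, hidden_dim)
            return self.out(h.squeeze(0))                   # (batch, vocab_size)

    model = PairLSTM(len(vocab))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(200):
        x = torch.tensor([word_to_id[a] for a, _ in pairs])
        y = torch.tensor([word_to_id[b] for _, b in pairs])
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    # After training, "nurse" should receive higher probability after "doctor"
    # than weakly associated words, i.e., the association has been learned.
    probs = torch.softmax(model(torch.tensor([word_to_id["doctor"]])), dim=-1)
    print({w: round(probs[0, i].item(), 3) for i, w in enumerate(vocab)})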

  • Michael Beyeler, March 4th, 2025
  • Keith Doelling, March 11th, 2025
  • Galit Yovel, March 25th, 2025
  • Cathleen Moore, April 1st, 2025
  • Clara Colombatto, April 8th, 2025
  • Daniel Albohn, April 15th, 2025 (in-person)
  • Lucy Cui, April 22nd, 2025
  • Michael Cohen, April 29th, 2025
  • Yiwen Wang, May 6th, 2025 (in-person)