RESEARCH

Prof. Sereno’s lab studies fundamental cognitive and neural processes underpinning the visual perception of objects in space. Our research has focused on the representation of shape and space in the primate brain, in particular, the representation of static and dynamic 3D shape, the interpretation of nature’s patterns (fractals), and spatial processing and navigation. These domains bridge 2D-to-3D representations – perceiving 3D shapes from 2D retinal images, drawing 2D representations given 3D percepts, or using 2D external or internal maps to navigate 3D environments. Our approach is comprehensive and systematic – spanning the perception of simpler human-made shapes to the rougher fractal shapes found in nature, investigating multiple, diverse cues in the brain that govern shape representation, and contrasting perceptual processes occurring in isolation to those within contextually rich settings. Our research is multimodal, synthesizing findings across studies employing behavioral, neuroimaging, image processing, neural modeling, and population decoding approaches. Our research is interdisciplinary and collaborative, with colleagues from multiple departments at the UO (e.g., Psychology, Physics, Architecture) as well as from national and international institutions.

3D Shape Perception and Artistic Skill

This research examines the cognitive and neural underpinnings of our ability to perceive 3D shape from a myriad of static and dynamic 2D retinal cues, as well as its converse, our ability to depict a 3D scene on a 2D surface. We have shown that 3D shape from multiple cues is represented in both dorsal and ventral streams, 3D shape perception is influenced by context, and expertise in drawing is associated with the ability to use or ignore context as needed.

Robles, K.E., Bies, A.J., Lazarides, S., Sereno, M.E. (2022). The relationship between shape perception accuracy and drawing ability. Scientific Reports, 12, 1-12, 14900. [pdf]

Sereno, M.E., Robles, K.E., Kikumoto, A., Bies, A.J. (2020). The effects of 3-dimensional context on shape perception. Psychological Science, 31(4), 381-396. [pdf]

Peng, X., Sereno, M.E., Silva, A.K., Lehky, S.R., & Sereno, A.B. (2008). Shape selectivity in primate frontal eye field. Journal of Neurophysiology, 100, 796-814. [pdf]

Sereno, M.E., Augath, M., & Logothetis, N.K. (2005). Differences in processing of 3-D shape from multiple cues in monkey cortex revealed by fMRI. Society for Neuroscience Abstracts. [pdf]

Sereno, M.E., Trinath, T., Augath, M., & Logothetis, N.K. (2002). Three-dimensional shape representation in monkey cortex. Neuron, 33, 635-652. [pdf] [movie1] [movie2]

Fractal Perception

Humans are able to process complex natural stimuli with relative ease. Complex natural forms (mountains, trees, clouds, shorelines, rivers) are fractal (i.e., they possess structure that repeats at increasingly fine magnifications). Lower-complexity fractals represent stimuli common in natural scenes. We investigated human behavioral and neural responses to abstract visual fractal stimuli in which complexity was varied systematically, using simpler 2D stimuli, more complex 3D immersive virtual-reality environments, and stimuli embedded in the built environment. Our central hypothesis is a fluency model, in which the visual system is sensitive to the fractal complexity of luminance edges in images. We have found that the visual system processes low-to-mid complexity fractal patterns with relative ease. Our long-term goal is to understand how we process complex natural stimuli for perception, recognition, and navigation. These data will allow us to quantify the characteristics of fractal patterns that elicit positive aesthetic and stress-reduction responses, so that fractals can be utilized as design features in built environments.
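The fractal complexity described above is commonly summarized by the box-counting dimension D of an image's edge structure. The following is a minimal illustrative sketch, not the lab's actual analysis pipeline; the function name, grid sizes, and toy input are all assumptions made here for demonstration:

```python
import numpy as np

def box_counting_dimension(edges: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the box-counting dimension D of a binary edge image.

    Counts how many s x s boxes contain at least one edge pixel at each
    scale s, then fits the slope of log N(s) vs. log(1/s): N(s) ~ s^(-D).
    (Illustrative parameters; real pipelines tune scales and edge extraction.)
    """
    counts = []
    for s in sizes:
        h, w = edges.shape
        # Trim so the image tiles evenly, then count occupied boxes.
        trimmed = edges[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled plane of "edge" pixels is 2-dimensional.
D = box_counting_dimension(np.ones((64, 64), dtype=bool))
```

A space-filling image should yield D near 2, a smooth contour near 1, with natural fractal edges falling in between.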

Robles, K.E., Roberts, M., Viengkham, C., Smith, J.H., Rowland, C., Moslehi, S., Stadlober, S., Lesjak, A., Lesjak, M., Taylor, R.P., Spehar, B., Sereno, M.E. (2021). Aesthetics and psychological effects of fractal based design. Frontiers in Psychology, 12, 1-21, 699962. [pdf]

Hess, N. & Sereno, M.E. (2021). Phenomenological assessment of dynamic fractals. Optical Society of America Abstract.

Owen, E.K., Robles, K.E., Taylor, R.P., Sereno, M.E. (2020). The perception of composite fractal environments. Vision Sciences Society Abstract.

Robles, K.E., Liaw, N.A., Taylor, R.P., Baldwin, D., Sereno, M. E. (2020). A shared fractal aesthetic across development. Nature Humanities and Social Sciences Communications, 7(158), 1-8. [pdf]

Roe, E., Bies, A.J., Montgomery, R.D., Watterson, W.J., Boydston, C.R., Sereno, M.E., Taylor, R.P. (2020). Fractal solar panels: Optimizing aesthetic and electrical performances. PLOS ONE, 15(3), 1-13, e0229945. [pdf]

Abboushi, B., Elzeyadi, I., Van Den Wymelenberg, K., Taylor, R.P., Sereno, M.E., & Jacobsen, G. (2020). Assessing the visual comfort, visual interest of sunlight patterns, and view quality under different window conditions in an open-plan office. LEUKOS, 17, 321-337. [pdf]

Van Dusen, B., Scannell, B.C., Sereno, M.E., Spehar, B., Taylor, R.P. (2019). The Sinai light show: Using science to tune fractal aesthetics. In: Wuppuluri, S., Wu, D. (Eds.), On Art and Science: Tango of an Eternally Inseparable Duo. The Frontiers Collection. Springer Nature Switzerland AG, 1-24. [pdf]

Abboushi, B., Elzeyadi, I., Taylor, R.P., Sereno, M.E. (2019). Fractals in Architecture: the visual interest, preference, and mood response to projected fractal light patterns in interior spaces. Journal of Environmental Psychology, 61, 57-70. [pdf]

Taylor, R.P., Juliani, A.W., Bies, A.J., Boydston, C.R., Spehar, B., Sereno, M.E. (2018). The implications of fractal fluency for biophilic architecture. Journal of BioUrbanism, 6, 23-40. [pdf]

Bies, A.J., Tate, W.M., Taylor, R.P., Sereno, M.E. (2018). A factor analytic approach reveals variability and consistency in perceived complexity ratings of landscape photographs. Vision Sciences Society Abstract.

Tate, W.M., Taylor, R.P., Sereno, M.E., Bies, A.J. (2018). Perceived complexity and aesthetic responses to landscape photographs. Vision Sciences Society Abstract.

Bies, A.J., Boydston, C. R., Taylor, R.P., & Sereno, M.E. (2016). Relationship between fractal dimension and spectral decay rate in computer-generated fractals. Symmetry, 8(66), 1-17. [pdf]

Juliani, A.W., Bies, A.J., Boydston, C.R., Taylor, R.P., & Sereno, M.E. (2016). Navigation performance in virtual environments varies as a function of fractal dimension. Journal of Environmental Psychology, 47, 155-165. [pdf] [movie1] [movie2] [jenvp-website]

Bies, A.J., Blanc-Goldhammer, D.R., Boydston, C.R., Taylor, R.P., & Sereno, M.E. (2016). Aesthetic responses to exact fractals driven by physical complexity. Frontiers in Human Neuroscience, 10(210), 1-17. [pdf]

Bies, A.J., Kikumoto, A., Boydston, C.R., Greenfield, A.L., Chauvin, K.A., Taylor, R.P., Sereno, M.E. (2016). Percepts from noise patterns: The role of fractal dimension in object pareidolia. Vision Sciences Society Abstract. [pdf]

Bies, A., Wekselblatt, J.B., Boydston, C.R., Taylor, R.P., Sereno, M.E. (2015). The effects of visual scene complexity on human cortex. Society for Neuroscience Abstract. [pdf]

Bies, A.J., Taylor, R.P., Sereno, M.E. (2015). An edgy image statistic: Semi-automated edge extraction and fractal box-counting algorithm allows for quantification of edge dimension in natural scenes. Vision Sciences Society Abstract. [pdf]

Spatial Navigation and Map Cognition

Map use and navigation are fundamentally important human capacities that involve a translation between 2D and 3D representations. We can use a map to plan a route or determine a sequence of turns to reach a goal. Moreover, we can navigate an environment with the help of a map or by learning a map-like representation of the environment, known as a "cognitive map". Physical and cognitive maps are, respectively, explicit and implicit 2D allocentric (world-centered) representations of the 3D world. We have completed work on map comprehension and goal-directed navigation using cognitive maps. To understand the cognitive and neural processes that underlie map comprehension, we used fMRI to evaluate the relationship between different spatial abilities (identified by psychometric tests) and map-reading tasks. To understand the neural underpinnings of the formation and flexible use of cognitive maps when learning to navigate novel environments, we tested navigation abilities in humans and machines using virtual environments.
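As a purely illustrative sketch of "using a map to determine a sequence of turns" (this is not the experimental task or software used in these studies; `plan_route` and the toy grid are hypothetical), route planning over an explicit 2D allocentric map can be framed as a shortest-path search:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over a 2D map.

    Returns a shortest sequence of moves (as (dr, dc) steps) from start
    to goal, or None if the goal is unreachable. The grid is a list of
    strings with '#' marking obstacles -- a stand-in for an explicit map.
    """
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            break
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#' and nxt not in came_from):
                came_from[nxt] = (r, c)
                frontier.append(nxt)
    if goal not in came_from:
        return None
    path, node = [], goal
    while came_from[node] is not None:
        prev = came_from[node]
        path.append((node[0] - prev[0], node[1] - prev[1]))
        node = prev
    return path[::-1]

grid = ["....",
        ".##.",
        "...."]
route = plan_route(grid, (0, 0), (2, 3))  # detours around the wall
```

Breadth-first search is just one of many planners; the point is that an allocentric map supports computing an egocentric action sequence.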

Juliani, A.W., Barnett, S., David, B., Sereno, M.E., Momennejad, I. (2022). Neuro-Nav: A library for neurally-plausible reinforcement learning. The 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM2022), 1-5. [pdf] GitHub repository: https://github.com/awjuliani/neuro-nav

Juliani, A.W. & Sereno, M.E. (2020). A biologically-inspired dual stream world model. NeurIPS Workshop on Biological and Artificial Reinforcement Learning. 1-13. [pdf]

Juliani, A.W., Bies, A.J., Boydston, C.R., Taylor, R.P., & Sereno, M.E. (2016). Navigation performance in virtual environments varies as a function of fractal dimension. Journal of Environmental Psychology, 47, 155-165. [pdf] [movie1] [movie2]

Bies, A.J. & Sereno, M.E. (2016). Understanding the relationship between specific spatial abilities and map reading skills using fMRI. Society for Neuroscience Abstract.

Representations of Visual Shape and Space using Population Coding

This series of studies examines a fundamental question of how the brain represents locations in space using eye position information. A major focus of current theoretical thinking on the role of eye position modulations is that they may be involved in a transform of visual spatial coordinates from an eye-centered reference frame to a head-centered reference frame. The approach taken here is fundamentally different in that eye position modulations are used to directly decode gaze angle, rather than to transform retinotopic spatial coordinates. We directly extract eye position, which is equivalent to the position of a stimulus at fixation. Successively fixating different objects, then, can determine the relative positions of multiple stimuli. For these studies, we used intrinsic population decoding methods (multidimensional scaling) to represent spatial location information from eye position signals in monkeys and machines. Intrinsic methods, as opposed to extrinsic ones, are unlabeled and relational: neural activities are not labeled by a set of parameters defined by external models or referenced to an external physical frame of reference, but are instead referenced internally, relative to each other. We systematically define, compare, and contrast these methods, arguing that intrinsic approaches are more physiologically plausible, have inherent benefits for aspects of stimulus representation (e.g., invariances), and can have immediate impact for practical applications (such as brain-machine interfaces).
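To give the flavor of intrinsic, unlabeled decoding, here is a toy sketch using classical multidimensional scaling on a simplified linear "gain field" population (the tuning model, neuron count, and gaze values are hypothetical, not the actual monkey data or analysis code):

```python
import numpy as np

def classical_mds(responses: np.ndarray, n_dims: int = 1) -> np.ndarray:
    """Recover relative stimulus positions from population activity alone.

    responses: (n_conditions, n_neurons) matrix of firing rates. Pairwise
    distances between population vectors are double-centered and
    eigendecomposed (classical MDS). The result is defined only up to
    translation, scale, and reflection -- an intrinsic, internally
    referenced map with no external frame.
    """
    sq = ((responses[:, None, :] - responses[None, :, :]) ** 2).sum(-1)
    n = sq.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ sq @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_dims]  # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy population: each neuron's rate varies linearly with gaze angle
# (hypothetical tuning, for illustration only).
rng = np.random.default_rng(0)
gaze = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])   # degrees
gains = rng.uniform(0.5, 1.5, size=50)
rates = gaze[:, None] * gains[None, :] + 10.0      # (5 gazes, 50 neurons)
recovered = classical_mds(rates, n_dims=1).ravel()
```

Because nothing in `classical_mds` references the stimulus parameters, the recovered axis preserves the relative spacing of gaze angles without any externally labeled coordinate frame.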

Sereno, A.B., Lehky, S.R., Sereno, M.E. (2020). Representation of shape, space, and attention in monkey cortex. Cortex, 122, 40-60. [pdf]

Lehky, S.R., Sereno, M.E., & Sereno, A.B. (2016). Characteristics of eye-position gain field populations determine geometry of visual space. Frontiers in Integrative Neuroscience, 9(72), 1-20. [pdf]

Sereno, A.B., Sereno, M.E., Lehky, S.R. (2014). Recovering stimulus locations using populations of eye-position modulated neurons in dorsal and ventral visual streams of nonhuman primates. Frontiers in Integrative Neuroscience, 8(28), 1-20. [pdf]

Lehky, S.R., Sereno, M.E., & Sereno, A.B. (2013). Population coding and the labeling problem: extrinsic versus intrinsic representations. Neural Computation, 25, 2235-2264. [pdf]

Lehky, S.R., Sereno, A.B., & Sereno, M.E. (2013). Monkeys in space: Primate neural data suggest volumetric representations. Behavioral and Brain Sciences, 36, 555-556. [Commentary on BBS target article "Navigating in a three-dimensional world" by Jeffery, K.J., Jovalekic, A., Verriotis, M., & Hayman, R. (2013). Behavioral and Brain Sciences, 36, 523-543.] [pdf]

Perception-Language Interactions

This project investigates the interaction between perception and language. We show that processing words that denote large things in the world (e.g., “ocean”) is faster than processing words that denote small things (e.g., “apple”) and that this semantic size effect plays a role in the recognition of words expressing abstract (e.g., “eternal” vs. “impulse”) as well as concrete concepts.

Yao, B., Vasiljevic, M., Weick, M., Sereno, M.E., O’Donnell, P.J., & Sereno, S.C. (2013). Semantic size of abstract concepts: It gets emotional when you can’t see it. PLOS ONE, 8, 1-13, e75000. [pdf]

Sereno, S.C., O’Donnell, P.J., & Sereno, M.E. (2009). Size matters: Bigger is faster. The Quarterly Journal of Experimental Psychology, 62, 1115-1122. [pdf]

Motion Perception

One aspect of this research involves building a partially pre-specified, multistage model of the visual system in which the response properties of higher stages develop as the model "learns from experience." Such a structured learning system (like developing biological systems) can extract environmental regularities by combining information from lower levels into complex, abstract properties of the input array, revealing features of the environment represented at intermediate and higher-level stages of visual processing. One project has focused on neural models of motion perception – determining large-scale object motion from spatially localized motion signals. These models have made counterintuitive predictions about the perceived speed and direction of simple patterns and about the anatomical basis of position-invariant responses to rotation and dilation in the visual system. Another project investigates the influence of 2D center-surround neural mechanisms (e.g., in neurons in area MT) on the perception of 3D structure-from-motion.
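The flavor of such Hebbian learning can be conveyed by a toy simulation (a normalized Hebb rule on idealized local-motion inputs; this is an illustration, not the model from the publications below): a single unit trained only on rotational flow fields comes to respond to rotation but not to translation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid of local-motion sensors (positions relative to the flow center).
xs, ys = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))

def rotation_flow(omega):
    # Rigid rotation: the velocity at (x, y) is omega * (-y, x).
    return np.concatenate([(-omega * ys).ravel(), (omega * xs).ravel()])

def translation_flow(vx):
    # Uniform rightward translation of the whole field.
    return np.concatenate([np.full(xs.size, vx), np.zeros(ys.size)])

# Normalized Hebb rule (toy demonstration): strengthen weights in
# proportion to input * output, then rescale to unit norm for stability.
w = rng.normal(scale=0.01, size=2 * xs.size)
for _ in range(2000):
    x = rotation_flow(rng.normal())   # random rotation speed each trial
    y = w @ x
    w += 0.05 * y * x
    w /= np.linalg.norm(w)

# The learned unit responds strongly to rotation, weakly to translation.
r_rot = abs(w @ rotation_flow(1.0))
r_trans = abs(w @ translation_flow(1.0))
```

Because only rotational flow is ever presented, the weights converge onto the rotation template; translation components cancel over the symmetric grid.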

Sereno, M.E., & Sereno, M.I. (1999). 2-D center-surround effects on 3-D structure-from-motion. Journal of Experimental Psychology: Human Perception and Performance, 25, 1834-1854. [pdf]

Sereno, M.E. (1993). Neural Computation of Pattern Motion: Modeling stages of motion analysis in the primate visual cortex. Cambridge: MIT Press/Bradford Books. (187 pp.)

Zhang, K., Sereno, M.I., & Sereno, M.E. (1993). How position-independent detection of sense of rotation or dilation is learned by a Hebb rule: a theoretical analysis. Neural Computation, 5, 597-612. [pdf] [html]

Sereno, M.I. & Sereno, M.E. (1991). Learning to see rotation and dilation with a Hebb rule. In Lippmann, R.P., Moody, J., Touretzky, D.S. (eds.) Advances in Neural Information Processing Systems 3. San Mateo, CA: Morgan Kaufmann Publishers, 320-326. [pdf]

Kersten, D.K., O’Toole, A.J., Sereno, M.E., Knill, D.C., & Anderson, J.A. (1987). Associative learning of scene parameters from images. Applied Optics, 26, 4999-5006. [pdf]

Sereno, M.E. (1987). Implementing stages of motion analysis in neural networks. In Proceedings of the Ninth Annual Conference of the Cognitive Science Society, 405-416. [pdf]