Convolutional neural network-based encoding and decoding of visual object recognition in space and time

  • K. Seeliger (Corresponding author)
  • M. Fritsche
  • U. Guclu
  • S. Schoenmakers
  • J.-M. Schoffelen
  • S.E. Bosch
  • M.A.J. van Gerven

Research output: Contribution to journal › Article › Academic › peer review

80 Citations (Scopus)
95 Downloads (Pure)

Abstract

Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy.
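The encoding approach described in the abstract can be illustrated with a minimal sketch: a linear (ridge) regression mapping CNN layer activations of each stimulus image onto cortical activity, evaluated by the correlation between predicted and observed responses on held-out stimuli. All dimensions, data, and the use of ridge regression here are illustrative assumptions, not the paper's exact pipeline; synthetic data stands in for real CNN features and source-reconstructed MEG signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1,000 stimulus images, 64 CNN layer features,
# 20 cortical sources at one time point (all illustrative choices).
n_images, n_features, n_sources = 1000, 64, 20

# Stand-in for CNN layer activations of each stimulus image
# (in the study these would come from a pretrained CNN).
X = rng.standard_normal((n_images, n_features))

# Simulated source activity: a linear readout of the features plus noise.
true_W = rng.standard_normal((n_features, n_sources))
Y = X @ true_W + 0.1 * rng.standard_normal((n_images, n_sources))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Fit on the first 800 images, evaluate on the held-out 200.
W = fit_ridge(X[:800], Y[:800])
pred = X[800:] @ W

# Per-source Pearson correlation between predicted and observed activity,
# the usual encoding-model performance measure.
r = np.array([np.corrcoef(pred[:, s], Y[800:, s])[0, 1]
              for s in range(n_sources)])
print("mean held-out correlation:", float(r.mean()))
```

In practice one such model would be fit per source (or sensor) and per time point, so that the layer best predicting each region's activity can be tracked over time, which is how a feed-forward sweep across the hierarchy becomes visible.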
Original language: English
Pages (from-to): 253-266
Number of pages: 13
Journal: NeuroImage
Volume: 180
Issue number: Part A
DOIs
Publication status: Published - 15 Oct 2018
Externally published: Yes

Keywords

  • visual neuroscience, deep learning, encoding, decoding, magnetoencephalography

