(Test) Exploring electroencephalography with a model inspired by quantum mechanics

Free resource
Title (Test) Exploring electroencephalography with a model inspired by quantum mechanics
Authors Nicholas J. M. Popiel · Colin Metrow · Geoffrey Laforge · Adrian M. Owen · Bobby Stojanoski · Andrea Soddu
Source Document
Date 2021
Journal Sci Rep
DOI 10.1038/s41598-021-97960-7
PUBMED https://pubmed.ncbi.nlm.nih.gov/34611185
License CC BY
This resource has been identified as a Free Scientific Resource; Masticationpedia presents it here as a token of gratitude toward the authors, with appreciation for their choice to make it openly accessible to everyone

This is free scientific content, released under a free license, which is why we can present it here for your convenience. Free access to scientific knowledge is a right of yours: it helps science grow and gives you access to it


This content was released under a 'CC BY' license.
You may wish to thank the author(s)

Exploring electroencephalography with a model inspired by quantum mechanics

Free resource by Nicholas J. M. Popiel · Colin Metrow · Geoffrey Laforge · Adrian M. Owen · Bobby Stojanoski · Andrea Soddu




Nicholas J. M. Popiel,1,2 Colin Metrow,1 Geoffrey Laforge,3 Adrian M. Owen,3,4,5 Bobby Stojanoski,#4,6 and Andrea Soddu#1,3

1The Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 5B7 Canada

2Cavendish Laboratory, University of Cambridge, Cambridge, CB3 0HE UK

3The Brain and Mind Institute, The University of Western Ontario, London, ON N6A 5B7 Canada

4The Department of Psychology, The University of Western Ontario, London, ON N6A 5B7 Canada

5The Department of Physiology and Pharmacology, The University of Western Ontario, London, ON N6A 5B7 Canada

6Faculty of Social Science and Humanities, University of Ontario Institute of Technology, 2000 Simcoe Street North, Oshawa, ON L1H 7K4 Canada

Abstract

An outstanding issue in cognitive neuroscience concerns how the brain is organized across different conditions. For instance, during the resting-state condition, the brain can be clustered into reliable and reproducible networks (e.g., sensory, default, executive networks). Interestingly, the same networks emerge during active conditions in response to various tasks. If similar patterns of neural activity have been found across diverse conditions, and therefore, different underlying processes and experiences of the environment, is the brain organized by a fundamental organizational principle? To test this, we applied mathematical formalisms borrowed from quantum mechanics to model electroencephalogram (EEG) data. We uncovered a tendency for EEG signals to be localized in anterior regions of the brain during “rest”, and more uniformly distributed while engaged in a task (i.e., watching a movie). Moreover, we found analogous values to the Heisenberg uncertainty principle, suggesting a common underlying architecture of human brain activity in resting and task conditions. This underlying architecture manifests itself in a novel constant, which is extracted from the brain state with the least uncertainty. We would like to state that we are using the mathematics of quantum mechanics, but not claiming that the brain behaves as a quantum object.


Subject terms: Computational science, Quantum mechanics

Methods

Data acquisition

Twenty-eight healthy subjects were recruited from The Brain and Mind Institute at the University of Western Ontario, Canada to participate in this study. Informed written consent was acquired prior to testing from all participants. Ethics approval for this study was granted by the Health Sciences Research Ethics Board and the Non-Medical Research Ethics Board of The University of Western Ontario and all research was performed in accordance with the relevant guidelines/regulations and in accordance with the Declaration of Helsinki.

Two suspenseful movie clips were used as the naturalistic stimuli in this study. A video clip from the silent film “Bang! You’re Dead” and an audio excerpt from the movie “Taken” were shown to 13 and 15 subjects, respectively, in both their original intact and scrambled forms. Prior to the two acquisitions, a period of rest was recorded during which the subjects were asked to relax, without any overt stimulation. Stimulus presentation was controlled with the Psychtoolbox plugin for MATLAB [1][2][3] on a 15″ Apple MacBook Pro. Audio was presented binaurally at a comfortable listening volume through Etymotic ER-1 headphones.

EEG data were collected using a 129-channel cap (Electrical Geodesics Inc. [EGI], Oregon, USA). Electrode impedances were kept below 50 kΩ with signals sampled at 250 Hz and referenced to the central vertex (Cz). Using the EEGLAB MATLAB toolbox[4], noisy channels were identified and removed, then interpolated back into the data. A Kolmogorov–Smirnov (KS) test on the data was used to identify regions that were not Gaussian. Independent components analysis (ICA) was then used to visually identify patterns of neural activity characteristic of eye and muscle movements which were subsequently removed from the data. EEG pre-processing was performed individually for each subject and condition.
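
As an illustrative sketch only (the authors' pipeline used EEGLAB), roughly equivalent preprocessing steps could look as follows in MNE-Python; the file name, bad-channel labels, and ICA component indices are hypothetical placeholders.

```python
# Illustrative sketch: bad-channel interpolation and ICA-based artifact removal,
# loosely mirroring the EEGLAB steps described above. All names and settings
# are hypothetical; this is not the authors' code.
import mne

raw = mne.io.read_raw_egi("subject01.raw", preload=True)   # 129-channel EGI recording
raw.set_eeg_reference(["Cz"])                               # reference to the central vertex

raw.info["bads"] = ["E17", "E48"]                           # hypothetical noisy channels
raw.interpolate_bads(reset_bads=True)                       # interpolate them back into the data

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw.copy().filter(l_freq=1.0, h_freq=None))         # fit on a high-pass filtered copy
ica.exclude = [0, 3]                                        # components judged ocular/muscular by inspection
raw_clean = ica.apply(raw.copy())                           # remove those components from the data
```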

Of the two movie clips tested, the first was an 8-min segment from Alfred Hitchcock’s silent TV film “Bang! You’re Dead”. This scene portrays a 5-year-old boy who picks up his uncle’s revolver. The boy loads a bullet into the gun and plays with it as if it were a toy. The boy (and viewer) rarely knows whether the gun has a bullet in its chamber, and suspense builds as the boy spins the chamber, points it at others, and pulls the trigger. As an alternative to visual stimulation, a 5-min audio excerpt from the movie “Taken” was also used. This clip portrays a phone conversation in which a father overhears his daughter’s kidnapping.

Furthermore, two “scrambled” control stimuli were used, one for each movie. This separates the neural responses elicited by the sensory properties of watching or listening to the movies from those involved in following the plot. The scrambled version of “Bang! You’re Dead” was generated by isolating 1-s segments and pseudorandomly shuffling them, thereby eliminating the temporal coherence of the narrative[5][6]. The scrambled version of “Taken” was created by spectrally rotating the audio, thus rendering the speech indecipherable[6][7]. The scrambled movie clips were presented before the intact versions to prevent potential carry-over effects of the narrative. Prior to subjects watching or listening to the scrambled stimulus, a short segment of resting EEG was acquired.
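
As a purely hypothetical sketch of this kind of 1-s segment shuffling (the authors' stimulus-generation code is not described beyond the text above), one could write:

```python
# Hypothetical sketch: pseudorandom shuffling of 1-s segments of a signal
# (e.g., an audio track); video frames could be shuffled analogously.
import numpy as np

def scramble_in_segments(signal: np.ndarray, fs: int, seg_len_s: float = 1.0,
                         seed: int = 0) -> np.ndarray:
    """Cut the signal into seg_len_s-long pieces and shuffle their order."""
    seg = int(round(seg_len_s * fs))
    n_seg = len(signal) // seg
    pieces = [signal[i * seg:(i + 1) * seg] for i in range(n_seg)]
    rng = np.random.default_rng(seed)
    rng.shuffle(pieces)                                      # pseudorandom but reproducible order
    return np.concatenate(pieces + [signal[n_seg * seg:]])   # keep any trailing remainder
```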

Model

Each of the j electrodes is described by its coordinates in 3-dimensional space. To complete this analysis, the electrodes were first projected onto a two-dimensional plane, removing the depth of the head. Figure 1A shows the locations of each electrode in this 2D space. Following this projection, the time courses for each of the 92 electrodes were Hilbert transformed and then normalized following the procedure listed using Eq. (2). A probability was defined in this electrode-position space as the square of the Hilbert-transformed time course (Eq. 3), analogous to the wavefunctions of quantum mechanics. Eight regions (Anterior L/R, Posterior L/R, Parietal L/R, Occipital L/R) were then defined by grouping the 92 electrodes, and the frequency of entering each region, fG, was obtained by summing the probabilities of the electrodes within the group and then integrating in time.

 

where each of the eight groups, denoted by the subscript, has a different number of constituent electrodes N. In the occipital left and right there are 10 electrodes each, in the parietal left and right there are 17 electrodes each, in the posterior left and right there are 10 and 11 electrodes respectively, and in the anterior left and right there are 8 and 9 electrodes respectively.
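
A minimal sketch of these steps (assuming a channels-by-samples data array and a particular normalization convention; this is not the authors' released code) might look like:

```python
# Minimal sketch: Hilbert-transform pseudo-wavefunction, normalization,
# per-electrode probabilities, and group frequencies f_G.
# The normalization convention and variable names are assumptions.
import numpy as np
from scipy.signal import hilbert

def group_frequencies(eeg, groups, fs=250.0):
    """eeg: (n_electrodes, n_samples) pre-processed signals;
    groups: dict mapping region name -> list of electrode indices."""
    psi = hilbert(eeg, axis=1)                          # analytic (complex) signal per electrode
    norm = np.sqrt(np.sum(np.abs(psi) ** 2, axis=0))    # normalize over electrodes at each time point
    psi = psi / norm                                    # so that sum_j |psi_j(t)|^2 = 1
    prob = np.abs(psi) ** 2                             # probability over electrode positions
    dt = 1.0 / fs
    total_time = eeg.shape[1] * dt
    # Sum the probabilities of the electrodes in each group, then integrate in time.
    return {name: float(np.sum(prob[idx, :]) * dt / total_time)
            for name, idx in groups.items()}
```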

After obtaining the group-level frequencies, average values for position and momentum were calculated using Eqs. (4) and (5) (with identical expressions for y). Finally, to ascertain our analogous uncertainty principle, we sought expressions of the form

 


The expression for the position uncertainty can be readily applied to the probabilities and positions as defined above, resulting in the first term given by

 

and the second term given by the square of Eq. (4). The second term of the momentum uncertainty is given by the square of Eq. (5), but its first term is more nuanced. This is owing to the complex number returned when the derivative operator acts twice on the probability. To overcome this, Fourier transforms were used to express Eq. (5) in the momentum basis, which then allowed for efficient calculation of the first term.
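
Written out under standard quantum-mechanical conventions, expressions of this kind take the following form (a sketch only; the symbols and normalizations are assumptions and may differ from the paper's notation):

```latex
% Sketch of the standard expectation/uncertainty expressions referred to above.
% Symbols (\psi_j, \Delta x, \Delta p_x) are assumptions, not necessarily the paper's notation.
\begin{align}
  \langle x \rangle(t)  &= \sum_j x_j \,\lvert \psi_j(t) \rvert^{2},
  \qquad
  \langle x^{2} \rangle(t) = \sum_j x_j^{2} \,\lvert \psi_j(t) \rvert^{2}, \\
  \Delta x^{2} &= \langle x^{2} \rangle - \langle x \rangle^{2},
  \qquad
  \Delta p_x^{2} = \langle p_x^{2} \rangle - \langle p_x \rangle^{2}, \\
  \Delta x \,\Delta p_x &\geq \text{const.}
\end{align}
```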

With the momentum-space probability obtained through a 2-dimensional, non-uniform Fourier transform of the position-space pseudo-wavefunction, Eq. (5) can be rewritten as

 

This leads to the first term in the expression being written as

 

The FINUFFT Python wrapper was used to compute the Fourier transform with a type-3, 2D non-uniform FFT[8][9], and the minimum value in time of the uncertainty relation was found. Points in momentum space were sampled on a fixed set of values, along with two additional points.
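
As an illustration of this step (a sketch with hypothetical momentum sample points, not the authors' code), the FINUFFT Python interface could be used roughly as follows:

```python
# Sketch: momentum-space probability via a type-3, 2D non-uniform FFT (FINUFFT),
# and the momentum spread Delta p_x at one time point. The momentum sample
# points px, py passed in are hypothetical choices, not the paper's sampling.
import numpy as np
import finufft

def delta_px(psi_t, x, y, px, py):
    """psi_t: complex pseudo-wavefunction over electrodes at one time point;
    x, y: electrode positions; px, py: momentum-space sample points."""
    phi = finufft.nufft2d3(x.astype(np.float64), y.astype(np.float64),
                           psi_t.astype(np.complex128),
                           px.astype(np.float64), py.astype(np.float64))
    prob_p = np.abs(phi) ** 2
    prob_p /= prob_p.sum()                        # normalize in the momentum basis
    mean_px = np.sum(px * prob_p)
    mean_px2 = np.sum(px ** 2 * prob_p)
    return float(np.sqrt(mean_px2 - mean_px ** 2))

# Combining this with the position spread Delta x and taking the minimum of
# Delta x * Delta p_x over time gives the quantity described above.
```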

Figure 4 shows the position and momentum probabilities, each in its own basis. An animation showing how these evolve in time for the different conditions is presented in Supplementary Material 2.


Figure 4: (A) Probability distribution for a single subject in the position basis. (B) Momentum basis probability distribution for a single subject. The momentum values used for the Fourier transform are indicated by the point locations. Points are colour-/size-coded to represent the probability value at that location.

To compute the values reported in Table 2, the corresponding value was found for each subject, and these values were used to calculate the group averages reported there.

Supplementary Information

Supplementary Figures.(28M, docx)

Supplementary Information.(375K, docx)

Acknowledgements

We would like to thank Silvano Petrarca for his continued assistance in devising the model. This study was funded by the NSERC Discovery Grant (05578–2014RGPIN), CERC (215063), CIHR Foundation Fund (167264). AMO is a Fellow of the CIFAR Brain, Mind, and Consciousness Program.

Author contributions

N.J.M.P., C.M. and G.L. performed the analysis. A.S., B.S. and N.J.M.P. developed the model. A.S. and B.S. supervised the analysis. N.J.M.P., A.S., G.L. and B.S. wrote the manuscript. A.M.O. revised the manuscript.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

These authors contributed equally: Bobby Stojanoski and Andrea Soddu.

Supplementary Information

The online version contains supplementary material available at 10.1038/s41598-021-97960-7.

Article information

Sci Rep. 2021; 11: 19771.

Published online 2021 Oct 5. doi: 10.1038/s41598-021-97960-7

PMCID: PMC8492705

PMID: 34611185


Corresponding author: Andrea Soddu, Email: asoddu@uwo.ca

Received 2021 Apr 28; Accepted 2021 Aug 30.

Copyright © The Author(s) 2021

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Articles from Scientific Reports are provided here courtesy of Nature Publishing Group.

Bibliography & references
  1. Brainard DH. The psychophysics toolbox. Spat. Vis. 1997;10:433–436. doi: 10.1163/156856897X00357. [PubMed] [CrossRef] [Google Scholar]
  2. Kleiner M, et al. What’s new in psychtoolbox-3. Perception. 2007;36:1–16. [Google Scholar]
  3. Pelli DG. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spat. Vis. 1997;10:437–442. doi: 10.1163/156856897X00366. [PubMed] [CrossRef] [Google Scholar]
  4. Makeig, S. & Onton, J. ERP features and EEG dynamics: An ICA perspective. In The Oxford Handbook of Event-Related Potential Components (Oxford University Press, 2012). 10.1093/oxfordhb/9780195374148.013.0035.
  5. [Reference text not reproduced in this copy]
  6. Laforge G, Gonzalez-Lara LE, Owen AM, Stojanoski B. Individualized assessment of residual cognition in patients with disorders of consciousness. NeuroImage Clin. 2020;28:102472. doi: 10.1016/j.nicl.2020.102472. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  7. Naci L, Sinai L, Owen AM. Detecting and interpreting conscious experiences in behaviorally non-responsive patients. Neuroimage. 2017;145:304–313. doi: 10.1016/j.neuroimage.2015.11.059. [PubMed] [CrossRef] [Google Scholar]
  8. Barnett AH, Magland J, Klinteberg LAF. A parallel nonuniform fast Fourier transform library based on an “Exponential of semicircle” kernel. SIAM J. Sci. Comput. 2019;41:C479–C504. doi: 10.1137/18M120885X. [CrossRef] [Google Scholar]
  9. Barnett, A. H. Aliasing error of the exp(β√(1−z²)) kernel in the nonuniform fast Fourier transform. arXiv:2001.09405 [math.NA] (2020).