Brain & Mind Computational Seminar
Artificial intelligence, neuroscience, human behavior, and digital humanities.
Stay tuned for the next announcement!
Subscribe to the mailing list to get updates delivered directly to your email!
Richard Frackowiak (ENS-FR, EPFL-CH, CHUV-CH, UCL-UK)
Neuroimaging, informatics and brain simulation
The human brain is massively redundant in its organisation. When brain systems are damaged, or when reinforced by learning mechanisms, they reorganise at the level of synaptic connection strength, or by selection of new preferred connections between cortical regions. In adults, up to 50% of brain cell loss can be accommodated, if gradual, with little obvious effect on clinically observed features. This fact generates hypotheses of potential clinical interest. Can we monitor these mechanisms or their effects precisely? Can they be enhanced or modulated? What are the implications after damage and for any recovery that occurs? Clinical scientists have deployed new non-invasive functional imaging techniques to examine brain reorganisation and to detect pre-clinical neuronal loss. MR images are analysed with artificial intelligence techniques in a standard anatomical space, making integration with clinical and biological data possible. The eventual clinical ambition of the medical informatics component of the Human Brain Project is to link genetic and proteomic levels of brain organisation with rules that govern the cellular segregation of protein expression. Rules of protein expression that determine cellular morphology should in turn predict connectivity, and so on, until a constructive process of predictive simulation discovers the mechanisms of emergent behaviours.
Frackowiak RSJ, Markram H (2015). The future of human cerebral cartography: a novel approach. Phil. Trans. R. Soc. B 370: 20-32.
Miguel Nicolelis (Duke University Medical Center)
A View of the Future for BMI Basic Research and Clinical Application
In this talk I will initially discuss how BMI experiments will continue to play a major role in basic research by showing how they have already allowed us to demonstrate the existence of a variety of neurophysiological functions, such as space coding and social interaction mapping, not commonly associated with the motor cortex of non-human primates. I will also describe a combination of approaches that will allow BMI to fulfill its long-anticipated mission of providing new therapies for patients suffering from severe spinal cord injuries. In this context, I will describe the clinical advantages of a protocol that combines multiple non-invasive techniques into a single neurorehabilitation approach for such patients.
Jean-Rémi King
Language in brains and algorithms
Deep learning has recently made remarkable progress in natural language processing. Yet, the resulting algorithms fall short of the language abilities of the human brain. To bridge this gap, here we explore the similarities and differences between these two systems using large-scale datasets of magneto/electro-encephalography (M/EEG, n=1,946 subjects), functional magnetic resonance imaging (fMRI, n=589), and intracranial recordings (n=176 patients, 20K electrodes). After investigating where and when deep language algorithms map onto the brain, we show that enhancing these algorithms with long-range forecasts makes them more similar to the brain. Our results further reveal that, unlike current deep language models, the human brain is tuned to generate a hierarchy of long-range predictions, whereby the fronto-parietal cortices forecast more abstract and more distant representations than the temporal cortices. Overall, our studies show how the interface between AI and neuroscience clarifies the computational bases of natural language processing.
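As an illustration of the mapping step, here is a minimal encoding-model sketch of the standard "brain score" approach: language-model activations predict brain responses through a linear map, and the held-out correlation quantifies the fit. The arrays are random stand-ins, not the datasets above.

```python
# Minimal "brain score" sketch: fit a linear map from language-model
# activations to brain responses, score by held-out correlation.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_dims, n_channels = 1000, 768, 64
X = rng.standard_normal((n_words, n_dims))      # model activations per word
Y = rng.standard_normal((n_words, n_channels))  # brain response per word (e.g. MEG)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
ridge = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_pred = ridge.predict(X_te)

# Brain score: correlation between predicted and observed responses, per channel.
scores = [np.corrcoef(Y_pred[:, c], Y_te[:, c])[0, 1] for c in range(n_channels)]
print(f"mean brain score: {np.mean(scores):.3f}")
```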
Stéphane Deny
Bio-inspired Approaches to Disentanglement of Factors of Variations in Images
A core challenge in machine learning is to learn to disentangle natural factors of variation in data (e.g. object shape vs. pose). A popular approach to disentanglement consists in learning to map each of these factors to distinct subspaces of a model's latent representation. However, this approach has shown limited empirical success to date. Here, we show that, for a broad family of transformations acting on images, encompassing simple affine transformations such as rotations and translations, this approach to disentanglement introduces topological defects (i.e. discontinuities in the encoder). Inspired by 'mental rotation' in the brain, we study an alternative, more flexible approach to disentanglement which relies on recurrent latent operators, potentially acting on the entire latent space. We theoretically and empirically demonstrate the effectiveness of this approach to disentangle affine transformations.
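A minimal sketch of the recurrent-latent-operator idea: instead of confining a factor to a fixed latent subspace, a single learned operator acts on the whole latent space and is applied once per elementary transformation step. The encoder, decoder, shapes, and training objective below are illustrative assumptions, not the architecture of the paper.

```python
# Sketch: a learned operator W applied recurrently in latent space,
# so a k-step transformation of the image maps to k applications of W.
import torch
import torch.nn as nn

class LatentOperatorModel(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, latent_dim))
        self.decoder = nn.Linear(latent_dim, 28 * 28)
        # One learnable operator acting on the *entire* latent space.
        self.op = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x, steps):
        z = self.encoder(x)
        for _ in range(steps):   # recurrent application of the operator
            z = self.op(z)
        return self.decoder(z).view(-1, 1, 28, 28)

model = LatentOperatorModel()
x = torch.randn(8, 1, 28, 28)            # toy batch of images
x_hat = model(x, steps=3)                # predict the 3-step-transformed image
loss = nn.functional.mse_loss(x_hat, x)  # in training, the target would be the transformed image
```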
Anne Urai (Universiteit Leiden)
Choice history bias as a window into cognition and neural circuits
Perceptual choices not only depend on the current sensory input, but also on the behavioral context, such as the history of one’s own choices. Yet, it remains unknown how such history signals shape the dynamics of later decision formation. In models of decision formation, it is commonly assumed that choice history shifts the starting point of accumulation towards the bound reflecting the previous choice. I will present results that challenge this idea. By fitting bounded-accumulation decision models to behavioral data from perceptual choice tasks, we estimated bias parameters that depended on observers’ previous choices. Across multiple animal species, task protocols and sensory modalities, individual history biases in overt behavior were consistently explained by a history-dependent change in evidence accumulation, rather than in its starting point. Choice history signals thus seem to bias the interpretation of current sensory input, akin to shifting endogenous attention towards (or away from) the previously selected interpretation. MEG data further pinpoint a neural source of these biases in parietal gamma-band oscillations, providing a starting point for linking across species.
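To make the two candidate mechanisms concrete, here is a toy bounded-accumulation (drift diffusion) simulation contrasting a history-shifted starting point with a history-biased accumulation. All parameter values are illustrative, not fitted values from the study.

```python
# Toy DDM: evidence x drifts to a bound; history bias enters either as a
# shifted starting point or as a constant pull on the accumulation itself.
import numpy as np

def simulate_ddm(drift_bias=0.0, start_bias=0.0, n_trials=5000,
                 drift=0.0, bound=1.0, dt=0.01, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials)
    for i in range(n_trials):
        x = start_bias  # starting point, shifted toward the previous choice
        while abs(x) < bound:
            # drift_bias biases the accumulation process itself
            x += (drift + drift_bias) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        choices[i] = 1 if x > 0 else 0
    return choices.mean()  # fraction of "repeat previous choice" decisions

print("start-point bias:", simulate_ddm(start_bias=0.2))
print("drift bias:      ", simulate_ddm(drift_bias=0.2))
```

In fits to behavior the two mechanisms are distinguishable because a starting-point shift mainly affects fast decisions, whereas an accumulation bias persists throughout the trial.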
Jan Willem de Gee (Baylor college of medicine)
Catecholamines reduce the impact of previous choices on current behavior
Observers tend to systematically repeat (or alternate) their previous choices, even when judging sensory stimuli presented in a random sequence. Subcortical neuromodulatory systems, e.g., the noradrenergic locus coeruleus (LC), have widespread projections to the cortex. These projections release neuromodulators that change the "cortical arousal state" and thereby sculpt information processing in cortical decision networks, and the resulting behavior, in profound ways. The cortical arousal state can be read out non-invasively by tracking pupil size at constant luminance. During large pupil-linked arousal fluctuations, observers’ choices tend to be less biased, including less dependent on the recent history of choices. Here, we asked whether this is a causal relationship. We pharmacologically elevated catecholamine levels and show that observers' choice history biases are reduced compared to placebo sessions. With a mathematical model of behavior, we show that this effect is due to a reduction of a bias in the way observers accumulate decision-relevant sensory evidence.
Michael S. A. Graziano - Princeton University
Neuroscientists understand the basic principles of how the brain processes information. But how does it become subjectively aware of at least some of that information? What is consciousness? In my lab we are developing a theoretical and experimental approach to these questions that we call the Attention Schema theory (AST). The theory seeks to explain how an information-processing machine could act the way people do, insisting it has consciousness, describing consciousness in the ways that we do, and attributing similar properties to others. AST begins with attention, a mechanistic method of handling data. In the theory, the brain does more than use attention to enhance some signals at the expense of others. It also monitors attention. It constructs information – schematic information – about what attention is, what the consequences of attention are, and what its own attention is doing at any moment. Both descriptive and predictive, this “attention schema” is used to help control attention, much as the “body schema,” the brain’s internal model of the body, is used to help control the body. The attention schema is the hypothesized source of our claim to have consciousness. Based on the incomplete, schematic information present in the attention schema, the brain concludes that it has a non-physical, subjective awareness. In AST, awareness is a caricature of attention. In addition, when people model the attention of others, they implicitly model it in a schematic, magicalist way, as a mental energy in people’s heads. Our deepest intuitions about consciousness as a hard problem, or as a mystery essence, may stem from the brain’s sloppy models of attention.
Anil Seth - University of Sussex
Consciousness is, for each of us, the presence of subjective experience. Without consciousness there is no world, no self: there is nothing at all. In this talk, I will illustrate how the framework of predictive processing (or active inference) can help bridge from mechanism to phenomenology in the science of consciousness. I will advance the view that predictive processing, precisely because it is not itself a theory of consciousness, is an excellent theoretical resource for consciousness science. I will illustrate this view first by showing how conscious experiences of the world around us can be understood in terms of perceptual predictions, drawing on examples from psychophysics and virtual reality. Then, turning the lens inwards, we will see how the experience of being an embodied self rests on control-oriented predictive (allostatic) regulation of the interior of the body. This approach implies a deep connection between mind and life, and provides a new way to understand the subjective nature of consciousness as emerging from systems that care intrinsically about their own existence. Contrary to the old doctrine of Descartes, we are conscious because we are beast machines.
Michael Graziano is an American scientist, novelist, and composer, who is currently a professor of Psychology and Neuroscience at Princeton University. He has proposed the "attention schema" theory, an explanation of how, and for what adaptive advantage, brains attribute the property of awareness to themselves. His previous work focused on how the cerebral cortex monitors the space around the body and controls movement within that space. Notably he has suggested that the classical map of the body in motor cortex, the homunculus, is not correct and is better described as a map of complex actions that make up the behavioral repertoire. His novels rely partly on his background in psychology and are known for surrealism or magic realism.
Anil Seth is Professor of Cognitive and Computational Neuroscience and Co-Director of the Sackler Centre for Consciousness Science at the University of Sussex, Co-Director of the Canadian Institute for Advanced Research (CIFAR) Program on Brain, Mind and Consciousness, an ERC Advanced Investigator, Editor-in-Chief of the journal Neuroscience of Consciousness, and a Wellcome Trust Engagement Fellow. He has published more than 180 papers and is listed in the Web of Science ‘Highly Cited Researcher’ index (2019, 2020, 2021). His 2017 TED talk has been viewed more than twelve million times, and his new book Being You: A New Science of Consciousness was an instant Sunday Times top 10 bestseller, a Guardian book of the week, and a Financial Times best science book of the year. @anilkseth.
Marco Congedo - CNRS (French National Centre for Scientific Research), Grenoble Alpes University
In this talk we will review recent advances in the use of Riemannian Geometry for BCI classification. The Riemannian manifold of positive definite matrices will be briefly reviewed from an applicative point of view. Then, the general functioning of Riemannian classification algorithms for electroencephalography (EEG) will be explained. The talk is concluded with a demonstration of one such algorithm in action to allow control of a brain-computer interface based on event-related potentials.
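As a flavour of the machinery, here is a minimal sketch of the core Riemannian ingredient: the affine-invariant distance between symmetric positive definite (SPD) covariance matrices, which underlies minimum-distance-to-mean (MDM) classifiers. Data are synthetic; in practice, libraries such as pyriemann implement the full pipeline.

```python
# Affine-invariant Riemannian distance between SPD matrices, the
# building block of covariance-based BCI classifiers such as MDM.
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def spd_distance(A, B):
    """d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F for SPD A, B."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), "fro")

# Toy EEG trial -> spatial covariance matrix (channels x channels).
rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 256))        # 8 channels, 256 samples
cov = trial @ trial.T / trial.shape[1]

# MDM in a nutshell: assign the trial to the class whose (Riemannian)
# mean covariance is closest in this metric.
class_mean = np.eye(8)
print("distance to class mean:", spd_distance(cov, class_mean))
```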
Pedro L.C. Rodriguez - Parietal team, Inria-Saclay institute, Paris
In this talk, I will use the Riemannian geometric framework presented by Marco to introduce a transfer learning method for BCI datasets with different dimensionalities, coming from different experimental setups but representing the same physical phenomena. I will show results on time series obtained from different experimental setups (e.g. different numbers of electrodes, different electrode placements) which indicate that our proposal can indeed be used to transfer discriminative information between datasets that, at first glance, would be incompatible.
Authors: Guillaume Dumas & Suzanne Dikker
Progress in neuroimaging has allowed social neuroscientists to simultaneously capture the brain activity of multiple persons as they engage in real-time social interactions. Findings from such ‘hyperscanning’ studies have suggested that the extent to which neural activity becomes coupled between individuals, in a dyad or group, is indicative of a range of socially relevant factors, including personality traits as well as interpersonal factors such as the relationship between interlocutors and the ‘communicative success’ of an interaction. Our work ranges from careful laboratory studies to naturalistic real-world group investigations, from simulations and virtual agents to real-life dancers, and includes a range of (non)clinical populations. We lay out how these different approaches may (or may not) help us characterize the neurocognitive processes that give rise to inter-brain coupling, and discuss the history, hopes, and hypes of hyperscanning research.
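A toy illustration of one common inter-brain coupling measure: correlating two participants' band-limited amplitude envelopes. The signals are synthetic, and actual hyperscanning pipelines vary in metric and preprocessing.

```python
# Inter-brain coupling sketch: alpha-band envelope correlation between
# two participants' (synthetic) EEG channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
subj_a = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
subj_b = np.sin(2 * np.pi * 10 * t + 0.5) + rng.standard_normal(t.size)

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)  # alpha band
env_a = np.abs(hilbert(filtfilt(b, a, subj_a)))     # amplitude envelope
env_b = np.abs(hilbert(filtfilt(b, a, subj_b)))

coupling = np.corrcoef(env_a, env_b)[0, 1]
print(f"inter-brain envelope correlation: {coupling:.3f}")
```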
Bios
Guillaume Dumas is an interdisciplinary researcher combining cognitive neuroscience and systems biology to study human cognition across biological, behavioral, and social scales. He is an Assistant Professor in Computational Psychiatry of the Faculty of Medicine at the University of Montréal and the Director of the Precision Psychiatry and Social Physiology laboratory in the CHU Sainte-Justine research center. He holds the IVADO Chair in "AI and Mental Health", the FRQS J1 grant in "AI and Digital Health", and is an affiliated academic member of Mila – Quebec Artificial Intelligence Institute. His team studies the neurobehavioral mechanisms of social cognition and develops new approaches to psychiatry, from digital tools for assessment and rehabilitation to mathematical modelling for clinical decision-making and precision medicine.
Suzanne Dikker’s work merges cognitive neuroscience, performance art and education. She uses a ‘crowdsourcing’ neuroscience approach to bring human brain and behavior research out of the lab, into real-world, everyday situations, with the goal of characterizing the brain basis of dynamic human social communication. As a senior research scientist at the Max Planck — NYU Center for Language, Music and Emotion (CLaME), affiliate research scientist at the Department of Clinical Psychology at the Free University Amsterdam, and member of the art/science collective OOSTRIK + DIKKER, Suzanne leads various research projects, including MindHive, a citizen science platform that supports community-based initiatives and student-teacher-scientist partnerships for human brain and behavior research.
Department of Computer Science, University of Helsinki
Computational creativity is the art, science, philosophy and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative. In this short talk, I will introduce key concepts in computational creativity. The focus is on how to talk about creativity in the context of computers, not on technical implementations of creativity. I will also give examples that illustrate computational creativity and conceptual issues related to creativity of computers.
School of Biological and Chemical Sciences, Queen Mary, University of London
Creativity involves coming up with novel ideas. But where does it come from, and how can we nurture it? Often, we feel that ideas come out of the blue and that we have no control over them. But is that true and why? In this talk, you will learn about the brain mechanisms behind the generation and evaluation of novel ideas. You will also learn the neural mechanisms behind overcoming obvious ideas and how to apply this knowledge to become more creative.
Functional MRI has been used to map, via the BOLD signal, the relatively slow hemodynamic responses that follow changes in neuronal activity. Traditionally, physiological signals like cardiorespiratory pulsations have been regarded as noise that aliases and masks these slow hemodynamic responses. With the discovery of the glymphatic brain clearance system, physiological signals have quickly risen into the scientific spotlight as a source of highly valuable information about brain pathology preceding diseases such as Alzheimer’s disease and epilepsy. In my talk, I will run through our experience with ultrafast MREG imaging, which can capture these physiological signal sources, and how the data have been used to reveal novel biomarkers for major brain diseases.
The MR-encephalography (MREG) sequence enables the acquisition of whole-brain datasets with 3 mm isotropic resolution in less than 100 ms. It is based on the principles of one-voxel-one-coil imaging (OVOC), where the small sensitive volumes of individual coil elements in a multi-array coil are used as the primary source of spatial encoding. MREG is being used for highly sensitive detection of functional activity in task-based fMRI as well as for dynamic observation of resting-state activity. In addition to a detailed investigation of BOLD-based fMRI, it also allows the dynamic observation of cardiorespiratory pulsatility. In my talk, I will present the basic principles as well as current methodological developments and applications.
Probabilistic machine and biological learning under resource constraints
In this talk, I will cover two areas of research in my group whose common thread is the ability of intelligent systems to deal with extreme resource constraints.
In the first part of the talk, I discuss the problem of performing Bayesian inference when only a small number of model evaluations is available, such as when a researcher is fitting a complex computational model. To address this issue, I recently proposed a sample-efficient framework for approximate Bayesian inference, Variational Bayesian Monte Carlo (VBMC) [1,2].
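A minimal usage sketch on a toy 2-D target, assuming the Python port PyVBMC (github.com/acerbilab/pyvbmc) and its documented interface; the repository linked in [1] below is the original MATLAB implementation.

```python
# PyVBMC sketch: approximate the posterior of a toy 2-D target with a
# small budget of log-density evaluations. Bounds and target are toy choices.
import numpy as np
from pyvbmc import VBMC
from scipy.stats import multivariate_normal

def log_joint(theta):
    # Unnormalized log posterior; here simply a standard 2-D Gaussian.
    return multivariate_normal.logpdf(theta, mean=np.zeros(2))

x0 = np.zeros((1, 2))                                    # starting point
lb, ub = np.full((1, 2), -10.0), np.full((1, 2), 10.0)   # hard bounds
plb, pub = np.full((1, 2), -3.0), np.full((1, 2), 3.0)   # plausible bounds

vbmc = VBMC(log_joint, x0, lb, ub, plb, pub)
vp, results = vbmc.optimize()     # variational posterior + diagnostics
print(results["elbo"])            # estimated lower bound on log model evidence
```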
In the second part of the talk, I present joint theoretical work on how rational but limited agents should normatively allocate expensive memory resources in reinforcement learning [3], with predictions for human and animal behavior.
[1] Acerbi L (2018). Variational Bayesian Monte Carlo. NeurIPS '18. Code: https://github.com/lacerbi/vbmc
[2] Acerbi L (2020). Variational Bayesian Monte Carlo with Noisy Likelihoods. NeurIPS '20.
[3] Patel N, Acerbi L, Pouget A (2020). Dynamic allocation of limited memory resources in reinforcement learning. NeurIPS '20.
Bio:
Luigi Acerbi is an Assistant Professor at the Department of Computer Science of the University of Helsinki, where he leads the Machine and Human Intelligence research group. He is a member of the Finnish Center for Artificial Intelligence (FCAI), an affiliate researcher of the International Brain Laboratory, and an off-site visiting scholar at New York University.
Group website: http://www.helsinki.fi/machine-and-human-intelligence
Artificial Intelligence may beat us in chess, but not in memory
A large body of work in the theory of neural networks (artificial or biological) has been performed on networks composed of units with simple activation functions, most prominently binary units. Analysing such networks has led to some general conclusions. For instance, there is a long-held consensus that local biological learning mechanisms such as Hebbian learning are very inefficient compared to the iterative non-local learning rules used in machine learning. In this talk, I will show that when it comes to memory operations such a conclusion is an artefact of analysing networks of binary neurons: when neurons with graded responses, more reminiscent of the responses of real neurons, are considered, memory storage in neural networks with Hebbian learning can be very efficient and close to optimal performance.
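A toy illustration of the setting: a one-shot, purely local Hebbian (covariance) rule storing sparse patterns in a network of threshold-linear units. Sizes, thresholds and the retrieval dynamics below are illustrative choices, not those of the referenced paper.

```python
# Toy associative memory: local covariance rule + threshold-linear retrieval.
import numpy as np

rng = np.random.default_rng(0)
N, P, f = 500, 20, 0.2                    # neurons, patterns, coding level
patterns = (rng.random((P, N)) < f) * 1.0

# Local Hebbian/covariance rule: each weight depends only on its two neurons.
W = (patterns - f).T @ (patterns - f) / (N * f * (1 - f))
np.fill_diagonal(W, 0.0)

def retrieve(cue, steps=20, threshold=0.0):
    v = cue.copy()
    for _ in range(steps):
        v = np.maximum(0.0, W @ v - threshold)  # threshold-linear (ReLU) update
        n = np.linalg.norm(v)
        if n > 0:
            v = v / n * np.sqrt(N * f)          # keep overall activity bounded
    return v

noisy = patterns[0] * (rng.random(N) > 0.2)     # cue: pattern 0 with 20% deleted
out = retrieve(noisy)
print("correlation with stored pattern:", np.corrcoef(out, patterns[0])[0, 1])
```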
Ref: Schönsberg, F., Roudi, Y., & Treves, A. (2021). Efficiency of local learning rules in threshold-linear associative networks. Physical Review Letters, 126(1), 018301.
Bio:
Yasser Roudi is a Professor at the SPINOr group of the Kavli Institute for Systems Neuroscience and Centre for Neural Computation in Trondheim, Norway. His research is focused on understanding information processing in biological and artificial systems, primarily by using methods from statistical mechanics and information theory. He studied at Sharif University of Technology, Tehran and at SISSA, Trieste. Prior to joining the Kavli Institute, he worked at the Gatsby Computational Neuroscience Unit, University College London, NORDITA in Stockholm, and Weill Medical College of Cornell University in New York.
For more info see: www.spinorkavli.org
Jaakko Lehtinen
Self-supervised learning for medical imaging
In this talk, I’ll present an overview of self-supervised learning – using known invariants of the data to learn useful representations without the need for gathering large amounts of hand-labeled data – and its current and potential applications in several imaging modalities. In particular, approaches that combine self-supervision principles with simulators show promise for learning high-quality inversion models with easily obtainable data. I will present examples from our past work on (toy) MRI reconstruction, ongoing work on CT reconstruction, as well as hypothesize about potential similar approaches for MEG.
This talk is more a position statement & call for ideas and collaboration rather than a polished description of work already done.
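In that spirit, here is a minimal sketch of one known-invariant trick in the Noise2Noise style: if two independently corrupted views of the same underlying image are available, training one to predict the other requires no clean labels. The model and data are toy stand-ins, not the medical-imaging pipelines discussed in the talk.

```python
# Noise2Noise-flavoured sketch: train a denoiser with only noisy pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)
clean = torch.rand(16, 1, 32, 32)                  # unknown ground truth
noisy_in = clean + 0.1 * torch.randn_like(clean)   # corrupted input
noisy_tgt = clean + 0.1 * torch.randn_like(clean)  # independently corrupted target

model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    # Zero-mean noise means the expected target is the clean image,
    # so the denoiser converges toward it without ever seeing clean data.
    loss = nn.functional.mse_loss(model(noisy_in), noisy_tgt)
    loss.backward()
    opt.step()
```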
Bio:
Jaakko is a tenured Associate Professor at Aalto University, and a Distinguished Research Scientist at NVIDIA Research. He works on computer graphics, computer vision, and machine learning, with particular interests in generative modelling, realistic image synthesis, and appearance acquisition and reproduction. Overall, Jaakko is fascinated by the combination of machine learning techniques with physical simulators in the search for robust, interpretable AI.
Webpage:
https://users.aalto.fi/~lehtinj7/
Petri Ala-Laurila
Resolving Neural Coding from Single Photons to Visually-guided Behavior
All sensory information is encoded into spike trains by sensory neurons and sent to the brain. It has been exceedingly hard to resolve how the brain decodes these spike trains to drive animal behavior. We now use vision at its ultimate sensitivity limit to link the retinal output spike code to behavioral decisions. We show that behavioral detection of light increments relies on the most sensitive ON-type ganglion cells, whereas behavioral detection of light decrements relies on the most sensitive OFF-type ganglion cells. These results have fundamental consequences for understanding how the brain integrates information across the parallel information streams originating in the retina.
Bio:
Petri is a tenured Associate Professor at Aalto University and a Principal Investigator at the Faculty of Biological and Environmental Sciences, University of Helsinki. His lab aims to break new frontiers by revealing fundamental principles of how visual information is encoded in the retina and how the brain reads this neural code to drive animal behavior. Petri’s lab combines cutting-edge optical imaging, electrophysiological recording techniques, precise manipulations of retinal circuit function, mathematical modelling, and state-of-the-art (and beyond) behavioural assays to reach these goals.
Web: http://ala-laurila.biosci.helsinki.fi
Twitter: https://twitter.com/AlaLaurilaLab
Prof. Aapo Hyvärinen, Department of Computer Science, University of Helsinki
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been successfully used in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven to be extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Here, we show that this problem can be solved by using additional information, either in the form of temporal structure or an additional observed variable. Our methods are closely related to "self-supervised" methods heuristically proposed in computer vision, and also provide a theoretical foundation for such methods in terms of estimating a latent-variable model. Application of such methods to E/MEG analysis is promising, and will be pursued, for example, in the accompanying talk by Alex.
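A toy sketch of one way temporal structure enables identifiability, in the spirit of time-contrastive learning (TCL): a classifier is trained to predict which time segment an observation came from, and its learned features recover the nonstationary sources up to simple indeterminacies. Data and network sizes are stand-in assumptions.

```python
# Time-contrastive learning sketch: segment classification as a
# self-supervised route to nonlinear ICA.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_segments, seg_len, n_obs = 10, 200, 5
# Nonstationary sources: variance differs per segment (the key assumption).
scales = torch.rand(n_segments, 1, n_obs) * 2 + 0.1
sources = torch.randn(n_segments, seg_len, n_obs) * scales
x = torch.tanh(sources @ torch.randn(n_obs, n_obs))  # unknown nonlinear mixing
x = x.reshape(-1, n_obs)
labels = torch.arange(n_segments).repeat_interleave(seg_len)

feature = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_obs))
head = nn.Linear(n_obs, n_segments)
opt = torch.optim.Adam([*feature.parameters(), *head.parameters()], lr=1e-2)

for _ in range(500):   # train to identify the segment of each sample
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(feature(x)), labels)
    loss.backward()
    opt.step()

z = feature(x)  # learned features ~ estimated sources (up to indeterminacies)
```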
Dr. Alexandre Gramfort, Université Paris-Saclay, Inria, Palaiseau, France
Good representations are the building blocks of signal processing. While the Fourier representation reveals sinusoids buried in noise, wavelets better capture time-localized transient phenomena. Processing neuroimaging data is also based on representations: Morlet wavelets or the short-time Fourier transform (STFT) are common for electrophysiology (M/EEG), and spherical harmonics are used for diffusion MRI signals. In this talk I will cover recent works that aim to learn nonlinear representations of EEG and MEG time series in order to estimate good predictive models, including [1, 2], which can be seen as variants of ICA, and [3, 4], which are based on self-supervised learning using deep nonlinear networks (a minimal sketch of the fixed wavelet baseline follows the reference list below).
[1] Jas M, Dupré la Tour T, Simsekli U, Gramfort A (2017). Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding. Advances in Neural Information Processing Systems (NIPS) 30.
[2] Dupré la Tour T, Moreau T, Jas M, Gramfort A (2018). Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals. Advances in Neural Information Processing Systems (NeurIPS).
[3] Banville H, Albuquerque I, Moffat G, Engemann D, Gramfort A (2019). Self-supervised representation learning from electroencephalography signals. Proc. Machine Learning for Signal Processing (MLSP).
[4] Banville H, Chehab O, Hyvärinen A, Engemann D, Gramfort A (2020). Uncovering the structure of clinical EEG signals with self-supervised learning. Journal of Neural Engineering (JNE).
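For contrast with the learned representations above, here is a minimal sketch of the fixed Morlet wavelet baseline using MNE-Python (synthetic data; the function is part of MNE's documented time-frequency API).

```python
# Morlet wavelet time-frequency transform of M/EEG-like epochs.
import numpy as np
import mne

sfreq = 250.0
rng = np.random.default_rng(0)
epochs = rng.standard_normal((5, 8, 1000))   # epochs x channels x times
freqs = np.arange(4, 40, 2)                  # 4-38 Hz

power = mne.time_frequency.tfr_array_morlet(
    epochs, sfreq=sfreq, freqs=freqs, n_cycles=freqs / 2.0, output="power"
)
print(power.shape)   # (n_epochs, n_channels, n_freqs, n_times)
```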
Onerva Korhonen, Postdoctoral researcher, Aalto University | Visiting researcher, Universidad Politécnica de Madrid
Human brain function relies on interactions between different brain areas. Therefore, a complex network appears as a natural model for the brain, and network science indeed offers many tools for better understanding brain function. However, constructing a functional brain network is far from trivial. Here, we will concentrate on a question that strongly affects the obtained network properties: how should the network nodes be defined? Using multiple fMRI datasets, we show that currently used node definition strategies, including atlases of brain areas defined based on anatomy, function, or connectivity, produce nodes with low functional homogeneity. This may lead to loss of information and spurious links in the obtained network structure. Further, we show that the functional homogeneity of nodes changes in time, reflecting the roles the nodes play in network topology. Together, these results highlight the need for developing new, flexible node definition strategies that take into account the time-dependent nature of brain function.
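A minimal sketch of the node-level quantity at issue: functional homogeneity, computed here as the mean pairwise correlation of the voxel time series assigned to one parcel (synthetic data).

```python
# Parcel homogeneity: mean pairwise correlation of the voxels in one node.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 50, 200
shared = rng.standard_normal(n_timepoints)                 # common signal
voxels = 0.5 * shared + rng.standard_normal((n_voxels, n_timepoints))

corr = np.corrcoef(voxels)                                 # voxel x voxel
homogeneity = corr[np.triu_indices(n_voxels, k=1)].mean()
print(f"parcel homogeneity: {homogeneity:.2f}")  # low values flag poor nodes
```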
Massimiliano Zanin, Postdoctoral researcher, Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), Palma de Mallorca, Spain
One of the aspects of brain functional networks that has most frequently been neglected is their intrinsic uncertainty. Functional connectivity is assessed through some synchronisation metric calculated between pairs of physiological time series, and these are necessarily of limited (usually short) duration, due both to the characteristic temporal scales of cognitive tasks and to the non-stationarity of the brain, even at rest. Such limitedness implies that any functional relationship (i.e. link in the network) can only be known up to some level of uncertainty. We will discuss how such uncertainty can be estimated, and what consequences it has for our understanding of the underlying topological structure, through the analysis of multiple EEG and fMRI data sets. Is uncertainty only a problem? Not really. We will end by showcasing how it can codify information about the presence of pathological conditions and thus improve the performance of a diagnostic system.
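One simple way to attach uncertainty to a single functional link is to bootstrap the synchronisation metric over time points. The sketch below uses plain correlation on synthetic signals; note that resampling individual time points ignores autocorrelation, so block or surrogate schemes are used in practice.

```python
# Bootstrap confidence interval for one functional link (toy version).
import numpy as np

rng = np.random.default_rng(0)
n = 300                               # short recording, as is typical
x = rng.standard_normal(n)
y = 0.3 * x + rng.standard_normal(n)  # weakly coupled pair of signals

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)       # resample time points with replacement
    boot.append(np.corrcoef(x[idx], y[idx])[0, 1])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"link weight 95% CI: [{lo:.2f}, {hi:.2f}]")  # does it exclude zero?
```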
Prof. Satu Palva
Neuroscience Center, Helsinki Institute of Life Science, University of Helsinki
Center for Cognitive Neuroscience, Institute of Neuroscience and Psychology, University of Glasgow, UK
Perception, attention and working memory are fundamental cognitive functions that are based on parallel processing in many brain areas. Neuronal oscillations and their phase correlations, a.k.a. phase synchronization, are putative mechanisms for coordinating neuronal processing and communication across brain areas, thereby supporting the emergence of these functions. In humans, magnetoencephalography (MEG) has been used to study brain activity and the role of oscillatory activities in cognition. However, several challenges hinder the mapping of oscillatory network interactions from MEG data. I will discuss novel approaches for mapping complex oscillatory networks within and across frequencies, and the relationship of network oscillations with perception and memory.
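A minimal sketch of the basic phase-synchronization estimate: the phase-locking value (PLV) between two narrow-band signals via the Hilbert transform. The data are synthetic; real pipelines band-pass filter first and correct for field spread.

```python
# Phase-locking value between two (synthetic) narrow-band signals.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
sig1 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
sig2 = np.sin(2 * np.pi * 10 * t + 1.0) + 0.5 * rng.standard_normal(t.size)

phase1 = np.angle(hilbert(sig1))
phase2 = np.angle(hilbert(sig2))
plv = np.abs(np.mean(np.exp(1j * (phase1 - phase2))))  # 0 = none, 1 = perfect
print(f"PLV: {plv:.2f}")
```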
Dr Anton Tokariev
Baby Brain Activity (BABA) Center, University of Helsinki, Neuroscience Center
The first weeks of life are critical for the development of the human brain. During this period, the brain establishes major structural and functional networks that provide the foundation for later, higher neurocognitive functions. Early wiring of brain networks is guided by intrinsic neural activity as well as modulated by external factors. This process is very dynamic and sensitive to various environmental and medical factors. In this talk, I will discuss the early development of human brain networks and potential risk factors, and present analytical frameworks to assess and analyze functional networks in newborns from electroencephalographic (EEG) data.
Anu-Katriina Pesonen, Leader of the Sleep & Mind Research Group, Chair of the BA and MA programs of Psychology, Faculty of Medicine, Research Program Unit, University of Helsinki, Finland
Many studies have shown that sleep is a powerful enhancer of human information processing, and that it is closely involved in emotion regulation as well.
I will present some essential questions in current sleep research related to these two areas. I will also provide examples of the most widely used, as well as emerging, sleep research methods, and give a snapshot of studies from the Sleep & Mind Research Group at the University of Helsinki, covering both cohort studies and experimental sleep research.
Kimmo Alho, Department of Psychology and Logopedics, University of Helsinki, Advanced Magnetic Imaging Centre, Aalto University
Few brain imaging studies have used real-life-like audiovisual situations to elucidate the remarkable ability of humans to listen selectively to a particular speaker in noisy situations. In our recent experiments, participants attended to video clips of dialogues between two persons in the presence of an irrelevant speech stream in the background. The auditory quality of the dialogues was manipulated by noise-vocoding, and their visual quality was manipulated by masking speech-related facial movements. In control conditions, the participants were instructed to attend to a fixation cross and to ignore the speech streams. Functional magnetic resonance imaging (fMRI) indicated higher activity in auditory cortical areas for higher auditory or visual quality. As expected, attention to the dialogues also enhanced activity in these areas. Unexpectedly, however, attention to the emotionally neutral dialogues also enhanced activity in brain areas that, according to previous studies, are involved in social judgement. Moreover, one of our studies compared brain activity during contextually coherent and incoherent dialogues. This semantic manipulation modulated attention-related activity in auditory cortical areas, including the primary and adjacent auditory cortex. This finding appears to contradict traditional attention models, which propose that semantic processing of attended speech occurs only after its low-level auditory processing.
Language in the brain: from mapping processes to modelling mechanisms
Prof. Riitta Salmelin, Department of Neuroscience and Biomedical Engineering, Aalto University
Neuroimaging studies of human cognition have so far primarily focused on group-level summary descriptions of presumed processing stages. Based on this type of essential groundwork, we know what kind of activation patterns to expect in basic language paradigms, such as spoken and written word perception and picture naming. More recently, with a lot of help from machine learning, it has become possible to move forward to address representations and mechanisms, and to elucidate the brain organization of meaning and knowledge. Furthermore, machine learning techniques are beginning to facilitate individual-level cortical fingerprinting and the estimation of interindividual functional similarity.
Social networks, time, and individual differences
Prof. Jari Saramäki, Department of Computer Science, Aalto University
I will show how digital communication records—from call metadata to emails—reveal characteristic, persistent patterns of social behaviour as well as individual differences in this behaviour. I will focus on fine-grained, time-stamped communication data that allow constructing temporal social networks. I will first talk about the similarities and differences in how we maintain our personal networks on timescales from months to years. Then, I will focus on the shorter timescales of daily patterns, and show how various data sets can be used to reveal people’s chronotypes (morning/evening-active persons); I will also discuss how people’s chronotypes are related to their personal networks and their network centrality.
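As a toy illustration of chronotype extraction from time-stamped records, one can bin a person's communication events by hour of day and compare morning with evening activity. The timestamps below are synthetic stand-ins; the actual analyses use richer temporal-network models.

```python
# Chronotype sketch: hour-of-day activity profile from call timestamps.
import numpy as np

rng = np.random.default_rng(0)
weights = np.linspace(1.0, 7.0, 24)        # toy profile: more activity later in the day
call_hours = rng.choice(24, size=500, p=weights / weights.sum())
counts = np.bincount(call_hours, minlength=24)

morning = counts[6:12].sum()               # 06:00-11:59
evening = counts[18:24].sum()              # 18:00-23:59
print("evening-type" if evening > morning else "morning-type")
```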
Bonus: Special presentation by Aalto IT Services (ITS): IT Services for Researchers