The purpose of this Research Topic is to reflect and discuss links between neuroscience, psychology, computer science and robotics with regard to the topic of cross-modal learning, which has, in recent years, emerged as a new area of interdisciplinary research. The term cross-modal learning refers to the synergistic synthesis of information from multiple sensory modalities, such that the learning that occurs within any individual sensory modality can be enhanced with information from one or more other modalities. Cross-modal learning is a crucial component of adaptive behavior in a continuously changing world, and examples are ubiquitous: learning to grasp and manipulate objects; learning to walk; learning to read and write; learning to understand language and its referents; and so on. In all these examples, visual, auditory, somatosensory or other modalities have to be integrated, and learning must be cross-modal. In fact, the broad range of acquired human skills is cross-modal, and many of the most advanced human capabilities, such as those involved in social cognition, require learning from the richest combinations of cross-modal information.

In contrast, even the very best systems in Artificial Intelligence (AI) and robotics have taken only tiny steps in this direction. Building a system that composes a global perspective from multiple distinct sources, types of data, and sensory modalities is a grand challenge of AI, yet it is specific enough that it can be studied rigorously and in such detail that the prospect of deep insights into these mechanisms is quite plausible in the near term. Cross-modal learning is a broad, interdisciplinary topic that has not yet coalesced into a single, unified field. Instead, there are many separate fields, each tackling the concerns of cross-modal learning from its own perspective, with currently little overlap. We anticipate an accelerating trend towards the integration of these areas, and we intend to contribute to that integration. By focusing on cross-modal learning, the proposed Research Topic can bring together recent progress in artificial intelligence, robotics, psychology and neuroscience.

Crossmodal interaction in situated language comprehension is important for effective and efficient communication. The relationship between linguistic and visual stimuli provides mutual benefit: while vision contributes, for instance, information that improves language understanding, language in turn plays a role in driving the focus of attention in the visual environment. However, language and vision are two different representational modalities, which accommodate different aspects and granularities of conceptualization. Based on fundamental psycholinguistic insights into the nature of situated language comprehension, we derive a set of performance characteristics that facilitate the robustness of language understanding, such as crossmodal reference resolution, attention guidance, and predictive processing. Integrating them into a single, coherent system remains a challenge, one that could profit from inspiration by human crossmodal processing.
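To make one of these performance characteristics concrete, the following toy sketch illustrates crossmodal reference resolution: a linguistic referring expression is matched against the attributes of visually perceived objects so that language drives the selection of a visual referent. The scene representation, attribute names, and scoring scheme are illustrative assumptions, not drawn from any particular system discussed here.

```python
# Toy sketch of crossmodal reference resolution: pick the scene object
# whose visual attributes best match the words of a referring expression.
# The scene format and scoring are simplifying assumptions for illustration.

def resolve_referent(expression, scene):
    """Return the scene object with the most attribute values
    mentioned in the referring expression."""
    words = set(expression.lower().split())
    best, best_score = None, -1
    for obj in scene:
        # Count how many of the object's attribute values appear
        # in the expression (a crude crossmodal match score).
        score = sum(1 for value in obj["attributes"].values() if value in words)
        if score > best_score:
            best, best_score = obj, score
    return best

scene = [
    {"id": "obj1", "attributes": {"color": "red", "shape": "cube"}},
    {"id": "obj2", "attributes": {"color": "blue", "shape": "cube"}},
    {"id": "obj3", "attributes": {"color": "red", "shape": "ball"}},
]

referent = resolve_referent("the red cube", scene)
print(referent["id"])  # obj1: both "red" and "cube" match
```

Even this crude matcher shows the mutual benefit described above: the words narrow visual attention to a single candidate, while the visual attributes ground the words in a concrete referent.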