

Perception and Action

Discoveries

Understanding human tool use: information from vision and touch is integrated

When we use tools our brains receive information about the sizes, locations, etc. of objects from both vision and touch. An ‘optimal brain’ would combine or integrate these signals, because that would allow us to estimate properties of the world with the greatest possible precision over a wide range of situations. For ‘multisensory integration’ to be effective, however, the brain must integrate related signals, and not unrelated ones (imagine, for example, if the apparent size of the coffee cup in your hand was affected by irrelevant objects that you also happened to look at, like buildings, people and so on).
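To make the idea of ‘optimal’ combination concrete, the standard account is that each estimate is weighted by its reliability (the inverse of its variance), which also predicts how precise the combined estimate should be. The sketch below illustrates this reliability-weighted averaging with made-up numbers; it is a textbook formulation of the idea, not the exact model or values used in the study.

    import math

    # Reliability-weighted ("maximum-likelihood") cue combination.
    # All numbers are illustrative.
    sigma_vision = 4.0        # sd of the visual size estimate (mm)
    sigma_haptic = 8.0        # sd of the haptic size estimate (mm)
    estimate_vision = 52.0    # visual estimate of object size (mm)
    estimate_haptic = 48.0    # haptic estimate of object size (mm)

    # Each signal is weighted by its relative reliability (1/variance).
    r_vision = 1.0 / sigma_vision ** 2
    r_haptic = 1.0 / sigma_haptic ** 2
    w_vision = r_vision / (r_vision + r_haptic)
    w_haptic = 1.0 - w_vision

    combined_estimate = w_vision * estimate_vision + w_haptic * estimate_haptic

    # The combined estimate is more precise than either signal alone.
    combined_sigma = math.sqrt((sigma_vision ** 2 * sigma_haptic ** 2) /
                               (sigma_vision ** 2 + sigma_haptic ** 2))

    print(f"combined estimate: {combined_estimate:.1f} mm")
    print(f"combined sd: {combined_sigma:.2f} mm (best single cue: {sigma_vision} mm)")

With these example values vision receives 80% of the weight, and the combined standard deviation (about 3.6 mm) is smaller than that of vision alone.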

The brain could achieve this by considering the statistical similarity of estimates from vision and touch (haptics) in terms of magnitude, spatial location, time of occurrence, and so on. This would be effective because, in normal grasping, visual and haptic signals arising at the same time and place are in fact more likely to originate from the same object than more dissimilar signals. Using tools complicates the problem, however, because tools systematically perturb the relationship between (visual) object size and location and the positioning and opening of the hand required to feel an object. Said another way, our visual estimates are derived from the object itself, but our hands can only feel the object via the handles of the tool. Our research investigated whether the brain is able to understand the resulting sensory remapping, and appropriately integrate information from vision and haptic touch when using tools.
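As a purely illustrative example of the remapping involved (the tool geometry here is assumed, not taken from the study), consider a simple pliers-like tool whose tips open half as far as its handles. Only after the felt hand opening is converted into tool-tip units does it become comparable with what the eyes see:

    # Hypothetical pliers-like tool: the tips open by a fixed fraction of the
    # handle opening. Converting the felt handle opening into tool-tip units
    # is what makes the haptic signal comparable with the visual one.
    TOOL_GAIN = 0.5  # assumed ratio of tip opening to handle opening

    def felt_size_at_tooltips(handle_opening_mm: float) -> float:
        """Object size implied at the tool tips by the felt hand opening."""
        return handle_opening_mm * TOOL_GAIN

    # A 100 mm hand opening corresponds to a 50 mm object at the tips; only
    # after this remapping does it make sense to integrate the haptic signal
    # with the visually seen 50 mm object.
    print(felt_size_at_tooltips(100.0))   # -> 50.0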

We used a stereoscopic display, and force-production robots, to create a virtual environment in which we could independently control the visual and haptic properties of objects, and create ‘virtual tools’. Psychophysical results showed that the brain does integrate information from vision and haptics during tool use in a near-optimal way, suggesting that when we use tools, the haptic signal is treated as coming from the tool-tip, not the hand.

As well as extending our understanding of human tool use, these results have practical implications for the design of visual-haptic interfaces such as surgical robots. This work is published in the Journal of Vision.

Building 3D stereoscopic displays that better suit the human visual system

Stereoscopic 3D (S3D) displays are enjoying a huge surge in popularity. In addition to entertainment, they have become common in a range of specialist applications that includes operation of remote devices, medical imaging, scientific visualisation, surgery and surgical training, design, and virtual prototyping. There have been significant technological developments (the red-green glasses are long gone), but some fundamental problems remain. Specifically, S3D can induce significant discomfort and fatigue, and give rise to unwanted distortions in depth perception.

As S3D viewing goes from an occasional to a routine activity, and more of us become users of the technology, there is a pressing need to better understand how our visual system interacts with S3D media. An EPSRC-funded project in Simon Watt’s lab has investigated this issue. In the real world, when we move our eyes to look at near and far objects (vergence), muscles in the eye also change the shape of the lens (accommodation) to focus at the same distance. In S3D viewing, however, we have to look at ‘objects’ that can be in front of or behind the screen, while remaining focused at the screen surface (see picture). This ‘decoupling’ is a key cause of problems, and the researchers have constructed a novel display (initially developed with researchers in the United States) designed to address this by simulating a continuous range of focal distances, as experienced in the real world, using a small number of discrete focal planes.
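One common way to approximate a continuous range of focal distances with a few fixed focal planes is to split each point's image intensity between the two planes that bracket its simulated distance, weighting linearly in dioptres. The sketch below illustrates that general idea; the plane positions and the linear weighting rule are assumptions for illustration, not a specification of the actual display.

    # Illustrative depth-weighted blending across a small set of focal planes.
    FOCAL_PLANES_D = [0.2, 0.6, 1.0, 1.4]   # focal-plane distances in dioptres (assumed)

    def plane_weights(target_d: float) -> dict:
        """Split a point's intensity between the two planes bracketing target_d."""
        planes = sorted(FOCAL_PLANES_D)
        if target_d <= planes[0]:
            return {planes[0]: 1.0}
        if target_d >= planes[-1]:
            return {planes[-1]: 1.0}
        for near, far in zip(planes, planes[1:]):
            if near <= target_d <= far:
                w_far = (target_d - near) / (far - near)   # linear in dioptres
                return {near: 1.0 - w_far, far: w_far}

    # A point simulated at 0.9 D gets 25% of its intensity on the 0.6 D plane
    # and 75% on the 1.0 D plane.
    print(plane_weights(0.9))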

The researchers have carried out extensive experiments to develop and evaluate this approach, including measuring eye movements and the focusing response of the eye. The results have provided information on how to build improved S3D displays and how to produce effective S3D content. This has led to invited talks and workshops not only for specialists in the S3D industry, but also at more general display-industry venues and with international standards-setting bodies (the Society of Motion Picture and Television Engineers, or SMPTE).

This research also has a significant basic science component, and the results have provided new insights into the mechanisms underlying how our eyes focus.


Paralinguistic cues and social interaction

Social interactions involve more than "just" speech. Equally important is a perhaps more primitive, non-linguistic mode of communication. While what we say is usually carefully and consciously controlled, how we say things, i.e. the sound of our voice when we speak, is not. The sound of our voice may therefore be seen as a much more "honest" signal. The researchers are interested in the effects of these paralinguistic cues on social interaction and in how they are processed in the brain. One such signal is vocal attractiveness, which is known to influence a speaker's success in job applications, elections, and short-term sexual relationships. In a series of experiments, the researchers asked what makes a voice attractive.

Dr. Patricia Bestelmeyer and colleagues found that voices perceived as more attractive tended to be more similar to an average voice composite in terms of their formant frequencies, and sounded "smoother".
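The "similarity to an average voice" idea can be illustrated with a simple distance measure in formant space. The voices and formant values below are invented, and the actual study used voice averaging and more detailed acoustic analysis; this sketch only shows the kind of computation involved.

    import math

    # Invented first and second formant frequencies (F1, F2, in Hz) for a few voices.
    voices = {
        "voice_a": (620.0, 1150.0),
        "voice_b": (700.0, 1320.0),
        "voice_c": (655.0, 1210.0),
    }

    mean_f1 = sum(f1 for f1, _ in voices.values()) / len(voices)
    mean_f2 = sum(f2 for _, f2 in voices.values()) / len(voices)

    def distance_to_average(f1: float, f2: float) -> float:
        """Euclidean distance of a voice from the average voice in formant space."""
        return math.hypot(f1 - mean_f1, f2 - mean_f2)

    # Smaller distances correspond to more 'average' voices, which in the study
    # tended to be rated as more attractive.
    for name, (f1, f2) in voices.items():
        print(name, round(distance_to_average(f1, f2), 1))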

Next, she measured brain activity while participants listened to the same vocal sounds ("ah") and performed an unrelated task. Activity in voice-sensitive auditory cortex and in inferior frontal regions was strongly correlated with implicitly perceived vocal attractiveness. While the involvement of auditory areas reflected the processing of acoustic contributors to vocal attractiveness (e.g. frequency composition and smoothness), activity in inferior prefrontal regions (traditionally associated with speech) reflected the overall perceived attractiveness of the voices despite their lack of linguistic content. Dr. Bestelmeyer's results provide an objective measure of the influence of hidden, non-linguistic aspects of communication signals on cerebral activity.
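In simplified form, the analysis amounts to correlating, voice by voice, listeners' attractiveness ratings with the response of a brain region to the same voices. The sketch below shows only that generic correlation step, with made-up numbers; it is not the study's actual fMRI pipeline.

    # Pearson correlation between per-voice attractiveness ratings and the
    # response of one brain region to the same voices (all values invented).
    from statistics import correlation   # Python 3.10+

    attractiveness = [3.1, 4.5, 2.2, 4.9, 3.8]   # mean rating per voice
    region_response = [0.8, 1.4, 0.5, 1.6, 1.1]  # mean activity per voice (a.u.)

    print(round(correlation(attractiveness, region_response), 2))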