
Is Self Face Recognition Affected By Current Multi-sensory Inputs? 

My final year project: testing the effects of multisensory integration on facial recognition.

Date : 07/06/2012

Author Information

Edward

Subject : Psychology

Is self face recognition affected by current multi-sensory inputs? 

Edward Barber Department of Psychology, Royal Holloway, University of London, Egham, Surrey, United Kingdom.

Abstract

One may argue that the most important face in our lives is our own: it is essential to communication, relationships and countless aspects of everyday life. This importance is somewhat unusual, as we never directly view our own faces. Nonetheless, we each have a sense of what we look like, and previous studies have shown that people recognise their own face far faster than they recognise someone else's. Do we know we are looking at ourselves through visual experience alone, through seeing ourselves move in mirrors, through touching our own features, or through some combination of these? A recent facial recognition study suggests that multisensory input contributes to identifying our own face. The present study attempts to determine whether tactile stimulation applied directly to the participant, while an unfamiliar face is touched either in synchrony or in asynchrony, is enough to convince participants that they are looking at themselves when in fact they are looking at someone else. Before and after this visuo-tactile stimulation, participants performed a self-recognition task. Our results show that certain conditions of multisensory stimulation do have an effect on self-face perception.

Introduction

Self-identification is often overlooked in everyday life, a phenomenon taken for granted amongst human adults because it is accomplished so easily. However, observations of animals and young infants show that not all brains are so readily equipped for self-identification. We are therefore led to ask what mechanisms influence self-identification. We considered the possibility that self-identification is not simply a visual process, but relies on a variety of sensory inputs, such as vision, touch and movement. We sought to determine whether these inputs could be manipulated through multisensory stimulation in order to alter participants' ability to recognise themselves. A common and natural assumption is that vision is our primary means of recognising ourselves. We must then ask whether we know what we look like purely through a lifetime of experience with mirrors, photographs and videos, or whether additional information is required for this knowledge.

At present, our understanding reaches little beyond the belief that, through a lifetime of experience, we learn what we look like and come to recognise ourselves in an immediate sense. This is evident when observing a picture of oneself: a lifetime of experience has taught us what we look like, and as a result we are able to determine that photographs are, on the whole, visual representations of ourselves. When looking in a mirror, given the context of the situation and prior experience with mirrors, most human adults immediately self-recognise. We can therefore be sure that sight plays a key role in self-identification. Corresponding movements, facial expressions, mouth movements and so on all add to the determination that we are observing a reflection. One may ask which part of the brain has developed in human adults to allow self-identification, and why. More importantly, we may seek to determine which inputs are essential to this process. If it is not simply a case of sight, but rather a combination of several sensory inputs, then an opportunity is presented to manipulate the visual representation of the self. What is not entirely understood is how a sense of ownership is determined by the brain. This question splits into two: how we determine what is a part of our body, and how we determine what is our own face. Both are influential to our identity, but studies suggest the two are not grouped together within the brain. When considering the sense of body ownership, there are other experiments we can draw on to identify how humans recognise their own bodies. Botvinick and Cohen (1998), as pioneers of the rubber hand illusion, discovered that when a prosthetic hand is touched in synchrony with the participant's own hand, it is perceived as actually being a part of their body. When the stimulation is asynchronous, however, there is no effect.
Botvinick and Cohen suggested that a similarity between the participant's hand and the rubber hand is essential to a successful rubber hand illusion. If true, this could be potentially damning to attempts to create self-face attribution. Longo et al. (2009) undertook a study of the rubber hand illusion in which they attempted to identify whether the size and shape of the viewed hand were significant factors when the rubber hand was stroked in synchrony with the participant's own hand. They found that the hand's appearance, including hand shape, skin luminance and third-person ratings, was not significant in creating a more successful rubber hand illusion. However, Longo did note that "incorporation of the rubber hand into the body image did affect the similarity that participants perceived between their own hand and the rubber hand." This is significant to our study because it suggests that when looking at our bodies, we determine a sense of ownership through a range of multisensory inputs, not through sight alone. If this logic can be applied to self-face recognition, then perhaps synchronous stimulation during morphing can create a perceived similarity between the participant's own face and the unfamiliar face. Going beyond simply identifying oneself in a mirror with our eyes, we must also consider the sensory inputs from other parts of the body that allow self-recognition to happen. For example, when we touch our own face we see this movement reflected in a mirror, which, like noticing a spot on the face, allows us to determine that it is in fact ourselves we are looking at. Likewise, if someone else touches us on the face, we feel the stimulus as well as observing it, and again determine that it is indeed ourselves we are looking at. We therefore had to consider not only multisensory input altering self-face recognition, but also the delusions of misidentification.
Breen et al. (2000) studied a variety of cases in an effort to understand the means by which people recognise their own and familiar faces. They observed several conditions in which damage to certain areas of the brain had resulted in the inability to recognise a familiar face, or in faces being mistakenly identified as familiar when they were not. Whilst none of the participants intended for our study had any known brain damage or psychological condition, some of the insights of Breen et al. can be taken into consideration. It seems that psychological problems, or lesions to the fusiform face area amongst other regions, are responsible for the misidentification of one's own face. There is therefore reason to suggest that our hypothesis will not hold when testing healthy participants, as their accurate means of self-identification will still be intact. The extent of the effect of tactile stimulation is relatively uninvestigated, and as a result we rely purely on theory to predict the result of our experiment. Here we can turn to Tsakiris (2008), who described multisensory integration as suggesting "that a strong correlation between synchronous visual and tactile signals influences self-face recognition over and above the mere presence of multisensory stimulation, and it may alter internal representations of one's own face, analogous to the effects of multisensory stimulation for body ownership." This is significant because a firm study exists to suggest that influencing one sensory input may have a direct influence on another, allowing the incorrect interpretation of one's own face. Moreover, the differences between face and body ownership pose interesting possibilities: even though both the face and the body carry a sense of ownership, perhaps the two are not treated by the brain in the same way. The face, for example, carries a greater sense of self with regard to communication and identification, as well as expression and interpretation of the outside world.
The body, on the other hand, could be argued to be less individual, while remaining a strong part of 'the self'.

Case studies of those suffering from mental health issues do indicate that the brain can mistakenly identify faces, sometimes resulting in violent incidents; outside of healthcare settings, however, the possibilities in the general population are less clear.

When studying facial identification, particularly self-face identification, we must ask what causes a sense of ownership when looking at a part of our body. Longo and Haggard (2009) observed the influence of a sense of agency on motor reactions of the human hand. In their experiment, participants viewed video images of their hands on a computer screen while their fingers were touched or moved. The key finding was that when there was a sense of agency over the hand, that is to say, when the hand on the screen was deemed to be their own, reactions were significantly faster. When a delay was introduced between their own movements and the image, reactions slowed, as the hand was not felt to be a part of their body. This is significant to our study because it presents the possibility of merging tactile stimulation and visual stimuli. If a participant sees a hand on a screen and can associate it with themselves through synchronous active movement, then associating an unfamiliar face with oneself seemed possible when coupled with synchronous stimulation. This was the basis of our hypothesis, in which participants would be stroked as though looking into a mirror. The hope was that an illusion of looking into a mirror would be achieved, possibly resulting in self-face identification in a previously unfamiliar face. Lenggenhager et al. (2008) used illusory self-location techniques to deceive participants as to the location of their bodies. Using synchronous and asynchronous stroking at various locations on the body, participants were significantly misled as to the location of their extremities. Lenggenhager's results showed that the brain can assign physical location to where touch is seen.
The findings of this study are important to our own investigation: in asynchronous visuo-tactile conditions, stroking was perceived where it was felt, whereas in synchronous conditions the sensation was perceived where it was seen. This suggests that during asynchronous stroking, tactile sensation prevails over visual stimulation, whereas during synchronous stimulation, vision determines the perceived location even when it is not spatially accurate.

Bredart (2003) investigated how people recognise the usual orientation of their own face when considering asymmetrically located details; that is, he investigated the viewing of one's own face in orientations different from the mirror view we are accustomed to. Bredart found that the way in which we identify our own face differs from how we identify other people's. He noted that specific features, such as moles, blemishes or scars, are more significant when looking at ourselves than when identifying familiar faces. Bredart was more focused on mirrored representations of faces than our experiment is, which limits the relevance of some of his findings to this study. However, his work does bear on the means by which we determine that we are in fact looking at ourselves. Bredart discusses Tong (1999), who discovered that we are faster at identifying our own face than the face of a stranger. This is significant to our study: when observing a change from another's face to our own, we may infer that the change would be identified faster than a change from our own face to another's. Given the belief that strong representations of our own facial features are stored in the brain, it may not be a straightforward process to manipulate participants into seeing themselves as someone else. Arguments such as Tong's run against our hypothesis, suggesting that while multisensory integration may confuse the brain in some regards, a more deep-seated sense of self is likely to override the illusion. One might ask whether our illusion could be created if practised on an individual from birth. Possible areas for consideration include which senses specifically influence self-identification. For example, in a commentary on cross-modal facilitation, Platek (2004) investigated the influences that shape the human determinants of self.
For example, it was discovered that hearing or seeing one's name, or smelling one's own odour, is a significant prime for faster self-identification. This is particularly interesting, as we never truly see our own faces except in a reflection or in photographs. We are also less consciously aware of our own scent than of other people's; a person who has not bathed in several days may be unaware of any body odour, whilst others are very aware of it. Nevertheless, some sort of prime exists that encourages self-recognition faster than identification of someone else. We may surmise an evolutionary reason concerning genes: in order to procreate successfully, we are drawn to people who look similar to us. Alternatively, one may simply infer that human egotism encourages a love of oneself and one's image. The significance to our study is that when observing our own faces, we may expect to react quickly to the sight of ourselves. In a test situation, it would therefore be logical to hypothesise that when the participant's face is turning into someone else's, they would react more slowly, as they are still aware of some of their own features. Conversely, when observing someone else's face turn into their own, we may hypothesise that their own facial features would be recognised quickly and so they would react faster.

The idea that multisensory stimulation may produce surprising effects is not a new one. Carriere et al. (2008) investigated multisensory stimulus combinations presented to the brain, attempting to determine the locations within which multisensory stimuli were processed. They found that when combinations of stimuli were presented at weakly effective locations, large, superadditive response enhancements were often present. These findings added to the belief that, with minor, subtle stimulation, it may in fact be possible to create an illusion of ownership over an unfamiliar face.

Research has been undertaken into the relationships between different sensory modalities. However, much of this research has concerned the integration of auditory and visual input. Portois et al. found significant pairing between auditory and visual areas of the brain in face recognition. What we sought to determine was how touch interacts with visual stimulation.

Many other studies, such as those by Meredith & Stein (1986), focused on the interplay between auditory and visual stimulation. They determined that there is a visual dominance in our sensory systems: a loud sound from a different location, for example, cannot confuse us as to the location of an object we see with our eyes. However, when information from another sensory system correlates with our vision, it is more readily accepted. Meredith & Stein also determined that the timing of these stimuli is important to how the brain processes them: what we see and what we hear are more likely to result in multisensory integration when the information arrives in synchrony. This was further inspiration to determine the influence of touch on vision. Instead of playing sounds at specific locations, tactile stimuli would therefore be presented both synchronously and asynchronously, in an effort to create a spatial and temporal illusion.

As yet, only one study is apparent in which facial identification is coupled with visuo-tactile stimulation, and it is this research that the present study attempts to replicate. Tsakiris (2008) sought to identify whether multisensory integration would affect participants' ability to self-recognise, and found a significant effect of such stimulation on self-face recognition. This finding is the primary source of our belief that our experiment may show a significant effect. Whether multisensory stimulation will have an effect or not can, for all our available theory, only be determined through experimentation. Ultimately, the prime motivation for investigating self-face recognition in this experiment was to determine the effects of multisensory inputs. Having researched the field, it was hypothesised that participants would attribute more frames of an unfamiliar face to themselves after synchronous tactile stimulation; that is, individuals would recognise another individual's face as their own. Conversely, they would attribute fewer frames after being touched in asynchrony.
