Building a soundscape vs processing it

I never realized that our brain is doing much more than just processing sounds. By processing, I mean identifying voices and instruments, correlating with memories, building models to anticipate what comes next, and so on. It is also building the acoustic scene / simulation / reality in the first place for us to perceive.

I’m inclined to bet that building the acoustic/visual/(insert sense) scene takes a major portion of the brain’s resources, which would explain why the processing itself takes a while too: listening to the same song over and over before its structure becomes clear, or knowing you have heard something before without being able to place the reference.

Besides, I feel that just as we can zoom in on or focus on part of a visual scene, we can also listen to just a part (by direction or content) of the audio landscape. I am assuming that HRIRs (head-related impulse responses) play a role here, and we should look for or propose a psychology experiment that studies how subjects move their eyes/heads/ears to better process a sound. Eye/head motion might be triggering some alternative processing of the incoming sound. There could be a function here that maps head orientation to which HRIRs the brain picks up (if it does).
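A rough sketch of that head-orientation-to-HRIR mapping, for concreteness. Everything here is hypothetical and simplified: `hrir_bank`, `pick_hrir`, and `render_binaural` are names I made up, and the "HRIRs" are toy delay-and-gain filters standing in for real measured impulse responses. The only real machinery is the standard binaural-rendering idea of convolving a mono source with a left/right HRIR pair chosen by direction.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    # Standard binaural rendering: convolve the mono source with a
    # left/right HRIR pair to place it at that pair's direction.
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

def pick_hrir(hrir_bank, azimuth_deg):
    # Hypothetical orientation-to-filter map: pick the nearest measured
    # direction in the bank for the current head-relative azimuth.
    angles = np.array(sorted(hrir_bank.keys()))
    nearest = angles[np.argmin(np.abs(angles - azimuth_deg))]
    return hrir_bank[nearest]

def toy_hrir(delay_samples, gain):
    # Stand-in for a measured HRIR: a single delayed, scaled impulse
    # (interaural time and level differences only, no spectral cues).
    h = np.zeros(8)
    h[delay_samples] = gain
    return h

# Toy bank of (left, right) HRIR pairs indexed by source azimuth in degrees.
hrir_bank = {
    -90: (toy_hrir(0, 1.0), toy_hrir(4, 0.5)),  # source hard left
      0: (toy_hrir(0, 1.0), toy_hrir(0, 1.0)),  # straight ahead
     90: (toy_hrir(4, 0.5), toy_hrir(0, 1.0)),  # source hard right
}

fs = 44100
mono = np.sin(2 * np.pi * 440 * np.arange(fs // 100) / fs)

# Turning the head so the source sits near-right selects the 90-degree pair,
# so the right ear's signal arrives earlier and louder than the left's.
hl, hr = pick_hrir(hrir_bank, 80)
left, right = render_binaural(mono, hl, hr)
```

An experiment along the lines above could then vary head orientation and check which filter selection best predicts what subjects report hearing.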

All the postulating!