A Method for Spatial Upsampling of Directivity Patterns of Human Speakers by Directional Equalization
Directivity patterns of human speakers are required for various applications in virtual acoustics. They can either be measured sequentially for an arbitrary number of directions or simultaneously using a surrounding microphone array. In the latter case, the resolution of the directivity pattern is limited by the number of array microphones, and appropriate spatial upsampling is required, for example by interpolation in the spherical harmonics (SH) domain. However, as the number of measured directions limits the maximal accessible SH order, the SH-transformed directivity pattern has a restricted spatial resolution and suffers from order-truncation errors. Recently, we presented a method for spatial upsampling of sparse head-related transfer function (HRTF) sets. The approach is based on a spectral division (equalization) of the sparse HRTF set with a simplified model of a human head, spatial upsampling of the equalized set by an inverse spherical harmonics transform on a dense grid, and a spectral multiplication (de-equalization) with the same head model. Now we apply this method to human speaker directivity measurements to reduce the spatial complexity of the SH-transformed directivity patterns. Based on measurements of a dummy head with an integrated mouth simulator, we evaluate the approach and compare it to a reference measured on a dense grid.
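The equalize / SH-fit / resample / de-equalize pipeline described above can be sketched in a few lines of NumPy. The grid sizes, SH order, head radius, the synthetic directivity, and in particular the placeholder "head model" (a simple direction-dependent phase term standing in for the simplified head model of the paper) are all illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np
from scipy.special import sph_harm  # Y_n^m(azimuth, colatitude); complex SH

def sh_matrix(order, azi, col):
    """SH basis matrix: rows = directions, columns = (n, m) up to `order`."""
    cols = [sph_harm(m, n, azi, col)
            for n in range(order + 1) for m in range(-n, n + 1)]
    return np.stack(cols, axis=1)

# Hypothetical sparse measurement grid and dense target grid (random directions)
rng = np.random.default_rng(0)
azi_s, col_s = rng.uniform(0, 2*np.pi, 64), np.arccos(rng.uniform(-1, 1, 64))
azi_d, col_d = rng.uniform(0, 2*np.pi, 1024), np.arccos(rng.uniform(-1, 1, 1024))

# Placeholder "head model": a phase term at 1 kHz for an assumed 9 cm radius.
k, r = 2*np.pi*1000/343, 0.09
model = lambda azi, col: np.exp(-1j * k * r * np.cos(col))

# Synthetic sparse directivity: model times a smooth low-order directional gain
h_sparse = model(azi_s, col_s) * (1 + 0.1*np.cos(col_s))

order = 4                          # max SH order supported by the sparse grid
Y_s = sh_matrix(order, azi_s, col_s)
Y_d = sh_matrix(order, azi_d, col_d)

# 1) equalize (divide by model), 2) least-squares SH fit,
# 3) evaluate on the dense grid, 4) de-equalize (multiply by model)
coeffs, *_ = np.linalg.lstsq(Y_s, h_sparse / model(azi_s, col_s), rcond=None)
h_dense = (Y_d @ coeffs) * model(azi_d, col_d)
```

Because the equalization removes the rapidly varying (here: phase) component before the SH fit, the residual directional gain is low-order and is captured without truncation error even at a modest SH order, which is the core idea of the method.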