The Sound Planetarium got its second unveiling at the 2016 Frontiers of Innovation Scholars Program (FISP) at UCSD on October 18th. FISP is the primary funding source for this project.
Today the cosmOcosm Sound Planetarium had its first demonstration at the UCSD Summer Research Conference. In the afternoon, Jake, Yuka & Melisa set up our demonstration rig in the oversized meeting room we were assigned, which piqued quite a bit of interest among the audience members. They’d have to wait, though; we were last on the docket:
Fortunately the setup is pretty compact and fits in ye olde beach trolley:
Here’s the system set up off to the side of the room: 3 speakers on chairs close to the ground, 3 up high on stools; the area is a circle about 10′ in diameter.
This being a conference, Jake, Yuka & Melisa first gave a 15-minute talk about the motivation, design, and testing of the test rig…
… which included a nice audio demonstration of timbre…
… and a healthy dose of stellar astrophysics.
Here are the presentation slides:
Then we were ready for the demonstration. With Jake & Yuka at the steering wheel, we brought the audience over to experience the sounds of stars by playing the 10 brightest stars in the Bright Star Catalog, with sped-up nightly transits as they would be heard from San Diego in August.
Here’s the perspective from the audience, where you can hear the stars passing by:
In addition to our 10-star transit demo, we also ran the 100-star fixed demo (actually just 90, to remove the rather seriously loud Sirius), bringing stars in 10 at a time. Here’s a video showing Jake manipulating the interface while Melisa describes in a little more detail how we mapped data to moving sound:
All in all, an amazing success! Being able to set up the hardware and run a demonstration in an arbitrary space clearly shows that this is a portable, accessible system. Now we just need more time to play….
Here’s the happy team after a job well done (we’re missing you, Tara!!)
During our Thanksgiving break, my father Al and I were talking about various sound-listening ideas related to our sound planetarium, and he suggested turning the problem around: rather than hearing a sound from a particular direction, how about directing ourselves using sound? In other words, could we define a two-dimensional spherical coordinate system where we could, with a single utterance, orient ourselves? After some trial and error, we came up with the following mapping:
In a nutshell: four vowel sounds are used to indicate azimuthal direction, with blends to assign intermediate angles (e.g., pure “ay” would be 0°, an “ay-ee” blend would be around 45°), while pitch indicates altitude. We played around with various vocal ranges that would fit both men and women, coming up with a nominal range of E4 (330 Hz) to E5 (659 Hz), somewhere in the countertenor range, although in principle the equatorial altitude could be set to any pitch and the poles set to a half octave above and below. The vowel tones could also be modified to those of a person’s native language.
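To make the mapping concrete, here is a minimal sketch of how a decoder might turn an utterance into a direction. The “ay” = 0° anchor, the “ay-ee” blend ≈ 45°, and the E4–E5 range with poles a half octave above and below the equator come from the description above; the other two vowels (“oh”, “oo”) and the function names are my own placeholder assumptions:

```python
import math

# Cardinal vowels and their azimuths. "ay" (0 deg) and "ee" (90 deg,
# implied by the "ay-ee" blend at ~45 deg) are from the post; "oh" and
# "oo" are assumed stand-ins for the remaining two vowels.
VOWEL_AZIMUTH = {"ay": 0.0, "ee": 90.0, "oh": 180.0, "oo": 270.0}

E4 = 329.63  # Hz, bottom of the nominal range (south pole)
E5 = 659.26  # Hz, top of the nominal range (north pole)

def altitude_from_pitch(freq_hz):
    """Map pitch to altitude: E4 -> -90 deg, E5 -> +90 deg, with the
    equator a half octave above E4 (~466 Hz). Linear in log-frequency,
    so equal musical intervals give equal changes in altitude."""
    octaves_above_e4 = math.log2(freq_hz / E4)   # 0..1 over the range
    return (octaves_above_e4 - 0.5) * 180.0

def azimuth_from_blend(v1, v2=None, mix=0.0):
    """Blend two vowels: mix=0 is pure v1, mix=1 is pure v2.
    Interpolates the short way around the circle."""
    a1 = VOWEL_AZIMUTH[v1]
    if v2 is None:
        return a1
    a2 = VOWEL_AZIMUTH[v2]
    diff = (a2 - a1 + 180.0) % 360.0 - 180.0
    return (a1 + mix * diff) % 360.0
```

For example, a pure “ay” sung at ~466 Hz would decode to azimuth 0°, altitude 0° (due “east” on the equator, under whatever azimuth convention the planetarium adopts), and an even “ay-ee” blend at the same pitch would decode to 45°.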
I imagine this tonal directional language being used to control a telescope hands-free, though I expect the precision to be fairly coarse (with practice, I estimate no better than 10°). That would be sufficient to match the angular precision of our hearing, however, so it is perhaps well suited to controlling the direction of a sound planetarium!