Web Wall Whispers
“Web Wall Whispers” (WWW) is an interactive web-based soundwork, conceived as part of the “Segni per la Speranza” (SPLS, “Signs for Hope”) multimodal artwork: a high-quality virtual exploration of a monumental mural that generates a unique musical composition at every access, based on the user's movements.
WWW relies on the Web Audio API. This is a beta version. As of today, it is supported on the following desktop browsers:
- Chrome for Desktop 14+ (suggested for the best experience)
- Firefox for Desktop 23+
- Edge for Windows 10+
- Opera for Desktop
- Safari for Mac 8+
It will soon be supported on mobile devices (both Android and iOS).
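Since Web Audio support varies across browsers, the app needs to detect it before starting. The following sketch is not from the WWW codebase; it is a minimal, hedged example of such a check, written to take the global object as a parameter so it can also run outside a browser:

```javascript
// Minimal Web Audio feature check (illustrative, not the actual WWW code).
// Looks for the standard AudioContext constructor, plus the webkit-prefixed
// variant exposed by older Safari versions.
function hasWebAudio(globalObj) {
  return typeof globalObj.AudioContext === 'function' ||
         typeof globalObj.webkitAudioContext === 'function';
}

// In a browser you would call it as:
// if (!hasWebAudio(window)) { /* show an "unsupported browser" notice */ }
```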
Audio stream and processing
- BinauralFIR - Binaural processing node developed by IRCAM.
Note: Safari does not support streaming integration in Web Audio API nodes. Because of this limitation, the Safari version implements a different audio-resource loading method, not based on HLS streaming.
- OpenSeadragonZoomLevels - An OpenSeadragon plugin to allow restricting the image zoom to specific levels.
AUDIO PREPROCESSING AND CODING
Original audio tracks are PCM WAV, 16-bit, 44.1 kHz; they are encoded to AAC at 128 kbps with the Apple afconvert tool.
afconvert <inputFile>.wav -d aac -f m4af -b 128000 -q 127 <outputFile>.m4a
HLS stream segments are generated with the Apple mediafilesegmenter tool.
mediafilesegmenter -a -t 6 -i <outputMasterFileName>.m3u8 -B <segmentFileName> -f <outputDir> <inputFile>.m4a
As noted above, Safari does not support HLS streaming in the Web Audio API, so the AAC-encoded files are loaded in full before the user enters the app.
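The choice between the two loading paths can be made at startup. As a hedged sketch (the actual WWW logic may differ): hls.js exposes `Hls.isSupported()`, which reports whether MediaSource-based streaming is available, and the result can drive a simple strategy switch:

```javascript
// Illustrative strategy selector (not the actual WWW code).
// `hlsSupported` would typically come from Hls.isSupported() (a real hls.js
// API). When streaming is unavailable (e.g. Safari), fall back to
// downloading each AAC file in full before the app starts.
function chooseLoadingStrategy(hlsSupported) {
  return hlsSupported ? 'hls-streaming' : 'full-preload';
}

// Browser usage (assumed):
// const strategy = chooseLoadingStrategy(Hls.isSupported());
// In the 'full-preload' path, each file would be fetched and decoded with
// fetch(url).then(r => r.arrayBuffer()).then(b => ctx.decodeAudioData(b)).
```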
To provide deep-zoom navigation, the original picture was processed with the gdal2tiles Python script to obtain a set of tiles organized according to the TMS scheme, which OpenSeadragon supports. We generated 10 zoom levels of tiles starting from an uncompressed 135743x135743-pixel TIFF image of the mural artwork.
gdal2tiles.py -p raster -z 0-10 <inputImage> <outputDir>
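One detail of the TMS scheme worth noting: TMS counts tile rows from the bottom of the image, while most viewers (including OpenSeadragon) count from the top, so the row index must be flipped when building tile URLs. A small illustrative helper (not from the WWW codebase):

```javascript
// Build a tile URL for the {level}/{column}/{row}.png layout produced by
// gdal2tiles. TMS rows are numbered from the bottom, so the viewer's
// top-origin row is flipped: tmsRow = 2^level - 1 - row.
function tmsTileUrl(baseUrl, level, column, row) {
  const tmsRow = Math.pow(2, level) - 1 - row;
  return baseUrl + '/' + level + '/' + column + '/' + tmsRow + '.png';
}
```

In OpenSeadragon, a custom tile source's `getTileUrl(level, x, y)` callback can delegate to this kind of helper.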
- Interactive navigation through the deep zoom image
- Real-time streaming of audio content, based on user actions
- Real-time audio processing with effects such as reverb, binaural spatialization, low-pass and high-pass filtering, and distortion, based on user actions
We suggest using headphones for the best audio experience.
Audio streaming and processing happen in real time, driven by the user's actions and movements: the system chooses which tracks to fade in or out and modulates effect parameters depending on position and zoom level.
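To make the position/zoom-to-parameter idea concrete, here is a hypothetical illustration of one such mapping: as the user zooms in, a low-pass filter opens up. The zoom range and cutoff frequencies are made-up values, not taken from the actual WWW code:

```javascript
// Hypothetical zoom-to-cutoff mapping (illustrative values only).
function zoomToLowpassCutoff(zoomLevel, minZoom, maxZoom) {
  const minHz = 200;    // darkest sound, fully zoomed out (assumed)
  const maxHz = 18000;  // nearly open filter, fully zoomed in (assumed)
  // Clamp zoom into [0, 1], then interpolate exponentially, which matches
  // how frequency is perceived.
  const t = Math.min(Math.max((zoomLevel - minZoom) / (maxZoom - minZoom), 0), 1);
  return minHz * Math.pow(maxHz / minHz, t);
}

// In the app this value would feed a BiquadFilterNode, e.g.:
// lowpass.frequency.setTargetAtTime(zoomToLowpassCutoff(z, 0, 10), ctx.currentTime, 0.1);
```

Using `setTargetAtTime` (a real Web Audio `AudioParam` method) rather than setting `.value` directly avoids audible clicks when the parameter changes.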
Here is a list of navigation controls (note: controls may vary under certain navigation conditions; in such cases the user will be notified):
- Left arrow / 'A' key -> pan left
- Right arrow / 'D' key -> pan right
- Up arrow / 'W' key -> pan up
- Down arrow / 'S' key -> pan down
- '+' key / '=' key -> zoom in
- '-' key -> zoom out
- '0' key -> back to home view
- Single click -> pan to click position
- Click and drag -> pan
- Double click -> zoom in
- Scroll down -> zoom in
- Scroll up -> zoom out
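The keyboard bindings above can be expressed as a simple lookup table. This is a hedged sketch with hypothetical action names; the real WWW bindings may differ:

```javascript
// Key-to-action table for the controls listed above (action names assumed).
const KEY_ACTIONS = {
  ArrowLeft: 'panLeft',   a: 'panLeft',
  ArrowRight: 'panRight', d: 'panRight',
  ArrowUp: 'panUp',       w: 'panUp',
  ArrowDown: 'panDown',   s: 'panDown',
  '+': 'zoomIn', '=': 'zoomIn',
  '-': 'zoomOut',
  '0': 'home',
};

function actionForKey(key) {
  // Letter keys are case-insensitive; named keys (e.g. 'ArrowLeft') are not.
  return KEY_ACTIONS[key.length === 1 ? key.toLowerCase() : key] || null;
}

// Browser usage (assumed):
// document.addEventListener('keydown', e => {
//   const action = actionForKey(e.key);
//   if (action) viewer[action]();  // dispatch to the navigation handler
// });
```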
RUNNING THE APP LOCALLY
This beta version is available online at this link.
If you want to run the app locally, follow these instructions:
- Download this repository
- Extract and enter the downloaded directory
- Start a local HTTP server (such as the NPM http-server package or Python's http.server module)
- Open a supported browser (see the list above)
- Visit the URL reported by your server
WebWallWhispers is released under the GPL-3.0 license. For details, see the LICENSE.txt file.
hls.js is released under Apache 2.0 License.
BinauralFIR module is released under the BSD-3-Clause license.
OpenSeadragon is released under the New BSD license.