Procedural Ecology: Modeling Natural Soundscapes Through Digital Synthesis
Procedural Ecology is a sound design project I completed as a send-off for my time at UC Santa Cruz. Using digital synthesis techniques, I modeled a woodland environment of the upper Santa Cruz region, grounded in the ecology that resides there.
Introduction
As a joint project for both my Workshop in Electronic Music and my Technical Writing for Computer Science and Engineering courses in the Winter 2025 quarter at UCSC, I developed a soundscape model of UC Santa Cruz’s mountain woodlands using procedural digital synthesis in Ableton Live 12. The model is procedural in that each moment of playback is unique and randomized, so no two listening experiences are the same. The model is digital in that no samples are employed; only additive, subtractive, and frequency modulation synthesis are used, along with the necessary processing.
The research I pursued in the process of making my model is documented as a formal thesis chapter, detailing the steps one should take to develop their own soundscape model. The document covers proper methods of field recording, how to research a region’s ecology, how to use various visual representations of audio for sound design, and how to synthesize both biological (biophony) and geophysical (geophony) sounds.

Gathering References & Analysis
To model the soundscape that I intended to recreate, I first needed reference recordings of the area to get an idea of the space and the creatures that inhabited it.
I made several trips to the Cowell Redwoods to set up my trusty Zoom H4n Pro for hours-long field recording sessions, leaving the recorder in place and returning later to retrieve my equipment.
After analyzing my recordings with online tools like Cornell University's BirdNET, a bird-sound identifier that pinpoints the calls of different species heard in audio, I could take the identified species and look them up on eBird, a site where hobbyists document their bird sightings to help the process of identifying birds. eBird listings also include a repository of calls for each species, which gave me cleaner reference recordings of bird calls for later spectral analysis.
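The spectral analysis step can be sketched in a few lines of numpy. This is not the exact workflow I used (that was done visually on a spectrogram); it is a minimal illustration, with a synthetic two-partial tone standing in for a clean eBird reference recording:

```python
import numpy as np

# Hypothetical stand-in for a clean reference recording: a 1-second
# "call" with a 2 kHz fundamental and a weaker partial at 4 kHz.
sr = 44100
t = np.arange(sr) / sr
call = np.sin(2 * np.pi * 2000 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)

# Windowed magnitude spectrum; the peaks reveal which harmonics
# to recreate additively in a synth's waveform editor.
spectrum = np.abs(np.fft.rfft(call * np.hanning(len(call))))
freqs = np.fft.rfftfreq(len(call), 1 / sr)
fundamental = freqs[np.argmax(spectrum)]  # strongest partial, ~2000 Hz
```

In practice one would run this over short frames (an STFT) to see how the partials move over the duration of a call, which is exactly what a spectrogram displays.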
Sound Design Process
As stated previously, only synthesis techniques such as additive, subtractive, and FM could be used for this model. No samples could be employed, which ruled out granular synthesis.
For ambient noise such as wind and rain (geophony), I used a tried-and-true method: noise oscillators intricately modulated by LFOs and carefully shaped with EQs.
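The core of that patch can be approximated offline. This is a rough sketch, not the Ableton rack itself: white noise darkened by a one-pole lowpass (a crude stand-in for EQ shaping), with a slow sine LFO on amplitude to mimic gusting. The cutoff, LFO rate, and depth below are illustrative guesses:

```python
import numpy as np

sr, dur = 44100, 2.0
rng = np.random.default_rng(42)
noise = rng.standard_normal(int(sr * dur))

# One-pole lowpass darkens the white noise into a wind-like rumble.
cutoff = 400.0  # Hz, illustrative
a = np.exp(-2 * np.pi * cutoff / sr)
wind = np.empty_like(noise)
acc = 0.0
for i, x in enumerate(noise):
    acc = a * acc + (1 - a) * x
    wind[i] = acc

# Slow LFO on amplitude mimics gusts (0.2 Hz rate is a guess).
t = np.arange(len(wind)) / sr
wind *= 0.6 + 0.4 * np.sin(2 * np.pi * 0.2 * t)
wind /= np.max(np.abs(wind))
```

Rain responds to the same recipe with a brighter filter setting and faster, shallower modulation.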
For the noise of water (geophony), I was able to use the "modulator" setting on Ableton Live's vocoder on top of white noise. This mode has the carrier self-modulate, allowing for some fascinating sounds. When paired with the spectral quality of pure noise, though, I discovered it can sound eerily like water, which I took advantage of.
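Ableton does not publish the vocoder's internals, so the following is only a conceptual sketch of the "carrier modulates itself" idea: split noise into bands, follow each band's own envelope, and let that envelope reshape the band. The band edges and envelope cutoff are assumptions for illustration:

```python
import numpy as np

sr = 44100
rng = np.random.default_rng(7)
noise = rng.standard_normal(sr)

def bandpass(x, lo, hi, sr):
    """Crude FFT-mask bandpass: zero all bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, len(x))

def envelope(x, sr, cutoff=20.0):
    """Envelope follower: rectify, then one-pole lowpass."""
    a = np.exp(-2 * np.pi * cutoff / sr)
    env = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(np.abs(x)):
        acc = a * acc + (1 - a) * v
        env[i] = acc
    return env

# Self-modulation: each band's own envelope reshapes that band,
# producing burbling amplitude ripples reminiscent of running water.
water = np.zeros(len(noise))
for lo, hi in [(200, 800), (800, 2400), (2400, 6000)]:
    band = bandpass(noise, lo, hi, sr)
    water += band * envelope(band, sr)
water /= np.max(np.abs(water))
```

The random fluctuations in each band's envelope are what give the result its watery, non-repeating character.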
For animal sounds (biophony), particularly those of the region's different birds, I used additive synthesis as my main technique. By analyzing the frequency content of individual bird calls on a spectrogram, I was able to identify individual harmonics and recreate the same sounds in Serum's and Phase Plant's waveform editors. Using LFOs mapped to volume, I could match the rhythms of the calls.
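The additive approach reduces to summing sine partials and gating them rhythmically. Here is a minimal sketch; the partial frequencies, amplitudes, and chirp rate are invented stand-ins for values that would actually be read off a spectrogram:

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 1.5)) / sr

# Partials as they might be read off a spectrogram of a bird call:
# (frequency in Hz, relative amplitude). These values are hypothetical.
partials = [(2200, 1.0), (4400, 0.4), (6600, 0.15)]
tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)

# An LFO mapped to volume, thresholded into short bursts, gates the
# tone into a repeating chirp rhythm (3 Hz here, a guess).
gate = (np.sin(2 * np.pi * 3 * t) > 0.6).astype(float)
call = tone * gate
call /= np.max(np.abs(call))
```

In Serum or Phase Plant the same idea is expressed by drawing the partial amplitudes into the waveform editor and mapping an LFO to output volume.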
For post-processing, I spatialized the individual sounds with panning, reverb, volume attenuation, and high-end damping for particularly distant sources.
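Most of those placement cues (reverb aside) can be condensed into one small function. This is a sketch of the general technique, not my Ableton chain: constant-power panning, inverse-distance attenuation, and a lowpass whose cutoff falls with distance. The `spatialize` helper and its distance-to-cutoff mapping are inventions for illustration:

```python
import numpy as np

def spatialize(mono, pan, distance, sr=44100):
    """Place a mono signal in a stereo field.

    pan:      -1 (hard left) .. +1 (hard right), constant-power law
    distance: >= 1; attenuates level and damps highs (farther = duller)
    """
    theta = (pan + 1) * np.pi / 4          # constant-power panning
    left = np.cos(theta) * mono / distance  # inverse-distance attenuation
    right = np.sin(theta) * mono / distance

    # One-pole lowpass; cutoff drops as distance grows (mapping assumed).
    a = np.exp(-2 * np.pi * (8000.0 / distance) / sr)
    out = np.empty((len(mono), 2))
    acc_l = acc_r = 0.0
    for i in range(len(mono)):
        acc_l = a * acc_l + (1 - a) * left[i]
        acc_r = a * acc_r + (1 - a) * right[i]
        out[i] = (acc_l, acc_r)
    return out

t = np.arange(4410) / 44100
stereo = spatialize(np.sin(2 * np.pi * 440 * t), pan=-0.5, distance=3.0)
```

Reverb, the remaining cue, is left to a dedicated effect, since a convincing algorithmic reverb is well beyond a few lines.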



