Humans naturally classify the sounds they hear into different categories,
including sounds produced by animals. Bioacousticians have supplemented this type of
subjective sorting with quantitative analyses of acoustic features of animal sounds.
Using neural networks to classify animal sounds extends this process one step further,
not only by facilitating objective descriptive analyses of animal sounds, but also by
making it possible to simulate auditory classification processes. Critical aspects of
developing a neural network include choosing a particular architecture, converting
measurements into input representations, and training the network to recognize inputs.
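As a minimal illustrative sketch (not the chapter's own pipeline), a recorded waveform might be converted into a fixed-length input representation such as a normalized log-spectrogram vector; the sampling rate, window sizes, and normalization below are assumptions chosen for illustration.

```python
# Sketch: converting a sound measurement into a network input representation.
# All parameter values (sample rate, FFT size, frame count) are illustrative.
import numpy as np

def spectrogram_features(waveform, n_fft=256, hop=128, n_frames=32):
    """Return a flattened, normalized log-spectrogram of fixed size."""
    frames = []
    for start in range(0, len(waveform) - n_fft, hop):
        window = waveform[start:start + n_fft] * np.hanning(n_fft)
        frames.append(np.log1p(np.abs(np.fft.rfft(window))))
    spec = np.array(frames)[:n_frames]                 # keep at most n_frames
    if spec.shape[0] < n_frames:                       # zero-pad short sounds
        spec = np.vstack([spec, np.zeros((n_frames - spec.shape[0], spec.shape[1]))])
    spec = (spec - spec.mean()) / (spec.std() + 1e-8)  # normalize for network input
    return spec.ravel()

# Example: a synthetic 0.5 s tone standing in for a recorded call
tone = np.sin(2 * np.pi * 1000 * np.arange(0, 0.5, 1 / 22050))
print(spectrogram_features(tone).shape)  # fixed-length vector for a network
```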
When the goal is to sort vocalizations into specific types, supervised learning
algorithms make it possible for a neural network to do so with high accuracy and speed.
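A minimal sketch of this kind of supervised classification, assuming labeled feature vectors (such as the spectrogram features above) for two hypothetical call types, is a single-layer perceptron-style network trained by gradient descent; the synthetic data, learning rate, and number of epochs are assumptions, not the networks used in the chapter.

```python
# Sketch: supervised training of a single-layer (perceptron-style) classifier.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))               # stand-in labeled call features
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # hypothetical "call type" labels

w, b = np.zeros(64), 0.0
for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output unit
    grad_w = X.T @ (p - y) / len(y)          # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                        # learning-rate 0.5 updates
    b -= 0.5 * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(f"training accuracy: {np.mean((p > 0.5) == (y == 1)):.2f}")
```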
When the goal is to sort vocalizations based on similarities between measured
properties, unsupervised learning algorithms can be used to create neural networks that
objectively sort sounds or that quantify the sequential properties of sound sequences.
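One common unsupervised approach of this kind is a self-organizing map (SOM), which groups acoustically similar inputs onto nearby map units; the sketch below uses synthetic feature vectors, and its grid size, learning rate, and neighborhood schedule are illustrative assumptions rather than the chapter's settings.

```python
# Sketch: a minimal self-organizing map that sorts sounds by feature similarity.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 16))            # stand-in acoustic feature vectors

grid_w, grid_h, dim = 6, 6, 16
weights = rng.normal(size=(grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

n_steps = 2000
for t in range(n_steps):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / n_steps)                          # decaying learning rate
    sigma = 3.0 * (1 - t / n_steps) + 0.5                 # shrinking neighborhood
    dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))                 # neighborhood function
    weights += lr * h[:, None] * (x - weights)            # pull units toward input

# Assign each sound to its most similar map unit, grouping similar sounds together.
assignments = np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
print(np.bincount(assignments, minlength=grid_w * grid_h).reshape(grid_w, grid_h))
```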
Neural networks can also provide insights into how animals might themselves classify
the sounds they hear, and be useful in developing specific testable hypotheses about the
functions of different sounds. The current chapter illustrates each of these applications
of neural networks in studies of the sounds produced by chickadees (Poecile
atricapillus), false killer whales (Pseudorca crassidens), and humpback whales
(Megaptera novaeangliae).
Keywords: Adaptive filter, Computational modeling, Connectionism, Learning
algorithm, Parallel distributed processing, Perceptron, Self-organizing.