Andy GSi
Member
Registered: 24th Mar 02
Location: Shropshire, Drives 2.0l 16v Corsa
User status: Offline
|
Has anyone got info on these? It's for a uni assignment:
1. FOF and Chant
2. Formant synthesis
3. Granular Synthesis.
Can anyone HELP???
Come on CS people, we can do this together
|
Icy
Member
Registered: 31st Jan 01
Location: Edinburgh Drives: Mk3 Golf Gti
User status: Offline
|
Speak English.
|
Andy GSi
Member
Registered: 24th Mar 02
Location: Shropshire, Drives 2.0l 16v Corsa
User status: Offline
|
Well, has anyone got info on these? I don't even know what the fcuk it's on about, never went to the lessons!! It's in today, WHOOPS!!!
|
Jill
Premium Member
Registered: 8th Jun 01
Location: Aylesbury, BUCKS
User status: Offline
|
What exactly do you need to know for this assignment?
And what is FOF?
JILL
|
Russ2k2
Member
Registered: 23rd May 02
Location: Sheffield
User status: Offline
|
Formant Synthesis
The vocal tract (the passage from the vocal cords to the lips) has certain major resonant frequencies. These frequencies change as the configuration of the vocal tract changes, for example when we produce different vowel sounds. These resonant peaks in the vocal tract's transfer function (or frequency response) are known as "formants".
It is by the formant positions that the ear is able to differentiate one speech sound from another. Here are a few examples of English words with the three lowest formants of their vowel for an average male speaker; all values are in Hertz.
beet: 270, 2300, 3000
bit: 400, 2000, 2550
bet: 530, 1850, 2500
bat: 660, 1700, 2400
but: 640, 1200, 2400
boot: 300, 870, 2250
The SoftVoice synthesizer simulates the human speech production mechanism using digital oscillators, noise sources, and filters (formant resonators) just like an electronic music synthesizer. Because of this, we have the same flexibility as a music synthesizer to create different voice "patches", or presets. SoftVoice TTS comes with 20 preset voices which can be modified by the programmer or user.
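If you want something concrete for the write-up, here's a rough Python/NumPy sketch of the idea (my own illustration, not the SoftVoice engine; the bandwidths and the 110 Hz source are made-up values). A pulse train standing in for the vocal-cord pulses is run through two-pole resonators tuned to the "beet" formants from the table above:

import numpy as np

SR = 16000                                 # sample rate, Hz

def resonator(x, freq, bw, sr=SR):
    # Two-pole resonant filter: pole radius set by the bandwidth,
    # pole angle set by the centre (formant) frequency.
    r = np.exp(-np.pi * bw / sr)
    a1, a2 = -2 * r * np.cos(2 * np.pi * freq / sr), r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] - a1 * y[n-1] - a2 * y[n-2]
    return y

# 0.5 s impulse train at ~110 Hz stands in for the vocal-cord pulses.
source = np.zeros(int(0.5 * SR))
source[::SR // 110] = 1.0

# Cascade the three "beet" formants from the table (bandwidths invented).
out = source
for freq, bw in [(270, 60), (2300, 90), (3000, 120)]:
    out = resonator(out, freq, bw)
out /= np.abs(out).max()                   # normalize before playback

Swap the (frequency, bandwidth) pairs for another row of the table and you get a different vowel.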
|
Russ2k2
Member
Registered: 23rd May 02
Location: Sheffield
User status: Offline
|
Granular synthesis was first suggested as a computer music technique for producing complex sounds by Iannis Xenakis (1971) and Curtis Roads (1978). It is based on the production of a high density of small acoustic events called 'grains' that are less than 50 ms in duration and typically in the range of 10-30 ms. Typical grain densities range from several hundred to several thousand grains per second, where the grain itself may come from a wavetable (e.g. a sine wave), FM synthesis or sampled sound. Such high densities of events made the technique difficult to work with because of the large amount of calculation required, so until recently few composers had experimented with it. Using a digital signal processor controlled by a microcomputer, Barry Truax implemented the technique with real-time synthesis in 1986 and incorporated it within an interactive compositional environment, the PODX system, at Simon Fraser University. His work Riverrun was realized entirely with this technique.
What is most remarkable about the technique is the relation between the triviality of the grain (heard alone it is the merest click or 'point' of sound) and the richness of the layered granular texture that results from the superimposition of many grains. The grain is an example of British physicist Dennis Gabor's idea (proposed in 1947) of the quantum of sound: an indivisible unit of information from the psychoacoustic point of view, from which all macro-level sonic phenomena are built. In another analogy to quantum physics, time is reversible at the quantum level: a grain of sound can be reversed with no change in perceptual quality. That is, a granular synthesis texture played backwards sounds the same, just as an individual grain sounds the same when its direction is reversed (even if it is derived from natural sound). This time invariance also permits time shifting of sampled environmental sound, allowing it to be slowed down with no change in pitch; this technique is usually called granulation.
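Here's a toy version in Python/NumPy so you can see how little there is to a single grain (again my own sketch, not Truax's PODX system; the grain lengths follow the 10-30 ms figure above, everything else is an arbitrary choice):

import numpy as np

SR = 44100
DUR = 2.0                                  # seconds of texture
DENSITY = 1000                             # grains per second (hundreds-thousands)

rng = np.random.default_rng(0)
out = np.zeros(int(DUR * SR))

for _ in range(int(DENSITY * DUR)):
    glen = int(rng.uniform(0.010, 0.030) * SR)      # 10-30 ms grain
    t = np.arange(glen) / SR
    freq = rng.uniform(200, 2000)                   # arbitrary grain pitch
    grain = np.sin(2 * np.pi * freq * t) * np.hanning(glen)
    onset = rng.integers(0, len(out) - glen)        # random placement in time
    out[onset:onset + glen] += grain

out /= np.abs(out).max()                   # normalize the dense superposition

Play the result backwards and it sounds the same, which is exactly the time-invariance point made above.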
|
Russ2k2
Member
Registered: 23rd May 02
Location: Sheffield
User status: Offline
|
Description
Chant was originally designed for the analysis and synthesis of the singing voice and was then found to be well suited for instrument simulation and synthesis in general. This technology is available in the jMax, Diphone Studio and Max/MSP (IRCAM/Cycling'74) environments and can be easily ported to other hardware or software systems. Independent of Chant, the Resonance Models provide excellent sound quality and effects for percussive sounds.
Application
Real-time sound synthesis
Synthesis with Chant is flexible enough to be used in real time. For instance, Chant has been used extensively in virtual reality installations produced by IRCAM such as "Le Messager" and "Alex" (Catherine Ikam, Jean-Baptiste Barrière). In these installations, visitors could interact with an avatar through their motion and control the parameters of the avatar's voice, generated with Chant, in real time. Chant has also been used at NCSA (University of Illinois) to produce and control 3D sound spectra in real time.
Sound design
The analysis process for Chant is fully accessible within IRCAM's Diphone Studio, making it possible to create unusual sound effects starting from different kinds of sounds.
Live interaction
For concerts, Chant patches are available in the jMax and Max/MSP environments.
Features
Chant Synthesis
The best way to explain Chant is to go back to the sound production model of the voice. The vocal tract is composed of an energy source and a set of resonators. The lungs produce a flow of air which is turned into a signal by periodic (vowels) or chaotic (consonants) modulation. This signal is then filtered by the larynx-pharynx-mouth system, which gives the phoneme its timbre.
Chant simulates this with an excitation-resonance model. Each resonance, or formant, is associated with a basic response simulated by a specific synthesis technique, the Formant Wave Function (FOF). Chant produces the resulting sound by adding the FOFs corresponding to each formant for a given pseudo-periodic source. In parallel, filters can be used in the same way to filter a sound.
Chant analysis makes it possible to define the responses of a number of phonemes and resonant instruments and to characterize their specific temporal variations. This relieves the composer of searching for the proper parameters and provides simple, intuitive control.
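To make the FOF idea concrete, here's a stripped-down sketch in Python/NumPy. It is not IRCAM's Chant code; the formant values are rough ones for an 'ah'-like vowel and the attack/duration numbers are invented. One formant wave function (a sinusoid at the formant frequency with a raised-cosine attack and an exponential decay set by the bandwidth) is emitted per formant at every fundamental period, and the grains are summed:

import numpy as np

SR = 16000
F0 = 110                                   # fundamental frequency of the source, Hz

def fof_grain(freq, bw, attack=0.003, dur=0.02, sr=SR):
    # One Formant Wave Function: a sinusoid at the formant frequency with
    # an exponential decay set by the bandwidth and a raised-cosine attack.
    t = np.arange(int(dur * sr)) / sr
    env = np.exp(-np.pi * bw * t)
    na = int(attack * sr)
    env[:na] *= 0.5 * (1 - np.cos(np.pi * np.arange(na) / na))
    return env * np.sin(2 * np.pi * freq * t)

# One grain per formant at every fundamental period (0.5 s of sound).
out = np.zeros(int(0.5 * SR))
grain_len = int(0.02 * SR)
for onset in range(0, len(out) - grain_len, SR // F0):
    for freq, bw, amp in [(650, 80, 1.0), (1100, 90, 0.5), (2860, 120, 0.3)]:
        out[onset:onset + grain_len] += amp * fof_grain(freq, bw)
out /= np.abs(out).max()                   # normalize before playback

The overlapping decaying sinusoids end up spectrally equivalent to filtering a pulse train through resonators, which is why FOF counts as a formant synthesis technique.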
Resonance models
First conceived as an application of Chant, this is an analysis/synthesis method developed for modeling impulsive sounds (percussion, buzzes, pizzicatos, etc.). The method extends previous studies on timbral interpolation and brings continuity between synthesis and processing. These models (which can be considered as filter banks whose frequencies, amplitudes and bandwidths are controlled) can be used to drive synthesis (FOFs, or filters with a noise impulse) and transformations (filters with an external source). The ResAn plug-in in Diphone Studio is meant for precise analysis according to this model. Synthesis can then be performed in Diphone Studio or in Max/MSP using CNMAT's resonating filters.
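Read as a filter bank, a resonance model can be sketched the same way (my illustration, not the ResAn/CNMAT code; the three frequency/amplitude/bandwidth triples are invented). A short noise burst excites a bank of two-pole resonators, and the narrow bandwidths make it ring like a struck object:

import numpy as np

SR = 44100

def two_pole(x, freq, bw, sr=SR):
    # Two-pole resonator: pole radius from bandwidth, pole angle from frequency.
    r = np.exp(-np.pi * bw / sr)
    a1, a2 = -2 * r * np.cos(2 * np.pi * freq / sr), r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] - a1 * y[n-1] - a2 * y[n-2]
    return y

# Each (frequency Hz, amplitude, bandwidth Hz) triple is one resonance of
# the model; a 5 ms noise burst plays the role of the impulsive excitation.
rng = np.random.default_rng(0)
burst = np.zeros(SR)                       # 1 s of output
burst[:int(0.005 * SR)] = rng.uniform(-1, 1, int(0.005 * SR))

model = [(220, 1.0, 4), (563, 0.6, 6), (1180, 0.3, 9)]
out = sum(amp * two_pole(burst, f, bw) for f, amp, bw in model)
out /= np.abs(out).max()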
Analysis and editing with Diphone Studio
Diphone Studio provides access to the analysis parameters and includes a breakpoint function editor allowing very accurate interpolation of analysis segments (called diphones). Diphone is the graphical interface for editing and interpolating FOF segments in the Chant plug-in.
Participants
Design and development: Xavier Rodet, Yves Potard (Analysis/synthesis team)
jMax version : Norbert Schnell (Real-time systems team)
Resonance models : Pierre-François Baisnée, Yves Potard, Jean-Baptiste Barrière
Diphone Studio version : Xavier Rodet, Adrien Lefèvre, Dominique Virolle
Configuration
Chant and the Resonance Models, as Diphone plug-ins, run on MacOS 9. In real time, FOF objects and Chant patches are available in jMax (Irix, Linux, MacOS X and Windows) and Max/MSP (MacOS 9).
|
Russ2k2
Member
Registered: 23rd May 02
Location: Sheffield
User status: Offline
|
there you go!
|
Andy GSi
Member
Registered: 24th Mar 02
Location: Shropshire, Drives 2.0l 16v Corsa
User status: Offline
|
Cheers people!! Sorry Russ2k2, I've already got that.
I need info on FOF (Formant Wave Function synthesis).
Keep it going, people
|
Russ2k2
Member
Registered: 23rd May 02
Location: Sheffield
User status: Offline
|
What course is it you're studying?
|
Andy GSi
Member
Registered: 24th Mar 02
Location: Shropshire, Drives 2.0l 16v Corsa
User status: Offline
|
Well, I'm studying COMPUTING, and I have to do multimedia, WHICH involves music. I hate music!!
So can you help?
|
Russ2k2
Member
Registered: 23rd May 02
Location: Sheffield
User status: Offline
|
|
Andy GSi
Member
Registered: 24th Mar 02
Location: Shropshire, Drives 2.0l 16v Corsa
User status: Offline
|
That's helped, THANKS mate
|
Russ2k2
Member
Registered: 23rd May 02
Location: Sheffield
User status: Offline
|
lol, thought it would!
|
Andy GSi
Member
Registered: 24th Mar 02
Location: Shropshire, Drives 2.0l 16v Corsa
User status: Offline
|
quote: Originally posted by Russ2k2
Description
Chant was originally designed for the analysis and synthesis of the singing voice and was then found to be well suited for instrument simulation and synthesis in general. [...]
HAVE YOU GOT the REFERENCE for this, mate??
|
|