This makes things much clearer, although the language of EARS is really abstract; without the foundation provided by EARS 2 it is hard to understand.
Translating the material for the Max workshop was the first time I encountered Karplus-Strong. In The Computer Music Tutorial (Roads) it is described in detail in the chapter on physical modelling.
Pitch-shifting

In an inexpensive sampler it may not be possible to store every note played by an acoustic instrument. These samplers store only every third or fourth semitone and obtain intermediate notes by shifting the pitch of a nearby stored note. If you record a sound into a sampler's memory and play it back by pressing different keys, the sampler carries out the same pitch-shifting technique. A side effect of simple pitch shifting is that the sound's duration increases or decreases, depending on the key pressed.

Two methods of simple pitch shifting exist. Both are called time-domain techniques, since they operate directly on the time-domain waveform. This is different from the frequency-domain pitch-shifting techniques discussed elsewhere.

Figure: pitch-shifting by sample-rate conversion with a constant playback sampling frequency. (Top) If every other sample is skipped on playback, the signal is decimated and the pitch is shifted up an octave. (Bottom) If twice the number of samples are used by means of interpolation on playback, the signal is shifted down an octave.

Sample-rate Conversion Without Pitch-shifting

Many digital audio recorders operate at the standard sampling rates of 48 or 44.1 kHz. How can we resample a recording made at one of these frequencies so as to play it back at the other frequency with no pitch shift? To convert a signal between the standard sampling rates of 44.1 and 48 kHz without a pitch change, a rather elaborate conversion process is required. The required ratio (48,000/44,100 = 160/147) can be implemented as six stages of interpolation and decimation by factors of 2, 3, 5, and 7:

1. Interpolate by 4 from 44,100 to 176,400 Hz
2. Decimate by 3 from 176,400 to 58,800 Hz
3. Interpolate by 4 from 58,800 to 235,200 Hz
4. Decimate by 7 from 235,200 to 33,600 Hz
5. Interpolate by 10 from 33,600 to 336,000 Hz
6. Decimate by 7 from 336,000 to 48,000 Hz

The signal can then be played back at a sampling rate of 48 kHz with no change of pitch.
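As a rough illustration (my own sketch, not from the text), the Python fragment below shows the two naive time-domain shifts described in the figure caption above and checks the arithmetic of the six-stage conversion chain. The function names shift_up_octave and shift_down_octave are invented for this example.

```python
import numpy as np

def shift_up_octave(signal: np.ndarray) -> np.ndarray:
    """Skip every other sample: pitch doubles, duration halves.
    (No anti-aliasing filter is applied in this naive version.)"""
    return signal[::2]

def shift_down_octave(signal: np.ndarray) -> np.ndarray:
    """Insert a linearly interpolated sample between each pair of samples:
    pitch halves, duration doubles."""
    n = len(signal)
    return np.interp(np.arange(2 * n - 1) / 2.0, np.arange(n), signal)

# Verify the six-stage interpolation/decimation chain listed above.
rate = 44_100
for op, factor in [("*", 4), ("/", 3), ("*", 4), ("/", 7), ("*", 10), ("/", 7)]:
    rate = rate * factor if op == "*" else rate // factor
print(rate)  # 48000 -- overall ratio 160/147, built only from factors of 2, 3, 5, 7
```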
Looping

Looping extends the duration of sampled sounds played by a musical keyboard. If the musician holds down a key, the sampler should scan "seamlessly" through the note until the musician releases the key. This is accomplished by specifying beginning and ending loop points in the sampled sound. After the attack of the note is finished, the sampler reads repeatedly through the looped part of the wavetable until the key is released; then it plays the note's final portion of the wavetable.

Creating a seamless but "natural" loop out of a traditional instrument tone requires care. The loop should begin after the attack of the note and should end before the decay. The beginning and ending points of a loop can either be spliced together at a common sample point or crossfaded. A splice is a cut from one sound to the next. Splicing waveforms results in a click, pop, or thump at the splice point, unless the beginning and ending points are well matched. Crossfading means that the end part of each looped pass gradually fades out while the beginning part slowly fades in again. The crossfade looping process repeats over and over as the note is sustained.
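To make the crossfade idea concrete, here is a minimal sketch, assuming a mono float array, sample-index loop points, and a crossfade no longer than half the loop. The function crossfade_loop and its parameters are invented for this illustration, not taken from any sampler's API.

```python
import numpy as np

def crossfade_loop(note: np.ndarray, loop_start: int, loop_end: int,
                   fade_len: int, repeats: int) -> np.ndarray:
    """Sustain a sampled note by repeating note[loop_start:loop_end],
    crossfading the end of each pass into the beginning of the next."""
    attack = note[:loop_start]             # played once, before looping begins
    loop = note[loop_start:loop_end]       # the region scanned repeatedly
    release = note[loop_end:]              # played once, after the key is released

    fade_out = np.linspace(1.0, 0.0, fade_len)
    fade_in = 1.0 - fade_out
    head = loop[:fade_len] * fade_in       # loop start, fading in
    tail = loop[-fade_len:] * fade_out     # loop end, fading out
    middle = loop[fade_len:-fade_len]      # untouched interior of the loop

    parts = [attack, loop[:-fade_len]]     # first pass: plain start + interior
    for _ in range(repeats):
        parts.append(tail + head)          # overlapped crossfade region
        parts.append(middle)               # interior of the next pass
    parts.append(loop[-fade_len:])         # last pass ends without fading
    parts.append(release)
    return np.concatenate(parts)
```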
Sampling Synthesis

The term "sampling" derives from established notions of digital samples and sampling rate. Sampling instruments, with or without musical keyboards, are widely available. All sampling instruments are designed around the basic notion of playing back prerecorded sounds, shifted to the desired pitch. Instead of scanning a small fixed wavetable containing one cycle of a waveform, a sampling system scans a large wavetable that contains thousands of individual cycles (several seconds of prerecorded sound). Since the sampled waveform changes over the attack, sustain, and decay portions of the event, the result is a rich and time-varying sound. The length of the sampling wavetable can be arbitrarily long, limited only by the memory capacity of the sampler.

Musique Concrète and Sampling: Background

After experiments with variable-speed phonographs in the late 1940s, Pierre Schaeffer founded the Studio de Musique Concrète in Paris in 1950 (see figure 4.1). He and Pierre Henry began to use tape recorders to record and manipulate concrète sounds. Musique concrète refers to the use of microphone-recorded sounds, rather than the synthetically generated tones of pure electronic music. But it also refers to the manner of working with such sounds. Composers of musique concrète work directly with sound objects (Schaeffer 1977; Chion 1982). Their compositions demand new forms of graphic notation, outside the boundaries of traditional scores for orchestra (Bayle 1993). The Fairlight Computer Music Instrument (CMI), introduced in Australia in 1979, was the first commercial keyboard sampler.
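Returning to the playback mechanism described at the top of this section, the sketch below (my own, with invented names) shows the core of sample playback: reading a long prerecorded wavetable with a fractional increment, so that a ratio of 2.0 plays the recording an octave higher and half as long, while 0.5 plays it an octave lower and twice as long.

```python
import numpy as np

def play_sample(wavetable: np.ndarray, pitch_ratio: float) -> np.ndarray:
    """Scan a prerecorded wavetable at a variable increment, using linear
    interpolation between neighbouring samples for fractional positions."""
    n_out = int(len(wavetable) / pitch_ratio)        # duration changes with pitch
    read_pos = np.arange(n_out) * pitch_ratio        # fractional read positions
    return np.interp(read_pos, np.arange(len(wavetable)), wavetable)
```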
Aliasing (Foldover)

Figure 1.16g shows a waveform with eleven cycles per ten samples. This means that one cycle takes less time than the interval between samples. This relationship could also be expressed as 11/10 cycles per sample. In figure 1.16i, the resynthesized waveform is completely different from the original in one important respect: the wavelength (length of the cycle) of the resynthesized waveform is different from that of the original. This kind of distortion is called aliasing or foldover.

The frequencies at which this aliasing occurs can be predicted. Suppose, just to keep the numbers simple, that we take 1000 samples per second. Then the signal in figure 1.16a has a frequency of 125 cycles per second (since there are eight samples per cycle, and 1000/8 = 125). By the same arithmetic, the signal in figure 1.16g, at 11/10 cycles per sample, has a frequency of 1100 Hz, which lies above half the sampling rate; after resynthesis it folds down to 1100 - 1000 = 100 Hz. The frequency of the original signal in figure 1.16g has thus been changed by the conversion process. This represents an unacceptable change to a musical signal, which must be avoided if possible.
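A quick numerical check of this prediction (my own, not from the text): at a 1000 Hz sampling rate, the 1100 Hz waveform of figure 1.16g produces exactly the same sample values as a 100 Hz waveform, so the two are indistinguishable once sampled.

```python
import numpy as np

fs = 1000.0                                      # samples per second
t = np.arange(20) / fs                           # twenty sample instants
above_nyquist = np.sin(2 * np.pi * 1100.0 * t)   # 11/10 cycles per sample
folded_alias = np.sin(2 * np.pi * 100.0 * t)     # |1100 - 1000| = 100 Hz
print(np.allclose(above_nyquist, folded_alias))  # True: identical sample values
```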
The rate at which samples are taken, the sampling frequency, is expressed in terms of samples per second. This is an important specification of digital audio systems. It is often called the sampling rate and is expressed in Hertz (Hz).

The digital signal does not show the value between the bars. The duration of a bar is extremely narrow, perhaps lasting only 0.00002 second (two hundred-thousandths of a second). This means that if the original signal changes "between" bars, the change is not reflected in the height of a bar, at least until the next sample is taken. In technical terms, we say that the signal is defined at discrete times, each such time represented by one sample (vertical bar).

Part of the magic of digitized sound is that if the signal is band-limited, the DAC and associated hardware can exactly reconstruct the original signal from these samples! This means that, given certain conditions, the missing part of the signal "between the samples" can be restored. This happens when the numbers are passed through the DAC and a smoothing filter. The smoothing filter "connects the dots" between the discrete samples. Thus, the signal sent to the loudspeaker looks and sounds like the original signal.
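As an illustration of how the "dots" can be connected, here is a minimal sketch assuming an ideally band-limited signal; the idealised smoothing filter is represented by sinc interpolation, and the function name sinc_reconstruct is mine, not a library routine.

```python
import numpy as np

def sinc_reconstruct(samples: np.ndarray, fs: float, t: np.ndarray) -> np.ndarray:
    """Whittaker-Shannon interpolation: rebuild the signal at arbitrary times t
    by summing one sinc per sample, centred on that sample's instant and
    weighted by its value. (Truncated to the samples given, so it is only
    approximate near the ends of the recording.)"""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

# Example: recover values between the samples of a 100 Hz sine sampled at 1 kHz.
fs = 1000.0
n = np.arange(64)
samples = np.sin(2 * np.pi * 100.0 * n / fs)
t_between = (n[:-1] + 0.5) / fs                  # midpoints between samples
recovered = sinc_reconstruct(samples, fs, t_between)
expected = np.sin(2 * np.pi * 100.0 * t_between)
print(np.max(np.abs(recovered - expected)))      # small; exact reconstruction
                                                 # needs an infinite sample stream
```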
Basics of Sound
  Signals
    Frequency and Amplitude: Time-domain and Frequency-domain Representation
    Phase
Analog Representations of Sound
Digital Representations of Sound
  Analog-to-digital Conversion
  Binary Numbers
  Digital-to-analog Conversion
  Digital Audio Recording versus MIDI Recording
  Sampling
  Aliasing (Foldover)
  Anti-imaging Filters
  The Sampling Theorem: Ideal Sampling Frequency
I believe that Martijn had done a lot of preparation before the class, and I picked up keywords such as phenomenology, acoustic space, and formant. He used pictures to explain these theories, which made me understand them clearly; I love pictures. These keywords not only taught me something about the theory, but also made me start to consider the relationship between these ideas and my own composition, and to think about how to use these factors when making and mixing music. The content of that morning was wonderful because of our interaction. The composer used the natural acoustic characteristics of a specific room, repeating the sentence "I am sitting in a room…". That real-time experiment made me understand what Martijn said: "…we are listening to the room, not to the sound…". It told me that not all music is made from sound materials; any environment around us can become material for my work. It also made me want to improve the acoustics of my own room. Unfortunately, I had already recorded many songs in that environment. In the future I must consider the recording environment and make it as good as possible before recording. Martijn reconstructed this historical experiment in Max/MSP: he made a patch that raised the resonant frequencies to the extreme, until we could hear only the rhythm of the sound. I think this was very meaningful in light of my five days of studying Max/MSP beforehand. By the way, I found that Max/MSP is very powerful; it can produce whatever "effect" people want, which is interesting.
In my opinion, these words are not only a course summary but also a good way to absorb important theory that goes beyond commercial music making. In the earlier courses we were just listening to Martijn, but today we had some interaction during the lesson; I think the course is becoming more and more interesting. Once we understood some of the theory, new knowledge became easier for me to accept. On the word "Alpha", I have to say something about the video Martijn showed us. It uses the alpha waves from the brain of a sleeping man on stage (Alvin Lucier himself) to control resonant frequencies that excite the percussion instruments. The performance gave me a lot of inspiration and raised many questions. Our brain is so complicated; how could Lucier do such amazing work? I think he must have applied a great deal of physical theory in this performance, and it is necessary for me to learn some physics.
I recently took part in the Max course organized by teacher 张睿博, and I learned a great deal. It is unlike how I used to make music with commercial software. Making music with commercial sound libraries has both advantages and disadvantages: it is certainly convenient, but because the presets are so varied, choosing a sound can also take a lot of time. After learning how to write music in Max, I realized that this kind of programmable software offers many ideas and techniques for composing: you can build the musical forms and effects you want from different programs and formulas, which both exercised my logical thinking and strengthened my analytical ability. At first we only studied the essential object names, their roles in a patch, and how to connect them; gradually, once we had gathered all these terms, we followed Graham's courseware and used the theory we had learned to modify patches. By changing the parameters and connections of each part, I obtained different timbral effects, which was very satisfying, and over the short five-day course I also became very interested in Max and this kind of interactive music. My only regret is that throughout the course we were only changing parameters in existing patches and never wrote a completely independent interactive patch of our own. Although the course is over, I will keep working hard at writing patches.