The earlier translation was too superficial; Mungo dubbed it "译味深长" (a pun on 意味深长, roughly "deep in translated meaning").
Note on terminology: the word "vectors" is translated as 向量 in mathematics and as 矢量 in physics.
MySQL (beta) at CHEARSdotinfo.co.uk

Unit 6

Notice that there is a considerable increase in loudness, because the two sine waves have a time relationship described as being in phase. Because the two tones are in phase, they add constructively, and the resulting combination has twice the amplitude of either tone alone. Those knowledgeable in electronics will recognize this as a 6-dB increase in signal level.

Now, let's see what happens if the same two 500-Hz sine waves are combined out of phase, or in phase opposition; that is, when one waveform goes positive, the other goes negative, and vice versa. That four-second dead spot in the middle was caused by adding the second, equal 500-Hz tone out of phase. When the two signals are in phase opposition, one cancels the other out and the resultant output is zero.

If the two tones differ slightly in frequency, they drift alternately in and out of phase, and the loudness swells and fades periodically: these are beats. The frequency of the beat is determined by the difference between the frequencies of the two tones that are beating together. As the difference between the two tones is increased so that the beat frequency rises to about 20 Hz, the ear becomes unable to discern the individual beats. As the beat frequency is increased beyond 20 Hz, a harsh, rattling sound is heard. Note this roughness well! It is the secret ingredient of what we consider to be unpleasant musical effects.

As with so many other factors of human hearing, the critical band seems to be involved in how we hear two tones sounded together. If the two tones are a critical bandwidth apart, they are heard not as beats or roughness but are resolved harmoniously as two separate tones. To avoid the distraction of the beats and the region of roughness, and for the ear to separate the two tones, they must be at least a critical bandwidth apart. All this leads us to the conclusion that when several tones are sounded simultaneously, the result may be considered either pleasant or unpleasant. Another way of describing these sensations is with the terms consonant and dissonant.
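The in-phase and phase-opposition cases above can be checked numerically. Here is a minimal sketch in plain Python (the 8 kHz sample rate is an arbitrary choice for illustration): summing two equal in-phase 500-Hz sine waves raises the RMS level by 6 dB, while summing them in phase opposition cancels them completely.

```python
import math

FS = 8000          # sample rate in Hz, chosen arbitrarily for this sketch
F = 500            # tone frequency from the text

def tone(freq, phase, n):
    """Generate n samples of a unit-amplitude sine wave."""
    return [math.sin(2 * math.pi * freq * i / FS + phase) for i in range(n)]

def rms(x):
    """Root-mean-square level of a sample list."""
    return math.sqrt(sum(s * s for s in x) / len(x))

n = FS                          # one second of signal
a = tone(F, 0.0, n)
b_in = tone(F, 0.0, n)          # same tone, in phase
b_out = tone(F, math.pi, n)     # same tone, in phase opposition

ref = rms(a)
in_phase = rms([x + y for x, y in zip(a, b_in)])
out_phase = rms([x + y for x, y in zip(a, b_out)])

print(round(20 * math.log10(in_phase / ref), 1))  # 6.0 -> the 6-dB increase
print(out_phase < 1e-9)                           # True -> complete cancellation
```

Changing one frequency to 505 Hz instead of shifting the phase would produce the 5-Hz beat described next.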
In this psychoacoustical context, when we say consonance, we mean tonal or sensory consonance. This is distinguished from the musician's use of the word, which depends on frequency ratios and musical theory. Here, we are referring to human perception; of course, in an ultimate sense, the two definitions must come together. The audibility of these roughness effects does not depend on musical training.

This puts the effect of combining two tones in proper perspective. If their frequencies are separated by a critical bandwidth or more, the effect is consonant. If less than a critical band separates the tones, varying degrees of dissonance are heard. The most dissonant (that is, the least consonant) spacing of two tones is about one-fourth of a critical bandwidth.

Musicians define an octave as a musical interval whose two tones are separated by eight scale tones. Tones separated by an octave have an essential similarity recognized by everyone. There is a very good reason for the octave's consonance, which directs our attention once more to the critical band. An octave represents a frequency ratio of two to one. This means that, up through the audible spectrum, the harmonics of the two notes played together are either well separated or coincident. In fact, the sound of the higher note reinforces that of the lower one. The result is consonance: full, rich, and complete.

The perfect fifth is only slightly less pleasant than the octave interval. Comparing the fundamentals and harmonics of its two tones, we note that they are either separated by more than a critical bandwidth or lie at essentially the same frequency, both factors contributing to consonance. Comparing the frequencies of the fundamentals and harmonics of middle C and B-flat, however, we fail to find coincident pairs as we did with the perfect fifth interval. For this minor seventh interval, we find numerous harmonics of C and B-flat close enough together to produce some roughness.
Evaluating the separation of harmonics, we find many near misses: pairs that are not coincident but less than a critical bandwidth apart, contributing to the roughness. We see that the perfect fifth is close to perfect, that is, close to the consonance of the octave interval. The minor seventh has some intervals separated by less than a critical bandwidth and is hence somewhat dissonant. We conclude that the critical band approach has value in explaining, or even predicting, the degree of consonance an interval exhibits. Dissonance can be considered another dimension of musical creativity to be explored. Our purpose in this analysis is only to relate consonance and dissonance to the critical bands of the human auditory system.
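The harmonic-coincidence argument can be sketched numerically. In the toy comparison below (plain Python; the just-intonation ratios 2:1, 3:2, and 16:9, the 1-Hz coincidence tolerance, and the 20-kHz spectrum limit are my assumptions, not values from the text), the octave and the perfect fifth show many coincident harmonic pairs, while the minor seventh shows far fewer.

```python
import math

def harmonics(f0, fmax=20000.0):
    """All harmonics of f0 up to fmax (roughly the audible spectrum)."""
    return [n * f0 for n in range(1, int(fmax // f0) + 1)]

def coincident_pairs(f1, f2, tol=1.0):
    """Count harmonic pairs of the two tones lying within tol Hz of each other."""
    return sum(1 for a in harmonics(f1) for b in harmonics(f2) if abs(a - b) < tol)

C4 = 261.63                                  # middle C (equal temperament, approx.)
octave = coincident_pairs(C4, C4 * 2)        # 2:1 ratio
fifth = coincident_pairs(C4, C4 * 3 / 2)     # 3:2 ratio (just intonation)
seventh = coincident_pairs(C4, C4 * 16 / 9)  # 16:9 minor seventh (just, assumed)

# The seventh yields far fewer coincident pairs than the octave or the fifth,
# matching the roughness ranking described in the text.
print(octave, fifth, seventh)
```

A fuller model would also count the "near misses" within a critical bandwidth, which is what actually produces the roughness.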
Unit 5

The echo is no longer distinct because of the amazing integrating effect of our auditory system. This is called the Haas effect by audio engineers and the precedence effect by psychologists.

In a modest-sized space like your living room, a classroom, a studio, or a control room, if someone speaks to you from across the room, you have no difficulty sensing the direction of the voice, even if you are blindfolded. That is because the direct sound, which arrives first, gives the directional cue, even though it is followed by an avalanche of reflected sounds. The first sound to arrive tells us from which direction it comes: this is the law of the first wavefront. And it all happens in a fraction of a thousandth of a second.

While discrete echoes of speech become discernible with a delay of around 40 milliseconds, echoes of single, short-duration impulse sounds will be audible with delays as short as four milliseconds. Sustained, or slowly varying, sounds, on the other hand, may require a delay of as much as 80 milliseconds before discrete echoes are noticeable.

A famous musician once said, "There is no such thing as good music outdoors." He had in mind the reflections from the walls and other surfaces of the concert hall, which become very much a part of the music. The lack of such reflected energy outdoors, in his opinion, degraded the quality of the music.
Lesson 7

Reverberation may be either friend or enemy; it can improve our program material or degrade it. Because we are not normally conscious of reverberation as a separate entity, it is well that we pause to dissect and define it. If reverberation is almost totally eliminated, the same speech is understandable, but it sounds rather "dry" and uninteresting. Too little reverberation is as unpleasant and unnatural as too much.

Reverberation is a direct result of the relatively slow speed at which sound propagates. In a confined space, sound bounces repeatedly between the surfaces; some energy is lost at each reflection, and it might take several seconds for successive bounces to reduce the level of the sound to inaudibility. In other words, it takes a little time for the sound to die away. Reverberation time is, roughly, the time it takes a very loud sound to die away until it can no longer be heard. To be more precise and scientific about it, reverberation time is defined as the time it takes a sound, suddenly cut off, to decay 60 dB.

If an orchestra were to play outdoors, its sound would tend to be thin and dry. The same orchestra playing in a hall with a reverberation time of about one second sounds natural and pleasing.

The quality of speech is also greatly affected by the amount of reverberation present. The understandability and naturalness of speech are actually better at shorter reverberation times. The identification of each word depends upon identifying the low-level consonant at its end, and anything that interferes with these low-level consonants reduces the intelligibility of the words. Reverberation is one such thing, but not the only one: the slow trailing off of the sound of the first part of each word covers up the consonant at the end of the word. Reverberation is very much a part of our enjoyment of music and our understanding of speech.
It affects the quality of both speech and music, but it is very important to have the right amount.
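The 60-dB definition can be made concrete with a little arithmetic. Assuming the decay is a simple exponential with time constant tau (a textbook idealization; real room decays are messier), the level falls linearly in decibels, and RT60 works out to about 6.91 times tau:

```python
import math

def rt60_from_tau(tau):
    """Reverberation time of an exponential pressure decay p(t) = p0*exp(-t/tau).
    Level in dB is -20*log10(e)*t/tau, so a 60-dB drop takes
    60 / (20*log10(e)) * tau, i.e. about 6.91 * tau seconds."""
    return 60.0 / (20.0 * math.log10(math.e)) * tau

# A time constant of roughly 0.145 s corresponds to the one-second
# hall mentioned in the text.
print(round(rt60_from_tau(0.145), 2))  # ~1.0 s
```

This is only the definition turned into arithmetic; predicting tau from a room's volume and absorption is a separate problem (the Sabine equation).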
Chinese version: http://pages.uoregon.edu/emi/chinese/index.php
English version: http://pages.uoregon.edu/emi/index.php
Send effects:
Send-FX 1: small room/ambience reverb; application: drums and some bass; pre-delay: none to slight; treble roll-off: slight
Send-FX 2: medium-sized room (gated); application: snare; pre-delay: 0; treble roll-off: none
Send-FX 3: medium-sized room, 1.8 s, plus stereo expander; application: percussion; pre-delay: 14 ms; treble roll-off: medium
Send-FX 4: large reverb; application: background events (choir/guitar); pre-delay: 30 ms; treble roll-off: strong
Send-FX 5: vocal chamber; application: lead vocals; pre-delay: 0; treble roll-off: none
Send-FX 6: modulation delay; application: keyboard & triangle; pre-delay: n/a; treble roll-off: n/a
Send-FX 7: delay; application: Rhodes & triangle; pre-delay: n/a; treble roll-off: n/a
Dynamics processors, by type: gate, expander, compressor, limiter. Main parameters: threshold and ratio; transient controls: attack, hold, release.
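How threshold and ratio interact can be shown with a small sketch of the static "gain computer" of a downward compressor (a minimal hard-knee model of my own for illustration, not any particular plugin). Attack, hold, and release would smooth this gain over time; a limiter is the limiting case of a very high ratio, and a gate/expander applies reduction below the threshold instead of above it.

```python
def gain_db(level_db, threshold_db, ratio):
    """Static downward-compressor gain in dB.
    Below threshold: unity gain. Above: the output rises only
    1 dB for every `ratio` dB of input, so the rest is gain reduction."""
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return over / ratio - over   # negative = gain reduction

# A signal 10 dB over a -20 dB threshold at 4:1 keeps 2.5 dB of overshoot,
# i.e. 7.5 dB of gain reduction.
print(gain_db(-10.0, -20.0, 4.0))  # -7.5
print(gain_db(-30.0, -20.0, 4.0))  # 0.0 (below threshold, untouched)
```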
Sub-bass sector: 0 to 25 Hz (subharmonic/infrasonic sector). A good mix requires at least as many low-cut filters as there are tracks!
Bass sector: 25 to 120 Hz
Low mid range: 120 to 350 Hz ("mid-range misery")
Mid range: 350 Hz to 2 kHz
Upper mid range: 2 to 8 kHz
Trebles: 8 to 12 kHz
Upper trebles: 12 to 22 kHz (air band)
Familiarize yourself with the sweet-spot frequencies of the various instruments. First cut, then boost: cut rather steeply, boost over a broad frequency band. Almost any change in one band will affect the sound in the other bands.
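The low-cut filters recommended above can be sketched as a first-order RC-style high-pass (a minimal plain-Python model for illustration, not a production filter): it passes the mids and highs while draining away sub-bass rumble and DC offset.

```python
import math

def one_pole_highpass(x, cutoff_hz, fs):
    """First-order high-pass ('low-cut'): y[n] = a*(y[n-1] + x[n] - x[n-1]),
    with a = 1 / (1 + 2*pi*fc/fs), the discrete RC high-pass recurrence."""
    a = 1.0 / (1.0 + 2.0 * math.pi * cutoff_hz / fs)
    y, y_prev, x_prev = [], 0.0, 0.0
    for s in x:
        y_prev = a * (y_prev + s - x_prev)
        x_prev = s
        y.append(y_prev)
    return y

# Feed in pure DC (a 0-Hz "signal"): the low-cut drains it to zero.
out = one_pole_highpass([1.0] * 1000, 25.0, 8000.0)
print(abs(out[-1]) < 1e-6)  # True
```

Real console and DAW low-cuts are usually steeper (12 to 24 dB/octave), which would mean cascading several such stages.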
Create an outline of your panning strategy prior to mixing, taking into account the frequency distribution in the left, center, and right sectors. Anything that is not bass, bass drum, snare, or lead vocal has no place at the center. Instruments occupying the same or overlapping frequency sectors should be placed at opposite ends of the panorama, complementing each other. Once you have established the basic static panning but still find parts of the song unsatisfactory in their transparency, it is always worth trying to remedy this with panning automation. Well-planned and carefully automated panning often creates greater clarity in the mix than EQ does, and should be preferred to unnecessary EQing. If an event is drowned in the sound mush, your first step in looking for a solution should therefore be the panning knob, and only after that the EQ. Used in a controlled fashion, widening the stereo base can create extra space in the horizontal dimension and thus ensure a clearer sound. Check the result using the correlation meter and the mono switch on your monitoring board.
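For the static panning itself, most mixers apply some form of constant-power pan law; here is a minimal sketch using the common sine/cosine law (my assumption; the text does not specify which law a given console uses):

```python
import math

def constant_power_pan(pos):
    """pos in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Maps pos to an angle in [0, pi/2] and returns (left_gain, right_gain)
    with L^2 + R^2 = 1, so perceived power stays constant across the sweep."""
    theta = (pos + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

l, r = constant_power_pan(0.0)
print(round(l, 4), round(r, 4))  # center: both 0.7071, i.e. -3 dB per side
```

The -3 dB center dip is exactly what keeps a source from jumping in loudness as automation sweeps it across the panorama.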
“……Until today's lesson, I never knew that Nono's works contained elements of electronic music. His piece For Solo Tuba and Live Electronics (1987) is very unique. I am not sure whether it was the first piece to use a live instrument in electroacoustic music, but I believe it has had a deep influence on the electroacoustic music of today. Even though today's technology far surpasses that of the past, this piece, using extremely limited technical means such as reverb and delay, still sounds strikingly fresh!……”
“……At the 2007 Beijing Modern Music Festival, the Central Conservatory of Music invited Alvin Lucier……His Music for Solo Performer for Enormously Amplified Brain Waves and Percussion……was the first piece to feature sounds generated by brain waves in live performance, and such biological stimuli played an increasingly important role in Lucier's subsequent work and performances……”
“……We are listening to the room, not to the sound……Not all music is developed from sound materials; any element of the environment around us can serve as material in music. It also makes me want to improve the acoustic quality of my own room……”
“I guess I'm trying to help people hold shells to their ears and listen to the ocean again.” Music is about natural acoustical phenomena: the way sounds act, the way sounds ARE. The phenomenology of sound, and the revelation of its natural characteristics and processes, is music-making.
“I'm NOT sitting in a room; my avatar is.” Here is a virtual version of “I am sitting in a room”: a virtual person in a virtual room. “……With an ordinary DAW: add an IR (impulse response) reverb to one track and route it to a second track……repeat the process through many tracks……It seems that only the IR reverb does the work; any other reverb module merely makes the reverb larger……As the number of generations increases, the IR version gets louder while the non-IR version gets quieter……”
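The generation-by-generation process the comment describes is, at bottom, repeated convolution with the room's impulse response. A toy plain-Python sketch (the three-tap "IR" is invented for illustration; real IRs are thousands of samples long): because this particular IR's taps sum to more than 1, each generation comes out louder, matching the observation above, while an IR summing to less than 1 would make each generation quieter.

```python
def convolve(x, h):
    """Direct-form convolution: the signal x passed through impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

ir = [1.0, 0.6, 0.3]           # toy impulse response (taps sum to 1.9)
signal = [1.0]                 # start from a unit impulse
levels = []
for generation in range(3):    # route through the "room" three times
    signal = convolve(signal, ir)
    levels.append(sum(abs(s) for s in signal))

print(levels)                  # each generation is 1.9x the previous one
```

This is only a caricature of the DAW routing described in the quote, but it shows why the choice of reverb, and its overall gain, dominates after many generations.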