Also, I think "Intelligent" is a pun: it carries both the sense of "artificial intelligence" and the sense of "the mind", hinting that this music is electronic (digital) music that sets the mind, rather than the body, dancing.
Many IDM composers dislike the name IDM (it is easily misunderstood); among them, Aphex Twin thought a better name would be "Braindance".
Personally, I prefer to compare IDM with the Ambient Music proposed by Brian Eno. Their origins differ (IDM was inspired by EDM; Eno's Ambient Music was inspired by Muzak), but their compositional methods and listening experience are similar in many respects. Ambient Music is sometimes compared with Erik Satie's Furniture Music, on the grounds that both propose passive listening; Eno, however, wrote that Ambient Music is music that can easily be ignored and yet can also be listened to actively. Seen that way, it is very much like IDM.
"The (intelligent) tag stems from the Artificial Intelligence compilations, released by Warp in the early 1990s." IDM的名称来自于Warp的"Artificial Intelligence"唱片集(compilations)。并不是“人工智能编辑”,IDM也是由作曲家创作,并不是机器自动创作!该唱片的封皮是一个放在客厅的沙发,并且在下面注明"electronic listening music",其暗示,这个系列是在家里,坐在沙发上听的电子音乐(不是用来跳舞的电子舞曲),或者(个人观点),某种程度上来说,是一种在家播放的背景音乐。
Hitlike asked on the CCCS mailing list: "Isn't 'independent artist, or avant-garde artist' itself a label?" I think independence is defined relative to the commercial, and the avant-garde relative to the currently popular art. The independence and avant-garde spoken of here are viewed along the dimensions of category and of time; they name a broad range, not a specific label.
I once discussed the question of genre with a Chinese artist. She said she did not know her genre, and perhaps belonged to none. In her view, the genre of an artwork is like a label: sometimes it is a summary made by later generations; sometimes the author attaches a style or genre tag to the work to promote it better. She also felt that an independent or avant-garde artist need not care much about genre, still less create work to follow any particular genre.
The study of the nature and impact of recent and emerging digital technologies and our interactions with them in everyday life, and in cultural, economic and political terms. This sentence describes what new media theory covers, but it omits the subject.
The word rendered as "磁带" (tape) is "autonomous tapes" in the original, but the standard dictionary sense of "autonomous" (self-governing; having autonomy) is clearly not what is meant here. How best to translate the word remains to be worked out.
How should the scope of ElectroAcoustic be defined? Improvisation has entries on Ears and CHEARS, but Minimalism, which is equally a style or genre, has no entry.
This word used to be translated as "遮掩" (to cover up). The CMT translation is indeed better; it has been changed to "掩蔽" (masking).
Functions:

--Defining a Function:
"When someone gives you some money, add the amount given to you to the amount that you already have and report that number back to me." There are a few parts there that can be broken out:
* An amount of money that you are given
* Adding the amount given to you to the amount that you already have
* Reporting that number

Then you create a function that takes money, adds it to myBank, and returns that value:

int moneyReceived(int money){
    myBank += money;
    return myBank;
}

--Passing Parameters to a Function:
Once the function is defined, it's time to call it. To call the function, you need to pass the correct kinds of parameters to it, which requires knowing what parameters the function requires. In the moneyReceived() function example, the required parameter is a single integer:

moneyReceived(4);
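To see the function in context, here is a minimal, self-contained C++ sketch (my own; the starting balance of 10 is an invented value):

#include <iostream>

int myBank = 10; // running balance; the starting value is an assumption

// Takes money, adds it to myBank, and returns the new balance.
int moneyReceived(int money){
    myBank += money;
    return myBank;
}

int main(){
    std::cout << moneyReceived(4) << std::endl; // prints 14
    return 0;
}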
In SuperCollider, to loop a process you can use "do"; to run a series of processes automatically, you can use "Task". Here is a simple example:

(
// Define a sound; in this example a sine wave with two arguments: frequency (freq) and amplitude (amp).
SynthDef("sine", { arg freq, amp = 0.2;
    var osc;
    osc = SinOsc.ar(freq, 0, amp).dup;
    Out.ar(0, osc);
}).send(s);

// The automatic process below plays the sound defined above, picking the frequency
// at random from 100 Hz, 200 Hz, and 300 Hz, at 3-second intervals, repeated 3 times.
t = Task({
    3.do({
        a = Synth("sine");
        a.set(\freq, [100, 200, 300].choose.postln);
        3.wait;
        a.free;
        3.wait;
    });
});
)

t.start;
t.pause;
t.resume;
t.reset;
t.stop;
In SuperCollider, to switch the audio driver and the output channels of the audio interface, you can use the following code:

// Taking the Fireface UFX audio interface as an example; here the selected driver is MME, with output channels 3 and 4.
s.options.device_("MME : Analog (3+4) (RME Fireface UFX)");

// For multichannel ASIO, try the following code:
s.options.device_("ASIO : ASIO Fireface");

(
{Out.ar([0, 1, 2, 3, 4, 5], PinkNoise.ar(0.1))}.play;
)

// for 5.1 system in Room 310
(
{Out.ar([0, 1], PinkNoise.ar(0.1))}.play;
{Out.ar([2], PinkNoise.ar(0.2))}.play;
{Out.ar([3], SinOsc.ar(55, 0, 0.5))}.play; // 55 Hz sine at amplitude 0.5 (mul argument)
{Out.ar([4, 5], PinkNoise.ar(0.1))}.play;
)
Arrays:
An array contains one or more variables in a list.

In Processing:

int[] numbers = new int[3];

* int: The type that the array contains
* []: Signifies that this is an array
* numbers: The name of the array variable
* [3]: How many variables the array will contain

In C++:

int numbers[3] = {1, 2, 3};

* int: The type that the array contains
* numbers: The name of the array variable
* [3]: How many variables the array will contain

Operators:
Operators are the symbols that a compiler uses to perform commands and calculations in your program. Operators let you set variables like the = operator, compare variables like the == operator, add variables like the + operator, and so on. There are three major types of operators. The first operators are the mathematical operators that perform mathematical operations. The second are assignment operators that change values. The third are comparison operators, which determine whether two variables are equal, different, greater than, or less than another variable.

Control Statements:
You'll often want to control how the logic of your program flows. If one thing is true, then you'll want to do something different if it's false. You might also want to do something a certain number of times or until something becomes false. There are two kinds of control statements that you can use to create this kind of logic flow in your application:
1. conditional logic statements, which check whether something is true or false.
2. loop statements, which do something a set number of times or until something becomes false.

--if/then:

int myWeight = 72;
if(myWeight > 100){
    print(" you're getting heavy! ");
} else {
    print(" you're still doing ok ");
}

--for Loop:
The for statement lets us do things over and over again, for a specified number of repetitions. Loops are frequently used to loop through arrays and examine, use, or alter each element within them. This is going to be particularly useful when dealing with the pixels in images or frames of video, as well as sounds, data from the Internet, and many other kinds of information that needs to be sorted through (see the sketch after this outline):

for(int i = 0; i < 10; i++){
    print("i is " + i);
}

* i = 0: Initialize the counter
* i < 10: Continue until this statement is false
* i++: What to do on each pass through the loop

--continue
--break
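Putting the pieces together, here is a short C++ sketch (my own; the array contents are arbitrary) that declares an array, loops over it with for, and branches with if/else:

#include <iostream>

int main(){
    int numbers[3] = {1, 2, 3}; // an array of three ints

    for(int i = 0; i < 3; i++){      // continue until i < 3 is false
        if(numbers[i] % 2 == 0){     // comparison: is this element even?
            std::cout << numbers[i] << " is even" << std::endl;
        } else {
            std::cout << numbers[i] << " is odd" << std::endl;
        }
    }
    return 0;
}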
-Variables:
x = 440
"There's something called x, and it is equal to the number 440."

-Simple Types:
--int:
--float:
--char: This type can contain characters, that is, single letters or typographic symbols such as A, d, and $.
--bool or boolean: Boolean values store two possible values: true and false. In C++ and Arduino, true is actually a 1, and false is actually a 0. It's usually clearer to use true and false, but you can use 1 and 0 if you like.
--string: A string is a sequence of characters. Here are some strings in Processing:

String f = "foo";
String b = "bar";
String fb = f+b; // this will be "foobar"

Here are some strings in C++:

string f = "foo";
string b = "bar";
string foobar = f+" "+b; // this will be "foo bar" note the space w/in quotes

-Arrays

-Operators (a short sketch after this outline exercises a few of these):
+, −, *, /  Adds, subtracts, multiplies, and divides.
%  Modulo; returns the remainder of a division.
=  Assignment; assigns the value on the right to the variable on the left.
+=, −=, *=, /=  Mathematical assignment; adds, subtracts, multiplies, or divides the value on the left by the value on the right and sets the value on the left to that result.
++  Adds 1 to the value to the left.
−−  Subtracts 1 from the value to the left.
==  Compares the value on the left with the value on the right. If they are equal, then the expression is true.
!=  Compares the value on the left with the value on the right. If they are not equal, then the expression is true.
>, >=  Compares the value on the left with the value on the right. If the value on the left is greater than, or greater than or equal to, the value on the right, the expression is true.
<, <=  Compares the value on the left with the value on the right. If the value on the left is less than, or less than or equal to, the value on the right, the expression is true.
&&  Checks the truth of the expression to the left of the operator and to the right; if both are true, the entire expression is true.
||  Checks the expression to the left of the operator and to the right of the operator; if either is true, the entire expression is true.

-Control Statements:
--if/then
--for Loop
--continue
--break

-Functions:
A function is a name for a grouping of one or more lines of code and is somewhat like a variable in that it has a type and a name. It's very much unlike a variable in that it doesn't just store information; a function is something more like an instruction with something that is given at the beginning of the instruction and something that is expected in return.
--Defining a Function
--Passing Parameters to a Function
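A short C++ sketch (mine; the values are arbitrary) exercising a few operators from the table above:

#include <iostream>
#include <string>

int main(){
    int a = 7, b = 2;
    std::cout << a / b << std::endl;   // 3 (integer division)
    std::cout << a % b << std::endl;   // 1 (modulo: remainder of 7 / 2)

    a += 3;                            // mathematical assignment: a is now 10
    a++;                               // adds 1: a is now 11
    std::cout << (a == 11) << std::endl;          // 1, i.e. true
    std::cout << (a != 11 || b < a) << std::endl; // 1: the right side is true

    std::string f = "foo";
    std::string g = "bar";
    std::cout << f + " " + g << std::endl; // "foo bar"
    return 0;
}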
Code:
Code is a series of instructions that a computer will execute when the code is run. Writing code is typing code instructions into a text file that will later be passed to a compiler of some sort. To write a program can mean writing source code from scratch, or it can mean putting several programs together and creating a way for them to communicate.

Files:
Code is stored in text files that usually any text editor can open. Arduino projects use .ino files and sometimes .c files. Processing projects use .pde files and sometimes .java files. openFrameworks projects use .cpp and .h files.

Compiler:
A compiler is a program that takes a code file (or many code files) and turns it (or them) into a series of instructions that a computer will run as a program. Most modern computers do not directly process any instructions that you write; instead, you ask a compiler to turn your code into machine instructions. The compiler optimizes machine instructions for the computer to run very quickly, but they would be very difficult for a person to write, so the better step is to write code in a more human-friendly way and convert it to machine-friendly instructions. You can imagine the process of writing code as a series of translations: you tell the compiler what you want the program it will create to do, in a high-level programming language like Processing or C++, and the compiler then creates a machine language file that will run as that program.

Executable:
An executable is a file that can be run as an application. It is the result of writing code and compiling it. An application may consist of many executable files, or it may consist of only one.
This book is a translation of the official introductory Processing tutorial. Some of the wording is a little stiff, but it is well suited to readers whose English is not strong and who want to get to know Processing.
Lesson 9

In recording a voice in a room, there is no escaping the acoustical effect of the surroundings. The sound is contained by the surfaces of the room; and the size, shape, and proportions of the room, and the absorbing and reflecting characteristics of the surfaces, determine the sound field in the room. With the microphone close to my lips, the direct sound dominates. The sound reflected from the walls, floor, and ceiling is weaker because it travels farther and some sound energy is lost at each reflection. The greater the microphone distance, the more the room effect dominates.

I will hold the microphone at a constant distance from my lips so that the direct sound will be unchanged. By walking toward the plywood, the sound reflected from it will increase the closer I get. The entire effect can be simulated for easy study by using a delay device. In the following example, a voice signal is combined with the same signal delayed one-half of a thousandth of a second (or one-half millisecond) with respect to the direct sound. Voice colorations result any time a sound component is combined with itself delayed a bit. The plywood reflector provided such a delay with a single microphone. A hard table top close to the microphone can do the same thing.

If the same sound strikes two microphones separated at a distance, and the outputs of the two are combined, wild frequency response variations will result. At frequencies at which the two components are in phase, the signals add, giving a 6-dB peak. At frequencies at which they are out of phase, they cancel, resulting in a 30- or 40-dB dip in response. Down through the audible spectrum, these peaks and dips drastically change our normally uniform response, and this is what changes the character of the sound. This is commonly called a comb filter because the frequency response peaks and dips look like a comb when plotted.
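To make the delay effect concrete, here is a brief C++ sketch (my own illustration, not part of the lesson): combining a signal with a copy of itself delayed by tau = 0.5 ms has the magnitude response sqrt(2 + 2*cos(2*pi*f*tau)), which peaks at +6 dB where the two components are in phase and dips deeply where they are out of phase. The code evaluates it at a few frequencies:

#include <cmath>
#include <cstdio>

int main(){
    const double PI  = 3.141592653589793;
    const double tau = 0.0005; // 0.5 ms delay between direct and reflected sound

    // |1 + e^(-j*2*pi*f*tau)| = sqrt(2 + 2*cos(2*pi*f*tau))
    for(double f = 100.0; f <= 10000.0; f *= 2.0){
        double mag = std::sqrt(2.0 + 2.0 * std::cos(2.0 * PI * f * tau));
        std::printf("%6.0f Hz : %+6.1f dB\n", f, 20.0 * std::log10(mag));
    }
    return 0;
}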
Lesson 8

The program material we are interested in listening to or recording we shall call the signal. Any sound that interferes with enjoyment or comprehension of the signal is called noise. If the signal dominates, excellent. But if noise dominates, the signal is useless. The relative strength of the signal compared to that of the noise is a very important factor in all communication systems. We even have a name for it: the signal-to-noise ratio. If the desired signal dominates, the signal-to-noise ratio is high, and all is well. At a signal-to-noise ratio of 40 dB, it becomes more difficult to hear the noise. Reverberation is a kind of noise, so we can expect white noise to affect the understandability of speech in a similar way.

The inevitable noise that often interferes with the desired signal is less familiar to us. Electrical current flowing in every transistor, every piece of wire, generates a hissing sound like we have just heard. Fortunately, it is quite weak, but certain circuit faults can make it a problem. Radio frequency signals, such as those from nearby radio and television broadcasting stations, can easily penetrate audio circuits if there is improper shielding. The noise of heating, ventilating, and air conditioning equipment is often of high enough level to degrade a recording or interfere with listening. In listening to live or reproduced music or speech, signal quality can be affected by environmental noise. The mere presence of people in an audience results in noises of breathing, movement, coughing, and rustling paper.
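As a worked example of the ratio (my own numbers, not the lesson's): the signal-to-noise ratio in decibels is ten times the common log of the power ratio, so a signal with 10,000 times the noise power gives 40 dB.

#include <cmath>
#include <cstdio>

// Signal-to-noise ratio in dB from signal and noise powers (power-like quantities).
double snrDb(double signalPower, double noisePower){
    return 10.0 * std::log10(signalPower / noisePower);
}

int main(){
    std::printf("%.1f dB\n", snrDb(10000.0, 1.0)); // 40.0 dB: the signal dominates
    std::printf("%.1f dB\n", snrDb(1.0, 1.0));     //  0.0 dB: signal equals noise
    return 0;
}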
Unit 7

The most dependable cues are those obtained by comparing sounds reaching the two ears. Directional information is contained in a comparison of the relative levels of sound falling on the two ears. Another cue used by the ear to localize sound sources is based on the time of arrival of sound at the two ears. If a sound arrives at one ear later than the other, we say that there is a phase difference between the two signals.

In the previous unit, we discussed beats and their relationship to consonance and dissonance of sounds. Those beats are strictly a physical phenomenon, occurring outside our bodies as the two tones pull in and out of phase. There are also so-called binaural beats which are strictly subjective or psychophysical. These give evidence that our ears do indeed perceive phase differences.
Unit 4

A non-linear system alters the input waveform and delivers a distorted signal at the output. The distorted output contains frequency components not in the input signal. Non-linearity always means distortion, and distortion always adds to the input signal something new and undesirable that wasn't there before. As with amplifiers and other audio equipment, this is also true of the human auditory system.

Another method for detecting the presence of aural harmonics is by playing the fundamental frequency into the left earphone and injecting a probe tone at the frequency of a harmonic in the right ear and listening for a binaural beat. These aural harmonics are produced by non-linearities of the ear and do not exist as external signals. Thus, they are inaccessible to any physical measuring instruments. However, their presence can be verified through these binaural beats. In fact, knowing that the strongest beats are produced when the two signals are close to the same amplitude, scientists are able not only to detect the presence of aural harmonics but also to estimate their amplitudes.

When two tones are introduced into a non-linear system, a series of so-called combination tones is generated. If the higher tone has a frequency H and the lower tone the frequency L, a difference tone of frequency H minus L and a summation tone of frequency H plus L are produced. These are called the first-order sum and difference tones. The situation becomes much more complex as second-order distortion products are considered. For example, these include frequencies of 2H minus 2L, 2H minus L, 2L minus H, 2L plus H, and so on. In fact, all these distortion products are similar to what an electronics engineer measures in an amplifier by what is called the cross-modulation method.

In addition to the simpler aural harmonics which we explored with a single tone injected into our auditory system, we have also detected several combination tones resulting from injecting two tones into the system. With music, many more than two tones fall on the ear simultaneously. Just imagine the horde of aural harmonics and combination tones filling up the audible spectrum! The masking of higher frequencies by lower ones makes some of these distortion products inaudible. On the other hand, we must remember that distortion products interact with each other, thus creating even more distortion products. But these will be at progressively lower levels.

In summary, we can say that when modest levels of sound fall on our ears, all of these distortion products generated in our heads are at very low levels. For louder sounds, however, the levels of distortion do become appreciable. In other words, at low levels, the ear is quite linear; at high levels, there is a departure from linearity.
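A small sketch (my own; the tone frequencies are invented for illustration) that enumerates these combination tones for H = 1000 Hz and L = 700 Hz:

#include <cstdio>

int main(){
    const double H = 1000.0, L = 700.0; // two example tones, not from the lesson

    // First-order sum and difference tones:
    std::printf("H - L  = %6.0f Hz\n", H - L);      //  300 Hz
    std::printf("H + L  = %6.0f Hz\n", H + L);      // 1700 Hz

    // Some second-order distortion products:
    std::printf("2H - 2L = %6.0f Hz\n", 2*H - 2*L); //  600 Hz
    std::printf("2H - L  = %6.0f Hz\n", 2*H - L);   // 1300 Hz
    std::printf("2L - H  = %6.0f Hz\n", 2*L - H);   //  400 Hz
    std::printf("2L + H  = %6.0f Hz\n", 2*L + H);   // 2400 Hz
    return 0;
}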
Lesson 6

Non-linear distortion: any distortion results in new frequency components being generated within the equipment which do not rightfully belong to the signal. If the input signal to an amplifier is increased, we expect a corresponding increase in output. The operating region over which this is true is rightly called the linear region. Every audio system, however, has its upper limit. Trying to get 100 watts out of a 10-watt amplifier certainly results in penetration of what is called the non-linear region.

Our first exercise explores the distortion resulting from what is called signal clipping. The simplest way to describe the amount of distortion is to filter out the fundamental and measure the harmonics remaining. These harmonics are then expressed as a percentage of the fundamental. THD = Total harmonic distortion. Ten percent harmonic distortion is considered to be very heavy distortion. It is well for us to note at this point that modern professional power amplifiers and the better high-fidelity, consumer-type amplifiers are commonly rated as low as a few hundredths of 1 percent total harmonic distortion.

A modified form of clipping results from applying too high a signal to a magnetic recorder. This results in what is often called a soft type of clipping as the tape becomes saturated magnetically. Another form of distortion has to do with the slight variations in the speed of the tape in a magnetic recorder or the rotational speed of the turntable as a disk recording is being played. Such speed changes result in unnatural shifts in frequency. This illustrates what is commonly (and understandably) called wow. A similar form of distortion resulting from rapid speed fluctuations is called flutter. It can be caused, among other things, by dirty recording heads in magnetic recorders.
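As a hedged sketch of the measurement just described (filter out the fundamental, express what remains as a percentage): one common definition takes the RMS sum of the harmonic amplitudes relative to the fundamental. The amplitudes below are invented for illustration.

#include <cmath>
#include <cstdio>

// THD: RMS sum of the harmonic amplitudes, expressed as a percentage of the fundamental.
double thdPercent(double fundamental, const double* harmonics, int n){
    double sumSquares = 0.0;
    for(int i = 0; i < n; i++) sumSquares += harmonics[i] * harmonics[i];
    return 100.0 * std::sqrt(sumSquares) / fundamental;
}

int main(){
    double harmonics[] = {0.08, 0.05, 0.02}; // 2nd, 3rd, 4th harmonic amplitudes (invented)
    std::printf("THD = %.1f%%\n", thdPercent(1.0, harmonics, 3)); // about 9.6%
    return 0;
}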
Unit 3

When the tension and length of the string are just right (or in tune, as we would say), bowing the string sets it to vibrating at the standard A, which is defined as 440 vibrations per second, or 440 Hz. This number of vibrations per second is called the fundamental frequency. The 440-Hz vibration (the fundamental frequency) is called the first harmonic, and the 880-Hz vibration is called the second harmonic. The fundamental, or first, harmonic is usually the strongest, and normally the higher the order of the harmonic, the weaker it is. Each instrument in the orchestra has its own particular harmonic signature. The number and relative intensities of these constituent tones determine the quality, or timbre, of that particular instrument.

Research has shown that a prime requisite of the ability to hear out a harmonic in a complex wave is that the separation between adjacent harmonics must be greater than the critical bandwidth. If two adjacent harmonics fall within a common critical band, the ear cannot distinguish one from the other. All this means that the ear is basically like a Fourier analyzer, to use a term familiar to electronics people. We use this analyzing ability of our auditory system all the time without giving it a thought.

In addition to hearing out the harmonics of a complex tone, the ear has remarkable powers of discrimination. With people talking all around us, we can direct our attention to one person, subjectively pushing other conversations into the background. We can direct our attention to one group of instruments in an orchestra or to one singer in a choir. Listening to someone talk in the presence of high background noise, we are able to select out the talk and reject, to a degree, the noise. This is all done subconsciously, but we are constantly using this amazing faculty.
Lesson 5

These harmonics are whole-number multiples of the fundamental frequency. We conclude that the triangular wave certainly has its own distinctive quality. The distinctiveness of its sound is all wrapped up in its harmonic structure. The harmonic content of a signal is the key to its distinctive sound quality. A 1000-Hz square wave has its own distinctive sound. All of its harmonics occur at the same odd multiples of the fundamental as with the triangular wave, but their magnitudes and time relationships are different.

There is a richness to the violin tone which the sine wave certainly does not have. The violin tone is rich in overtones. As we deal with musical tones, it is fitting that we switch over to the musician's terminology. Instead of harmonics, the terms overtones or partials should be used. Overtones dominate the violin sound. Its rich tonal quality depends entirely on the overtone pattern. Each instrument of the orchestra has its own overtone pattern, which gives it its characteristic sound. To achieve high quality in the recording and reproduction of sound, it is necessary to preserve all the frequency components of that sound in their original form. Limitation of the frequency band or irregularities in frequency response, among other things, affect sound quality.

Some musical instruments have overtones that are not whole-number multiples of the fundamental and thus cannot be called harmonics. For such instruments, the general word overtones must be used. Bells produce a wild mixture of overtones, and the fundamental may not even be recognized. The overtones of drums are also not harmonically related to the fundamental, but they are responsible for the unique, rich drum sound.

Summarizing, we have learned that preserving the integrity of the fundamental and overtone pattern of our signal preserves the quality of the signal, and this is what high fidelity is all about.
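To make the odd-multiples point concrete, here is a small C++ sketch (mine, not from the lesson) that builds an ideal square wave from its Fourier series: odd harmonics only, each at amplitude 1/n.

#include <cmath>
#include <cstdio>

// One sample of an ideal square wave built from its Fourier series:
// odd harmonics (1, 3, 5, ...) of the fundamental, each at amplitude 1/n.
double squareWave(double phase, int nHarmonics){
    const double PI = 3.141592653589793;
    double sum = 0.0;
    for(int n = 1; n <= nHarmonics; n += 2)
        sum += std::sin(2.0 * PI * n * phase) / n;
    return sum * 4.0 / PI; // scale so the wave swings roughly between -1 and +1
}

int main(){
    // Print a quarter cycle; adding more harmonics gives steeper, squarer edges.
    for(double p = 0.0; p < 0.25; p += 0.05)
        std::printf("phase %.2f : %+.3f\n", p, squareWave(p, 19));
    return 0;
}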
Unit 2

Fletcher repeated this experiment at many frequencies. The bandwidth of the noise just beginning to affect the masking of a particular tone he called the critical band, effective at that frequency. Fletcher's work encouraged other scientists to explore the shape of these so-called critical bands of the human hearing system. Instead of using two tones, which produced beats, some experimenters used one tone and a band of noise much narrower than the critical band. Simply stated, the closer the probe tone is to the noise band, the easier it is to mask the noise. That's exactly what Fletcher said: only sound energy near a tone of a given frequency is effective in masking it.
Unit 1

Playback at such low levels requires boosting lows and highs to restore a semblance of quality. This is the principle of the so-called loudness equalization, which is practiced by high-fidelity enthusiasts. The equalization required to make low-level music sound right comes close to tracing an equal-loudness contour. By a similar process, other contours are traced, each tied down to a specific sound-pressure level at 1000 Hz. These levels at 1000 Hz are arbitrarily called loudness levels in phons. When you go to an otologist or a clinical audiologist to have your hearing tested, your audiogram is really your own personal minimum audible equal-loudness contour!
Lesson 4

In Lesson 3, we heard the effect of cutting off low- and high-frequency portions of the audio spectrum. These we called lo-cut and hi-cut. Sometimes it is desirable to reduce low- and high-frequency contributions less drastically than cutting them off. For this, the phrase roll-off is used.

By boosting these important speech frequencies by 5 to 10 dB, understandability of speech can be improved, especially with a background of music or sound effects. This is called presence equalization. Clip-on microphones are very popular today, and most of them are capable of reasonably good quality if used properly. One problem with them is that the high-frequency components of the voice are quite directional, tending to miss the microphone. Boosting system response at the higher frequencies can compensate for this loss. By introducing a 10-dB boost at 5000 Hz, the high-frequency components are restored. Microphones clipped to a shirt or necktie are very close to the chest and are prone to pick up chest resonances, which, for a man, tend to overemphasize the voice components in the region of 700 to 800 Hz. Now, to compensate for chest resonance, a dip of 6 dB at 750 Hz is introduced.

Thus, we see that intentional deviations from the idealized flat response may actually yield better recorded sound in the practical sense. Music may be made more pleasing to the ear, and speech may be made more understandable.
Lesson 3

We see that an orchestra generates very significant amounts of energy in the low-frequency range and that the quality of the music suffers markedly if the full low-frequency range is restricted. Narrowing the band even further, from 300 Hz on the low-frequency end to 3000 Hz on the high-frequency end, a telephone-like quality results. Even though the voice quality has greatly changed, it is interesting to note that the voices are recognizable and the words are quite understandable. A small radio receiver tuned to an AM station might pass something like a band from 300 Hz to 5000 Hz.
Lesson 2

Sound Level: a physical quantity (measured with instruments).
Loudness: a psycho-physical sensation perceived by the human ear/brain mechanism.
Decibel: one-tenth of a bel, which is the logarithm of the ratio of any two power-like quantities.
Logarithm: the (common, base-10) logarithm of a number is the exponent of 10 that yields that number.

When these tones are reproduced on a loudspeaker, you will notice that changes in head position change the loudness of the sound due to room effects. For this reason, keep your head in one position during each test. Of course, if you are listening on headphones, room acoustics have no effect.

A change of 10 dB is often considered to be a doubling of loudness, or cutting loudness in half. A 10-dB change in level is less noticeable at 100 Hz but very prominent at 1000 Hz. The minimum discernible level change depends both on the frequency and the level of the sound.
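A worked example of the decibel definition above (my own numbers): ten times the common log of a power ratio.

#include <cmath>
#include <cstdio>

// Level difference in decibels between two power-like quantities.
double dB(double p1, double p2){
    return 10.0 * std::log10(p1 / p2);
}

int main(){
    std::printf("%+.1f dB\n", dB(2.0, 1.0));  //  +3.0 dB: doubling the power
    std::printf("%+.1f dB\n", dB(10.0, 1.0)); // +10.0 dB: often heard as twice as loud
    std::printf("%+.1f dB\n", dB(0.5, 1.0));  //  -3.0 dB: halving the power
    return 0;
}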
Lesson 1

Spectrum: the distribution of the sound energy throughout the audible frequency range.
Frequency: the number of cycles per second = the number of hertz.
k = kilo = 1000
1 kHz = 1000 Hz
10 kHz = 10,000 Hz
20 kHz = 20,000 Hz

The frequency range of audible sound is commonly taken as 20 Hz to 20,000 Hz. To avoid problems commonly associated with the extremes of the audible band, we will keep within a 100-Hz limit at the low end and a 10,000-Hz limit on the high end.

Octave: if one tone has twice or half the number of vibrations per second as another tone, the two tones are one octave apart.
Pure Tone = single frequency

Noise bands are useful in acoustical measurements in rooms because their constantly shifting nature, strange as it seems, gives steadier readings than pure tones. Pure tones, on the other hand, are commonly used in equip.
In translating, I came across two interesting points. First, the original word Reactions, literally "反应". Judging from the pieces played in the previous days of the course, I take it to mean technology's influence on music, since tape music begins to appear in this lecture. Second, the original "You and your public / things to hold on to" was freely rendered by Zhang Ruibo (张睿博) as "Where should you go from here?".
October 8: Morning: Introduction / What does digital culture mean? / Kinds of music and their social roles. Afternoon: Into the 20th century: national approaches vs. global approaches.
October 9: Morning: Global / local. Afternoon: Early radicalism.
October 10: Morning: Art and life vs. art for art's sake. Afternoon: Radicalism (part 2).
October 11: Morning: New dimensions in music. Afternoon: Formalism.
October 12: Morning: Technology, lecture 1: information / communication. Afternoon: Technology's influence on music (Reactions).
October 13: Morning: Sound-based music vs. note-based music. Afternoon: Early electronic music.
October 14: Morning: Technology and performance. Afternoon: Contemporary art music.
October 15: Morning: Technology, lecture 2: digital culture. Afternoon: "Mainstream" electronic music.
October 16: Morning: Where should you go from here? Afternoon: "Non-mainstream" electronic music.
October 17: Morning: The future / summary. Afternoon: Experimental pop music and diverse fusions.
This concert presented three classic electronic works: Pierre Boulez's Anthèmes II, Kaija Saariaho's Nymphea, and Steve Reich's Different Trains. Serialist works are hard to approach by ear, however strict the logic on the page; the violin parts in the two first-half works sounded rather harsh. Different Trains, in the second half, felt much closer to the audience. Perhaps that is why minimalist works remain popular today.
The concert was divided into two halves. All the works in the first half were sung live by Juliana Snapper, with the live electronics controlled in real time by Miller Puckette. Snapper's brilliant live performance was one of the main attractions of the first half (she did not just sing; there was a strong theatrical element). At the same time, the human voice gives the listener more to hold on to than instruments do. By contrast, the 45-minute mixed work for violin and electronics in the second half made it hard for me to stay focused on the piece.
I think "sound" and "music" are better off developing separately in China. Sound art is related to music, but it arrived in China late; by the time it entered China, the two were already almost separate fields. Abroad, the two have complemented each other and developed together; in China they remain far apart!
If you say "come listen to my music" and then play 《十面埋伏·声谱图》 (a spectrogram piece on Ambush from Ten Sides), what the audience hears will diverge greatly from what they expected; they may say, "What is this? It sounds awful!" If instead you tell the audience in advance that it is a sound work, they may well say after listening, "How interesting!"
This piece was composed by Luciano Berio in 1966, but the earliest year CHEARS supported was 1979. (CHEARS has since been upgraded and now supports years back to 1948.)