Max/MSP is a graphical environment for creating music, audio, and multimedia programs. It consists of three parts: Max is the foundation of the system, handling computational logic and the passing of data; MSP handles all audio signal processing, including capturing external audio input and generating basic waveforms; Jitter handles all video processing.
Ben Houge has 16 years of experience developing audio for video games, including four years at Ubisoft Shanghai working on Tom Clancy's EndWar. He currently teaches video game music at Berklee College of Music in Boston while collaborating with MIT on computer-aided musicological analysis: http://mit.edu/music21/
Max/MSP is a very powerful tool for visual and auditory composition. Ben Houge, a seasoned lecturer who teaches in the USA, will lead this full-day workshop, from morning to afternoon, covering topics in a patch-along style that leaves room for you to explore and ask questions.
About the instructor
I'm an internationally active American audio designer and composer, interested in how people make connections in the digital age, and finding points of convergence between different kinds of information, between disciplines, and between people.
Most of my work is related to videogames in some capacity; I've been a full-time audio designer for games since 1996. For seven years I was at Sierra Entertainment in Seattle, and in 2004 I moved to Shanghai to join the French company Ubisoft, where I remained for four years. The best-known titles to which I've contributed include Leisure Suit Larry 7, King's Quest: Mask of Eternity, Half-Life: Opposing Force, Arcanum: Of Steamworks and Magick Obscura, and Tom Clancy's EndWar.
Outside of my day job, I've worked on various projects that revolve around computer-manipulated sound and aleatory processes. These days I mostly write computer music using Max/MSP, but I've also composed acoustic works, many of them open-ended to varying degrees. I've also written a lot of choral music, mostly sacred, which I think is not quite as unrelated as it might seem at first.
- Basic signal flow
- Build a simple ambient sound behavior
- Basic sample playback
- Build an intermittent system similar to the one I designed for EndWar; this also gets into lots of stuff about data storage (in buffers, in coll objects) and scheduling (this was the basis of the talk I did at the last Expo 74 conference)
- Maybe also a bit of listening to music by folks like John Cage or Earle Brown that incorporates some of these ideas
- Showing the algorithmic filter concepts I've used in two of my pieces (Kaleidoscope Music and Self-Portrait, Dusk, at the Point of Departure)
- Also looking into algorithmic rhythm generation, as featured in Kaleidoscope Music
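The intermittent-system idea above (one-shot ambient sounds triggered at random intervals, the kind of behavior a [metro]/[random] pair driving samples stored in a [coll] produces in Max) can be sketched in ordinary code. This is a hypothetical Python illustration of the concept, not the actual EndWar patch; the sample names and gap values are made up:

```python
import random

def intermittent_events(samples, min_gap, max_gap, duration, seed=None):
    """Simulate an intermittent ambient layer: one-shot sounds fire at
    random intervals, like a randomized [metro] in Max triggering
    playback of samples stored in a [coll]."""
    rng = random.Random(seed)
    t = 0.0
    events = []
    while True:
        t += rng.uniform(min_gap, max_gap)  # wait a random gap
        if t >= duration:
            break
        events.append((round(t, 2), rng.choice(samples)))  # pick a one-shot at random
    return events

# Example: scatter bird calls across a 60-second window
schedule = intermittent_events(["bird1.wav", "bird2.wav", "bird3.wav"],
                               min_gap=2.0, max_gap=8.0, duration=60.0, seed=1)
```

Each run with a different seed yields a different but statistically similar soundscape, which is exactly the appeal of this approach for game ambiences: variety without extra assets.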
- Basic concepts
- Use in "reshuffling" existing samples (wind, water, particle effects, etc.)
- Show application in some of my pieces (Radiospace)
- Play some historical examples (Barry Truax, Xenakis)
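The "reshuffling" of existing samples described above is granular in spirit: short windows of a recording are read back from random positions and overlapped into a new texture. A minimal Python sketch of that idea, using a list of floats as a stand-in for an audio buffer (all names and values here are illustrative):

```python
import random

def reshuffle(buffer, grain_len, n_grains, seed=None):
    """Granular-style 'reshuffling': build a new signal by reading short
    grains from random positions in an existing buffer, each shaped by a
    triangular envelope so grain edges fade to zero (no clicks)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_grains):
        start = rng.randrange(0, len(buffer) - grain_len)
        grain = buffer[start:start + grain_len]
        env = [1.0 - abs(2 * i / (grain_len - 1) - 1.0) for i in range(grain_len)]
        out.extend(s * e for s, e in zip(grain, env))
    return out

# A fake 1000-sample "wind" recording, reshuffled into 10 grains of 50 samples
source = [random.uniform(-1, 1) for _ in range(1000)]
texture = reshuffle(source, grain_len=50, n_grains=10, seed=42)
```

In Max this would typically be done with [groove~] or [play~] reading from a [buffer~]; the sketch only shows the selection-and-envelope logic.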
Video with Jitter
- This is of course a huge topic, but it could be nice to provide a quick introduction
- I can show several of my Jitter pieces (Cycles, Tides, and Seasons; Shanghai Traces; Study for Insomnia) and talk about the different techniques employed
- Look at incorporating real-time data from the internet (as in Cycles, Tides, and Seasons)
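At its core, mapping live internet data onto a patch means fetching a value and scaling it into a musically useful range. A hypothetical Python sketch of the scaling step (the tide value is hard-coded here as a placeholder for a live feed; ranges and parameter names are made up, and this is not the actual mapping used in Cycles, Tides, and Seasons):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear range mapping, analogous to Max's [scale] object: take a
    value in one range (e.g. tide height in meters) and map it into
    another (e.g. a filter cutoff in Hz)."""
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

# Pretend a tide feed reported 1.4 m on a 0-3 m gauge;
# map it to a cutoff between 200 Hz and 2000 Hz.
tide_m = 1.4  # placeholder for a value fetched from a live source
cutoff_hz = scale(tide_m, 0.0, 3.0, 200.0, 2000.0)
```

In Max the fetching side is usually handled by objects or externals that speak HTTP or OSC; once the number arrives, the mapping is this simple.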