Óscar Martín is an experimental programmer and musician. He bases his work on the deconstruction of field recordings, non-conventional synthesis and the creative use of technological errors. He is a digital luthier working in the Pure Data environment, which he uses to develop his own non-conventional tools for sound processing and for real-time algorithmic and generative composition. He can be placed somewhere between Computer Music, the Aesthetics of Error, and generative Noise. He seeks to create virtual sound universes, imaginary soundscapes that encourage active listening and a different sensibility toward the perception of sound phenomena.
The workshop developed by Óscar Martín at Hangar is planned as an experimental, theoretical and hands-on laboratory on the concepts of emergent systems and stochastic and chaotic generative music in installation formats. The participants will work from the contents and the technological development of the installation RdEs ( http://noconventions.mobi/noish/hotglue/?RdEs_eng/ ) and prepare a performance that will be presented to the public on the last day of the workshop.
RdEs is an installation that explores the sonic and compositional possibilities of the concepts of complexity and emergent systems, using a network of “module-particles” that interact with each other and with the environment. Each “module-particle” follows simple individual rules, but is able to generate more complex and sophisticated patterns when combined with the others.
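The idea of simple local rules producing richer collective patterns can be illustrated with a toy sketch. This is not the RdEs code, only a minimal emergence demo: each hypothetical "module" holds a binary state and follows one local rule, yet the network as a whole produces patterns no single module describes.

```python
# Toy illustration of emergence (not the RdEs software): each module
# follows one simple local rule on a ring of neighbours.
def step(states):
    """Each module flips its state when its two neighbours agree (ring topology)."""
    n = len(states)
    return [
        1 - s if states[(i - 1) % n] == states[(i + 1) % n] else s
        for i, s in enumerate(states)
    ]

states = [0, 0, 1, 0, 0, 0, 1, 1]
for _ in range(4):
    states = step(states)
    print(states)  # the collective pattern evolves from purely local rules
```

Running the loop shows how a single seed of activity propagates and recombines, which is the kind of behaviour the installation exploits sonically on a larger scale.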
Alex McLean is a live coder, software artist and researcher based in Sheffield, UK. He is one third of the live coding group Slub, getting crowds to dance to algorithms at festivals across Europe. He promotes anthropocentric technology as co-founder of the ChordPunch record label, the Algorave event series, the TOPLAP live coding network and the Dorkbot electronic art meetings in Sheffield and London. Alex is a research fellow in Human/Technology Interface within the Interdisciplinary Centre for Scientific Research in Music, University of Leeds.
During his residency and workshop at Hangar he will explore alternative strategies for creating live sound and music.
During the workshop “Weaving&Speaking Live Generative Music”, participants will make connections between generative code and our perception of music, using metaphors of speech, knitting and shape, and playing with code as a material. They will take a fresh look at generative systems, not through formal understanding but simply by trying things out.
Over the sessions, participants will work up through the layers of generative code. They will take a sideways look at symbols, inventing alphabets and drawing sound. They will string symbols together into words, exploring their musical properties and how they can be interpreted by computers. They will weave words into the patterns of language, as live generation and transformation of musical patterns. They will learn how generative code is like musical notation, and how one can come up with live coding environments that are more like graphical scores.
Systems like Python, SuperCollider, Haskell, openFrameworks, Processing and OpenCV will be visited, and participants will also experiment with more esoteric interfaces.
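The workshop's progression from symbols to words to patterns can be sketched in a few lines. This is a hypothetical illustration, not the workshop's actual software: an invented alphabet maps symbols to sound events, and a simple rotation "weaves" the resulting pattern into a new one, the kind of live transformation the sessions play with.

```python
# Hypothetical sketch: reading a string of symbols as a musical pattern
# and transforming it live. The alphabet is an invented example.
def interpret(word, alphabet):
    """Map each symbol of `word` to a sound event via `alphabet`."""
    return [alphabet[s] for s in word if s in alphabet]

def rotate(pattern, n=1):
    """'Weave' a pattern by rotating it: a minimal live transformation."""
    return pattern[n:] + pattern[:n]

alphabet = {"k": "kick", "s": "snare", "h": "hat", ".": "rest"}
word = "k.h.s.hh"
pattern = interpret(word, alphabet)
print(pattern)             # the word read as sound events
print(rotate(pattern, 2))  # the same word, woven into a new pattern
```

Swapping the alphabet or the transformation changes the music without changing the word, which is one way code behaves like a graphical score.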
Theo Burt is an artist working with perceptual relationships and the aesthetic application of basic technologies of sound, image and light composition. Using automated systems, he works with these materials to carry out experimental and intuitive processes, producing installations, live performances and fixed-media pieces.
His residency at Hangar, which will take place between April 29 and May 30, 2013, is part of “Addicted2random”, the European research project focused on the connections between Europe's musical past and contemporary trends in computer-generated music.
Among his most compelling works is Bastard Structures 2, a system that emerged from his collaboration with Tim Wright (Germ, Tubejerk). The piece provides a platform for exploring optical and sonic effects, cognitive processes and the limits of perception, where temporal, visual and sonic structures interact with the geometry of the room and lead to a disorienting exploration of the materiality of sound and light.
Theo has worked primarily in galleries and spaces in the UK, but has also led international projects in major European capitals and the USA. He has previously worked in Barcelona on exhibitions including Audiopantalla at MACBA, and Colour Monochromes, Colour Projections and supersymmetry, produced for Sonar Cinema.
For the workshop 1000 Years of Control, Theo will provide an introduction to sound synthesis and to the creation of autonomous and interactive music systems. The workshop will examine what happens when we return to the sounds of classic subtractive and FM synthesis but replace the common MIDI interface (which is structured around traditional Western notions of music) with an alternative system. While there will still be control over the basic parameters of synthesis, this new approach will change the way we create and structure the music and lead us towards new aesthetic outcomes. The workshop will combine the use of the open-source software Pure Data with new software created specifically for this workshop.
Roc Jimenez de Cisneros, Barcelona, 1975
A sound artist specialised in algorithmic composition, in 1996 he founded the experimental electronic band EVOL. Since 1997 he has run the record label ALKU together with Anna Ramos.
His work explores the possibilities of algorithmic composition in electronic music production and is often based on the deconstruction of iconic sounds from rave culture. His work has been shown in galleries, museums and clubs across Europe, North America and Asia, and has been released by international labels such as Entr’acte, Mego, Presto!? and fals.ch.
His residency at Hangar is taking place in the framework of ‘Addicted2random’, a European project on generative music. The aim of the project is to investigate the connections between Europe's musical past and contemporary computer-generated music.
The piece that he will develop during the residency is an aesthetic and formal extension of acid house. It will add a new layer to his ongoing decontextualisation of iconic rave sounds, which he also pursues together with Stephen Sharp in the project EVOL. During his residency he will also give the workshop ‘Post-acid autònom’, an introduction to sound synthesis and generative composition.
29th August 2012
The people working on the programming of the Addicted2Random tool met together for the first time in Barcelona. Here are the outcomes of their first day of work.
– Description of the tool:
Pre: 0. The audio backend (Pure Data / Max/MSP / SuperCollider) gets started and transmits an audio stream.
1. The audio backend informs the proxy that it has started. It says what kind of GUI elements it can handle and how they have to be mapped to its input data, chooses a proxy algorithm (preselected) to calculate the element input data, and says where the audio stream can be found.
2. The proxy server registers the new a2r session with a central a2r index server.
3. Someone starts the a2r client. The client asks the central a2r index server which sessions are active and can be joined.
4. The client chooses a session and asks the corresponding proxy which GUI elements it has to make available and where the stream can be found.
5. The user chooses the GUI element he/she wants to play with and starts interacting. The a2r client calculates input data from the smartphone sensors and the user's interaction and transmits this to the proxy.
6. The proxy collects the input data from the different clients and calculates meaningful output with the help of the selected proxy algorithm.
7. All clients listen to the output stream (or better, an FM radio broadcast or a live session, because of latency).
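Step 6, where the proxy reduces many client inputs to one meaningful control value, can be sketched as follows. The data shapes and algorithm names here are assumptions for illustration; the notes above do not specify the actual proxy algorithms or wire format.

```python
# Minimal sketch of the proxy's collection step (step 6), under assumed
# data shapes: one value per connected client for a given GUI element,
# reduced to a single control value by a selectable proxy algorithm.
def average(values):
    return sum(values) / len(values)

# Hypothetical algorithm registry; names are illustrative, not from the spec.
PROXY_ALGORITHMS = {"average": average, "max": max, "min": min}

def collect(client_inputs, algorithm="average"):
    """client_inputs: {client_id: value} gathered from the a2r clients."""
    return PROXY_ALGORITHMS[algorithm](list(client_inputs.values()))

inputs = {"phone-1": 0.25, "phone-2": 0.75, "phone-3": 0.5}
print(collect(inputs))          # -> 0.5
print(collect(inputs, "max"))   # -> 0.75
```

The choice of algorithm decides how much any single participant can steer the music: averaging dilutes individual gestures, while max/min lets one client dominate.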
– Communication protocol:
The communication protocol between the patches and the server aims to be a guideline for users who want to program a patch compatible with the a2r server.
The protocol defines what kind of information (metadata) the patch has to send in order to run on the server, and which variables (sensors) the user will be able to modify.
The protocol also allows variable groups to be implemented, for developing more complex interfaces in the future.
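As a rough picture of what such patch metadata might look like, here is a hypothetical declaration. Every field name, variable and group in it is an assumption for illustration; the actual a2r metadata format is not specified in these notes.

```python
# Hypothetical patch metadata (field names are illustrative assumptions):
# what a patch might declare to the a2r server — identification, the
# variables (sensors) users may modify, and the grouped-variables
# extension the protocol anticipates for more complex interfaces.
patch_metadata = {
    "name": "post-acid-demo",          # example patch identifier
    "backend": "puredata",             # one of the supported audio backends
    "variables": [
        {"name": "cutoff",    "min": 0.0, "max": 1.0, "group": "filter"},
        {"name": "resonance", "min": 0.0, "max": 1.0, "group": "filter"},
        {"name": "tempo",     "min": 60,  "max": 180, "group": "clock"},
    ],
}

# The server could then derive the GUI elements to offer each client:
for var in patch_metadata["variables"]:
    print(f'{var["group"]}/{var["name"]}: {var["min"]}..{var["max"]}')
```

Declaring ranges up front lets the proxy clamp and scale client sensor data without knowing anything about the patch's internals.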