Paper abstracts

Building, maintaining and surviving a Linux pro audio studio

Speaker: Geoff Beasley

Time: 10:30

When setting up a FOSS-based production studio, choosing the right distribution is an essential starting point. There are, however, several other equally important considerations: operational decisions need to be made and appropriate software options must be selected when creating an environment in Linux that provides a stable, functional and, above all, musical platform able to withstand the rigors of audio production.

I will describe some basic principles and well-evolved solutions within FOSS that truly deliver a stable, powerful and often uniquely featured environment, one that puts the engineer completely in control.


Linux makes me a better musician

Speaker: Roderick Dornan

Time: 11:30

This talk will explore the use of audio packages on Linux to make practice time with an instrument more efficient.

To demonstrate this method, a piano score and accompanying studies for the piece will be scripted in LilyPond. LilyPond can quickly turn ideas into an elegant musical score that is immediately usable as notation. Furthermore, the score can be published or shared with collaborators or educators.

We will then record a "practice session" in Ardour to hear the music we scripted in LilyPond. Ardour can be used as a playback device for immediate review of the audio, and sessions can be exported and archived, building a valuable audio library of your progress. It is immediately useful for identifying areas where improvement is required, right down to the micro level of a study. The waveforms can be used to measure intonation and timing, and individual runs can be compared to each other using both audio and visual analysis.

Lastly, the session we recorded in Ardour will be opened in Sonic Visualiser to perform some deeper analysis of what we just played. This application can be used to mark up the performance or practice session with text, colour and tempo annotations. It can visualise the waveforms in a variety of formats and can perform spectral analysis of the music.

Regardless of the instrument, analysing your sessions makes you a better musician faster. By recording and listening to your practice sessions you are better prepared for your next session, and this presentation will show how this can be realised.


FFADO update

Speaker: Jonathan Woithe

Time: 12:10

The FFADO project implements a vendor-independent driver framework for FireWire audio devices. Due to issues unrelated to FFADO that have affected the core developers, work on the drivers has been slower than intended over the past 12 months. However, progress continues to be made, and some important proof-of-concept work has been done towards the in-kernel streaming engine.

This brief update will summarise the recent work and outline the plans for development over the coming 12 months.


GStreamer and Blu-Ray

Speaker: Jan Schmidt

Time: 13:20

GStreamer plays DVDs quite well, but so far has no support for playing Blu-Ray discs, which are quite a different beast. This talk covers some of the background and challenges of playing Blu-Ray content, a design for GStreamer Blu-Ray support, and a demonstration of some code.

On the other side of the coin is support for authoring Blu-Ray content, which will be touched on briefly.


HTML5 media accessibility update

Speaker: Silvia Pfeiffer

Time: 14:20

You would think that publishing video on the Web - YouTube-style - is all that is required to make video a core part of the Web. Far from it: work on accessibility features and on synchronisation of several audio and video resources has been ongoing since video was introduced into HTML5. One browser has rolled out preview support for captions, and others are close. But other features are still in the works. Silvia will give us an update on the latest developments at the WHATWG and W3C.


HTML5 video conferencing

Speaker: Timothy Terriberry

Time: 14:50

There is wide agreement that we need specifications for video conferencing in the new HTML5 standards. However, how to achieve this is not yet clear, with several competing proposals on the table right now. Timothy is involved in all the conversations around this topic at the IETF, W3C and WHATWG. He can clarify the mess of competing standards and explain which parts of the specifications are relatively widely accepted and which parts are still undergoing intensive discussion and development.


Distributed and Integrated Show Control

Speaker: Monty Taylor

Time: 15:40

The world of lighting, sound and video projection for live performance is highly computerized and has been for some time - but it's still stuck in the old days of hardware vendor dominance and has no effective Open Source presence to speak of at all. We're trying to change that.

Because of the hardware dominance, the various disciplines become arbitrarily separated from a tooling perspective. Video projection systems, for instance, know how to control lights to some degree - but not in a way that would make a lighting designer ever want to use them. Of course, from a software perspective, there is absolutely no reason to separate the control of lighting, sound or video from each other.

Our Show Control project is Open Source and builds on several existing projects (yay for collaboration), including Open Sound Control, Open Lighting Architecture and openFrameworks. The goal is a system that understands that, at the end of the day, we're running one show, not three.
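To give a flavour of the kind of glue this involves, here is a minimal sketch of sending a single logical cue to both a lighting controller and a sound engine over Open Sound Control. It is an illustration only, written in Python with the python-osc library; the hosts, ports, address patterns and cue numbering are invented for the sketch and are not part of the project described here.

    # Illustrative only: hosts, ports and OSC address patterns are invented.
    from pythonosc.udp_client import SimpleUDPClient

    lighting = SimpleUDPClient("192.168.1.10", 9000)  # e.g. a box running an OLA/DMX bridge
    sound = SimpleUDPClient("192.168.1.11", 9001)     # e.g. a sound playback engine

    def fire_cue(number):
        # One logical show cue drives both departments at once: one show, not three.
        lighting.send_message("/cue/%d/go" % number, 1)
        sound.send_message("/cue/%d/go" % number, 1)

    fire_cue(12)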

We're also applying the knowledge gained from building highly distributed and highly available systems in the dot-com world to make this a system that any number of people can be programming and operating at the same time, without the need for special hardware at any point in the chain except for the actual interfaces that talk to the lighting, sound or video hardware. (Fancy that - people using their laptops to control systems of unseen servers. Wow)

The system (as yet unnamed - anybody got any good ideas? we've been calling it "the show control system") is slated to make its US production debut on a new opera that is being developed in San Francisco and then moving to New York. The show will premiere in the summer of 2012, so by LCA we will have some pretty exciting elements ready to show. We might even control some lights, some sound, some video... perhaps even the presentation itself.


Looping with Linux

Speaker: Jacob Lister

Time: 16:40

Since the dawn of the age of recorded music there has been the possibility of the loop - taking a slice of recorded sound and playing it back in repetition, alongside and in time with the current performance. Loops can be built up layer upon layer to create a thick chorus of sound, from either a single instrument or one person playing a number of different instruments.

The first modern loopers as we know them started appearing in the 1960s and 70s, when magnetic tape recorders were modified to place the record and playback heads a distance apart, with recording tape then literally 'looped' around reels. Nowadays the job is done with electronics, in stomp-box effects sitting at a musician's feet, or software running on laptop computers.
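At its core, the software version of that tape trick is just a circular buffer that is replayed and overdubbed in time with the performance. The fragment below is a toy sketch of that idea in Python (using NumPy); it is not Jacob's looper, just an illustration of the record-and-overdub cycle.

    # Toy overdub looper core - an illustration, not Jacob's actual code.
    import numpy as np

    class Looper:
        def __init__(self, loop_samples):
            # The "tape loop": a fixed-length circular buffer of audio samples.
            self.buffer = np.zeros(loop_samples, dtype=np.float32)
            self.pos = 0

        def process(self, block):
            # Mix one block of incoming audio into the loop while playing back
            # whatever layers have already been recorded at the same position.
            n = len(block)
            idx = (self.pos + np.arange(n)) % len(self.buffer)
            playback = self.buffer[idx].copy()   # the existing layers
            self.buffer[idx] += block            # overdub the new layer on top
            self.pos = (self.pos + n) % len(self.buffer)
            return playback + block              # monitor the loop plus the live signal

    looper = Looper(loop_samples=44100 * 4)                  # a four-second loop at 44.1 kHz
    out = looper.process(np.zeros(256, dtype=np.float32))    # one audio callback's worth of silence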

Jacob has been looping for years and, while not writing software for a livelihood, hacks away at his own software-based looper which runs on the Linux operating system, and strums, picks, thrashes and shreds away on his various guitars. He will explain and demonstrate the basics of looping in its various forms.