Main.Abstracts History


January 09, 2012, at 01:14 PM by Jonathan Woithe - Revise Silvia's talk title and abstract
Changed lines 61-62 from:
!!!HTML5 media update
to:
!!!HTML5 media accessibility update
Changed line 67 from:
You would think that publishing video on the Web - YouTube-style - is all that is required to make video a core part of the Web. Far from it: work on accessibility features and synchronization of several audio and video resources has been ongoing since video was introduced into HTML5. And now we are looking at how to make real-time video work on the Web: how can we enable Web developers to make video conferencing possible in the browser using no more than a dozen or so lines of code? Silvia will give us an update about the latest developments at the WHATWG and W3C.
to:
You would think that publishing video on the Web - YouTube-style - is all that is required to make video a core part of the Web. Far from it: work on accessibility features and synchronization of several audio and video resources has been ongoing since video was introduced into HTML5. One browser has rolled out preview support for captions; others are close. But other features are still in the works. Silvia will give us an update about the latest developments at the WHATWG and W3C.
January 09, 2012, at 12:36 PM by Jonathan Woithe - Add abstract for Jacob's talk
Added lines 100-112:

[[#jlister]]
!!!Looping with Linux

Speaker: Jacob Lister

Time: 16:40

Since the dawn of recorded music there has been the possibility of the loop - taking a slice of recorded sound and playing it back repeatedly, alongside and in time with the current performance.  Loops can be built up layer upon layer to create a thick chorus of sound, from either a single instrument or one person playing a number of different instruments.

The first modern loopers as we know them started appearing in the 1960s and 70s: magnetic tape recorders were modified to place the record and playback heads a distance apart, with the recording tape then literally 'looped' around reels.  Nowadays the job is done with electronics, in stomp-box effects sitting at a musician's feet, or in software running on laptop computers.

Jacob has been looping for years. When not writing software for a living, he hacks away at his own software-based looper which runs on the Linux operating system, and strums, picks, thrashes and shreds away on his various guitars.  He will explain and demonstrate the basics of looping in its various forms.
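
As a rough illustration of the core idea, a software looper is essentially a ring buffer that is overdubbed and mixed back under the live signal. The sketch below is a minimal, hypothetical Python example using the third-party numpy and sounddevice libraries; it shows one way the record-and-mix cycle could be expressed, not the looper Jacob will demonstrate.

[@
# Minimal looper sketch (illustrative only, not the software from the talk).
# Records into a fixed-length ring buffer and mixes it under the live input.
import numpy as np
import sounddevice as sd

RATE = 48000                                 # sample rate in Hz
loop = np.zeros(RATE * 4, dtype=np.float32)  # a four-second loop buffer
pos = 0                                      # current position in the ring
overdub = True                               # while True, each pass layers a new take

def callback(indata, outdata, frames, time, status):
    global pos
    idx = (pos + np.arange(frames)) % len(loop)
    if overdub:
        loop[idx] += indata[:, 0]            # layer the new take onto the loop
    outdata[:, 0] = indata[:, 0] + loop[idx] # monitor live signal plus loop
    pos = (pos + frames) % len(loop)

# Run the looper for twenty seconds.
with sd.Stream(samplerate=RATE, channels=1, dtype="float32", callback=callback):
    sd.sleep(20 * 1000)
@]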
January 09, 2012, at 12:34 PM by Jonathan Woithe - Add abstract for Timothy's talk
Added lines 68-78:

----

[[#tterriberry]]
!!!HTML5 video conferencing

Speaker: Timothy Terriberry

Time: 14:50

There is wide agreement that we need specifications for video conferencing in the new HTML5 standards. However, how to achieve this is not yet clear, with several competing proposals on the table right now. Timothy is involved in all the conversations around this topic at the IETF, W3C and WHATWG. He can clarify the mess of competing standards and explain which parts of the specifications are relatively widely accepted and which parts are still undergoing intensive discussion and development.
November 25, 2011, at 03:46 AM by Jonathan Woithe - Add abstract for Roderick's talk
Changed lines 17-18 from:
!!!Composition and sequencing workflows
to:
!!!Linux makes me a better musician
Added lines 22-31:

This talk will explore the use of audio packages on Linux to make practice time with an instrument more efficient.

To demonstrate this method, a piano score and accompanying studies for the piece will be scripted in LilyPond. LilyPond can quickly turn ideas into an elegant musical score that is immediately usable as notation. Furthermore, the score can be published or shared with collaborators or educators.

We will then record a "practice session" using Ardour to hear the music we scripted in LilyPond.  Ardour can be used as a playback device for immediate review of the audio, and sessions can be exported and archived, which makes a great audio library of your progress.  This is immediately useful for identifying areas where improvement is required, down to the level of an individual study. The waveforms can be used to measure intonation and timing, and individual runs can be compared to each other using both audio and visual analysis.

Lastly, the session we recorded in Ardour will be opened in Sonic Visualiser to perform some deeper analysis of what we just played. This application can be used to mark up the performance or practice session with text, colour and tempo annotations. It can visualise the waveforms in a variety of formats and can perform spectral analysis of the music.

Regardless of the instrument, analysing your sessions makes you a better musician faster. By recording and listening to your practice sessions you are better prepared for your next session, and this presentation will show how this can be realised.
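
To give a flavour of what "scripting a score" looks like in practice, the hypothetical sketch below writes a four-bar study as LilyPond source from Python and invokes the lilypond command-line tool to render notation plus a MIDI playback reference; the file names and the study itself are invented for illustration.

[@
# Sketch: generate a tiny LilyPond study and render it.
# Assumes the 'lilypond' command-line tool is installed and on the PATH.
import subprocess
from pathlib import Path

score = r"""
\version "2.22.0"
\score {
  \relative c' {
    \tempo 4 = 72
    c4 d e f | g f e d | c2 e | c1 \bar "|."
  }
  \layout { }   % engrave notation (PDF)
  \midi { }     % also emit MIDI for play-along practice
}
"""

Path("study.ly").write_text(score)
# Produces study.pdf and a MIDI file alongside it.
subprocess.run(["lilypond", "study.ly"], check=True)
@]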
November 24, 2011, at 11:00 PM by Jonathan Woithe -
Changed line 4 from:
!!!Building, Maintaining and Surviving a Linux Pro Audio Studio
to:
!!!Building, maintaining and surviving a Linux pro audio studio
November 24, 2011, at 10:58 PM by Jonathan Woithe - Fix item ordering in Silvia's slot
Deleted lines 52-53:
You would think that publishing video on the Web - YouTube-style - is all that is required to make video a core part of the Web. Far from it: work on accessibility features and synchronization of several audio and video resources has been ongoing since video was introduced into HTML5. And now we are looking at how to make real-time video work on the Web: how can we enable Web developers to make video conferencing possible in the browser using no more than a dozen or so lines of code? Silvia will give us an update about the latest developments at the WHATWG and W3C.
Added lines 56-57:

You would think that publishing video on the Web - YouTube-style - is all that is required to make video a core part of the Web. Far from it: work on accessibility features and synchronization of several audio and video resources has been ongoing since video was introduced into HTML5. And now we are looking at how to make real-time video work on the Web: how can we enable Web developers to make video conferencing possible in the browser using no more than a dozen or so lines of code? Silvia will give us an update about the latest developments at the WHATWG and W3C.
November 24, 2011, at 10:57 PM by Jonathan Woithe - Add abstract for Geoff's talk
Changed lines 4-5 from:
!!!TBA
to:
!!!Building, Maintaining and Surviving a Linux Pro Audio Studio
Added lines 9-12:

When setting up a FOSS-based production studio, choosing the right distribution is an essential starting point.  There are, however, several other equally important considerations: operational decisions need to be made and appropriate software must be selected to create a suitable environment in Linux, one that provides a stable, functional and above all musical platform able to withstand the rigours of audio production.

I will describe some basic principles and well-evolved solutions within FOSS that truly deliver a stable, powerful and often uniquely featured environment, one that puts the engineer completely in control.
October 28, 2011, at 03:34 AM by Jonathan Woithe - Make the times consistent with the revised timetable
Changed lines 17-18 from:
Time: 11:15
to:
Time: 11:30
Changed lines 25-26 from:
Time: 12:00
to:
Time: 12:10
Changed lines 38-39 from:
Time: 13:30
to:
Time: 13:20
Changed lines 53-54 from:
Time: 14:15
to:
Time: 14:20
Changed line 62 from:
Time: 15:45
to:
Time: 15:40
October 14, 2011, at 01:09 AM by silvia - added Silvia's talk abstract
Added lines 48-49:

You would think that publishing video on the Web - YouTube-style - is all that is required to make video a core part of the Web. Far from it: work on accessibility features and synchronization of several audio and video resources has been ongoing since video was introduced into HTML5. And now we are looking at how to make real-time video work on the Web: how can we enable Web developers to make video conferencing possible in the browser using no more than a dozen or so lines of code? Silvia will give us an update about the latest developments at the WHATWG and W3C.
October 09, 2011, at 11:32 AM by Jonathan Woithe - Fix the time of Jan's talk. Where *did* I pull all those times from?
Changed line 38 from:
Time: 14:00
to:
Time: 13:30
October 09, 2011, at 11:31 AM by Jonathan Woithe - Fix the time of Silvia's talk
Changed line 51 from:
Time: 14:45
to:
Time: 14:15
October 09, 2011, at 11:30 AM by Jonathan Woithe - Fix the time of Monty's talk
Changed line 60 from:
Time: 14:45
to:
Time: 15:45
October 09, 2011, at 11:29 AM by Jonathan Woithe - Add skeleton entry for Silvia's talk
Added lines 43-51:

----

[[#spfeiffer]]
!!!HTML5 media update

Speaker: Silvia Pfeiffer

Time: 14:45
October 06, 2011, at 12:02 AM by 150.101.241.2 -
Changed line 47 from:
!!!Show Control for live performances
to:
!!!Distributed and Integrated Show Control
October 05, 2011, at 11:53 PM by Jonathan Woithe -
Changed line 33 from:
[[#schmidt]]
to:
[[#jschmidt]]
October 05, 2011, at 11:50 PM by Jonathan Woithe -
Changed line 6 from:
Speakers: Geoff Beasley
to:
Speaker: Geoff Beasley
October 05, 2011, at 11:50 PM by Jonathan Woithe - Put abstracts in for 2012
Changed lines 3-7 from:
[[#jschmidt1]]
!!!Editing video with Pitivi

Speakers: Jaime Schmidt & Jan Schmidt

to:
[[#gbeasley]]
!!!TBA

Speakers: Geoff Beasley

Deleted lines 9-10:
Pitivi is a video editor based on the GStreamer multimedia framework.  This presentation will provide an introduction to using Pitivi, and the newest features in recent releases by running through the process of importing and editing a real video.
Changed lines 12-16 from:
[[#jschmidt2]]
!!!Producing WebM content with GStreamer

Speaker: Jan Schmidt

to:
[[#rdornan]]
!!!Composition and sequencing workflows

Speaker: Roderick Dornan

Deleted lines 18-19:
There are several different ways of producing WebM-compatible content with GStreamer-based software. This talk will provide an introduction to some of the available methods: gst-launch, Transmageddon, Pitivi and Flumotion.
Changed lines 20-31 from:
[[#rdornan1]]
!!!Adventures in Real Time Audio

Speaker: Roderick Dornan

Time: 13:30

Setting up a workable real-time system is important if one wishes to use Linux for live audio work without audio dropouts.  This talk will discuss the author's experiences in configuring such a system and give attendees plenty of tips to apply to their own machines.

----

[[#jwoithe1]]
to:
[[#jwoithe]]
Changed lines 25-30 from:
Time: 14:00

The [[http://www.ffado.org|FFADO project]] implements a vendor-independent driver framework for firewire audio devices.  The past 12 months has been a period of consolidation for FFADO with the bugfix release 2.0.1 being the most visible activity.  However, over this time considerable work has been done behind the scenes, with a number of new drivers approaching readiness for wider testing.  In addition there have been discussions about the project's next steps.

This brief update will outline the practical benefits of the current progress in several drivers and then move on to a discussion about the next stage of FFADO development.  In particular, the recently refined plans for an in-kernel streaming engine will be shared, which, it is hoped, will lead to considerable efficiencies within the FFADO system and broaden access to the FFADO devices.
to:
Time: 12:00

The [[http://www.ffado.org|FFADO project]] implements a vendor-independent driver framework for firewire audio devices.  Due to issues unrelated to FFADO which have affected the core developers, work on the drivers has been slower than intended over the past 12 months.  However, progress continues to be made, and some important proof of concept work has been done towards the in-kernel streaming engine.

This brief update will summarise the recent work and outline the plans for development over the coming 12 months.
Changed lines 33-41 from:
[[#rdornan2]]
!!!Making music with Linux

Speaker: Roderick Dornan

Time: 14:30

This tutorial-style presentation will give a practical demonstration of the vast array of open-source tools we have at our disposal to make music.  The focus of this talk will be live music making, but many of the processes are just as applicable to composition and other "off-line" tasks.
to:
[[#schmidt]]
!!!GStreamer and Blu-Ray

Speaker: Jan Schmidt

Time: 14:00

GStreamer plays DVDs quite well, but so far has no support for playing Blu-Ray discs, which are quite a different beast. This talk covers some of the background and challenges of playing Blu-Ray content, a design for GStreamer Blu-Ray support and a demonstration of some code.

On the other side of the coin is support for authoring Blu-Ray content, which will be touched on briefly.
Changed lines 46-53 from:
[[#spfeiffer1]]
!!!Audio and video processing in HTML5

Speaker: Silvia Pfeiffer

Time: 15:45

Audio and video processing have traditionally been hard number-crunching tasks, and nobody would have considered executing them in a Web browser on a remote file. However, with the capabilities of modern hardware and web browser software, and the integration of audio and video into HTML5, it's now possible to do (almost) everything inside a web browser in real-time with a bit of JavaScript. In this talk we will look at the Firefox Audio API to visualize sound with an FFT and to manipulate sound. For video we look at the possibilities of the Canvas to manipulate video pixels in real-time to achieve things like motion detection.
to:
[[#mtaylor]]
!!!Show Control for live performances

Speaker: Monty Taylor

Time: 14:45

The world of lighting, sound and video projection for live performance is highly computerized and has been for some time - but it's still stuck in the old days of hardware vendor dominance and has no effective Open Source presence to speak of at all. We're trying to change that.

Because of the hardware dominance, the various disciplines become arbitrarily separated from a tooling perspective. Video projection systems, for instance, know how to control lights to some degree - but not in a way that would make a lighting designer ever want to use them. Of course, from a software perspective, there is absolutely no reason to separate the control of lighting, sound or video from each other.

Our Show Control project is Open Source and builds on several existing projects (yay for collaboration) including Open Sound Control, Open Lighting Architecture and Open Frameworks. The goal is a system that understands that at the end of the day we're running one show, not three.

We're also applying the knowledge gained from building highly distributed and highly available systems in the dot-com world to make this a system that any number of people can be programming and operating at the same time, without the need for special hardware at any point in the chain except for the actual barriers that talk to the lighting, sound or video hardware. (Fancy that - people using their laptops to control systems of unseen servers. Wow)

The system (as yet unnamed - anybody got any good ideas? we've been calling it "the show control system") is slated to make its US production debut on a new opera that's being developed in San Francisco and then moving to New York. The show will premiere in the summer of 2012, so by LCA we will have some pretty exciting elements ready to show. We might even control some lights, some sound, some video... perhaps even the presentation itself.
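
To make the "one show, not three" idea concrete, here is a minimal sketch of a single cue list addressing lighting, sound and video over Open Sound Control. It uses the third-party python-osc package; the addresses, ports and cue layout are invented for illustration and are not the project's actual protocol.

[@
# Sketch: one cue driving lighting, sound and video over OSC.
# Addresses and ports are hypothetical; python-osc is a third-party package.
from pythonosc.udp_client import SimpleUDPClient

# Bridges sitting in front of the actual lighting/sound/video hardware.
lights = SimpleUDPClient("192.168.1.10", 9000)
sound = SimpleUDPClient("192.168.1.11", 9001)
video = SimpleUDPClient("192.168.1.12", 9002)

def go(cue):
    """Fire every element of a cue as one logical event."""
    for client, address, args in cue:
        client.send_message(address, args)

# Cue 12: dim the wash, start the underscore, fade up the projection.
cue_12 = [
    (lights, "/light/wash/intensity", 0.3),
    (sound, "/playback/underscore/start", 1),
    (video, "/projector/1/fade", [1.0, 5.0]),  # target level, fade seconds
]
go(cue_12)
@]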
January 07, 2011, at 11:44 PM by Jonathan Woithe - Add abstract for Silvia's talk
Added lines 65-67:

Audio and video processing have traditionally been hard number-crunching tasks, and nobody would have considered executing them in a Web browser on a remote file. However, with the capabilities of modern hardware and web browser software, and the integration of audio and video into HTML5, it's now possible to do (almost) everything inside a web browser in real-time with a bit of JavaScript. In this talk we will look at the Firefox Audio API to visualize sound with an FFT and to manipulate sound. For video we look at the possibilities of the Canvas to manipulate video pixels in real-time to achieve things like motion detection.
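
The talk does this processing in the browser with JavaScript, but the FFT step behind an audio visualiser is easy to illustrate offline. The Python sketch below uses numpy to compute the magnitude spectrum of a synthetic 440 Hz tone, which is the same computation a browser visualiser performs on each audio frame.

[@
# Sketch: the FFT magnitude computation behind an audio visualiser.
# numpy stands in here for the browser's audio API, purely for illustration.
import numpy as np

RATE = 44100                          # sample rate in Hz
N = 2048                              # analysis window size
t = np.arange(N) / RATE
frame = np.sin(2 * np.pi * 440 * t)   # one window of a 440 Hz sine

window = np.hanning(N)                # taper to reduce spectral leakage
magnitude = np.abs(np.fft.rfft(frame * window))  # bar heights to draw
freqs = np.fft.rfftfreq(N, d=1 / RATE)

peak = freqs[np.argmax(magnitude)]
print(f"strongest bin: {peak:.1f} Hz")  # close to 440 Hz, within bin width
@]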

January 07, 2011, at 12:55 PM by Jonathan Woithe - Add Silvia's talk, change timing slightly
Changed lines 13-19 from:
TBA

Speaker: TBA

Time: 11:00
----

to:
Changed lines 19-20 from:
Time: 11:30
to:
Time: 11:15
Added lines 57-64:
----

[[#spfeiffer1]]
!!!Audio and video processing in HTML5

Speaker: Silvia Pfeiffer

Time: 15:45
January 07, 2011, at 10:56 AM by Jonathan Woithe -
Changed lines 3-13 from:
[[#jwoithe1]]
!!!FFADO update

Speaker: Jonathan Woithe

Time: 14:00

The [[http://www.ffado.org|FFADO project]] implements a vendor-independent driver framework for firewire audio devices.  The past 12 months has been a period of consolidation for FFADO with the bugfix release 2.0.1 being the most visible activity.  However, over this time considerable work has been done behind the scenes, with a number of new drivers approaching readiness for wider testing.  In addition there have been discussions about the project's next steps.

This brief update will outline the practical benefits of the current progress in several drivers and then move on to a discussion about the next stage of FFADO development.  In particular, the recently refined plans for an in-kernel streaming engine will be shared, which, it is hoped, will lead to considerable efficiencies within the FFADO system and broaden access to the FFADO devices.
to:
[[#jschmidt1]]
!!!Editing video with Pitivi

Speakers: Jaime Schmidt & Jan Schmidt

Time: 10:30

Pitivi is a video editor based on the GStreamer multimedia framework.  This presentation will provide an introduction to using Pitivi, and the newest features in recent releases by running through the process of importing and editing a real video.
Changed lines 13-63 from:
to:
TBA

Speaker: TBA

Time: 11:00
----

[[#jschmidt2]]
!!!Producing WebM content with GStreamer

Speaker: Jan Schmidt

Time: 11:30

There are several different ways of producing WebM-compatible content with GStreamer-based software. This talk will provide an introduction to some of the available methods: gst-launch, Transmageddon, Pitivi and Flumotion.

----
[[#rdornan1]]
!!!Adventures in Real Time Audio

Speaker: Roderick Dornan

Time: 13:30

Setting up a workable real-time system is important if one wishes to use Linux for live audio work without audio dropouts.  This talk will discuss the author's experiences in configuring such a system and give attendees plenty of tips to apply to their own machines.

----

[[#jwoithe1]]
!!!FFADO update

Speaker: Jonathan Woithe

Time: 14:00

The [[http://www.ffado.org|FFADO project]] implements a vendor-independent driver framework for firewire audio devices.  The past 12 months has been a period of consolidation for FFADO with the bugfix release 2.0.1 being the most visible activity.  However, over this time considerable work has been done behind the scenes, with a number of new drivers approaching readiness for wider testing.  In addition there have been discussions about the project's next steps.

This brief update will outline the practical benefits of the current progress in several drivers and then move on to a discussion about the next stage of FFADO development.  In particular, the recently refined plans for an in-kernel streaming engine will be shared, which, it is hoped, will lead to considerable efficiencies within the FFADO system and broaden access to the FFADO devices.

----

[[#rdornan2]]
!!!Making music with Linux

Speaker: Roderick Dornan

Time: 14:30

This tutorial-style presentation will give a practical demonstration of the vast array of open-source tools we have at our disposal to make music.  The focus of this talk will be live music making, but many of the processes are just as applicable to composition and other "off-line" tasks.

----
January 07, 2011, at 10:41 AM by Jonathan Woithe -
Changed lines 14-15 from:
[---]
to:
----
January 07, 2011, at 10:40 AM by Jonathan Woithe -
Added lines 13-14:

[---]
January 07, 2011, at 10:39 AM by Jonathan Woithe -
Added line 7:
Added lines 10-12:
The [[http://www.ffado.org|FFADO project]] implements a vendor-independent driver framework for firewire audio devices.  The past 12 months has been a period of consolidation for FFADO with the bugfix release 2.0.1 being the most visible activity.  However, over this time considerable work has been done behind the scenes, with a number of new drivers approaching readiness for wider testing.  In addition there have been discussions about the project's next steps.

This brief update will outline the practical benefits of the current progress in several drivers and then move on to a discussion about the next stage of FFADO development.  In particular, the recently refined plans for an in-kernel streaming engine will be shared, which, it is hoped, will lead to considerable efficiencies within the FFADO system and broaden access to the FFADO devices.
January 07, 2011, at 10:31 AM by Jonathan Woithe -
Added lines 1-8:
!!Paper abstracts

[[#jwoithe1]]
!!!FFADO update

Speaker: Jonathan Woithe
Time: 14:00