Discussion:
Hi - Very quiet list - my first post
Mark Knecht
2003-01-20 15:50:41 UTC
Permalink
Hello all,
A few weeks ago Benno and I were talking about something else (hardware)
and he found out I'm a GigaStudio user. He suggested I check out the
LinuxSampler project. He did tell me it wasn't ready for testers, however,
if I can lend a hand early on with working on feature sets or compatibility
with existing libraries, I'd certainly be happy to do so. I have a couple
thousand dollars worth of GigaStudio libraries, as well as some Akai samples
and what not. Would love to see them go under Linux.

I see this list has been quiet for about a month. That's a bit sad, but I
understand that many other things are going on.

I did find that Steve had put a copy of the code on his site. I
downloaded and built it on RH 7.3. It runs, or at least it starts. I don't
know what to do with it!!

If the action is actually elsewhere, like on Swami, then point me in the
right direction. I'm definitely interested in seeing some sort of
GSt/Halion/Kontakt software up and running in Open Source. That would be
great.

Cheers,
Mark
David Gerard Matthews
2003-01-20 16:09:49 UTC
Permalink
Hey Mark,
I've been on this list for a few months as well and
noticed it's kind of died down. Not sure what's up with that - it was
very active around November, iirc. There has been some discussion about
Swami, but I think the real hope for sampling under Linux might be XAP
(discussed extensively on linux-audio-dev.) I also use Gigasampler a
bit and seeing as I'd like to dump Windoze altogether I'm hoping for a
way to use my libraries in Linux (like my really nifty Akai-format
prepared piano CD...)
-dgm
Post by Mark Knecht
Hello all,
A few weeks ago Benno and I were talking about something else (hardware)
and he found out I'm a GigaStudio user. He suggested I check out the
LinuxSampler project. He did tell me it wasn't ready for testers, however,
if I can lend a hand early on with working on feature sets or compatibility
with existing libraries, I'd certainly be happy to do so. I have a couple
thousand dollars worth of GigaStudio libraries, as well as some Akai samples
and what not. Would love to see them go under Linux.
I see this list has been quiet for about a month. That's a bit sad, but I
understand that many other things are going on.
I did find that Steve had put a copy of the code on his site. I
downloaded and built it on RH 7.3. It runs, or at least it starts. I don't
know what to do with it!!
If the action is actually elsewhere, like on Swami, then point me in the
right direction. I'm definitely interested in seeing some sort of
GSt/Halion/Kontakt software up and running in Open Source. That would be
great.
Cheers,
Mark
David Olofson
2003-01-20 16:09:19 UTC
Permalink
Post by David Gerard Matthews
Hey Mark,
I've been on this list for a few months as well and
noticed it's kind of died down. Not sure what's up with that - it
was very active around November, iirc. There has been some
discussion about Swami, but I think the real hope for sampling
under Linux might be XAP (discussed extensively on
linux-audio-dev.)
Well, I'm not really involved with Linuxsampler, but I'm certainly
responsible for most of the bandwidth in the XAP discussion. ;-)

XAP is a plugin API, so it doesn't really deal with sampling or any
other form of synthesis directly. Linuxsampler would be very useful
as a XAP plugin, but that's really more of an API selection matter
from Linuxsampler's POV. That is, we still need something to turn
into a XAP plugin. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Steve Harris
2003-01-20 17:20:50 UTC
Permalink
Post by David Olofson
XAP is a plugin API, so it doesn't really deal with sampling or any
other form of synthesis directly. Linuxsampler would be very useful
as a XAP plugin, but that's really more of an API selection matter
from Linuxsampler's POV. That is, we still need something to turn
into a XAP plugin. :-)
I think that the plans we have for linuxsampler make it a better in-process
JACK app than a XAP plugin. Several of us have other things that are more
urgent, but we will resume work in a few* weeks.

I have a couple of conferences and a gig to prepare for urgently and Benno
and Juan have a lot of work on.

* Where few is > 1 ;)

- Steve
David Olofson
2003-01-20 18:47:08 UTC
Permalink
Post by Steve Harris
Post by David Olofson
XAP is a plugin API, so it doesn't really deal with sampling or
any other form of synthesis directly. Linuxsampler would be very
useful as a XAP plugin, but that's really more of an API
selection matter from Linuxsampler's POV. That is, we still need
something to turn into a XAP plugin. :-)
I think that the plans we have for linuxsampler make it a better
in-process JACK app than a XAP plugin.
Why?

Indeed, XAP doesn't have its own threads API for worker threads and
stuff, but neither does JACK, AFAIK. pthreads is fine. If XAP plugins
can tell if they're in a real time host or not, they can even work
off-line.

Anyway, XAP includes a serious instrument control API, whereas JACK
has no support for structured, variable rate data at all. (Yet?)

JACK + ALSA sequencer probably works. I just think it's backwards to
design new software around MIDI, instead of designing around a
standardized superset that can still interface with MIDI, and that
happens to be part of a plugin API.


Either way, there isn't much of a real argument until XAP is
finalized and we have at least one host. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

b***@gardena.net
2003-01-20 17:35:39 UTC
Permalink
Hi all,
sorry for this quiet phase, but the work at the ISP I work for
took a month longer than expected.
Before the end of the month it will be done thus I will finally
be able to get back on linuxsampler. (hopefully I'll not be regarded
as the vaporware man because of these delays ;-) )

Anyway, in December I wrote some benchmark code and enveloping and
event schemes for basic building blocks (the sampler engine).
I have not implemented this stuff in code yet, but I have some of it
written on paper.
Now I see this XAP stuff from David O.; it looks very cool, but due
to my scarce spare time during the last weeks I was unable to
read all the gigabytes of text that David and others produced.

Perhaps it would be beneficial for all of us to implement LinuxSampler
as a XAP plugin? Who knows?
The fact that LinuxSampler's goal is to be able to use compiler
techniques (you "draw" the signal graph on the screen, compile the
result into C code and then run it by loading the .so module)
means that the API needs to be flexible and/or extensible, so I
currently have no idea whether it would be better to build LinuxSampler
on XAP internally or to treat it as a black box and export only the
audio-out channels, the MIDI-ins plus some internal controls.

I think it would be a good idea to keep the thinking of the XAP and
LinuxSampler people in sync, in order to optimize both the API and
the sampler engine.

In a week or so I will discuss and ask for comments about my ideas for
implementing the modular sample playback engine that uses compilation.
One of the most interesting topics will be how we implement efficient
event passing between the sampler's modules (e.g. two envelope generators
that output to an adder module which in turn drives the sampler's pitch
or volume).
Being a bit of a nitpicker, I'd like to achieve sample-accurate
or near sample-accurate event timing and allow dense streams of events
(e.g. up to 1/4 - 1/8 of the sample rate) without too much performance
penalty, while retaining modularity and flexible event routing.
It's not an easy task to solve, but my preliminary brainstorming
suggests that it is possible to retain high accuracy with only
a few small tradeoffs.
David is quite a good "event-hacker", so I hope he gives us the
right advice to help us avoid design mistakes.
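[Editorial sketch: the event-routing scheme described above (two modulation sources feeding an adder module that drives a parameter, with sample-accurate frame timestamps) can be illustrated with a tiny toy model. The module, port, and event layout here are invented for illustration; this is not LinuxSampler code:]

```python
class Adder:
    """Toy 'adder module': remembers the most recent value seen on each
    input port and emits an output event whenever any input changes."""
    def __init__(self, n_inputs):
        self.latest = [0.0] * n_inputs

    def process(self, events):
        # events: time-ordered (frame, port, value) tuples
        out = []
        for frame, port, value in events:
            self.latest[port] = value
            out.append((frame, sum(self.latest)))
        return out

# Two "envelope generator" event streams, timestamped in sample frames,
# drive the sampler's pitch through the adder:
eg1 = [(0, 0, 1.0), (64, 0, 0.5)]     # (frame, port, value)
eg2 = [(0, 1, 0.0), (32, 1, 0.25)]
merged = sorted(eg1 + eg2, key=lambda ev: ev[0])   # sample-accurate order
adder = Adder(2)
pitch_events = adder.process(merged)
# pitch_events ends with (64, 0.75): 0.5 from eg1 plus 0.25 from eg2
```

The point of the sketch is that events stay sparse (one per change) rather than per-sample, which is what keeps dense-but-not-audio-rate event streams cheap.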

Imagine the handiness of loading an akai.so module and starting to play
AKAI libraries with the speed of a hardcoded engine.
The same can be said of GIG libraries.
Fortunately the disk playback problem was solved a few years ago, so what's
missing is mainly a good event/modulation system, sample loading libraries
(Swami etc.) and a GUI. (The proposal was
to decouple the GUI from the engine using a TCP socket, so that you can
build headless "sampler farms" that can be controlled remotely from an
arbitrary machine, even a TCP/IP equipped PDA :-))
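[Editorial sketch: a toy version of the decoupled-GUI idea, a headless engine answering line-based text commands over a TCP socket. The command names ("GET VOICES", "QUIT") and replies are invented for illustration; no actual protocol had been specified at this point:]

```python
import socket
import threading

def engine(conn):
    """Headless 'engine' answering line-based text commands.
    The command set here is made up for the sketch."""
    f = conn.makefile("rw")
    for line in f:
        cmd = line.strip()
        if cmd == "GET VOICES":
            f.write("OK 64\n")
            f.flush()
        elif cmd == "QUIT":
            f.write("BYE\n")
            f.flush()
            break
    conn.close()

# Engine listens on an ephemeral local port; a "GUI" connects to it.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: engine(server.accept()[0]),
                 daemon=True).start()

gui = socket.create_connection(("127.0.0.1", port))
gf = gui.makefile("rw")
gf.write("GET VOICES\n")
gf.flush()
reply = gf.readline().strip()   # "OK 64"
gf.write("QUIT\n")
gf.flush()
gui.close()
```

Because the transport is plain TCP text, the "GUI" could just as well be a remote box or a PDA, which is exactly the sampler-farm scenario above.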

I know many would like quick results (I would like them too), but
unfortunately a good audio engine takes time to design, and we have only a
few complex open source audio projects we can learn from.

I think the biggest near-term challenge is to get the basic sampler
(with recompiler) and event system up and running.
David Olofson
2003-01-20 20:50:18 UTC
Permalink
Post by b***@gardena.net
Hi all,
sorry for this quiet phase, but the work at the ISP I work for
took a month longer than expected.
Before the end of the month it will be done thus I will finally
be able to get back on linuxsampler. (hopefully I'll not be
regarded as the vaporware man because of these delays ;-) )
That used to be me, but I'm not sure I qualify any more... ;-)

(Audiality was resurrected for the third time, and inherited the code
from a games sound FX engine that came to be pretty much by accident
when I worked on an SDL port of the old game XKobo: Kobo Deluxe. That
is, I finally have something that can actually play music. :-)
Post by b***@gardena.net
Anyway in december I wrote some benchmark code and enveloping and
event schemes for basic building blocks (the sampler engine).
I have not implemented this stuff in code but I have some stuff
written on paper.
Sounds interesting. (I remember seeing something about this on LAD,
or somewhere.) Will you make your findings and thoughts available in
digital form?
Post by b***@gardena.net
Now I see this XAP stuff from David O.; it looks very cool, but due
to my scarce spare time during the last weeks I was unable to
read all the gigabytes of text that David and others produced.
Oops. Sorry 'bout that! ;-)
Post by b***@gardena.net
Perhaps it would be beneficial for all of us to implement LinuxSampler
as a XAP plugin? Who knows?
I think it would be beneficial, provided it works at all, basically.
And - obviously - provided there will be usable XAP hosts before
LinuxSampler is mature enough for serious use.

Note that XAP plugins need to have their GUIs in separate processes,
to avoid GUI toolkit conflicts with the hosts! This applies to
in-process JACK as well, though, and shouldn't be a surprise to Un*x
developers. (BTW, is in-process actually implemented in JACK? Wasn't
last time I looked.)

Besides, I think total separation of GUI and DSP code is a good idea
anyway, for a number of reasons. (Stability, GUIs can be left out or
replaced, etc...)
Post by b***@gardena.net
The fact that LinuxSampler's goal is to be able to use compiler
techniques (you "draw" the signal graph on the screen, compile the
result into C code and then run it by loading the .so module)
means that the API needs to be flexible and/or extensible, so I
currently have no idea whether it would be better to build
LinuxSampler on XAP internally or to treat it as a black box and
export only the audio-out channels, the MIDI-ins plus some internal
controls.
This wouldn't be a problem. Just have the GUI do the compiling, and
then tell the plugin to load the .so and "mount" it. The latter is a
perfect job for the background worker calls I've suggested.

This isn't very different from loading patches into a normal synth or
sampler. There's just no way you can do it in the RT thread without
causing drop-outs, so you *have* to make someone do it in a separate
thread.

And it can't be easier than telling the host to call a function from
a suitable context and send you a RESULT event when the call returns,
can it? (Much easier than using pthreads, and - free bonus: it
doesn't break down when running off-line.)
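[Editorial sketch: the flow suggested above (the plugin asks the host to run a blocking call off the RT thread, and gets a RESULT event back when it finishes) might look roughly like this. `HostWorker` and `load_patch_module` are hypothetical names for the sketch, not part of any actual XAP draft:]

```python
import queue
import threading

class HostWorker:
    """The host runs a blocking call in a worker thread and posts a
    RESULT event into the plugin's event queue when it returns."""
    def __init__(self):
        self.events = queue.Queue()   # stands in for the RT event queue

    def call_async(self, func, *args):
        def run():
            result = func(*args)          # blocking work, off the RT thread
            self.events.put(("RESULT", result))
        threading.Thread(target=run, daemon=True).start()

def load_patch_module(path):
    # hypothetical stand-in for dlopen()ing a compiled .so patch
    return "loaded:" + path

host = HostWorker()
host.call_async(load_patch_module, "akai.so")
event = host.events.get(timeout=5)    # a real RT thread would poll instead
# event == ("RESULT", "loaded:akai.so")
```

The plugin never touches threads itself; it only sees ordinary events, which is why the same scheme keeps working when the host runs off-line.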
Post by b***@gardena.net
I think it will be a good idea to keep the thoughts of XAP and
LinuxSampler guys in sync in order to optimize both the API and
the sampler engine.
Yes, that's what I've been thinking - but it seems that most others
in the XAP discussion have very little interest in plugins that don't
consist of 100% RT safe code... Could just be that I've failed to
explain the issues properly, though, or that we have so many other
things to worry about first.
Post by b***@gardena.net
In a week or so I will discuss and ask for comments about my ideas
for implementing the modular sample playback engine that uses
compilation. One of the most interesting topics will be how we
implement efficient event passing between the sampler's modules (e.g.
two envelope generators that output to an adder module which in turn
drives the sampler's pitch or volume).
Being a bit of a nitpicker, I'd like to achieve sample-accurate
or near sample-accurate event timing and allow dense streams of events
(e.g. up to 1/4 - 1/8 of the sample rate) without too much performance
penalty, while retaining modularity and flexible event routing.
It's not an easy task to solve, but my preliminary brainstorming
suggests that it is possible to retain high accuracy with only
a few small tradeoffs.
David is quite a good "event-hacker", so I hope he gives us the
right advice to help us avoid design mistakes.
Well, in fact, I'm messing with that kind of stuff right now. :-)

I've been trying to find the optimal set of events for ramped
controls, and Steve's suggestion to use only *one* event looks like a
great idea!

Next, I'll probably have the envelope generator multiply its output
with incoming "scale" events, to implement click free volume and pan
and things like that. That is, multiplication of two ramped event
streams.
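[Editorial sketch: the single-ramp-event idea and the multiplication of two ramped control streams (envelope output times a volume scale) can be sketched in a few lines. The event shape is invented for illustration, not Audiality's actual format:]

```python
def render_ramp(start, target, frames):
    """Expand a single 'ramp to target over N frames' event into
    per-sample values (linear interpolation)."""
    if frames <= 0:
        return [float(target)]
    step = (target - start) / frames
    return [start + step * i for i in range(frames)]

# An envelope segment multiplied by a ramped "scale" control
# (e.g. click-free volume): multiply the two per-sample streams.
env   = render_ramp(0.0, 1.0, 4)    # attack ramping up
scale = render_ramp(1.0, 0.5, 4)    # volume fading down
out   = [e * s for e, s in zip(env, scale)]
# out[1] == 0.25 * 0.875 == 0.21875
```

The appeal of the one-event form is that a whole control segment costs a single event on the queue, while the DSP loop still gets a smooth per-sample curve.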

If all goes well, the result will be in Audiality 0.1.1, to be
released sometime before everyone's forgotten the name, hopefully. ;-)
Post by b***@gardena.net
Imagine the handiness of loading an akai.so module and starting
to play AKAI libraries with the speed of a hardcoded engine.
The same can be said of GIG libraries.
Fortunately the disk playback problem was solved a few years ago, so
what's missing is mainly a good event/modulation system, sample loading
libraries (Swami etc.) and a GUI. (The proposal was
to decouple the GUI from the engine using a TCP socket, so that you
can build headless "sampler farms" that can be controlled remotely
from an arbitrary machine, even a TCP/IP equipped PDA :-))
With XAP, it seems like GUIs will interface to plugins through the
standard control system. You could think of them as XAP plugins,
running in their own soft RT hosts, connecting to the RT hosts
through an RT/non-RT gateway. No custom interfaces needed, that is.
The plugin<->GUI transport layer can become part of XAP.

In short, I think it's better to wrap this stuff in a XAP host SDK
lib, so you can use raw TCP/IP, JACK or whatnot, without rewriting
hosts and GUIs.


//David Olofson - Programmer, Composer, Open Source Advocate

Josh Green
2003-01-20 19:48:31 UTC
Permalink
Post by David Gerard Matthews
Hey Mark,
I've been on this list for a few months as well and
noticed it's kind of died down. Not sure what's up with that - it was
very active around November, iirc. There has been some discussion about
Swami, but I think the real hope for sampling under Linux might be XAP
(discussed extensively on linux-audio-dev.) I also use Gigasampler a
bit and seeing as I'd like to dump Windoze altogether I'm hoping for a
way to use my libraries in Linux (like my really nifty Akai-format
prepared piano CD...)
-dgm
Swami is still going... Unfortunately, with just me as the developer it is
going somewhat slowly. I've been a little burnt out on it lately, so I'm
kind of taking a break (sometimes I find that I'm almost working on it
full time). This is hard when I realize I actually have to make money to
live (I haven't received a single donation yet for Swami, although it
hasn't really been massively released yet... Maybe the PayPal link is
broken or something; could someone test it? He he..). It would be cool if
the Linux audio community could actually make a living on open source
software development. We need to come up with other methods of raising
money though (grants, money pools, etc.).

I already started implementing DLS2 support in libInstPatch. There is
now a generic RIFF parser that can be used for SoundFont files, DLS2 and
other RIFF based formats. The GUI is becoming more pluggable to make
plugging new formats into it rather easy (making a good API design takes
a lot of effort though).

So things are still progressing, just to assure all of you of that. I've
kept quiet because the development version is still not quite usable
(too much in development). The GTK1.2 branch works great though, and the
CVS version works with the most recent FluidSynth (well at least last
time I checked, FluidSynth is also undergoing massive API changes).

So: keep the Linux music/audio dream alive, and let every day make it more
of a reality :) Cheers.
Josh Green
Mark Knecht
2003-01-20 17:15:50 UTC
Permalink
David & David,
Was that the name of a band in the late 80's maybe? I must go look.

Well, if it served no other purpose, at least my first post created a
little bit of traffic on the list, right? ;-)
David Olofson
2003-01-20 18:33:06 UTC
Permalink
Post by Mark Knecht
David & David,
Was that the name of a band in the late 80's maybe? I must go look.
Well, I was just in my early teens back then, so I had nothing to do
with it... ;-)
Post by Mark Knecht
Well, if it served no other purpose, at least my first post
created a little bit of traffic on the list, right? ;-)
Yeah, that's nice - and even Benno chimed in; long time no see! :-)


[...]
Post by Mark Knecht
As always, I probably have too many ideas, but personally I'd
really like to see a sample player somewhat like Battery for Linux.
I think that it's a much simpler interface, requires far fewer
system resources to work well, could use Wave files easily. I just
think overall it would get a lot of use up front. The existing
sample players, like iiwusynth and timidity are not, IMO, really
very good for dealing with drum sets.
Well, I'm not very familiar with Battery, but maybe Audiality will be
able to do what you want rather easily? It's basically a sampler with
an integrated mixer. Although the primary source of waveforms is a
script driven off-line synth (modular synth style), it can load and
process audio files as well. (Which is a feature that remains from
Audiality's days as a games sound FX engine.)

As to UI, the scripts are as close as you get right now. There's no
GUI at all for it yet, but I have detailed plans for an editor.

That said, I've created almost a hundred "useful" sounds while
playing around with the synth. A standard text editor turns out to
work a lot better than I ever imagined it could! :-) (Though, my
previous synth programming experience is mostly with tiny LCDs and
buttons, and a few Windoze synth editors that only made things *more*
awkward.)

Anyway, Audiality is still in early beta, and there are a bunch of
features that need to be implemented before it's truly useful. I
wrote some demo songs a long time ago, so it's not *totally* useless,
but let's just say, the sampler part is *really* rudimentary.


[...]
Post by Mark Knecht
I haven't paid too much attention to XAP as I think it's too
much below the hood for someone like me. I'm glad to see that you
are thinking of samplers with it. I can tell folks are interested.
We're thinking of *everything* that could process or generate audio,
though the primary target is synths and samplers. (We already have
LADSPA for effects and JACK for applications/"large scale plugins".)

Think of XAP as a truly Free VSTi or DXi alternative.


//David Olofson - Programmer, Composer, Open Source Advocate

Josh Green
2003-01-20 19:53:16 UTC
Permalink
On Mon, 2003-01-20 at 09:15, Mark Knecht wrote:

<cut>
As always, I probably have too many ideas, but personally I'd really like
to see a sample player somewhat like Battery for Linux. I think that it's a
much simpler interface, requires far fewer system resources to work
well, could use Wave files easily. I just think overall it would get a lot
of use up front. The existing sample players, like iiwusynth and timidity
are not, IMO, really very good for dealing with drum sets.
Has there been any discussion of doing something like that?
What do you find lacking in iiwusynth for drum sets? If you are
talking about creating drum sets, it's probably more the patch editor
that needs to be up to the task. I have had a number of ideas in the area
of making it easier to create drum kits with Swami. I would like to know
what kinds of features you could envision for your own uses.

<cut>
Cheers,
Mark
Cheers.
Josh Green
Mark Knecht
2003-01-20 17:42:04 UTC
Permalink
Steve,
I completely understand.

Question: the linuxsampler-gcc code I got from your server compiles with
no options in the ./configure stage, but fails when I try the --enable-jack
option. Does JACK support work for this code yet? Do I need to enable other
options to enable JACK support?

Cheers,
Mark
-----Original Message-----
Steve Harris
Sent: Monday, January 20, 2003 9:21 AM
Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post
Post by David Olofson
XAP is a plugin API, so it doesn't really deal with sampling or any
other form of synthesis directly. Linuxsampler would be very useful
as a XAP plugin, but that's really more of an API selection matter
from Linuxsampler's POV. That is, we still need something to turn
into a XAP plugin. :-)
I think that the plans we have for linuxsampler make it a better in-process
JACK app than a XAP plugin. Several of us have other things that are more
urgent, but we will resume work in a few* weeks.
I have a couple of conferences and a gig to prepare for urgently and Benno
and Juan have a lot of work on.
* Where few is > 1 ;)
- Steve
_______________________________________________
Linuxsampler-devel mailing list
https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel
Steve Harris
2003-01-20 17:47:58 UTC
Permalink
Post by Mark Knecht
Steve,
I completely understand.
Question - the linuxsampler-gcc code I got from your server compiles with
no options in the ./configure stage, but fails when I try the --enable-jack
option. Does Jack support work for this code yet? Do I need to enable other
options to enable jack support?
I don't know; you'd have to ask Juan, it's his code. I've a feeling that
the JACK code is based on an old version of the API.

- Steve
Mark Knecht
2003-01-20 18:09:16 UTC
Permalink
Post by Steve Harris
Post by Mark Knecht
Steve,
I completely understand.
Question: the linuxsampler-gcc code I got from your server compiles with
no options in the ./configure stage, but fails when I try the --enable-jack
option. Does JACK support work for this code yet? Do I need to enable
other options to enable JACK support?
I don't know; you'd have to ask Juan, it's his code. I've a feeling that
the JACK code is based on an old version of the API.
- Steve
I just wondered if you had tried to do it. ;-)
Mark Knecht
2003-01-20 18:18:58 UTC
Permalink
-----Original Message-----
Sent: Monday, January 20, 2003 9:36 AM
Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post
Perhaps it would be beneficial for all of us to implement LinuxSampler
as a XAP plugin? Who knows?
I don't think I 'know', but I can report some data.

This could work, in some applications, but (for me anyway) it might have
limited appeal.

In the GigaStudio world the sample libraries are huge, mostly running
between 1-2GB per library. I use 10-15 of them at a time, so there is no
way to put this in DRAM. This puts a pretty high stress on the underlying
disk subsystem, which is fine as long as GSt is on its own computer, where
the PCI bus and audio subsystem are available to do the work. So, in
replacing something like GSt or Halion, my guess is that LinuxSampler wants
to look mostly like a stand-alone application. That said, it would be very
cool to be able to send it control commands via the MIDI stream, or over
Ethernet, from my main DAW. GSt doesn't support much of that today. (If at
all... I don't use it.)

If we were looking at a trimmed-down version of the technology which
operated more like Battery, then I think the underlying sampler technology
could work as both a plugin and a stand-alone app running on the same DAW,
if most of the samples fit in memory. (Easy for drums, not so easy for
piano.)

I personally think a Battery-like app would be a great starting point for
this sort of technology, to be followed up by a full-blown sampler app
later.

Cheers,
Mark
Benno Senoner
2003-01-20 18:54:52 UTC
Permalink
Well, the fact that the engine is decoupled from the GUI means that
both solutions, a standalone sampler machine and the "full virtual studio in one
machine" are doable.
Regarding the stress that disk-based sampling puts on the machine:
yes, it is a quite stressful application, but I don't think PCI is the
main bottleneck here.
Yes, its peak performance is only 133MB/sec, but as we know hard disks
usually transfer only 10-25MB/sec in the real world.
This means that you can have two separate disks, one for HD recording and
the other for the sampler, running in parallel without interfering too much
with each other.
The problem of GS is that it is a low-level hack (it requires special sound
card drivers, and the sampler engine basically runs at kernel level: if it
crashes, sh*t happens :-) ), thus it is not very multitasking-friendly
(especially when you want to run it in parallel with other CPU/disk-hungry
software like Cubase/Logic etc.).
This is why users love Halion: it is a VST plugin that runs under Cubase;
you get sample-accurate processing and perfect integration, something that
the MIDI-driven (even via local MIDI loopback) GS cannot achieve.
I have not seen GS and Halion in action side by side, but from reading the
forums, when it comes to raw performance and low latency GS beats Halion
because of all these low-level hacks.
These issues, coupled with the fact that Windows is not that stable when
you overload it, are (IMHO) the main reason why Windows users tend to run
GS in a dedicated box, treating it just like a hardware device.
On the other hand, Linux's multitasking works exceptionally well, and well
designed realtime audio software can fully utilize the machine's resources
without compromising stability.
This is why, given enough horsepower, I support the idea of the whole
virtual studio in one single Linux box.
MIDI sequencing and a sampler/synth engine on the same box is not a
problem, since sequencing only takes a fraction of the available resources.
If you add HD recording to the equation, then the workload increases
significantly, but nothing speaks against running both the HDR and the
sampler software in the same box.
A single-disk system is probably not up to the task, because the sampler
needs to access the data on disk and cannot easily coexist with the HDR
tracks that fight for the same (scarce) disk bandwidth/latency.
With two disks the focus shifts to the CPU (mixing and doing FXes on HDR
tracks can chew up quite some CPU), but fortunately fast CPUs (2GHz+) can
do both at the same time.
My PII400 is able to stream about 60 tracks (voices) from a 16GB IDE disk
(IBM) when using the old sampler test code (evo); this means that with 5x
the MHz and two disks it is easily possible to do both HDR and disk-based
sampling on the same box.
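[Editorial sketch: as a rough sanity check on those numbers, the sustained bandwidth needed for the ~60 streaming voices mentioned above, assuming uncompressed mono 16-bit 44.1kHz data and ignoring seek overhead and read-ahead, lands comfortably inside the quoted 10-25MB/sec real-world range of a single disk:]

```python
def stream_bandwidth_mb_s(voices, sample_rate=44100, bytes_per_sample=2):
    """Sustained disk bandwidth for uncompressed mono 16-bit voices;
    seek overhead and read-ahead are ignored (illustrative only)."""
    return voices * sample_rate * bytes_per_sample / 1e6

# ~60 voices, as on the PII400 above:
needed = stream_bandwidth_mb_s(60)   # about 5.3 MB/sec
```

In practice seeks dominate with many voices scattered across the disk, which is why the achievable voice count is far below what raw sequential bandwidth would suggest.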
Regarding Battery vs. a fully-fledged sampler: I agree, it is better to
start out with something simple and elaborate later, but if we get this
"sampler-construction-kit" done, then evolving from the simple to the
"extended" version of the sampler will take only a small amount of time,
since you will basically design it using your mouse rather than your editor
and C compiler. :-)
For example, Juan says he prefers to first write hardcoded engines and then
start thinking about recompilation techniques, but I see that as a bit of a
waste of time. I prefer to take a longer design phase and write an engine
that can later scale really high, without the limitations of hardcoded
engines.

cheers,
Benno

http://linuxsampler.sf.net
Post by Mark Knecht
In the GigaStudio world the sample libraries are huge, mostly running
between 1-2GB per library. I use 10-15 of them at a time, so there is no
way to put this in DRAM. This puts a pretty high stress on the underlying
disk subsystem, which is fine as long as GSt is on its own computer, where
the PCI bus and audio subsystem are available to do the work. So, in
replacing something like GSt or Halion, my guess is that LinuxSampler wants
to look mostly like a stand-alone application. That said, it would be very
cool to be able to send it control commands via the MIDI stream, or over
Ethernet, from my main DAW. GSt doesn't support much of that today. (If at
all... I don't use it.)
If we were looking at a trimmed-down version of the technology which
operated more like Battery, then I think the underlying sampler technology
could work as both a plugin and a stand-alone app running on the same DAW,
if most of the samples fit in memory. (Easy for drums, not so easy for
piano.)
I personally think a Battery-like app would be a great starting point for
this sort of technology, to be followed up by a full-blown sampler app
later.
Cheers,
Mark
David Olofson
2003-01-20 22:23:26 UTC
Permalink
Post by Benno Senoner
Well, the fact that the engine is decoupled from the GUI means that
both solutions, a standalone sampler machine and the "full virtual
studio in one machine" are doable.
Exactly - and using an API that handles both audio and sample
accurate control would make it easier to get right and more robust, I
think.
Post by Benno Senoner
Regarding the stress that disk based sampling puts on the machine.
Yes it is a quite stressful application, but I don't think PCI is
the main bottleneck here.
Yes its peak performance is only 133MB/sec but as we know harddisks
usually transfer only 10-25MB/sec in the real world.
Yes - but if you have a phatt array of drives, PCI won't cut it. I
guess that's why mid and high end servers, and even workstations come
with 64 bit PCI and stuff.
Post by Benno Senoner
This means that you can have two separate disks, one for HD
recording and the other for the sampler, running in parallel without
interfering too much with each other.
BTW, there's a problem if you need HDR and direct-from-disk sampling
at the same time. (Which I would all the time, if I used a disk
sampler, since I never record synth stuff to disk before vocals. I
like to have full control until the final mixdown.)

Obviously, you can just use three disks; one for the system, one for
the sampler and one for the HDR. However, if you use more than one
disk sampler at a time, this starts getting out of hand...

This is why I suggested a standard disk butler API on LAD - but I
think we need to do a lot more thinking and coding before turning
that into a standard. Maybe something useful will eventually take
form in the internals of LinuxSampler, so we can design a XAP
extension around that, rather than guessing.
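For illustration, the core of such a disk butler could be little more than a
ring buffer shared between the RT voice and a low-priority disk thread: the
butler keeps the buffer topped up from disk, the voice only ever reads from
memory. The sketch below is invented for illustration (names, sizes, and the
faked disk source are not from LinuxSampler or any real butler API):

```c
/* Hypothetical sketch of a "disk butler" ring buffer: an RT voice
 * consumes sample frames that a low-priority disk thread keeps
 * topped up. Names and sizes are illustrative only. */
#include <assert.h>

#define RING_FRAMES 4096           /* power of two for cheap wrapping */
#define RING_MASK   (RING_FRAMES - 1)

typedef struct {
    float buf[RING_FRAMES];
    volatile unsigned read_pos;    /* advanced by the RT thread only */
    volatile unsigned write_pos;   /* advanced by the butler only    */
} stream_ring;

static unsigned ring_readable(const stream_ring *r)
{
    return r->write_pos - r->read_pos;   /* unsigned wrap is well-defined */
}

static unsigned ring_writable(const stream_ring *r)
{
    return RING_FRAMES - ring_readable(r);
}

/* Butler side: refill from "disk" (here just a caller-supplied buffer). */
static unsigned butler_refill(stream_ring *r, const float *src, unsigned n)
{
    unsigned todo = ring_writable(r);
    if (n < todo)
        todo = n;
    for (unsigned i = 0; i < todo; i++)
        r->buf[(r->write_pos + i) & RING_MASK] = src[i];
    r->write_pos += todo;
    return todo;
}

/* RT side: consume frames; returns how many were actually available. */
static unsigned voice_consume(stream_ring *r, float *dst, unsigned n)
{
    unsigned todo = ring_readable(r);
    if (n < todo)
        todo = n;
    for (unsigned i = 0; i < todo; i++)
        dst[i] = r->buf[(r->read_pos + i) & RING_MASK];
    r->read_pos += todo;
    return todo;
}
```

With a single reader and single writer per ring, this needs no locks in the
RT path, which is presumably why the butler/worker split keeps coming up.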
Post by Benno Senoner
The problem of GS is that it is a
[...]

So, in short, GS is a Windows specific performance hack, while Halion
is a plugin sampler Done Right - only on the wrong OS.
Post by Benno Senoner
On the other hand Linux's multitasking works exceptionally well
and well designed realtime audio software can fully utilize the
machine's resources without compromising stability.
Yes - although I think Paul has pointed out that there are still
issues with the scheduler when pushing it as hard as JACK does. IIRC,
it sometimes doesn't schedule the right process. Seems obvious that
this should be fixed in 2.5, but until then...
Post by Benno Senoner
This is why, given enough horsepower, I support the idea of the
whole virtual studio in one single Linux box.
...and with 2 or more P-4 or better CPUs, preferably utilizing SIMD,
we're looking at some *serious* DSP power...
Post by Benno Senoner
MIDI sequencing and a sampler/synth engine on the same box is not a
problem since sequencing only takes a fraction of the available
resources. If you add HD recording to the equation, then the
workload increases significantly, but nothing speaks against
running both the HDR and the sampler software in the same box.
Except that they need separate disks, unless they share the disk
butler, I think... Just adding another disk would probably be
acceptable to most serious users, though.


[...]
Post by Benno Senoner
Regarding battery vs fully-fledged sampler: I agree, better to
start out with something simple and elaborate later, but if we get
this
"sampler-construction-kit" done, then evolving from the simple to
the "extended" version of the sampler will take only a small amount
of time since you will basically design it using your mouse and not
your editor and C compiler.
:-)
For example Juan says he prefers to first write hardcoded engines
and then start thinking about recompilation techniques but I see it
as a bit of a waste of time. I prefer to take a longer
design phase and write an engine that can later scale really high
without the limitations of hardcoded engines.
Well, as long as you actually get through the design phase without
killing the project, spending more time on the design sounds like a
good idea. Saves time in the long run, provided you think straight
and know what you're doing.


That said, I did it the other way around with Audiality. (Which is
effectively a working test bed for my XAP ideas, BTW.) It's also the
first major spare time project of mine that's really off the ground.
It's been more or less fully functional right from the start
(although it didn't do much :-), almost 2 MB of code ago. I've just
added features as I needed them (ehrm, or just thought they would be
fun to play with), and restructured when things started getting too
messy.

Note that this restructuring part is *very* important, but usually
really rather boring. I believe the only reason why Audiality isn't a
total heap of spaghetti is that I have trouble remembering complex
APIs and logic. If it's too messy, I just can't work with it and must
fix it.

Obviously, there's a great deal of rewriting and moving stuff around
in this process, but that's not all bad. I've actually *tried* the
solutions that I ruled out, and when I get a new "great idea", I can
just test it and see if it actually works. When it comes to design,
testing ideas in a real app is a lot more useful than testing in a
fake environment. Often, you realize where you went wrong as soon as
you start typing the test code.


What I'm saying is that the "hack away" approach may not be the most
effective way of creating software, but it sure beats never getting
off the ground. I think this all boils down to "It's more fun to hack
something that sort of works." If it doesn't compile, run and do
something "interesting" every 10-20 hours of hacking or so, you're in
trouble...


Can't tell you what to do, but given my experience, I would say Juan
has a point. Though, it seems to me that rudimentary C code
generation shouldn't have to be *that* complicated. You could
probably fake most of it, and basically just have the engine output,
compile and load a hand coded voice struct, until you have the real
stuff in place.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Steve Harris
2003-01-20 20:59:41 UTC
Permalink
Post by Mark Knecht
In the GigaStudio world the sample libraries are huge, mostly running
between 1-2GB per library. I use 10-15 of them at a time, so there is no way
to put this in DRAM. This puts a pretty high stress on the underlying disk
subsystem, which is fine as long as GSt is on its own computer where the
PCI bus and audio subsystem are available to do the work. So, in replacing
something like GSt or Halion, my guess is that LinuxSampler wants to look
mostly like a stand-alone application.
That was what we are aiming for. IIRC, Evo (LinuxSampler's predecessor)
could read .gig files to some extent and stream off disk.

- Steve
David Olofson
2003-01-20 21:16:30 UTC
Permalink
Post by Mark Knecht
-----Original Message-----
Sent: Monday, January 20, 2003 9:36 AM
Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post
Perhaps it would be beneficial for all us to implement
LinuxSampler as a XAP plugin ? who knows ?
I don't think I 'know', but I can report some data.
This could work, in some applications, but (for me anyway) it might
have limited appeal.
In the GigaStudio world the sample libraries are huge, mostly
running between 1-2GB per library. I use 10-15 of them at a time,
so there is no way to put this in DRAM. This puts a pretty high
stress on the underlying disk subsystem, which is fine as long as
GSt is on its own computer where the PCI bus and audio subsystem
are available to do the work. So, in replacing something like GSt
or Halion, my guess is that LinuxSampler wants to look mostly like
a stand-alone application.
I see - and in that scenario, it doesn't really matter whether
LinuxSampler is its own host, runs as a JACK client, or runs as a
XAP synth under another host. The difference is in the APIs and what
they provide.

Stand-alone app:
+ No restrictions!
- ...and no definitive standards. Just pick some protocols.

JACK client:
+ You don't have to worry about audio I/O.
+ Audio routing can be handled by other tools.
+ No lib conflicts, as you can have your own process.
- Linux still breaks JACK out-of-process latency... :-(
- There is no instrument control protocol.
- Control and audio run on different subsystems.

XAP plugin:
+ You don't have to worry about audio I/O.
+ There is a comprehensive instrument control protocol.
+ Control and audio routing and transport is integrated.
+ You're running in-process, which works well on any LL kernel.


(Note that these are not necessarily absolute facts. It's just based
on my own observations and information from the lists.)
Post by Mark Knecht
That said, it would be very cool to be
able to send it control commands via the MIDI stream, or over
Ethernet, from my main DAW. GSt doesn't support much of that today.
(If at all...I don't use it.)
That's just a matter of the host mapping MIDI to XAP controls in a
useful way. What you need is a MIDI->XAP translator - and you could
send a custom one with LinuxSampler, to implement SysEx or whatever
you like. (It's just a driver/plugin, just like the interfaces to
ALSA, JACK and whatnot.)

Audiality implements MIDI internally for now, since it's not a
plugin. I just picked OSS + ALSA "rawmidi" as the control interface
and SDL + OSS for audio I/O, since that's what worked best for what I
needed at the time. (There was no XAP when I started hacking
Audiality, and it's still not finalized - and either way, there's no
host.)
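Such a MIDI->XAP translator could be little more than a parser that turns raw
MIDI channel messages into timestamped engine events. A hypothetical sketch
(none of these names come from XAP, ALSA, or LinuxSampler):

```c
/* Hypothetical sketch of a MIDI->event translator: raw MIDI bytes in,
 * sample-accurate engine events out. All names are invented for
 * illustration; this is not actual XAP or LinuxSampler code. */
#include <assert.h>

typedef enum { EV_NONE, EV_NOTE_ON, EV_NOTE_OFF } ev_type;

typedef struct {
    ev_type  type;
    unsigned frame;     /* sample-accurate timestamp within the block */
    int      channel;   /* 0-15 */
    int      note;      /* 0-127 */
    float    velocity;  /* normalized 0.0-1.0 */
} engine_event;

/* Translate one 3-byte MIDI channel voice message arriving at 'frame'. */
static engine_event midi_to_event(const unsigned char msg[3], unsigned frame)
{
    engine_event ev = { EV_NONE, frame, msg[0] & 0x0F, msg[1],
                        msg[2] / 127.0f };

    switch (msg[0] & 0xF0) {
    case 0x90:  /* note on; velocity 0 means note off by MIDI convention */
        ev.type = msg[2] ? EV_NOTE_ON : EV_NOTE_OFF;
        break;
    case 0x80:  /* note off */
        ev.type = EV_NOTE_OFF;
        break;
    default:    /* CCs, pitch bend etc. ignored in this sketch */
        break;
    }
    return ev;
}
```

The point being that the sampler core never sees MIDI at all; only events of
this kind, whichever driver produced them.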
Post by Mark Knecht
If we were looking at a trimmed down version of the technology
which operated more like Battery, then I think that using the
underlying sampler technology as both a plugin or a stand alone app
running on the same DAW, if most of the samples fit in memory.
(Easy for drums, not so easy for piano)
What exactly do you mean by "stand alone app"...?

By my definition, the only difference between a completely
stand-alone application and a plugin + GUI process is that in the
latter case, the RT thread belongs to another process.

This has no implications to users (if they even know about it!),
except that the latter will probably interface better with host
applications than will something that uses MIDI or similar in
parallel with JACK audio streaming.


Steve Harris
2003-01-20 21:41:57 UTC
Permalink
...
Post by David Olofson
- Linux still breaks JACK out-of-process latency... :-(
?
Post by David Olofson
+ You don't have to worry about audio I/O.
+ There is a comprehensive instrument control protocol.
+ Control and audio routing and transport is integrated.
+ You're running in-process, which works well on any LL kernel.
- Doesn't exist yet ;)
- Each point in the overall graph will be a different instance
- Making the threading work well with all hosts will be hard
- No way of receiving arbitrary MIDI data (is that useful?)
- No direct JACK i/o

The obvious solution is to provide the services as both XAP and JACK.

- Steve
David Olofson
2003-01-20 22:52:47 UTC
Permalink
Post by Steve Harris
...
Post by David Olofson
- Linux still breaks JACK out-of-process latency... :-(
Seems to be a scheduler issue, but I don't have first hand experience
with this. I don't know if it's been fixed yet, but Paul has been
complaining about it every now and then.
Post by Steve Harris
Post by David Olofson
+ You don't have to worry about audio I/O.
+ There is a comprehensive instrument control protocol.
+ Control and audio routing and transport is integrated.
+ You're running in-process, which works well on any LL kernel.
- Doesn't exist yet ;)
That's a point! :-)
Post by Steve Harris
- Each point in the overall graph will be a different
instance
Yes, but that applies to JACK and applications as well. I don't see
what you mean.
Post by Steve Harris
- Making the threading work well with all hosts will be hard
Possibly. Do you have some details on this? (Conflicts with what the
host is doing or whatever.)
Post by Steve Harris
- No way of receiving arbitrary MIDI data (is that useful?)
It's not useful for normal stuff, since you can map all RT MIDI
events to XAP events one way or another through a standard
driver/translator plugin. In fact, you can even pipe SysEx through
XAP (as data controls), so there might just not be a need for MIDI at
all; just a standard way of passing "non-standard" stuff to plugins,
if they want it. (You'd just have input ports for SysEx if you want
it.)
Post by Steve Harris
- No direct JACK i/o
Why would you need that if you're a XAP plugin? It's the host that
decides where your audio ports are connected. That's one of the few
really significant differences between JACK and LADSPA, XAP, VST etc.
Post by Steve Harris
The obvious solution is to provide the services as both XAP and JACK.
Yes... Although I don't see why you couldn't provide your own
JACKified XAP host that automatically loads and hooks up
LinuxSampler. XAP would just be the native API of the synth - wrap as
you like, using standard or custom wrappers.

Or you could just make the sampler a lib with whatever interface you
like, and then implement "wrappers" for JACK, XAP, VST or whatever.
Not sure I see the point in designing your own private API just for
that, though.

I was going to do something like that with Audiality, but haven't
decided how to do it yet. I might just make it a XAP plugin, as XAP
and Audiality internals have lots in common anyway. There will still
be an easy to use "games sound engine" style API as well, but that
could be implemented as a specialized XAP host for Audiality (or
maybe even a "real" XAP host), rather than Audiality itself.


Steve Harris
2003-01-20 23:45:12 UTC
Permalink
Post by David Olofson
Post by Steve Harris
Post by David Olofson
- Linux still breaks JACK out-of-process latency... :-(
Seems to be a scheduler issue, but I don't have first hand experience
with this. I don't know if it's been fixed yet, but Paul have been
complaining about it every now and then.
Oh, I see, that's not specific to JACK; it affects all SCHED_FIFO programs.
The intention is that linuxsampler will be in-process anyway.
Post by David Olofson
Post by Steve Harris
- Each point in the overall graph will be a different instance
Yes, but that applies to JACK and applications as well. I don't see
what you mean.
No, because a JACK application can export ports to all other JACK
applications, whereas a plugin is only available to its host app.
Post by David Olofson
Post by Steve Harris
- Making the threading work well with all hosts will be hard
Possibly. Do you have some details on this? (Conflicts with what the
host is doing or whatever.)
Nothing concrete, but I can imagine how hard it would be to get it working
reliably in LADSPA, and this isn't really any different.
Post by David Olofson
Post by Steve Harris
- No direct JACK i/o
Why would you need that if you're a XAP plugin? It's the host that
decides where your audio ports are connected. That's one of the few
really significant differences between JACK and LADSPA, XAP, VST etc.
The JACK audio routing system is just more powerful than plugin hosting.
Obviously you need that too, but for something as potentially
sophisticated as a sampler, I'd really want it available directly to JACK.
More layers of overhead and shims would kinda defeat the point.

- Steve
David Olofson
2003-01-21 00:19:42 UTC
Permalink
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
Post by David Olofson
- Linux still breaks JACK out-of-process latency... :-(
Seems to be a scheduler issue, but I don't have first hand
experience with this. I don't know if it's been fixed yet, but
Paul has been complaining about it every now and then.
Oh, I see, that's not specific to JACK; it affects all SCHED_FIFO programs.
Yes, it's a general RT issue that just happens to impact JACK more
than single RT thread apps. I wasn't very clear.
Post by Steve Harris
The intention is that linuxsampler will be in-process anyway.
But can you do that with JACK at this point? (Haven't checked lately.)
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
- Each point in the overall graph will be a different instance
Yes, but that applies to JACK and applications as well. I don't
see what you mean.
No, because a JACK application can export ports to all other JACK
applications, whereas a plugin is only available to its host app.
Ah, I see. So, if the XAP plugin runs in a JACKified host, where's
the difference? Also, I don't see how using multiple instances could
change this. They would just be independent samplers, possibly
sharing the disk butler, if that runs as a separate process.
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
- Making the threading work well with all hosts will be hard
Possibly. Do you have some details on this? (Conflicts with what
the host is doing or whatever.)
Nothing concrete, but I can imagine how hard it would be to get it
working reliably in LADSPA, and this isn't really any different.
Well, I don't really see any problems beyond getting the
sampler/butler interaction to work right. You still have an RT part
(the "official" plugin) and one or more lower priority worker threads.
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
- No direct JACK i/o
Why would you need that if you're a XAP plugin? It's the host
that decides where your audio ports are connected. That's one of
the few really significant differences between JACK and LADSPA,
XAP, VST etc.
The JACK audio routing system is just more powerful than plugin hosting.
In what way? Isn't the only significant difference that the
connections are made by clients rather than a host?

It's still block based I/O. There is audio in your input buffers when
you get woken up or called, and there should be audio in your output
buffers before you signal back or return. You can have NULL buffers
and whatnot to optimize silence with both schemes, of course.
Post by Steve Harris
Obviously you need that too, but for something as
potentially sophisticated as a sampler I'd really want it available
drectly to JACK. More layers of overheard and shims would kinda
defeat the point.
Well, I have problems seeing where the overhead is, when you're still
dealing with buffers of float32 samples all over the place, but I'm
probably missing something. You'll need wrappers or other solutions
in one direction or another, no matter what the lowest level API is -
unless you support only one API, of course.


Steve Harris
2003-01-21 09:31:25 UTC
Permalink
Post by David Olofson
Post by Steve Harris
Oh, I see, thats not specific to JACK, it affects all SCHED_FIFO programs.
Yes, it's a general RT issue that just happens to impact JACK more
than single RT thread apps. I wasn't very clear.
No, it just shows up in JACK because that's the only useful way to run
multiple SCHED_FIFO applications!
Post by David Olofson
Post by Steve Harris
The intention is that linuxsampler wil be inprocess
anyway.
But can you do that with JACK at this point? (Haven't checked lately.)
Well, yes: the alsa_driver is in-process. There is some support for loading
applications in-process at runtime, but it's not 100%.
Post by David Olofson
Post by Steve Harris
Nothing concrete, but I can imagine how hard it would be to get it
working reliably in LADSPA, and this isn't really any different.
Well, I don't really see any problems beyond getting the
sampler/butler interaction to work right. You still have an RT part
(the "official" plugin) and one or more lower priority worker threads.
Well, the butler /is/ the problem.
Post by David Olofson
Post by Steve Harris
The JACK audio routing system is just more powerful than plugin hosting.
In what way? Isn't the only significant difference that the
connections are made by clients rather than a host?
A JACK app can connect directly to physical I/O, and can connect to any
exported port of another application. A XAP instance is limited to the
connections that can be provided by the host application.

The behaviour of a hardware sampler leads me to think of it more like a
JACK application than a plugin. That's not to say that I don't think a
sampler plugin is useful (obviously it is), but I think a JACK sampler is
more useful.
Post by David Olofson
Post by Steve Harris
Obviously you need that too, but for something as
potentially sophisticated as a sampler I'd really want it available
directly to JACK. More layers of overhead and shims would kinda
defeat the point.
Well, I have problems seeing where the overhead is, when you're still
dealing with buffers of float32 samples all over the place, but I'm
probably missing something. You'll need wrappers or other solutions
in one direction or another, no matter what the lowest level API is -
unless you support only one API, of course.
Well, as XAP and JACK are fundamentally callback-based, you can provide
common source, with just the control handling not shared. It isn't
necessary to have all the XAP host cruft (VVIDs, events, blah, blah)
between JACK and the sampler code.
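To illustrate that shared-core idea: one block-processing function, with thin
per-API adapters around it that only differ in how they obtain buffers and
control data. Everything below is a sketch with invented names, not actual
JACK or XAP code:

```c
/* Sketch of sharing one DSP core between two callback APIs. The
 * process_block() function stands in for the sampler's voice mixing;
 * a real version would take events and voice state, not just a gain.
 * Names are invented for illustration. */
#include <assert.h>
#include <stddef.h>

/* The shared core: here it just applies a gain. */
static void process_block(const float *in, float *out,
                          size_t nframes, float gain)
{
    for (size_t i = 0; i < nframes; i++)
        out[i] = in[i] * gain;
}

/* "JACK-style" adapter: buffers are fetched from ports each cycle
 * (here faked as one array: [0..n) input, [n..2n) output). A "XAP-style"
 * adapter would call the same core with host-provided buffers and an
 * event queue instead. */
static int jack_style_process(size_t nframes, void *arg)
{
    float *bufs = arg;
    process_block(bufs, bufs + nframes, nframes, 0.5f);
    return 0;
}
```

The control-handling difference Steve mentions then lives entirely in the
adapters; the expensive DSP code is written once.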

- Steve
David Olofson
2003-01-21 15:30:18 UTC
Permalink
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
Oh, I see, thats not specific to JACK, it affects all
SCHED_FIFO programs.
Yes, it's a general RT issue that just happens to impact JACK
more than single RT thread apps. I wasn't very clear.
No, it just shows up in JACK because that's the only useful way to
run multiple SCHED_FIFO applications!
Well, most people aren't using SCHED_FIFO for RT prototyping of RTAI
or RTL applications, so that's a point... :-)


[...]
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
Nothing concrete, but I can imagine how hard it would be to get
it working reliably in LADSPA, and this isn't really any
different.
Well, I don't really see any problems beyond getting the
sampler/butler interaction to work right. You still have an RT
part (the "official" plugin) and one or more lower priority
worker threads.
Well, the butler /is/ the problem.
Of course - it's a worker thread, and the plugin needs to communicate
with it without screwing other stuff up. Apart from obvious resource
sharing conflicts (signals, maybe), it seems to me that it would be
little more than a matter of picking a suitable priority for the
butler thread. What am I missing?
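The "suitable priority" part might look something like this under SCHED_FIFO:
butler a notch below the RT audio thread, but still above ordinary processes.
This is only the selection logic (actually running threads at these
priorities needs the usual RT privileges), and the numbers are assumptions,
not anything from JACK or LinuxSampler:

```c
/* Sketch: picking SCHED_FIFO priorities for an RT audio thread and a
 * disk-butler worker, with the butler deliberately below the audio
 * callback. The offsets are illustrative assumptions. */
#include <assert.h>
#include <sched.h>

typedef struct {
    int audio;   /* RT audio callback thread */
    int butler;  /* disk streaming worker    */
} rt_priorities;

static rt_priorities pick_priorities(void)
{
    int max = sched_get_priority_max(SCHED_FIFO);
    int min = sched_get_priority_min(SCHED_FIFO);
    rt_priorities p;

    p.audio = max - 10;          /* leave headroom for drivers etc. */
    p.butler = p.audio - 10;     /* below audio, above SCHED_OTHER  */
    if (p.butler < min)
        p.butler = min;
    return p;
}
```

A thread created with such a priority would then be handed to
pthread_attr_setschedparam() in the usual way.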
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
The JACK audio routing system is just more powerful than plugin hosting.
In what way? Isn't the only significant difference that the
connections are made by clients rather than a host?
A JACK app can connect directly to physical i/o, and can connect to
any ported part of another application.
So can a XAP plugin - only it's actually the host that decides and
makes the connection. After all, what you get is still an audio
buffer that you're supposed to read or write once per block cycle. I
still don't see why it would matter what API(s) are used to get
access to the buffers.
Post by Steve Harris
A XAP instance is limited
to the connections that can be provided by the host application.
Yes, and the way I see it, that's *intended*. When you're using a
host app, the host app is responsible for connections with the
outside world. (The only exception would be "driver plugins", which
interface the RT net with JACK, ALSA and other APIs.) If the host
doesn't allow the user to make the desired connections, the host is
broken and/or not the right tool for the job.

From the user POV, the only difference is that the JACK version would
integrate I/O selection in the LinuxSampler UI, while the XAP version
would rely on the host UI for that. Is that a problem?
Post by Steve Harris
The behaviour of a hardware sampler leads me to think of it more
like a JACK application than a plugin. That's not to say that I don't
think a sampler plugin is useful (obviously it is), but I think a
JACK sampler is more useful.
What behavior are you referring to? Seriously, I want XAP plugins to
behave as much like real hardware as is possible and desirable. I
think there is a design issue with XAP if it can't host a sampler
properly.
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
Obviously you need that too, but for something as
potentially sophisticated as a sampler I'd really want it
available drectly to JACK. More layers of overheard and shims
would kinda defeat the point.
Well, I have problems seeing where the overhead is, when you're
still dealing with buffers of float32 samples all over the place,
but I'm probably missing something. You'll need wrappers or other
solutions in one direction or another, no matter what the lowest
level API is - unless you support only one API, of course.
Well, as XAP and JACK are fundamentally callback-based, you can
provide common source with just the control handling that isn't
shared.
Exactly.
Post by Steve Harris
It isn't necessary to have all the XAP host cruft (VVIDs,
events, blah, blah) between JACK and the sampler code.
Right, but you still need a control interface. And it should probably
be sample accurate and support ramping, even if the first priority is
driving it from MIDI.
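A sample-accurate ramped control of the kind meant here can be sketched as
"set this control to value V over N frames", advanced once per sample in the
inner loop. The names below are invented, not taken from XAP:

```c
/* Sketch of a sample-accurate ramped control. A control change
 * arrives as (target, frames); the voice advances it per sample,
 * avoiding zipper noise. Invented names; not actual XAP code. */
#include <assert.h>

typedef struct {
    float value;          /* current value       */
    float target;         /* ramp destination    */
    float delta;          /* per-frame increment */
    unsigned frames_left;
} ramped_control;

static void control_ramp_to(ramped_control *c, float target, unsigned frames)
{
    c->target = target;
    c->frames_left = frames;
    c->delta = frames ? (target - c->value) / (float)frames : 0.0f;
    if (!frames)
        c->value = target;   /* zero-length ramp == jump */
}

/* Advance one frame; call once per sample in the inner loop. */
static float control_tick(ramped_control *c)
{
    if (c->frames_left) {
        c->value += c->delta;
        if (--c->frames_left == 0)
            c->value = c->target;   /* kill accumulated rounding drift */
    }
    return c->value;
}
```

Driving this from MIDI then just means the translator emits (target, frames)
pairs instead of stepped CC values.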

If you use the ALSA sequencer API directly, you'll have to provide an
alternative interface anyway, since ALSA sequencer doesn't make much
sense for the XAP plugin. If you use your own custom control
interface, you need to wrap it with custom code for *both* JACK and
XAP.

Using XAP, no extra work is needed for the XAP version (obviously),
and you could just use a standard or custom MIDI->XAP
driver/converter (which is one of the first things I'll implement, as
I still need MIDI to do anything useful) together with LinuxSampler
when running it as a JACK client.

Using XAP as your "custom" API makes a lot of sense to me. If it's
too complex to be a viable solution for that, I'm suspecting that we
need to do some cleaning up. It's really supposed to be about as
clean and simple as possible, while providing what you need for
instrument control. If it's not suitable for a sampler, I think we
might be on the wrong track.

What I'm saying is that XAP is *intended* for this kind of stuff. If
it doesn't fit, we'll have to make it fit, or there's just no point
in having it.


Steve Harris
2003-01-21 17:08:23 UTC
Permalink
Post by David Olofson
Post by Steve Harris
A JACK app can connect directly to physical i/o, and can connect to
any ported part of another application.
So can a XAP plugin - only it's actually the host that decides and
makes the connection. After all, what you get is still an audio
buffer that you're supposed to read or write once per block cycle. I
still don't see why it would matter what API(s) are used to get
access to the buffers.
So, in other words, it can't ;) The idea is that the sampler should be
able to connect to hardware inputs from its UI in order to sample. From
within XAP you can't do that, I hope. It's not really a plugin UI feature,
and if it were present it would be bloat.
Post by David Olofson
Post by Steve Harris
The behaviour of a hardware sampler leads me to think of it more
like a jack application than a plugin. Thats not to say that I dont
think a sampler plugin is useful, obviously it is, but I think a
JACK sanmpler is more useful.
What behavior are you referring to? Seriously, I want XAP plugins to
behave as much like real hardware as is possible and desirable. I
think there is a design issue with XAP if it can't host a sampler
properly.
XAP can host a sampler properly, it's just not /ideal/. If it were ideal,
it would be JACK.

There are two use cases (it seems, from the Windows world): samplers as
applications (GigaSampler, i.e. LinuxSampler under JACK), and as plugins
(Halion and friends, i.e. LinuxSampler under XAP).

I think that the application model gives you more power and control, but
both are useful.

- Steve
David Olofson
2003-01-21 17:31:33 UTC
Permalink
Post by Steve Harris
Post by David Olofson
Post by Steve Harris
A JACK app can connect directly to physical i/o, and can
connect to any ported part of another application.
So can a XAP plugin - only it's actually the host that decides
and makes the connection. After all, what you get is still an
audio buffer that you're supposed to read or write once per block
cycle. I still don't see why it would matter what API(s) are used
to get access to the buffers.
So, in other words, it can't ;)
Right; it can't *make* the connection, but it can handle the
connection, once it's made.
Post by Steve Harris
The idea is that the sampler should
be able to connect to hardware inputs from its UI in order to
sample.
Mark Knecht
2003-01-20 18:51:50 UTC
Permalink
Post by David Olofson
Either way, there isn't much of a real argument until XAP is
finalized and we have at least one host. :-)
Ain't no reason to argue anyway! ;-)

(And I know you guys aren't!!)
David Olofson
2003-01-20 21:18:14 UTC
Permalink
Post by Mark Knecht
Post by David Olofson
Either way, there isn't much of a real argument until XAP is
finalized and we have at least one host. :-)
Ain't no reason to argue anyway! ;-)
(And I know you guys aren't!!)
Well, as long as it's productive and people don't feel like throwing
fists, it's probably better called a discussion. :-)


Mark Knecht
2003-01-20 21:28:02 UTC
Permalink
-----Original Message-----
Steve Harris
Sent: Monday, January 20, 2003 1:00 PM
Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post
That was what we are aiming for. IIRC Evo (linuxsampler's predecesor)
could read .gig files to some extent and streams offf disk.
- Steve
Yep, I read that somewhere, at least in a goals statement. I'm new to this,
guys, so I don't know the history in terms of how far things got.

We've already today created almost as many messages as happened in December!
;-)
Mark Knecht
2003-01-20 21:42:02 UTC
Permalink
Josh,
Hi. I'm not much of a fluid-synth user yet, but I'll outline the things I
think I look to Battery for:

1) An up front GUI that's pretty easy to see and understand (important for
us 'command-line challenged' types!)
2) Uses 16 & 24-bit wave files easily
3) Assigns specific samples to both specific MIDI notes AND channels. Has
tuning, ADSR envelope plus limited plug in support for each note.
4) Velocity support - maps velocity to different samples. (VERY important
for using .gig files. Not typical of sound font based tools.)
5) Easy to mix samples from different sets to make a new set.
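Point 4 above is essentially a lookup from incoming velocity to a sample
layer, which is what .gig-style instruments do per note. A hypothetical
sketch (layout and names invented for illustration, not the .gig format):

```c
/* Hypothetical sketch of velocity-layer lookup: each note can carry
 * several samples, selected by the incoming MIDI velocity. Names and
 * layout are invented; this is not the actual .gig structure. */
#include <assert.h>
#include <stddef.h>

typedef struct {
    int vel_lo, vel_hi;   /* inclusive velocity range, 0-127 */
    const char *sample;   /* which sample this layer plays   */
} vel_layer;

/* Return the layer matching 'velocity', or NULL if none covers it. */
static const vel_layer *pick_layer(const vel_layer *layers, size_t n,
                                   int velocity)
{
    for (size_t i = 0; i < n; i++)
        if (velocity >= layers[i].vel_lo && velocity <= layers[i].vel_hi)
            return &layers[i];
    return NULL;
}
```

A snare with soft/medium/hard recordings would then be three entries whose
ranges together cover 0-127, which is exactly what typical SoundFont players
don't give you per drum.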

I would really be happy if Swami might start by reading .gig files and
allowing me to export things like a kick from one set and a snare from
another, save them, and load them in a Battery-like tool.

I hope this gives you some ideas.

Cheers,
Mark
-----Original Message-----
Sent: Monday, January 20, 2003 11:53 AM
To: Mark Knecht
Subject: RE: [Linuxsampler-devel] Hi - Very quiet list - my first post
<cut>
Post by Mark Knecht
As always, I probably have too many ideas, but personally
I'd really like
Post by Mark Knecht
to see a sample player somewhat like Battery for Linux. I think
that it's a
Post by Mark Knecht
much more simple interface, requires far fewer system resources to work
well, could use Wave files easily. I just think overall it
would get a lot
Post by Mark Knecht
of use up front. The existing sample players, like iiwusynth
and timidity
Post by Mark Knecht
are not, IMO, really very good for dealing with drum sets.
Has there been any discussion of doing something like that?
What do you find lacking with iiwusynth and drum sets? If you are
talking about creating drum sets, it's probably more the patch editor that
needs to be up for this task. I have had a number of ideas in the area
of making it easier to create drum kits with Swami. I would like to know
what kinds of features you could envision for your own uses.
<cut>
Post by Mark Knecht
Cheers,
Mark
Cheers.
Josh Green
Josh Green
2003-01-21 01:35:50 UTC
Permalink
Post by Mark Knecht
Josh,
Hi. I'm not much of a fluid-synth user yet, but I'll outline the things I
1) An up front GUI that's pretty easy to see and understand (important for
us 'command-line challenged' types!)
That's basically what Swami is (and much more really, it's an entire API
framework for manipulating patch formats, with a GUI as one of the
interfaces).. It has a FluidSynth plugin to use it as its wavetable
synth engine (currently the only supported one, but I will be adding a
hardware EMU8k/10k plugin in the future). Of course Swami still needs a
bit of work to make it the best patch editor in the world :)
Post by Mark Knecht
2) Uses 16 & 24-bit wave files easily
Swami currently only supports SoundFont files (I'm just now adding DLS2
support). SoundFont uses 16 bit data only, DLS2 officially only supports
8 or 16 bit data, but the format could in theory accommodate 24 bit or
other formats. They just wouldn't be portable (DLS2 does have support
for conditional proprietary stuff).
Post by Mark Knecht
3) Assigns specific samples to both specific MIDI notes AND channels. Has
tuning, ADSR envelope plus limited plug in support for each note.
Yes to everything.. Except that MIDI channel mapping is usually done by
selecting Bank:Preset pairs which are specific instruments or banks of
drums (not built into the format). Drums are traditionally only on MIDI
channel 10, although SoundFont does not restrict this. Plugin support
for each note? Not sure what you mean by that, but FluidSynth does have
a LADSPA host for adding LADSPA plugins to the synthesis output (no GUI
support for this yet though).
Post by Mark Knecht
4) Velocity support - maps velocity to different samples. (VERY important
for using .gig files. Not typical of sound font based tools.)
Yes.. Swami has this already. Each zone can have its own velocity range
which causes the sound to play, and you can also layer velocity/key range zones.
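To make the zone idea concrete, here is a minimal sketch of key/velocity range matching and layering (the names are invented for illustration; this is not Swami's actual API):

```python
from dataclasses import dataclass

@dataclass
class Zone:
    sample: str
    key_range: tuple = (0, 127)   # (low, high) MIDI note numbers, inclusive
    vel_range: tuple = (0, 127)   # (low, high) MIDI velocities, inclusive

def matching_zones(zones, note, velocity):
    """Return every zone triggered by this note-on; overlapping
    ranges fire together, which is how layering falls out."""
    return [z for z in zones
            if z.key_range[0] <= note <= z.key_range[1]
            and z.vel_range[0] <= velocity <= z.vel_range[1]]

# A snare with two velocity layers plus a room-mic layer on top:
kit = [
    Zone("snare_soft.wav", key_range=(38, 38), vel_range=(0, 79)),
    Zone("snare_hard.wav", key_range=(38, 38), vel_range=(80, 127)),
    Zone("snare_room.wav", key_range=(38, 38)),  # layers at any velocity
]
```

So a hard hit on note 38 plays the hard sample plus the room layer, a soft hit plays the soft sample plus the room layer.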
Post by Mark Knecht
5) Easy to mix samples from different sets to make a new set.
Swami can open multiple patch files and easily copy samples/instruments
between them, if this is what you mean.
Post by Mark Knecht
I would really be happy if Swami might start by reading .gig files and
allowing me to export things like a kick from one set and a snare from
another, save them and load them in a Battery like tool.
I don't know much about .gig files, but I heard someone mention they are
based on DLS2. If this is the case it might be easy to add support for
them after DLS2 support is finished.
Post by Mark Knecht
I hope this gives you some ideas.
Sure.. A lot of this is already available though.. Have you tried
Swami/FluidSynth? The current CVS is GTK1.2 based and works with
FluidSynth CVS (at least last time I checked). Development is currently
happening on the CVS swami-1-0 branch, but it isn't operational quite
yet.
Post by Mark Knecht
Cheers,
Mark
Cheers.
Josh Green
Mark Knecht
2003-01-20 17:56:26 UTC
Permalink
Josh,
Hi. Removed iiwusynth-devel as I am not a member of that list.

I like what I see in Swami so far. It reminds me of parts of the
GigaStudio editing interface where you build your own .gig files. It
doesn't have all of the velocity mapping stuff, but most of it seems to
be there.

The thing I see in Swami right now, and forgive me if I'm wrong about
this after just a short read through, is that it appears oriented to
dealing with samples and libraries, as opposed to being a MIDI-based
sample player (i.e., an instrument). It does look like it makes using
fluid-synth potentially easier, so I should pay attention to it for that
reason alone I suppose.

If you want to go look at Battery's interface, it's pretty clean and
to the point.

http://www.nativeinstruments.de/index.php?id=battery_us

It's really oriented towards playing short samples, but it can be
used for longer ones also.

BTW - I am NOT a Battery owner. I have used it very little. I was
just pointing out that as a sample playing app, it is a model that
people seem to get their hands around quickly. I also think it's ideal
for a first test vehicle for this sampler engine.

Maybe I'm taking some of the posts here in the wrong light, but yours
and Dave's both seem aimed at getting me to use something that currently
exists, where my idea was to find good uses for this new Linux-Sampler
engine. If I'm off base and people here don't think this is a good use,
then that's OK too. I was just speaking up since this list has been
'very quiet', as the title in my thread indicates!

Cheers,
Mark
Josh Green
2003-01-21 12:25:15 UTC
Permalink
Post by Mark Knecht
Josh,
Hi. Removed iiwusynth-devel as I am not a member of that list.
I like what I see in Swami so far. It reminds me of parts of the
GigaStudio editing interface where you build your own .gig files. It
doesn't have all of the velocity mapping stuff, but most of it seems to
be there.
What kind of velocity functionality are you looking for?
Post by Mark Knecht
The thing I see in Swami right now, and forgive me if I'm wrong about
this after just a short read through, is that it appears oriented to
dealing with samples and libraries, as opposed to being a MIDI-based
sample player. (I.e. - and instrument) It does look like it makes using
fluid-synth potentially easier, so I should pay attention to it for that
reason alone I suppose.
Currently the GUI is somewhat outdated compared to the underlying API
architecture; it's structured primarily as an editor. But I
have many things on my list to add to make it a better front-end for
FluidSynth (wavetable MIDI channel instrument selections, GUI MIDI
controllers, session modulators, etc). As things are now you can start
up FluidSynth within Swami using the alsa_seq MIDI driver and connect a
sequencer to it (you can do this with FluidSynth stand alone as well).
Also I'm going to be making the layout very dynamic and flexible. The
GUI is already separated into self contained objects. I just need to
create a GUI manager that allows creating new GUI objects, designating
their layout (docked with other elements or in its own dialog, etc) and
setting their view parameters. So you could for instance create multiple
patch tree objects, a couple envelope editors, and define their layout
and save and restore it. This will allow for different setups depending
on if someone is editing, using it as a wavetable bank manager,
sequencing with it, etc.
Post by Mark Knecht
If you want to go look at Battery's interface, it's pretty clean and
to the point.
http://www.nativeinstruments.de/index.php?id=battery_us
It's really oriented towards playing short samples, but it can be
used for longer ones also.
Yeah, the interface looks nice. I already have a crude envelope editor
in place. I haven't worked on it in a while, but I have been thinking
about making it really flexible and allowing it to be overlaid on the
sample waveform, allowing multiple parameters from different instruments
to be edited simultaneously, etc.
Post by Mark Knecht
BTW - I am NOT a Battery owner. I have used it very little. I was
just pointing out that as a sample playing app, it is a model that
people seem to get their hands around quickly. I also think it's ideal
for a first test vehicle for this sampler engine.
As far as Linuxsampler and Swami are concerned, Linuxsampler may end up
being a wavetable engine within Swami, as well as my libInstPatch
library being used for accessing patch formats. This all remains to be
seen though.
Post by Mark Knecht
Maybe I'm taking some of the posts here in the wrong light, but yours
and Dave's both seem aimed at getting me to use something that currently
exists, where my idea was to find good uses for this new Linux-Sampler
engine. If I'm off base and people here don't think this is a good use,
then that's OK too. I was just speaking up since this list has been
'very quiet', as the title in my thread indicates!
Sure.. You got things going again :)
Post by Mark Knecht
Cheers,
Mark
Lates..
Josh Green
Mark Knecht
2003-01-20 18:27:21 UTC
Permalink
Post by Josh Green
Swami currently only supports SoundFont files (I'm just now adding DLS2
support). SoundFont uses 16 bit data only, DLS2 officially only supports
8 or 16 bit data, but the format could in theory accomodate 24 bit or
other formats. They just wouldn't be portable (DLS2 does have support
for conditional proprietary stuff).
If you're interested in the .gig format, you can get lots of files at

http://www.worrasplace.com

He's got a nice selection of large and small ones.

I know nothing of the internal format of the file, but I suspect that
there are web sites out there with info.

Cheers,
Mark
Mark Knecht
2003-01-20 21:52:56 UTC
Permalink
-----Original Message-----
David Olofson
Sent: Monday, January 20, 2003 10:33 AM
Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post
Well, I'm not very familiar with Battery, but maybe Audiality will be
able to do what you want rather easily? It's basically a sampler with
an integrated mixer. Although the primary source of waveforms is a
script driven off-line synth (modular synth style), it can load and
process audio files as well. (Which is a feature that remains from
Audiality's days as a games sound FX engine.)
I don't know much of anything about Audiality, except that I have a hard time
spelling it! ;-)

Really, the web page looks interesting, but I've never tried to use it. My
focus (in this specific conversation) really stems from the idea that
Linux would benefit from having a simple app that starts from a base of audio
samples, as opposed to a synth of some type, and knows enough about MIDI,
envelopes and processing to be interesting and useful.

This is where I'm coming from on this specific day, which is not to say I'm
not interested in other things, but this is a linux-sampler, after all. ;-)
David Olofson
2003-01-20 23:05:45 UTC
Permalink
On Monday 20 January 2003 22.52, Mark Knecht wrote:
[...]
Post by Mark Knecht
I don't know much of anything about Audiality, except that I have a
hard time spelling it! ;-)
Yeah - weird name, isn't it? :-)
Post by Mark Knecht
Really, the web page looks interesting, but I've never tried to use
it.
Well, if you're command line challenged, as you say, you should
probably wait 'till there's a GUI for it. ;-)
Post by Mark Knecht
My focus (in this specific conversation) really stems from
the idea that Linux would benefit from having a simple app that
starts from a base of audio samples, as opposed to a synth of some
type, and knows enough about MIDI, envelopes and processing to be
interesting and useful.
I see. Reading your other post, it seems like most of what you've
mentioned is either implemented or high on the TODO list for
Audiality (obviously, since I'm actually going to *use* this myself
:-) - but I don't know when I'll get around to hack some form of GUI.
It's not a top priority for me, since creating sounds from scratch is
workable with a text editor, and loading WAVs is one trivial line of
script per file.
Post by Mark Knecht
This is where I'm coming from on this specific day, which is not to
say I'm not interested in other things, but this is a
linux-sampler, after all. ;-)
Yes - and it seems like "Battery emulation" would be a rather useful
starting point that still isn't way beyond reach. I'm afraid most of
the work is in the GUI programming domain, though. The parts that
deal with what you mentioned are really the trivial parts in
Audiality, and that's the case with most audio applications. Most of
the complexity in Audiality is stuff that synths generally don't
have; such as the fully configurable mixer, the off-line synth and
the script interpreter that drives it.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Mark Knecht
2003-01-20 23:48:13 UTC
Permalink
-----Original Message-----
David Olofson
Sent: Monday, January 20, 2003 3:06 PM
Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post
Post by Mark Knecht
My focus (in this specific conversation) really stems from
the idea that Linux would benefit from having a simple app that
starts from a base of audio samples, as opposed to a synth of some
type, and knows enough about MIDI, envelopes and processing to be
interesting and useful.
I see. Reading your other post, it seems like most of what you've
mentioned is either implemented or high on the TODO list for
Audiality (obviously, since I'm actually going to *use* this myself
:-) - but I don't know when I'll get around to hack some form of GUI.
It's not a top priority for me, since creating sounds from scratch is
workable with a text editor, and loading WAVs is one trivial line of
script per file.
The value of GUIs early on is marketing, and I don't think that's important
right now.

I actually do agree and believe that there's a lot of value in starting with
good scripts and testing the features BEFORE you invest in a GUI. Let's go after
that first. I'll go look more at what's there in Audiality. My initial worry
was that it was a sequencer, and wanted to be a master and not an
instrument. If it can respond to external MIDI, then it looks like it's got
many of the other interesting things today.

I'm not clear about whether you, David, see this as a linux-sampler project.
I did, but it doesn't matter to me. Give me some instructions and scripts
and either way I'll go test it. However, I think there is value in having a
linux-sampler project at this scale. Something that could be put together
rather quickly, tested pretty easily, to get more people interested in the
whole program overall. With small samples it would run out of memory, but
with larger samples it would have to stream from disk. Getting to the point
where we could do that would be cool. I also think it would help wring out
bugs in the engine.

As GUIs go, I think that Battery's is pretty straightforward. They have
something like 50 boxes on the screen. Each box corresponds to a specific
MIDI note and channel. You just load a sound in each box and go. Each box in
Battery has an ADSR, some plugin capabilities and other things, but to start
we just make a matrix of 5x10 or so, load some wave files and go. That would
be a useful device unto itself. The rest could come later when we've seen
the underlying sample playback engine working.

To clarify a couple of things:

1) It's a Jack app
2) It really should handle stereo wave files for samples from day 1
3) Should either have a hard audio panner built into each box, or better
yet, respond to MIDI panning, volume and velocity events. Don't bother with
multiple samples per box (velocity chooses them) until we've seen it run.

I completely believe this could be script driven in the beginning, and
possibly stays that way under the hood even longer term.
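For what it's worth, the box-matrix model described above is simple enough to sketch in a few lines of Python. All names here are hypothetical; this is a data-model sketch, not a proposal for the actual engine:

```python
class Box:
    """One 'box': a sample bound to a (MIDI channel, note) pair."""
    def __init__(self, sample, pan=0.0, gain=1.0):
        self.sample = sample  # path to a WAV file
        self.pan = pan        # -1.0 (hard left) .. +1.0 (hard right)
        self.gain = gain

class BoxMatrix:
    def __init__(self):
        self.boxes = {}  # (channel, note) -> Box

    def load(self, channel, note, sample, **params):
        self.boxes[(channel, note)] = Box(sample, **params)

    def note_on(self, channel, note, velocity):
        """Return (sample, amplitude, pan) to trigger, or None."""
        box = self.boxes.get((channel, note))
        if box is None:
            return None
        # Simple linear velocity-to-amplitude mapping.
        return (box.sample, box.gain * velocity / 127.0, box.pan)

m = BoxMatrix()
m.load(9, 36, "kick.wav")            # GM drums live on channel 10 (index 9)
m.load(9, 38, "snare.wav", pan=0.2)
```

Per-box ADSR, plugins, and velocity-layered samples would hang off `Box` later; the point is that the core lookup is just a dictionary keyed by channel and note.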
David Olofson
2003-01-21 01:35:07 UTC
Permalink
On Tuesday 21 January 2003 00.48, Mark Knecht wrote:
[...GUI...]
Post by Mark Knecht
Post by David Olofson
It's not a top priority for me, since creating sounds from
scratch is workable with a text editor, and loading WAVs is one
trivial line of script per file.
The value of GUIs early on is marketing, and I don't think that's
important right now.
That's a good point. Also, most serious users care more about quality
and features, so having something that can do the job is worth a lot
more than having something that looks cool but doesn't cut it.
Post by Mark Knecht
I actually do agree and believe that there's a lot of value in starting
with good scripts and testing the features BEFORE you invest in a
GUI.
Yeah, you might actually get it *right*! :-) (And rebuilding GUIs
ain't all that fun...)
Post by Mark Knecht
Let's go after that first. I'll go look more at what's there
in Audiality. My initial worry was that it was a sequencer, and
wanted to be a master and not an instrument.
It contains a MIDI player, but that's it. It was originally meant for
playing SFX and music in a game - and I just can't stand MODs these
days, and didn't want to rely on SoundFonts, so I decided to use MIDI
+ samples instead. Using the sequencer and the off-line synth is
optional, although you have to use either the C API or scripting (the
latter is definitely preferred most of the time) to load WAVs and
tweak patch parameters, if you have to.
Post by Mark Knecht
If it can respond to
external MIDI, then it looks like it's got many of the other
interesting things today.
Check - and it even maps the new mixer to NRPNs, so you can configure
routing and FXs the way you like directly from the sequencer. There's
nothing I hate as much as having to fiddle with all machines to
switch between songs and projects... Just making sure to play the
init beat of a song is so much smoother. :-)
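For readers who haven't driven a synth over NRPNs: each parameter write is just four control-change messages (CC 99/98 select the 14-bit parameter number, CC 6/38 carry the 14-bit value). This is plain MIDI; the specific parameter numbers Audiality assigns to its mixer are not shown here.

```python
def nrpn_write(channel, param, value):
    """Return the raw MIDI bytes that set 14-bit NRPN `param` to `value`."""
    status = 0xB0 | (channel & 0x0F)  # control-change status on this channel
    return bytes([
        status, 99, (param >> 7) & 0x7F,  # CC 99: NRPN number MSB
        status, 98, param & 0x7F,         # CC 98: NRPN number LSB
        status, 6,  (value >> 7) & 0x7F,  # CC 6:  data entry MSB
        status, 38, value & 0x7F,         # CC 38: data entry LSB
    ])
```

Since a sequencer can record these like any other controller data, mixer routing really can live in the song, as described above.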

The only "serious" issue right now is that it's all integer/fixed
point processing; not float32. That means you have 16 bits per synth
voice (if even that) and 24 bit integer mixing. Maybe sufficient for
most work, but you can't make use of 24 bit samples as it is now.

I intend to add full FP support soon, and I might drop the integer
support altogether. It's getting more and more pointless, considering
what will be "low end hardware" in a few years. Audiality can't
compete with a simple MOD player on Pentium and 486 CPUs anyway, and
I'm getting more and more bored with that awful integer code, special
handling of compilers w/o 64 bit int support and whatnot.

On modern CPUs, the choice of algorithms impacts the CPU usage much
more than integer vs float, so for new games, Audiality will still be
a viable option - just don't use convolution based reverbs! ;-) Use
something else for Pentiums, or help maintain the integer support in
Audiality. :-)
Post by Mark Knecht
I'm not clear about whether you, David, see this as a linux-sampler project.
It's an independent project which I've been working on for a little
more than a year, as part of a game. I don't know if it makes sense
merging the projects (different priorities, "Choice is Good" and all
that), but it's probably a good idea to share ideas, knowledge and
perhaps some code.

BTW, lots of ideas and even some code for XAP come from Audiality.
It's entirely possible that I'll split it into a XAP host and some
XAP plugins, given that it's basically structured like that
internally anyway; it's just not as formal as a plugin API, and
doesn't do dynamic loading.
Post by Mark Knecht
I did, but it doesn't matter to me. Give me some
instructions and scripts and either way I'll go test it.
Well, 0.1.0 lacks "waveform mapping", so you can't have more than one
waveform per patch. (Right; the demo songs use one channel for each
drum sound... *hehe*) You might want to wait for 0.1.1, but I'm not
sure when I'll release it.

Could make a development snapshot, though, as it's back in a
functional state now. Most interesting news would be some docs, some
envelope generator bug fixes, a much more powerful mixer (with a nice
NRPN interface) and perhaps most interestingly to testers; a demo app
that loads all sounds I've created so far, and sets the engine up as
a MIDI synth. All you have to do is hack the main loader script to
load your WAVs instead.
Post by Mark Knecht
However, I
think there is value in having a linux-sampler project at this
scale. Something that could be put together rather quickly, tested
pretty easily, to get more people interested in the whole program
overall. With small samples it would run out of memory, but with
larger samples it would have to stream from disk. Getting to the
point where we could do that would be cool. I also think it would
help wring out bugs in the engine.
As GUIs go, I think that Battery's is pretty straightforward. They
have something like 50 boxes on the screen. Each box corresponds to
a specific MIDI note and channel. You just load a sound in each box
and go. Each box in Battery has an ADSR, some plugin capabilities
and other things, but to start we just make a matrix of 5x10 or so,
load some wave files and go. That would be a useful device unto
itself. The rest could come later when we've seen the underlying
sample playback engine working.
Sounds like about a day's work to get the required features into
Audiality - but the GUI is more work. Maybe not too far from what I'm
going to implement anyway, though...

The GUI editor I have in mind will basically be a hybrid between a
text editor and a modular synth editor. When it sees constructs it
understands, it displays them as graphical editors instead of text,
so you can edit envelopes, maps and other stuff in a smoother manner.

"Maps" is what rang a bell. What you're describing is basically a
mapping editor of sorts. A map is an object that selects things from
an array, based on some criteria. For example, you could hack a patch
script that maps to other patches, based on pitch. Each of those patches
could then map to different waveforms, based on velocity.

In the most primitive form, you can just hack something that outputs
a script that sets up the mappings and EGs and loads the waveforms.
For something more sophisticated, make that an application with
access to Audiality's C API, so you can pass the generated script
directly, for instant response.
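As a toy illustration of the "mapper" idea (give it a number, get a selection back), assuming nothing about Audiality's real implementation:

```python
import bisect

class RangeMap:
    """Select one of `targets` by where x falls relative to the
    inclusive upper bounds in `bounds`; len(targets) == len(bounds) + 1."""
    def __init__(self, bounds, targets):
        assert len(targets) == len(bounds) + 1
        self.bounds = bounds
        self.targets = targets

    def __call__(self, x):
        # bisect_left counts how many bounds lie strictly below x,
        # i.e. how many ranges x has passed.
        return self.targets[bisect.bisect_left(self.bounds, x)]

# Velocity chooses a waveform; a pitch map could select patches the same way,
# and the two can be chained (pitch -> patch, then velocity -> waveform).
vel_map = RangeMap([63, 100], ["soft.wav", "medium.wav", "hard.wav"])
```

The same object works for MIDI CCs, velocity, or pitch because it never cares what the input number means.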
Post by Mark Knecht
1) It's a Jack app
Planned, and shouldn't be too hard.
Post by Mark Knecht
2) It really should handle stereo wave files for samples from day 1
Audiality did stereo samples before it got the name. :-) It's really
just because the original sound FX samples for that game were in
stereo, but it's a nice feature.

That said, I'll probably switch to mono internally (two voices for
stereo), since mono + stereo is yet another 2x cases for the voice
resamplers. (There are currently 40 of them, to support 8/16 x
mono/stereo and 5 quality modes... Lots of macro magic there! *heh*)
This is another relic from the days as a low end games SFX engine.
Post by Mark Knecht
3) Should either have a hard audio panner built into each box, or
better yet, respond to MIDI panning,
Check.
Post by Mark Knecht
volume
Check.
Post by Mark Knecht
and velocity events.
Check.
Post by Mark Knecht
Don't bother with multiple samples per box (velocity chooses them)
until we've seen it run.
Well, I'll probably get that for free. I'm thinking about creating a
generic "mapper" object. Just give it a number and it hands you a
number back. Then use that to select patches or waveforms based on
whatever parameter you like; MIDI CCs, velocity, pitch or whatever.
Post by Mark Knecht
I completely believe this could be script driven in the beginning,
and possibly stays that way under the hood even longer term.
Yeah, that's kind of handy. Though I wouldn't recommend inventing a
custom language and implementing an interpreter for it, unless you
*really* need to. I did it because I'm interested in that kind of
stuff, and because I wanted a streamlined syntax that I like, and the
ability to run scripts reliably in RT context. Still work to do on
the latter, though; need to switch to bytecode first of all.

Anyway, a big advantage with using scripts for the interface is that
you can change the way things work without hacking and recompiling
the GUI or the engine. That's really the main reason why I
implemented it in Audiality in the first place; I could edit and test
sounds and music in-game without even restarting the game.

It's also cool stuff for "power users", who like to play around with
things not meant for ordinary users. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Mark Knecht
2003-01-20 23:57:38 UTC
Permalink
Post by David Olofson
Post by Benno Senoner
The problem of GS is that it is a
[...]
So, in short, GS is a Windows specific performance hack, while Halion
is a plugin sampler Done Right - only on the wrong OS.
Ah, but GS has the library that the others wish they had. (And one that I've
already invested a couple grand in, so let's not get religious and go some
other direction!) There would be huge value in being able to load GSt
libraries into ANYTHING we do.

GSt crashes all the time when running on a PC with other apps. It's actually
pretty stable on its own machine.
Post by David Olofson
Post by Benno Senoner
MIDI sequencing and a sampler/synth engine on the same box is not a
problem since sequencing only takes a fraction of the available
resources. If you add HD recording to the equation, then the
workload increases significantly, but nothing speaks against
running both the HDR and the sampler software in the same box.
Except that they need separate disks, unless they share the disk
butler, I think... Just adding another disk would probably be
acceptable to most serious users, though.
There's a lot going on these days on the sequencer side with notation. I
expect that I will run 2-3 computers to really do what I want to do, but
that's me.

I can already bring my disks to their knees just running Ardour. I doubt my
current, sub-2GHz Athlon XP would run this sampler at the level I push GSt,
which is 10-15 stereo libraries and maybe 100 voices sustained over time.
Hans Zimmer, doing movie scores, has talked of pushing multiple copies of
GSt to the level of 300-500 voices sustained. Those guys are using arrays of
SCSI drives. It can be a lot bigger than just another drive.
David Olofson
2003-01-21 01:48:53 UTC
Permalink
Post by Mark Knecht
Post by David Olofson
Post by Benno Senoner
The problem of GS is that it is a
[...]
So, in short, GS is a Windows specific performance hack, while
Halion is a plugin sampler Done Right - only on the wrong OS.
Ah, but GS has the library that the others wish they had. (And
one that I've already invested a couple grand in, so let's not get
religious and go some other direction!) There would be huge value in
being able to load GSt libraries into ANYTHING we do.
Well, I was talking about software design - not file formats. Of
course, LinuxSampler should load and play *anything*! :-)
Post by Mark Knecht
GSt crashes all the time when running on a PC with other apps. It's
actually pretty stable on its own machine.
That's to be expected when abusing an OS like that, it seems... And
that's why I gave up on audio programming on Windoze a few years ago.
Post by Mark Knecht
Post by David Olofson
Post by Benno Senoner
MIDI sequencing and a sampler/synth engine on the same box is
not a problem since sequencing only takes a fraction of the
available resources. If you add HD recording to the equation,
then the workload increases significantly, but nothing speaks
against running both the HDR and the sampler software in
the same box.
Except that they need separate disks, unless they share the disk
butler, I think... Just adding another disk would probably be
acceptable to most serious users, though.
There's a lot going on these days on the sequencer side with
notation. I expect that I will run 2-3 computers to really do what
I want to do, but that's me.
Well, I use two with Audiality, because I have yet to find a Linux
sequencer that I can compile, that does what I need, and that doesn't
get on my nerves. Still using Cakewalk on Windoze, that is. (Although
that's getting on my nerves as well - and not only for political
reasons! *heh*)
Post by Mark Knecht
I can already bring my disks to their knees just running Ardour. I
doubt my current, sub-2GHz Athlon XP would run this sampler at the
level I push GSt, which is 10-15 stereo libraries and maybe 100
voices sustained over time. Hans Zimmer, doing movie scores, has
talked of pushing multiple copies of GSt to the level of 300-500
voices sustained. Those guys are using arrays of SCSI drives. It
can be a lot bigger than just another drive.
Sounds like they have some serious seeking overhead there... I had 16
stereo tracks playing on a 5 GB 5400 rpm drive under Windoze 3.11, so
I know for a fact that it's possible to get at least 60% of the
nominal sustained rate of the drive. IIRC, I used 400 kB of buffering
per track for that, and you need to multiply that with the number of
tracks to keep the seeking overhead constant. That is, 800 kB for 32
tracks, or 8 MB (!) for 320 tracks. Yes, this is *per track* buffer
size.
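The scaling rule David describes (per-track buffer growing with track count so seek overhead stays constant) can be sketched as follows; the figures are the ones from his message:

```python
# Sketch of the buffering arithmetic above: 400 kB per track at a
# 16-track baseline, with the per-track buffer scaled linearly by
# the track count so that seeking overhead stays constant.
BASE_TRACKS = 16
BASE_PER_TRACK_KB = 400

def per_track_kb(tracks):
    """Per-track buffer size (kB) for a given track count."""
    return BASE_PER_TRACK_KB * tracks // BASE_TRACKS

def total_buffer_mb(tracks):
    """Total RAM spent on track buffers, in MB (1 MB = 1024 kB)."""
    return per_track_kb(tracks) * tracks / 1024
```

This reproduces the numbers quoted: 800 kB per track at 32 tracks, and about 8 MB per track at 320 tracks.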

Indeed, SCSI disks are generally faster when it comes to access
times, but not *that* much faster. RAID arrays (with the same data on
all disks) only divide average access times by the number of drives,
or something like that.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
M. Nentwig
2003-01-21 12:41:16 UTC
Permalink
Moi,

I don't think that there are any restrictions concerning velocity
mapping with Swami. You can assign samples to arbitrary note and / or
velocity 'windows'. In theory it's possible to have a different sample
for each key / velocity combination (but I'd bet nobody has tried that
yet :)

-Markus
David Olofson
2003-01-21 15:45:35 UTC
Permalink
Post by M. Nentwig
Moi,
I don't think that there are any restrictions concerning velocity
mapping with Swami. You can assign samples to arbitrary note and /
or velocity 'windows'. In theory it's possible to have a different
sample for each key / velocity combination (but I'd bet nobody has
tried that yet :)
<plug qualifier="shameless">
How about being able to write C-like code that calculates or
otherwise determines mapping when a note is started?

Well, whether or not it's really useful, this is where Audiality is
going. Processing timestamped events in C is a bit hairy, so I'd
prefer using a custom higher level language for that. Another point
is that strapping on a scripting engine eliminates lots of hardcoded
logic, and the restrictions that come with it.
</plug>


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Josh Green
2003-01-21 19:48:26 UTC
Permalink
Post by David Olofson
Post by M. Nentwig
Moi,
I don't think that there are any restrictions concerning velocity
mapping with Swami. You can assign samples to arbitrary note and /
or velocity 'windows'. In theory it's possible to have a different
sample for each key / velocity combination (but I'd bet nobody has
tried that yet :)
<plug qualifier="shameless">
How about being able to write C-like code that calculates or
otherwise determines mapping when a note is started?
Well, whether or not it's really useful, this is where Audiality is
going. Processing timestamped events in C is a bit hairy, so I'd
prefer using a custom higher level language for that. Another point
is that strapping on a scripting engine eliminates lots of hardcoded
logic, and the restrictions that come with it.
</plug>
<plug qualifier="also shameless">
Yes, I can envision Python being a nice language for this type of thing.
A project for the future of Swami as well. As things stand now
modulators can be used with MIDI velocity source controls to do weird
mappings with velocity (can even control other effects, say Filter
Cutoff for instance :)
</plug>
Josh Green
2003-01-21 19:32:17 UTC
Permalink
Post by M. Nentwig
Moi,
I don't think that there are any restrictions concerning velocity
mapping with Swami. You can assign samples to arbitrary note and / or
velocity 'windows'. In theory it's possible to have a different sample
for each key / velocity combination (but I'd bet nobody has tried that
yet :)
-Markus
It would be interesting to create layered velocity sounds as well, where
samples could be blended over the velocity range in conjunction with an
inverted velocity modulator (to cause a sample to fade out towards the
top of its velocity range). You could have a morphing effect as one
plays notes in increasing or decreasing velocity. Once the Python
binding is completed in Swami (not really that much to do I think)
writing scripts to do these types of things should be fairly easy :)
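The velocity crossfade Josh describes could be sketched like this; the layer boundaries are arbitrary example values:

```python
# Two-layer velocity crossfade: the soft layer fades out and the loud
# layer fades in across an overlap region, so playing with increasing
# velocity morphs between the two samples. The fade boundaries below
# are illustrative, not from any real library.
def layer_gains(velocity, fade_lo=40, fade_hi=90):
    """Return (soft_gain, loud_gain) for a MIDI velocity in 0..127."""
    if velocity <= fade_lo:
        return 1.0, 0.0
    if velocity >= fade_hi:
        return 0.0, 1.0
    t = (velocity - fade_lo) / (fade_hi - fade_lo)
    return 1.0 - t, t
```

An inverted velocity modulator on the soft layer plus a normal one on the loud layer gives exactly this pair of curves.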
Cheers.
Josh Green
David Olofson
2003-01-21 19:42:29 UTC
Permalink
On Tuesday 21 January 2003 20.32, Josh Green wrote:
[...]
velocity. Once the Python binding is completed in Swami (not really
that much to do I think) writing scripts to do these types of
things should be fairly easy :) Cheers.
Speaking of scripting, are you planning on actually running Python in
RT context, or just use it for "rendering" maps?


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Josh Green
2003-01-21 20:11:28 UTC
Permalink
Post by David Olofson
[...]
velocity. Once the Python binding is completed in Swami (not really
that much to do I think) writing scripts to do these types of
things should be fairly easy :) Cheers.
Speaking of scripting, are you planning on actually running Python in
RT context, or just use it for "rendering" maps?
For just editing operations, real time isn't really a concern.
For doing real time control of effects and MIDI, it might be. It really
remains to be seen in practice what kind of latency is induced by
calling Python code in real time. In the MIDI realm it might not matter
too much. I'm not yet fully familiar with using Python embedded in a
program, but I'm sure there is probably a way to compile script source
into object code. Anyways..
Josh Green
David Olofson
2003-01-21 21:14:46 UTC
Permalink
Post by Josh Green
Post by David Olofson
[...]
velocity. Once the Python binding is completed in Swami (not
really that much to do I think) writing scripts to do these
types of things should be fairly easy :) Cheers.
Speaking of scripting, are you planning on actually running
Python in RT context, or just use it for "rendering" maps?
For just editing operations, the idea of real time is not of
importance. For doing real time control of effects and MIDI, it
might be. It really remains to be seen in practice what kind of
latency is induced by calling Python code in real time. In the MIDI
realm it might not matter too much.
Well, MIDI may not suffer as much from unbounded latency as audio,
but I'm not willing to take chances. We're talking about *unbounded*
worst case latency here, and it's really as bad as it sounds. If you
*can* have memory management stall MIDI processing for half a second
in the middle of a live performance, it *will* happen sooner or
later. (You know Murphy...)

Either way, Audiality runs all event processing in the same context
as the audio processing, so I can't realistically use anything that
isn't RT safe anyway. Even the slightest deadline misses would cause
audible drop-outs.
Post by Josh Green
I'm not yet fully familiar with
using Python embedded in a program, but I'm sure there is probably
a way to compile script source into object code. Anyways..
That might work, but I suspect it will only improve throughput
without making worst case latencies bounded. If the compiled code
could still use malloc(), garbage collection and other
non-deterministic stuff, you have gained next to nothing WRT RT
reliability.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Josh Green
2003-01-22 02:26:09 UTC
Permalink
Post by David Olofson
Well, MIDI may not suffer as much from unbounded latency as audio,
but I'm not willing to take chances. We're talking about *unbounded*
worst case latency here, and it's really as bad as it sounds. If you
*can* have memory management stall MIDI processing for half a second
in the middle of a live performance, it *will* happen sooner or
later. (You know Murphy...)
Half a second? I'm sure that rarely occurs. I can't speak for Python's
memory management, but much of the critical stuff in Swami uses glib
memory chunks. These allow for an initial allocation block and then only
allocate more if and when needed (as long as you pre-allocate enough, it
shouldn't happen). If I do ever get around to creating a sequencing sub
system, using Python functions will be completely optional. Users who
use this feature will probably understand the potential for problems.
When just playing around with composing music, I don't think it's much of
an issue. When one wants to do real time stuff, all the Python functions
can be rendered to a MIDI buffer with explicit time stamps (those that
don't take real time input of course).
Currently, I'm more interested in nice functionality than sub-ms
latency. This can always be optimized at a later date.
Post by David Olofson
Either way, Audiality runs all event processing in the same context
as the audio processing, so I can't realistically use anything that
isn't RT safe anyway. Even the slightest deadline misses would cause
audible drop-outs.
Post by Josh Green
I'm not yet fully familiar with
using Python embedded in a program, but I'm sure there is probably
a way to compile script source into object code. Anyways..
That might work, but I suspect it will only improve throughput
without making worst case latencies bounded. If the compiled code
could still use malloc(), garbage collection and other
non-deterministic stuff, you have gained next to nothing WRT RT
reliability.
//David Olofson - Programmer, Composer, Open Source Advocate
Cheers.
Josh Green

Mark Knecht
2003-01-21 14:51:21 UTC
Permalink
-----Original Message-----
From: Josh Green
Sent: Tuesday, January 21, 2003 4:25 AM
What kind of velocity functionality are you looking for?
Josh,
I think Markus had it basically right in his follow up email. We need to
be able to map multiple sample sets against a single note, or range of
notes, but choose them based on what MIDI velocity is received. This is the
way all of the good GSt libraries work.

As an example only, piano libraries may sample the piano at
4, 8, or even 16 different playing key pressures. Then the softest sample is
mapped from a MIDI velocity of 0-31, the second from 32-63, third from
64-95, fourth from 96-127. Within each range the same sample is played, but
the sample's audio volume is adjusted based on the velocity, so that a MIDI
velocity of 93 plays louder than a velocity of 68, but they both play the
same sample.
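Mark's zone layout can be sketched as a simple lookup; the sample names and the plain linear gain curve are illustrative only (real libraries apply per-zone adjustments):

```python
# Four velocity zones over the 0..127 MIDI range, as in the example
# above. Within a zone the same sample plays, scaled by velocity.
ZONES = [(0, 31, "piano_pp"), (32, 63, "piano_mp"),
         (64, 95, "piano_mf"), (96, 127, "piano_ff")]

def select_sample(velocity):
    """Return (sample_name, gain) for a MIDI velocity in 0..127."""
    for lo, hi, name in ZONES:
        if lo <= velocity <= hi:
            return name, velocity / 127.0  # linear gain, for illustration
    raise ValueError("velocity out of range")
```

Velocities 68 and 93 both land in the third zone, so they play the same sample, but 93 plays it louder.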

The technical issue with these libraries is that sometimes a velocity of
63 and 64 do not compare well with each other, so there are adjustments made
in each sample set to get it all to work. You are going to find these
adjustments in a .gig file I'm sure.

The other one we need is 'key switching', where a range of keys on the
keyboard are reserved as switches, not notes. When one of these keys is
pressed, the complete sample set for all MIDI velocities changes. I think
this one is easier to implement though. (Famous last words...) You'll find
this in some of the .gig libraries, but possibly not on Worra's site.

Cheers,
Mark
David Olofson
2003-01-21 15:49:46 UTC
Permalink
On Tuesday 21 January 2003 15.51, Mark Knecht wrote:
[...]
Post by Mark Knecht
The other one we need is 'key switching', where a range of keys
on the keyboard are reserved as switches, not notes. When one of
these keys is pressed, the complete sample set for all MIDI
velocities changes. I think this one is easier to implement though.
(Famous last words...) You'll find this in some of the .gig
libraries, but possibly not on Worra's site.
That's a very interesting idea... (Especially if you have 88 keys. ;-)

I have NRPN programmable CC->mixer control mapping (so you can hook
the mod wheel up to the auto-wah base cutoff or something), but I
never thought about mapping *keys*... Or poly pressure. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Mark Knecht
2003-01-21 17:12:27 UTC
Permalink
-----Original Message-----
From: David Olofson
Sent: Tuesday, January 21, 2003 7:50 AM
Subject: Re: [Linuxsampler-devel] RE: Hi - Very quiet list - my first
post
[...]
Post by Mark Knecht
The other one we need is 'key switching', where a range of keys
on the keyboard are reserved as switches, not notes. When one of
these keys is pressed, the complete sample set for all MIDI
velocities changes. I think this one is easier to implement though.
(Famous last words...) You'll find this in some of the .gig
libraries, but possibly not on Worra's site.
That's a very interesting idea... (Especially if you have 88 keys. ;-)
I have NRPN programmable CC->mixer control mapping (so you can hook
the mod wheel up to the auto-wah base cutoff or something), but I
never thought about mapping *keys*... Or poly pressure. :-)
Key switching is used very nicely in most GSt horn libraries today, as well
as in the Scarbee Bass libraries. Here's an idea of what you get:

(Key Switch Map from memory - definitely not correct)

Key-map Sample
C-3 Standard notes, long sustain
D-3 Standard note, staccato
E-3 Slide up to note
F-3 Slide down to note
G-3 Trills

It's very powerful and allows a library developer to map lots of useful
stuff into the library without taking up normal note space, and also not
confusing beginning users.
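One way to sketch a key-switch dispatcher; the MIDI note numbers and articulation names are assumptions based on the map above, not from any actual .gig file:

```python
# Keys in a reserved range act as switches: they select the active
# articulation instead of sounding a note. Note numbers below assume
# C-3 = 48 (one common MIDI numbering) and are hypothetical.
KEY_SWITCHES = {48: "sustain", 50: "staccato", 52: "slide_up",
                53: "slide_down", 55: "trill"}

class KeySwitchChannel:
    def __init__(self):
        self.articulation = "sustain"

    def note_on(self, note, velocity):
        """Return the event to play, or None if the note was a switch."""
        if note in KEY_SWITCHES:
            self.articulation = KEY_SWITCHES[note]
            return None  # switch keys make no sound
        return (self.articulation, note, velocity)
```

All subsequent notes play with whatever articulation the last switch key selected.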

It's a bit cranky when you consider notation capabilities in programs like
Rosegarden. None of them know that these notes are key switches. I've taken
recently to moving key switch notes to a separate track that transmits on
the same channel.

Cheers,
Mark
David Olofson
2003-01-21 17:38:25 UTC
Permalink
On Tuesday 21 January 2003 18.12, Mark Knecht wrote:
[...]
Post by Mark Knecht
It's a bit cranky when you consider notation capabilities in
programs like Rosegarden. None of them know that these notes are
key switches. I've taken recently to moving key switch notes to a
separate track that transmits on the same channel.
Well, I'm using piano roll for the few edits I make (I just record
and maybe quantize, scale velocities, scale note lengths etc), and
that would work perfectly with key switches, I think.

One thing I just *have* to try is hooking some keys up to f and q of
some resonant filters... :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
M. Nentwig
2003-01-21 17:19:42 UTC
Permalink
Moi,
I think that sound fonts can map out a portion of the keyboard, from
looking at Swami, so would it be possible to have a bass below middle C
and a piano above middle C?
Yes, that's possible. Take two instruments and assign the zones on
preset level.
The technical issue with these libraries is that sometimes a velocity
of 63 and 64 do not compare well with each other, so there are
adjustments made in each sample set to get it all to work. You are
going to find these adjustments in a .gig file I'm sure.
That's no problem.
The SF2 standard allows completely independent parameters for each
individual velocity / key zone. One could even use a modulator to make
the transition gradual throughout the vel / key range of a sample (for
example velocity to filter cutoff and amplitude).
The other one we need is 'key switching', where a range of keys on the
keyboard are reserved as switches, not notes. When one of these keys is
pressed, the complete sample set for all MIDI velocities changes. I
think this one is easier to implement though. (Famous last words...)
You'll find this in some of the .gig libraries, but possibly not on
Worra's site.
Famous last words indeed... That would mean adding new features to the
SF2 format, to synth and editor.
And then, why switch samples only? If I change to another sample, I'll
probably also want to change filter, envelopes and so on.

In case somebody is interested in the solution I'm using to get a
similar result (with a control program 'wrapped' around iiwusynth):

In iiwusynth there is a quite new feature, the so-called MIDI router. It
can change (for example) the MIDI channel of received data, as in 'all
data received on channels 4..7 goes to the synth on channel 0'.
When I want to switch on-the-fly between different sounds, I assign them
to different synth channels. To change sounds, I just upload a new
router configuration (the router is smart enough to get pending
'noteoff' events right). For example: In state 1, all data goes to
channel 0, in state 2 all data to channel 1. I use program change
messages to switch between 'states' (instead of reserved keys).
Together with the Ladspa Fx unit, this also allows changing the effects
setup. For example: Rhodes EP on synth channel 0 and 1, and a phaser
inserted at the audio output of channel 0 only. This effectively
switches the phaser on and off (what's best: held notes are unaffected,
so you can hold a chord, switch the router setup, and continue playing
with a different sound).
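The channel-remapping idea amounts to a small routing table that gets swapped out wholesale; this is a sketch of the concept, not iiwusynth's actual router API:

```python
# Sketch of the router idea: a table maps incoming MIDI channels to
# synth channels, and "uploading" a new table switches sounds. A real
# router must also route pending note-offs through the *old* mapping
# so held notes are released correctly; this sketch omits that.
def make_router(table):
    def route(channel, message):
        return table.get(channel, channel), message
    return route

# State 1: all data to synth channel 0; state 2: all data to channel 1.
state1 = make_router({ch: 0 for ch in range(16)})
state2 = make_router({ch: 1 for ch in range(16)})
```

Switching from `state1` to `state2` (e.g. on a program change) changes which synth channel, and hence which sound and effects chain, receives the keyboard.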
If there is interest in the control program I'm using, let me know. But
it's meant for live playing, not for sequencing, and far from
ready-for-the-masses.

Cheers

Markus
Mark Knecht
2003-01-21 17:51:39 UTC
Permalink
Post by M. Nentwig
Post by Mark Knecht
The other one we need is 'key switching', where a range of keys on the
keyboard are reserved as switches, not notes. When one of these keys is
pressed, the complete sample set for all MIDI velocities changes. I
think this one is easier to implement though. (Famous last words...)
You'll find this in some of the .gig libraries, but possibly not on
Worra's site.
Famous last words indeed... That would mean adding new features to the
SF2 format, to synth and editor.
And then, why switch samples only? If I change to another sample, I'll
probably also want to change filter, envelopes and so on.
Markus,
As I said earlier, I haven't, in the past, been a big fan of sound fonts,
not because I know anything about them technically. Just because the Windows
based SF players haven't sounded as good as GSt. When this conversation
started yesterday, I was (and still am) a proponent of doing this app using
the LinuxSampler engine. I'm not asking or even suggesting that anyone
change anything that exists in the Linux SF app space to do what I want to
do. On the other hand, I think people are asking me to use SFs, and I'm not
sure they'll work.

Nor do I know how to map my GSt libraries to one. If Josh wants to tackle
that problem in Swami, then I would certainly be happy to help out and do a
bit of testing.

Also, please understand I'm not trying to give a complete description of
this feature in GSt, and I don't suggest that my list is complete. I know it isn't.
I'm just hoping that I'm getting the key switch idea across so that you
developers can make it real. If there is continued interest in the
Linux-Sampler community to support GSt libraries (a stated goal) then this
threshold will eventually have to be crossed.

In GSt all of the stuff you mention, and more, is supported. I believe that
in GigaSampler it is not all supported, however, GS is pretty much gone now
except as a CD in sound card distributions.
Post by M. Nentwig
In case somebody is interested in the solution I'm using to get a
<SNIP>

This looks like an interesting way to possibly take a drum track and then
split off individual drums mapped to certain notes and send them to
different synths? Interesting idea.

What sort of latency does this incur? I would assume it's pretty high if you
have to receive a MIDI event, process it, and then retransmit. Can that be
used live and get a good, tight feel?

The Music Labs MIDI Replicator has some similar features. What I like about
that product (other than its low, $29.95 price) is that I can run MIDI to
many machines over Ethernet, thus saving money. We did some testing and
found it significantly less likely to run into MIDI choke issues also.
David Olofson
2003-01-21 19:31:26 UTC
Permalink
On Tuesday 21 January 2003 18.51, Mark Knecht wrote:
[...]
Post by Mark Knecht
Post by M. Nentwig
In case somebody is interested in the solution I'm using to get a
similar result (with a control program 'wrapped' around
<SNIP>
This looks like an interesting way to possibly take a drum track
and then split off individual drums mapped to certain notes and
send them to different synths? Interesting idea.
What sort of latency does this incur? I would assume it's pretty
high if you have to receive a MIDI event, process it, and then
retransmit. Can that be used live and get a good, tight feel?
I don't know how it's implemented here, but technically, as long as
it's done somewhere in between the sequencer and the hardware MIDI
output, there shouldn't be any significant latency. It's only when
you're running physical 31250 bps wire between units that chaining
devices is a problem.


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
Josh Green
2003-01-21 20:02:16 UTC
Permalink
Post by Mark Knecht
Markus,
As I said earlier, I haven't, in the past, been a big fan of sound fonts,
not because I know anything about them technically. Just because the Windows
based SF players haven't sounded as good as GSt. When this conversation
started yesterday, I was (and still am) a proponent of doing this app using
the LinuxSampler engine. I'm not asking or even suggesting that anyone
change anything that exists in the Linux SF app space to do what I want to
do. On the other hand, I think people are asking me to use SFs, and I'm not
sure they'll work.
Nor do I know how to map my GSt libraries to one. If Josh wants to tackle
that problem in Swami, then I would certainly be happy to help out and do a
bit of testing.
I'm still doing API work in the area of the GUI to make it easy to
plugin new patch formats. Once this work is done (and a few other
things) I will start looking into adding DLS2, .gig, Akai, Gus and
perhaps GSt (I don't know anything about it yet, so not sure how hard
that would be).
Post by Mark Knecht
Also, please understand I'm not trying to give a complete use of this
feature in GSt and do not suggest that my list is complete. I know it isn't.
I'm just hoping that I'm getting the key switch idea across so that you
developers can make it real. If there is continued interest in the
Linux-Sampler community to support GSt libraries (a stated goal) then this
threshold will eventually have to be crossed.
In GSt all of the stuff you mention, and more, is supported. I believe that
in GigaSampler it is not all supported, however, GS is pretty much gone now
except as a CD in sound card distributions.
I think a lot of this could be tackled with some of the "session state"
saving/restoring ideas that have been discussed on LAD before. If you
had a MIDI processor with scripting support, etc. You could create
little filters and actuators and save them along with a project which
could then be loaded at a later date. I don't think we need to worry
about supporting every single feature in a patch format (at least when
it comes to MIDI processing). Having a Linux Audio/Music session
format/standard would be cool. Cheers.
Josh Green
Mark Knecht
2003-01-21 12:10:34 UTC
Permalink
On Tue, 2003-01-21 at 20:02, Josh Green wrote:
Post by Josh Green
I will start looking into adding DLS2, .gig, Akai, Gus and
perhaps GSt (I don't know anything about it yet, so not sure how hard
that would be).
.gig is GSt, one and the same (GSt is GigaStudio, .gig is its file
format)
Mark Knecht
2003-01-21 18:03:32 UTC
Permalink
-----Original Message-----
From: David Olofson
Sent: Tuesday, January 21, 2003 9:38 AM
Subject: Re: [Linuxsampler-devel] RE: Hi - Very quiet list - my first
post
[...]
Post by Mark Knecht
It's a bit cranky when you consider notation capabilities in
programs like Rosegarden. None of them know that these notes are
key switches. I've taken recently to moving key switch notes to a
separate track that transmits on the same channel.
Well, I'm using piano roll for the few edits I make (I just record
and maybe quantize, scale velocities, scale note lengths etc), and
that would work perfectly with key switches, I think.
One thing I just *have* to try is hooking some keys up to f and q of
some resonant filters... :-)
Dave,
I hope I wasn't misunderstood. RG, Pro Tools, Cubase SX, they all work
with key switches. I can put key switch events on the same track. They are
sent and cause the sample sets to switch. That does not cause a problem.

The _only_ problem I've run into is that while you and I understand that
a key switch at C-3 isn't a musical note, a notation program does not, and
will paint a quarter note there when one does not need one musically.

Cheers,
Mark
David Olofson
2003-01-21 19:39:12 UTC
Permalink
On Tuesday 21 January 2003 19.03, Mark Knecht wrote:
[...]
Post by Mark Knecht
Dave,
I hope I wasn't misunderstood. RG, Pro Tools, Cubase SX, they
all work with key switches. I can put key switch events on the same
track. They are sent and cause the sample sets to switch. That does
not cause a problem.
Yes, that's what's so great with them; they're based on a part of the
protocol that you basically *have* to support to control an
instrument anyway.
Post by Mark Knecht
The _only_ problem I've run into is that while you and I
understand that a key switch at C-3 isn't a musical note, a
notation program does not, and will paint a quarter note there when
one does not need one musically.
Yeah, I see what you mean, although I'm still not sure whether you're
thinking about printing or editing in "staff view". Either way, both
have the same problem, although it's probably even more annoying to
get it on paper...! ;-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---