Setting up Wine for real-time audio

WaitForMultipleObjects() Win32 API call

Using Wine for multimedia without glitches is highly dependent on the futex2() API, which is foreseeably making its way into the mainline Linux kernel:

More precisely, the patches add futex_waitv(), which allows a drop-in replacement of the WaitForMultipleObjects() Win32 API call, leading to much more efficient processing in a single round trip, rather than having to use a variable number of futex() calls.

In order to take advantage of the new API now, a patched kernel must be used. Arch Linux provides and officially supports the Zen kernel (linux-zen), which contains a well-considered set of patches on top of the mainline kernel for desktop usage, including the futex2() patches.
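On Arch Linux the Zen kernel can be installed from the official repositories:

$ sudo pacman -S linux-zen linux-zen-headers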

In addition, a patched version of Wine is required, because official releases still lack support for the futex2() API (for more than obvious reasons). A popular and field-tested choice is wine-tkg, available at:


WineASIO is an ASIO driver proxy for JACK Audio, available at:

In Arch Linux the AUR package is outdated, but luckily WineASIO is fairly trivial to compile from source. Once successfully set up, Windows audio applications should discover an ASIO device unsurprisingly named "WineASIO".
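For reference, a rough sketch of a source build. The repository URL, make target, and registration step are quoted from memory and vary between WineASIO versions, so treat them as assumptions and consult the project README:

$ git clone https://github.com/wineasio/wineasio.git
$ cd wineasio
$ make 64
$ wine64 regsvr32 wineasio64.dll    # register inside the target Wine prefix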

Enabling IOMMU properly on Intel x86 CPUs for desktop virtualization use

When using IOMMU hardware for desktop virtualization on Intel x86 CPUs, these
kernel command-line settings are probably appropriate most of the time:

  • intel_iommu=on: enable the hardware.
  • iommu=pt: DMA address mapping passthrough.

The latter option disables DMA address mapping for the host kernel, and thus
avoids the performance hit for non-virtualization use, which makes it essential.
Otherwise, you will always pay the small price of the IOMMU step in address
translation, even when not taking advantage of it.
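With GRUB, for example, the options can be appended to GRUB_CMDLINE_LINUX in /etc/default/grub (followed by a run of update-grub, or your distribution's equivalent):

GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"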

Sending MIDI to a VST plugin in FL Studio

Native FL Studio plugins, such as Sampler, are not able to do MIDI communication. Thus, in order to send MIDI data to a VST plugin, you have to create a Layer instance and assign the Sampler instance as its child. That is, the MIDI patterns that would normally trigger the Sampler now trigger the Layer, which then triggers the Sampler instance. Then, create a MIDI Out instance, also assigned as a child of the Layer instance.

Finally, configure the MIDI port settings for the MIDI Out instance and the destination VST plugin, as shown in the picture:

Obviously, you can use any port number you don't use for anything else (e.g. for a controller or hardware synth).



Using Graphene to run applications inside SGX enclaves

Install dependencies (not comprehensive; contains only the non-obvious ones):

$ pip3 install google-api-python-client

Get Graphene:

$ git clone

Check FSGSBASE support, which can be validated from AT_HWCAP2:

$ LD_SHOW_AUXV=1 /bin/true | grep AT_HWCAP2
AT_HWCAP2:            0x2
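FSGSBASE is bit 1 of AT_HWCAP2 on x86-64, so the value 0x2 above indicates support. As a sketch, the check can be scripted; hwcap2_has_fsgsbase is a helper name of my own:

```shell
# Return success if bit 1 (FSGSBASE) is set in the given AT_HWCAP2 value.
hwcap2_has_fsgsbase() {
    [ $(( $1 & 0x2 )) -ne 0 ]
}

# Extract the AT_HWCAP2 value from the auxiliary vector; default to 0 if
# the line is absent.
hwcap2=$(LD_SHOW_AUXV=1 /bin/true | awk '/AT_HWCAP2/ { print $2 }')
if hwcap2_has_fsgsbase "${hwcap2:-0}"; then
    echo "FSGSBASE supported"
else
    echo "FSGSBASE not reported by AT_HWCAP2"
fi
```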

Go to the cloned directory, and build Graphene:

$ cd graphene

Create a signing key for enclaves:

$ openssl genrsa -3 -out enclave-key.pem 3072
$ export SGX_SIGNER_KEY=$PWD/enclave-key.pem

There is an example, which runs bash inside an enclave. Let's give that a shot!

First, build it:

$ cd Examples/bash
$ make SGX=1 DEBUG=1

Then, you can run it:

$ SGX=1 ./pal_loader ./bash -c "ls"

Low-latency audio settings for Linux

I'll briefly describe how I go about configuring Linux for low-latency audio in Ubuntu and its derivatives.

Threaded interrupt handlers

When threaded interrupt handlers are enabled, the kernel only acknowledges a triggered interrupt with preemption disabled, and right after that assigns a thread to run the interrupt handler. The overall latency is reduced because user-space processes can be scheduled almost immediately.

Threaded interrupt handlers can be enabled by adding threadirqs to GRUB_CMDLINE_LINUX in /etc/default/grub, and running update-grub. After the next reboot, they are activated.
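Whether the running kernel was actually booted with the option can be verified from /proc/cmdline. A small sketch; cmdline_has is my own helper name:

```shell
# Return success if the given word appears in a kernel command line string.
cmdline_has() {
    printf '%s\n' "$2" | grep -qw -- "$1"
}

if cmdline_has threadirqs "$(cat /proc/cmdline)"; then
    echo "threaded interrupt handlers: enabled"
else
    echo "threaded interrupt handlers: disabled"
fi
```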

The rtirq-init package contains the rtirq initialization script, which assigns priorities to interrupt handlers according to the rules in /etc/default/rtirq. The default configuration gives the highest priorities to the sound card and the USB host controller:

$ ps -T -o comm,policy,rtprio -p $(pgrep -w -d ',' irq) | egrep '(snd|hci)'
irq/126-xhci_hc FF      85
irq/143-snd_hda FF      90

This reduces audio latency, as the sound card always gets served first.
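The rules are plain shell variables. A configuration consistent with the priorities shown above could look like this (the variable names are from the rtirq script; the values here are illustrative, so verify against your installed /etc/default/rtirq):

RTIRQ_NAME_LIST="snd usb"
RTIRQ_PRIO_HIGH=90
RTIRQ_PRIO_DECR=5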

CPU frequency scaling

In order to keep audio latency in a steady state, the CPU should always run at a constant frequency.


Downscaling occurs when the operating frequency of a CPU is decreased. The default power governor, powersave, does this when the demand for computing is low.

Downscaling can be disabled by using another power governor called performance, which keeps the CPU operating at its maximum frequency. A straightforward way to enable it is to install the cpufrequtils package, and create a file called /etc/default/cpufrequtils with the following statement:

GOVERNOR="performance"

After a reboot, the governor should have changed:

$ cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor


Upscaling occurs when the operating frequency of a CPU is temporarily increased above its base operating frequency. Intel CPUs have a feature called Turbo Boost, which causes the CPU to automatically upscale when the demand for processing is high.

Here's a systemd service that disables Turbo Boost, adapted from a blog post:

$ cat /etc/systemd/system/disable-turbo-boost.service
[Unit]
Description=Disable Turbo Boost on an Intel CPU

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c "echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"
ExecStop=/bin/sh -c "echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo"

[Install]
WantedBy=multi-user.target


Resource limits

Popular audio software, such as JACK and Reaper, can usually take advantage of real-time scheduling and memory locking, if such resources are available to the application process.

Resource limits can be configured by creating a new config file in /etc/security/limits.d. The approach I use is a supplemental group for low-latency audio:

$ cat | sudo tee /etc/security/limits.d/audio.user.conf
@audio.user - memlock   8388608 # 8 GB
@audio.user - rtprio    40
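After logging back in (so that the new limits apply to the session), the effective values can be inspected directly from /proc; note that the locked-memory limit is reported in bytes there:

```shell
# Show the locked-memory and real-time-priority limits of the current shell.
grep -E 'Max (locked memory|realtime priority)' /proc/self/limits
```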

It's not advisable to set memory locking to unlimited; rather, set an appropriate fixed limit for your needs. This provides some protection against software bugs, and also prevents a user from unknowingly overloading the system.

The priorities of user threads should be capped well below the interrupt handler priorities, as hardware should be served first. If this does not happen, it could at worst cause the hardware to fail.

JACK Audio Connection Kit (JACK)

JACK Audio Connection Kit, or just JACK, is a low-latency audio server for multiplexing the audio hardware for multiple clients. Its most modern incarnation, JACK2, is centered around a command-line tool called jack_control, which interfaces with the jackdbus daemon process. jackdbus manages the actual audio server, jackd, and PulseAudio's interconnections with it.


I've created ~/bin/jack_init script for reconfiguring JACK2:



#!/bin/sh
SOUND_CARD=$1
FRAMES_PER_INT=${2:-3}
FRAME_SIZE=${3:-256}
SAMPLE_RATE=${4:-48000}

# Use the ALSA backend.
jack_control ds alsa

# Enable rtprios.
jack_control eps realtime true

# Configure the sound card.
jack_control dps device $SOUND_CARD
jack_control dps capture $SOUND_CARD
jack_control dps playback $SOUND_CARD
jack_control dps rate $SAMPLE_RATE
jack_control dps nperiods $FRAMES_PER_INT
jack_control dps period $FRAME_SIZE

For example, running jack_init hw:EVO4 3 256 populates ~/.config/jack/conf.xml with

  <option name="driver">alsa</option>
  <option name="realtime">true</option>
  <driver name="alsa">
   <option name="device">hw:EVO4</option>
   <option name="capture">hw:EVO4</option>
   <option name="playback">hw:EVO4</option>
   <option name="rate">48000</option>
   <option name="period">256</option>
   <option name="nperiods">3</option>

The JACK runtime logs are stored in ~/.log/jack/jackdbus.log.

Sampling a web browser to Reaper using JACK

JACK gives quite convenient tools to sample audio from a web browser, or any other desktop application, which is sometimes much more convenient than trying to save it as a file. It's just a trivial matter of re-routing the PulseAudio interconnections:

$ jack_connect "PulseAudio JACK Sink:front-left" REAPER:in1
$ jack_connect "PulseAudio JACK Sink:front-right" REAPER:in2
$ jack_disconnect "PulseAudio JACK Sink:front-left" system:playback_1
$ jack_disconnect "PulseAudio JACK Sink:front-right" system:playback_2
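If the exact port names are unknown, they can be listed together with their current connections:

$ jack_lsp -c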

Now the desktop audio can only be heard when monitored from Reaper, and can be trivially recorded to any track.