
Multi-microphone Beamforming

Introduction

Using a microphone array is useful for improving signal quality (e.g., reducing reverberation and noise) before performing speech recognition tasks. A microphone array can also estimate the direction of arrival of a sound source, and this information can later be used to "listen" in the direction of the source of interest.

Propagation Model

We assume the following sound propagation model:

\(x_m[n] = h_m[n] \star s[n] + b_m[n]\),

where \(m\) stands for the microphone index, \(n\) for the sample index, and \(h_m\) for the room impulse response. The expression \(s[n]\) stands for the speech source signal, \(b_m[n]\) for the additive noise, and \(x_m[n]\) for the signal captured at microphone \(m\). The signals can also be represented in the frequency domain:

\(X_m(t,j\omega) = H_m(j\omega)S(t,j\omega) + B_m(t,j\omega)\),

or in vector form:

\(\mathbf{X}(t,j\omega) = \mathbf{H}(j\omega)S(t,j\omega) + \mathbf{B}(t,j\omega)\).

Note that \(\mathbf{X}(t,j\omega) \in \mathbb{C}^{M \times 1}\).

In the anechoic case, we can substitute \(h_m[n] = a_m[n] = \delta(n-\tau_m)\) and write \(H_m(j\omega) = A_m(j\omega) = e^{-j\omega\tau_m}\), where \(\tau_m\) is the delay of the direct path in samples, or in vector form \(\mathbf{A}(j\omega) \in \mathbb{C}^{M \times 1}\).
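
As a quick illustration of the anechoic model, here is a minimal sketch (plain NumPy, with hypothetical delays) that builds the steering vectors \(A_m(j\omega) = e^{-j\omega\tau_m}\) on a grid of normalized frequencies:

import numpy as np

# Hypothetical direct-path delays (in samples) for M = 4 microphones.
taus = np.array([0.0, 1.3, 2.1, 0.7])
# Grid of normalized angular frequencies (rad/sample).
omegas = np.linspace(-np.pi, np.pi, 512)
# A[k, m] = exp(-1j * omega_k * tau_m): one steering vector per frequency.
A = np.exp(-1j * np.outer(omegas, taus))
print(A.shape)  # (512, 4)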

Covariance Matrices

For some beamformers, we also use the following covariance matrices:

\(\displaystyle\mathbf{R}_{XX}(j\omega) = \frac{1}{T}\sum_{t=1}^{T}\mathbf{X}(t,j\omega)\mathbf{X}^H(t,j\omega)\)

\(\displaystyle\mathbf{R}_{SS}(j\omega) = \frac{1}{T}\sum_{t=1}^{T}\mathbf{H}(j\omega)\mathbf{H}^H(j\omega)|S(t,j\omega)|^2\)

\(\displaystyle\mathbf{R}_{NN}(j\omega) = \frac{1}{T}\sum_{t=1}^{T}\mathbf{B}(t,j\omega)\mathbf{B}^H(t,j\omega)\)

In practice, time-frequency masks are often used to estimate the speech and noise covariance matrices:

\(\displaystyle\mathbf{R}_{SS}(j\omega) \approx \frac{1}{T}\sum_{t=1}^{T}M_S(t,j\omega)\mathbf{X}(t,j\omega)\mathbf{X}^H(t,j\omega)\)

\(\displaystyle\mathbf{R}_{NN}(j\omega) \approx \frac{1}{T}\sum_{t=1}^{T}M_N(t,j\omega)\mathbf{X}(t,j\omega)\mathbf{X}^H(t,j\omega)\)
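
As a rough sketch of the mask-based estimates above, with a random tensor standing in for the complex STFT \(\mathbf{X}\) and a random mask standing in for \(M_S\):

import torch

T, F, M = 100, 257, 4                          # frames, bins, microphones
Xs = torch.randn(T, F, M, dtype=torch.cfloat)  # placeholder STFT
mask_s = torch.rand(T, F)                      # placeholder speech mask in [0, 1]

# R_SS[f] ≈ (1/T) * sum_t M_S(t,f) X(t,f) X(t,f)^H, for every bin f.
R_ss = torch.einsum('tf,tfm,tfn->fmn', mask_s.to(Xs.dtype), Xs, Xs.conj()) / T
print(R_ss.shape)  # torch.Size([257, 4, 4])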

Time Difference of Arrival

The time difference of arrival (TDOA) between microphones \(1\) and \(m\) can be estimated using the Generalized Cross-Correlation with Phase Transform (GCC-PHAT), which is expressed as follows:

\(\displaystyle\tau_m = argmax_{\tau} \int_{-\pi}^{+\pi}{\frac{X_1(j\omega) X_m(j\omega)^*}{|X_1(j\omega)||X_m(j\omega)|}e^{j\omega\tau}}d\omega\)
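
A minimal NumPy sketch of this estimator (a discrete version of the integral above, with the PHAT weighting applied in the frequency domain; the test signals are synthetic):

import numpy as np

def gcc_phat(x1, xm, max_tau):
    """TDOA (in samples) between two channels via GCC-PHAT."""
    n = len(x1) + len(xm)
    X1 = np.fft.rfft(x1, n=n)
    Xm = np.fft.rfft(xm, n=n)
    cross = X1 * np.conj(Xm)
    cross /= np.abs(cross) + 1e-10                          # phase transform
    cc = np.fft.irfft(cross, n=n)
    cc = np.concatenate((cc[-max_tau:], cc[:max_tau + 1]))  # lags around zero
    return np.argmax(np.abs(cc)) - max_tau

# Synthetic check: xm receives the signal 5 samples after x1, so the
# estimate is -5 under the sign convention of the formula above.
x1 = np.random.randn(1024)
xm = np.roll(x1, 5)
print(gcc_phat(x1, xm, max_tau=20))  # -5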

Direction of Arrival

Steered-Response Power with Phase Transform (SRP-PHAT)

SRP-PHAT scans every potential direction of arrival on a virtual unit sphere around the array and computes the corresponding power. For each DOA (denoted by the unit vector \(\mathbf{u}\)), there is a steering vector \(\mathbf{A}(j\omega,\mathbf{u}) \in \mathbb{C}^{M \times 1}\) pointing in the direction of \(\mathbf{u}\). The power is given by:

\(\displaystyle E(\mathbf{u}) = \sum_{p=1}^{M}\sum_{q=p+1}^{M}\int_{-\pi}^{+\pi}{\frac{X_p(j\omega)X_q(j\omega)^*}{|X_p(j\omega)||X_q(j\omega)|}A_p(j\omega,\mathbf{u})^*A_q(j\omega,\mathbf{u})\,d\omega}\)

The DOA with the maximum power is selected as the DOA of the sound source:

\(\mathbf{u}_{max} = argmax_{\mathbf{u}}{E(\mathbf{u})}\)
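
To make the scan concrete, here is a hedged NumPy sketch that evaluates \(E(\mathbf{u})\) for a single STFT frame over a coarse azimuth grid (far-field assumption, speed of sound c = 343 m/s; the array geometry and the frame are placeholders):

import numpy as np

c, fs = 343.0, 16000
mics = np.array([[-0.05, -0.05, 0.0], [-0.05, 0.05, 0.0],
                 [0.05, 0.05, 0.0], [0.05, -0.05, 0.0]])

def srp_phat_energy(X, u):
    """E(u) for one frame. X: [bins, mics] complex STFT; u: unit DOA."""
    n_fft = 2 * (X.shape[0] - 1)
    omega = 2.0 * np.pi * np.arange(X.shape[0]) / n_fft  # rad/sample
    taus = -mics @ u / c * fs   # direct-path delays (u points to the source)
    A = np.exp(-1j * np.outer(omega, taus))              # steering vectors
    E = 0.0
    for p in range(mics.shape[0]):
        for q in range(p + 1, mics.shape[0]):
            xc = X[:, p] * np.conj(X[:, q])
            xc /= np.abs(xc) + 1e-10                     # PHAT weighting
            E += np.real(np.sum(xc * np.conj(A[:, p]) * A[:, q]))
    return E

# Placeholder frame and an azimuth-only grid in the xy-plane.
X = np.fft.rfft(np.random.randn(512, 4), axis=0)
grid = [np.array([np.cos(a), np.sin(a), 0.0])
        for a in np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)]
u_max = max(grid, key=lambda u: srp_phat_energy(X, u))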

Multiple Signal Classification (MUSIC)

MUSIC scans every potential direction of arrival on a virtual unit sphere around the array and computes the corresponding power. For each DOA (denoted by the unit vector \(\mathbf{u}\)), there is a steering vector \(\mathbf{A}(j\omega,\mathbf{u}) \in \mathbb{C}^{M \times 1}\) pointing in the direction of \(\mathbf{u}\). The matrix \(\mathbf{U}(j\omega) \in \mathbb{C}^{M \times S}\) holds the \(S\) eigenvectors associated with the \(S\) smallest eigenvalues obtained from the eigendecomposition of \(\mathbf{R}_{XX}(j\omega)\). The power corresponds to:

\(\displaystyle E(\mathbf{u}) = \frac{\mathbf{A}(j\omega,\mathbf{u})^H \mathbf{A}(j\omega,\mathbf{u})}{\sqrt{\mathbf{A}(j\omega,\mathbf{u})^H \mathbf{U}(j\omega)\mathbf{U}(j\omega)^H\mathbf{A}(j\omega,\mathbf{u})}}\)

The DOA with the maximum power is selected as the DOA of the sound source:

\(\mathbf{u}_{max} = argmax_{\mathbf{u}}{E(\mathbf{u})}\)
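
A hedged sketch of this pseudospectrum for a single frequency bin, using NumPy's eigendecomposition (the covariance matrix here is random and purely illustrative):

import numpy as np

def music_power(Rxx, A, n_noise):
    """E(u) for one bin: A is the steering vector, and the n_noise
    eigenvectors of the smallest eigenvalues form the noise subspace U."""
    eigvals, V = np.linalg.eigh(Rxx)        # eigenvalues in ascending order
    U = V[:, :n_noise]                      # noise subspace
    num = np.vdot(A, A).real                # A^H A
    den = np.linalg.norm(U.conj().T @ A)    # sqrt(A^H U U^H A)
    return num / (den + 1e-10)

# Illustrative usage with a random covariance matrix.
M = 4
Z = np.random.randn(M, 50) + 1j * np.random.randn(M, 50)
Rxx = Z @ Z.conj().T / 50
A = np.exp(-1j * 0.3 * np.array([0.0, 1.3, 2.1, 0.7]))
print(music_power(Rxx, A, n_noise=3))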

Beamforming

We apply beamforming in the frequency domain: \(Y(j\omega) = \mathbf{W}^H(j\omega)\mathbf{X}(j\omega)\).

Delay and Sum

The delay-and-sum beamformer aims to align the speech signals to produce constructive interference. The coefficients are chosen as follows:

\(\mathbf{W}(j\omega) = \frac{1}{M} \mathbf{A}(j\omega)\).
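
For a single frequency bin this amounts to the following sketch (hypothetical delays; applying \(\mathbf{W}^H\) compensates the propagation phases and averages the aligned channels, so a noise-free source is recovered exactly):

import numpy as np

M, omega = 4, 0.3                       # mics and rad/sample (hypothetical)
taus = np.array([0.0, 1.3, 2.1, 0.7])   # hypothetical TDOAs in samples
A = np.exp(-1j * omega * taus)          # steering vector for this bin
W = A / M                               # delay-and-sum weights
X = A * (1.0 + 0.5j)                    # a source S = 1 + 0.5j, no noise
Y = np.conj(W) @ X                      # Y = W^H X
print(Y)                                # (1+0.5j): S is recovered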

Minimum Variance Distortionless Response (MVDR)

The coefficients of the MVDR beamformer are as follows:

\(\displaystyle\mathbf{W}(j\omega) = \frac{\mathbf{R}_{XX}^{-1}(j\omega)\mathbf{A}(j\omega)}{\mathbf{A}^H(j\omega)\mathbf{R}_{XX}^{-1}(j\omega)\mathbf{A}(j\omega)}\).
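
A per-bin sketch of these weights on random data (the small diagonal loading term is an assumption to keep the inverse well conditioned):

import numpy as np

M = 4
A = np.exp(-1j * 0.3 * np.array([0.0, 1.3, 2.1, 0.7]))      # steering vector
X = np.random.randn(M, 100) + 1j * np.random.randn(M, 100)  # frames, one bin
Rxx = X @ X.conj().T / X.shape[1] + 1e-6 * np.eye(M)        # diagonal loading

Rxx_inv_A = np.linalg.solve(Rxx, A)                         # R_XX^{-1} A
W = Rxx_inv_A / (A.conj() @ Rxx_inv_A)                      # MVDR weights
print(np.conj(W) @ A)                                       # ≈ 1: distortionless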

Generalized Eigenvalue (GEV)

The coefficients of the GEV beamformer correspond to the principal component obtained by generalized eigenvalue decomposition, such that:

\(\mathbf{R}_{SS}(j\omega)\mathbf{W}(j\omega) = \lambda\mathbf{R}_{NN}(j\omega)\mathbf{W}(j\omega)\)
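
For one frequency bin, the principal generalized eigenvector can be obtained with scipy.linalg.eigh (the covariance matrices here are random and purely illustrative):

import numpy as np
from scipy.linalg import eigh

M, T = 4, 200
S = np.random.randn(M, T) + 1j * np.random.randn(M, T)  # placeholder speech
N = np.random.randn(M, T) + 1j * np.random.randn(M, T)  # placeholder noise
R_ss = S @ S.conj().T / T
R_nn = N @ N.conj().T / T + 1e-6 * np.eye(M)            # keep R_NN invertible

# Solve R_SS w = lambda R_NN w; eigh returns eigenvalues in ascending order.
lam, V = eigh(R_ss, R_nn)
W = V[:, -1]    # principal component = GEV beamforming weights
print(lam[-1])  # largest generalized eigenvalue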

Installing SpeechBrain

Let's first install SpeechBrain:

%%capture
# Installing SpeechBrain via pip
BRANCH = 'develop'
!python -m pip install git+https://github.com/speechbrain/speechbrain.git@$BRANCH

Preparing the Audio

We will load a speech signal obtained by simulating propagation in air for a 4-microphone array. We will also load diffuse noise (coming from all directions) and directive noise (which can be modeled as a point source in space). The goal here is to mix the reverberated speech with the noise to generate noisy mixtures, and to test the beamforming methods to enhance the speech.

Let's first download the audio samples we will use:

%%capture
!wget https://www.dropbox.com/s/0h414xocvu9vw96/speech_-0.82918_0.55279_-0.082918.flac
!wget https://www.dropbox.com/s/xlehxo26mnlkvln/noise_diffuse.flac
!wget https://www.dropbox.com/s/4l6iy5zc9bgr7qj/noise_0.70225_-0.70225_0.11704.flac

Now let's load the audio files:

import matplotlib.pyplot as plt
from speechbrain.dataio.dataio import read_audio

xs_speech = read_audio('speech_-0.82918_0.55279_-0.082918.flac') # [time, channels]
xs_speech = xs_speech.unsqueeze(0) # [batch, time, channels]
xs_noise_diff = read_audio('noise_diffuse.flac') # [time, channels]
xs_noise_diff = xs_noise_diff.unsqueeze(0) # [batch, time, channels]
xs_noise_loc = read_audio('noise_0.70225_-0.70225_0.11704.flac') # [time, channels]
xs_noise_loc =  xs_noise_loc.unsqueeze(0) # [batch, time, channels]
fs = 16000 # sampling rate

plt.figure(1)
plt.title('Clean signal at microphone 1')
plt.plot(xs_speech.squeeze()[:,0])
plt.figure(2)
plt.title('Diffuse noise at microphone 1')
plt.plot(xs_noise_diff.squeeze()[:,0])
plt.figure(3)
plt.title('Directive noise at microphone 1')
plt.plot(xs_noise_loc.squeeze()[:,0])
plt.show()

We can listen to the reverberated speech:

from IPython.display import Audio
Audio(xs_speech.squeeze()[:,0],rate=fs)

Let's now mix the reverberated speech with the noise to create noisy multi-channel mixtures:

ss = xs_speech
nn_diff = 0.05 * xs_noise_diff
nn_loc = 0.05 * xs_noise_loc
xs_diffused_noise = ss + nn_diff
xs_localized_noise = ss + nn_loc

We can have a look at the noisy mixtures:

plt.figure(1)
plt.title('Microphone 1 (speech + diffused noise)')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(2)
plt.title('Microphone 1 (speech + directive noise)')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.show()

We can listen to the noisy mixtures:

from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)

Processing

Steered-Response Power with Phase Transform (SRP-PHAT)

The STFT converts the signals to the frequency domain, and Covariance computes the covariance matrix of each frequency bin. The SrpPhat module then returns the direction of arrival. We need to provide the geometry of the microphone array, which here is a circular array of four uniformly spaced microphones with a diameter of 0.1 m. The system estimates one DOA per STFT frame. In this example, we use a source coming from the direction \(x=-0.82918\), \(y=0.55279\), \(z=-0.082918\). The results show that the estimated direction is quite accurate (with small deviations due to the discretization of the sphere). Also note that, because all microphones lie in the \(xy\)-plane, the system cannot discriminate between the positive and negative \(z\)-axes.

from speechbrain.dataio.dataio import read_audio
from speechbrain.processing.features import STFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import SrpPhat

import torch

mics = torch.zeros((4,3), dtype=torch.float)
mics[0,:] = torch.FloatTensor([-0.05, -0.05, +0.00])
mics[1,:] = torch.FloatTensor([-0.05, +0.05, +0.00])
mics[2,:] = torch.FloatTensor([+0.05, +0.05, +0.00])
mics[3,:] = torch.FloatTensor([+0.05, -0.05, +0.00])

stft = STFT(sample_rate=fs)
cov = Covariance()
srpphat = SrpPhat(mics=mics)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
doas = srpphat(XXs)

print(doas)
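
The printed tensor holds one unit \((x, y, z)\) vector per STFT frame. As a convenience (assuming a [batch, frames, 3] layout, which is worth checking against the printed shape), the average direction can be converted to azimuth and elevation:

import math

u = doas.mean(dim=1).squeeze()                  # average direction over frames
azimuth = math.degrees(math.atan2(u[1], u[0]))  # angle in the xy-plane
elevation = math.degrees(math.asin(u[2] / u.norm()))
print(azimuth, elevation)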

Multiple Signal Classification (MUSIC)

The STFT converts the signals to the frequency domain, and Covariance computes the covariance matrix of each frequency bin. The Music module then returns the direction of arrival. We need to provide the geometry of the microphone array, which here is a circular array of four uniformly spaced microphones with a diameter of 0.1 m. The system estimates one DOA per STFT frame. In this example, we use a source coming from the direction \(x=-0.82918\), \(y=0.55279\), \(z=-0.082918\). The results show that the estimated direction is quite accurate (with small deviations due to the discretization of the sphere). Also note that, because all microphones lie in the \(xy\)-plane, the system cannot discriminate between the positive and negative \(z\)-axes.

from speechbrain.dataio.dataio import read_audio
from speechbrain.processing.features import STFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import Music

import torch

mics = torch.zeros((4,3), dtype=torch.float)
mics[0,:] = torch.FloatTensor([-0.05, -0.05, +0.00])
mics[1,:] = torch.FloatTensor([-0.05, +0.05, +0.00])
mics[2,:] = torch.FloatTensor([+0.05, +0.05, +0.00])
mics[3,:] = torch.FloatTensor([+0.05, -0.05, +0.00])

stft = STFT(sample_rate=fs)
cov = Covariance()
music = Music(mics=mics)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
doas = music(XXs)

print(doas)

Delay-and-Sum Beamforming

The STFT converts the signals to the frequency domain, and Covariance computes the covariance matrix of each frequency bin. The GccPhat module estimates the time differences of arrival (TDOAs) between microphones, and the TDOAs are then used to perform delay-and-sum beamforming.

Speech corrupted by diffuse noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import DelaySum

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
delaysum = DelaySum()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
tdoas = gccphat(XXs)
Ys_ds = delaysum(Xs, tdoas)
ys_ds = istft(Ys_ds)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_ds[0,:,:,0,0]**2 + Ys_ds[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_ds.squeeze())
plt.show()

We can also listen to the beamformed signal and compare it with the noisy one:

from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_ds.squeeze(),rate=fs)

Speech corrupted by directive noise

This is trickier when directive noise is present, because GCC-PHAT will capture the TDOAs of the noise source. For now, we simply assume the TDOAs are known, but an ideal binary mask could be applied to disentangle the speech TDOAs from the noise TDOAs.

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import DelaySum

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
delaysum = DelaySum()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
XXs = cov(Xs)
tdoas = gccphat(XXs)

Xs = stft(xs_localized_noise)
XXs = cov(Xs)
Ys_ds = delaysum(Xs, tdoas)
ys_ds = istft(Ys_ds)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_ds[0,:,:,0,0]**2 + Ys_ds[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_ds.squeeze())
plt.show()

We can also listen to the beamformed signal and compare it with the noisy one:

from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_ds.squeeze(),rate=fs)

Minimum Variance Distortionless Response (MVDR)

The STFT converts the signals to the frequency domain, and Covariance computes the covariance matrix of each frequency bin. The GccPhat module estimates the time differences of arrival (TDOAs) between microphones, and the TDOAs are then used to perform MVDR beamforming.

Speech corrupted by diffuse noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import Mvdr

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
mvdr = Mvdr()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
Nn = stft(nn_diff)
NNs = cov(Nn)
XXs = cov(Xs)
tdoas = gccphat(XXs)
Ys_mvdr = mvdr(Xs, NNs, tdoas)
ys_mvdr = istft(Ys_mvdr)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_mvdr[0,:,:,0,0]**2 + Ys_mvdr[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_mvdr.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_mvdr.squeeze(),rate=fs)

Speech corrupted by directive noise

Once again, this is trickier when directive noise is present, because GCC-PHAT will capture the TDOAs of the noise source. For now, we simply assume the TDOAs are known, but an ideal binary mask could be applied to disentangle the speech TDOAs from the noise TDOAs.

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import GccPhat
from speechbrain.processing.multi_mic import Mvdr

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gccphat = GccPhat()
mvdr = Mvdr()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
Nn = stft(nn_loc)
XXs = cov(Xs)
NNs = cov(Nn)
tdoas = gccphat(XXs)

Xs = stft(xs_localized_noise)
Ys_mvdr = mvdr(Xs, NNs, tdoas)
ys_mvdr = istft(Ys_mvdr)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_mvdr[0,:,:,0,0]**2 + Ys_mvdr[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_mvdr.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_mvdr.squeeze(),rate=fs)

Generalized Eigenvalue (GEV) Beamforming

The STFT converts the signals to the frequency domain, and Covariance computes the covariance matrix of each frequency bin. We assume the covariance matrices of the speech and of the noise can be computed separately and used for beamforming. These covariance matrices could be estimated using an ideal binary mask.
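
Since the clean speech and noise are available in this tutorial, an ideal binary mask could be derived directly from their STFTs. A hedged sketch, assuming the [batch, time, bins, 2, channels] layout produced by the STFT above (index 3 holds the real and imaginary parts):

import torch

def ideal_binary_mask(Ss, Nn):
    """1 where the speech power exceeds the noise power, else 0."""
    pow_s = Ss[..., 0, :] ** 2 + Ss[..., 1, :] ** 2   # speech power
    pow_n = Nn[..., 0, :] ** 2 + Nn[..., 1, :] ** 2   # noise power
    return (pow_s > pow_n).float()                    # [batch, time, bins, mics]

Such a mask could weight the per-frame outer products, as in the mask-based covariance expressions from the introduction. Below, we instead use the separately known speech and noise signals directly.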

Speech corrupted by diffuse noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import Gev

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gev = Gev()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_diffused_noise)
Ss = stft(ss)
Nn = stft(nn_diff)
SSs = cov(Ss)
NNs = cov(Nn)
Ys_gev = gev(Xs, SSs, NNs)
ys_gev = istft(Ys_gev)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_diffused_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_gev[0,:,:,0,0]**2 + Ys_gev[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_gev.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_diffused_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_gev.squeeze(),rate=fs)

Speech corrupted by directive noise

from speechbrain.processing.features import STFT, ISTFT
from speechbrain.processing.multi_mic import Covariance
from speechbrain.processing.multi_mic import Gev

import matplotlib.pyplot as plt
import torch

stft = STFT(sample_rate=fs)
cov = Covariance()
gev = Gev()
istft = ISTFT(sample_rate=fs)

Xs = stft(xs_localized_noise)
Ss = stft(ss)
Nn = stft(nn_loc)
SSs = cov(Ss)
NNs = cov(Nn)
Ys_gev = gev(Xs, SSs, NNs)
ys_gev = istft(Ys_gev)

plt.figure(1)
plt.title('Noisy signal at microphone 1')
plt.imshow(torch.transpose(torch.log(Xs[0,:,:,0,0]**2 + Xs[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(2)
plt.title('Noisy signal at microphone 1')
plt.plot(xs_localized_noise.squeeze()[:,0])
plt.figure(3)
plt.title('Beamformed signal')
plt.imshow(torch.transpose(torch.log(Ys_gev[0,:,:,0,0]**2 + Ys_gev[0,:,:,1,0]**2), 1, 0), origin="lower")
plt.figure(4)
plt.title('Beamformed signal')
plt.plot(ys_gev.squeeze())
plt.show()
from IPython.display import Audio
Audio(xs_localized_noise.squeeze()[:,0],rate=fs)
from IPython.display import Audio
Audio(ys_gev.squeeze(),rate=fs)

Citing SpeechBrain

If you use SpeechBrain for research or business, please cite it using the following BibTeX entries:

@misc{speechbrainV1,
  title={Open-Source Conversational AI with {SpeechBrain} 1.0},
  author={Mirco Ravanelli and Titouan Parcollet and Adel Moumen and Sylvain de Langen and Cem Subakan and Peter Plantinga and Yingzhi Wang and Pooneh Mousavi and Luca Della Libera and Artem Ploujnikov and Francesco Paissan and Davide Borra and Salah Zaiem and Zeyu Zhao and Shucong Zhang and Georgios Karakasidis and Sung-Lin Yeh and Pierre Champion and Aku Rouhe and Rudolf Braun and Florian Mai and Juan Zuluaga-Gomez and Seyed Mahed Mousavi and Andreas Nautsch and Xuechen Liu and Sangeet Sagar and Jarod Duret and Salima Mdhaffar and Gaelle Laperriere and Mickael Rouvier and Renato De Mori and Yannick Esteve},
  year={2024},
  eprint={2407.00463},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2407.00463},
}
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}