Uniform Convergence of Deep Neural Networks With Lipschitz Continuous Activation Functions and Variable Widths
Yuesheng Xu, Haizhang Zhang
We consider deep neural networks (DNNs) with a Lipschitz continuous activation function and with weight matrices of variable widths. We establish a uniform convergence analysis framework in which sufficient conditions on weight matrices and...
Xiaoda Qu, Xiran Fan, Baba C Vemuri
Distributional approximation is a fundamental problem in machine learning with numerous applications across all fields of science and engineering and beyond. The key challenge in most approximation methods is the need to tackle the intracta...
Matrix Reordering for Noisy Disordered Matrices: Optimality and Computationally Efficient Algorithms
T Tony Cai, Rong Ma
Motivated by applications in single-cell biology and metagenomics, we investigate the problem of matrix reordering based on a noisy disordered monotone Toeplitz matrix model. We establish the fundamental statistical limit for this problem i...
Non-Asymptotic Guarantees for Reliable Identification of Granger Causality via the LASSO
Proloy Das, Behtash Babadi
Granger causality is among the widely used data-driven approaches for causal analysis of time series data with applications in various areas including economics, molecular biology, and neuroscience. Two of the main challenges of this method...
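The LASSO-based approach summarized above can be sketched in a few lines: regress one series on lagged values of all series with an ℓ1 penalty, and read Granger-causal candidates off the nonzero coefficients. The simulated system, lag order, and penalty level below are illustrative choices, not the paper's setup or guarantees:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, p = 400, 3          # time points, number of series
L = 2                  # VAR lag order (illustrative choice)

# Simulate a simple VAR system in which series 0 drives series 1.
X = np.zeros((T, p))
X[0] = rng.normal(size=p)
for t in range(1, T):
    X[t] = 0.4 * X[t - 1]
    X[t, 1] += 0.5 * X[t - 1, 0]      # series 0 Granger-causes series 1
    X[t] += rng.normal(size=p)

# Lagged design matrix: row t holds [x_{t-1}, ..., x_{t-L}] for all series.
Z = np.hstack([X[L - k - 1:T - k - 1] for k in range(L)])
y = X[L:, 1]                          # predict series 1

coef = Lasso(alpha=0.15).fit(Z, y).coef_.reshape(L, p)
# A nonzero coefficient on any lag of series j flags j as a causal candidate.
selected = sorted({j for l in range(L) for j in range(p)
                   if abs(coef[l, j]) > 1e-3})
print(selected)
```

With this setup the Lasso typically recovers the lagged dependence of series 1 on series 0 (and on its own past) while excluding the unrelated series.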
On Support Recovery with Sparse CCA: Information Theoretic and Computational Limits
Nilanjana Laha, Rajarshi Mukherjee
In this paper, we consider asymptotically exact support recovery in the context of high dimensional and sparse Canonical Correlation Analysis (CCA). Our main results describe four regimes of interest based on information theoretic and compu...
Classification logit two-sample testing by neural networks for differentiating near manifold densities
Xiuyuan Cheng, Alexander Cloninger
The recent success of generative adversarial networks and variational learning suggests that training a classification network may work well in addressing the classical two-sample problem, which asks to differentiate two densities given fin...
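The classification-based two-sample idea can be illustrated with an ordinary logistic classifier standing in for the paper's neural network: train it to label which sample each point came from, then treat held-out accuracy significantly above 1/2 as evidence that the two densities differ. The Gaussian mean-shift data below is a toy assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, d = 1000, 5
P = rng.normal(0.0, 1.0, size=(n, d))   # samples from density p
Q = rng.normal(0.4, 1.0, size=(n, d))   # samples from density q (shifted mean)

X = np.vstack([P, Q])
y = np.r_[np.zeros(n), np.ones(n)]       # label = which sample each point is from
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

clf = LogisticRegression().fit(Xtr, ytr)
acc = clf.score(Xte, yte)
# Held-out accuracy well above 0.5 is evidence the two densities differ;
# accuracy near 0.5 is consistent with p = q.
print(round(acc, 3))
```

In practice the test statistic is calibrated (e.g. by permutation) rather than eyeballed, and the paper's analysis concerns network logits rather than a linear classifier.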
Mechanisms for Hiding Sensitive Genotypes with Information-Theoretic Privacy
Fangwei Ye, Hyunghoon Cho, Salim El Rouayheb
Motivated by the growing availability of personal genomics services, we study an information-theoretic privacy problem that arises when sharing genomic data: a user wants to share his or her genome sequence while keeping the genotypes at ce...
Orbit Structure of Grassmannian G_{2,m} and a Decoder for Grassmann Code C(2, m)
Fernando L Piñero, Prasant Singh
In this article, we consider decoding Grassmann codes, linear codes associated to the Grassmannian and its embedding in a projective space. We look at the orbit structure of the Grassmannian arising from the multiplicative group F_{q^m}^* in ...
Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference
T Tony Cai, Anru R Zhang, Yuchen Zhou
We study sparse group Lasso for high-dimensional double sparse linear regression, where the parameter of interest is simultaneously element-wise and group-wise sparse. This problem is an important instance of the simultaneously structured m...
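A concrete way to see the double sparsity is through the sparse group Lasso penalty lam1*||b||_1 + lam2*sum_g ||b_g||_2, whose proximal operator composes element-wise soft-thresholding with group-wise shrinkage. This is a minimal sketch of that standard operator; the groups and penalty levels are illustrative, not the paper's tuning:

```python
import numpy as np

def sparse_group_lasso_prox(beta, groups, lam1, lam2):
    """Proximal operator of lam1*||b||_1 + lam2*sum_g ||b_g||_2:
    element-wise soft-thresholding followed by group-wise shrinkage."""
    # L1 step: soft-threshold each coordinate at lam1.
    out = np.sign(beta) * np.maximum(np.abs(beta) - lam1, 0.0)
    # Group step: shrink each group's L2 norm by lam2, zeroing small groups.
    for g in groups:
        norm = np.linalg.norm(out[g])
        out[g] = 0.0 if norm <= lam2 else out[g] * (1 - lam2 / norm)
    return out

beta = np.array([3.0, -0.5, 0.2, 4.0, 0.1, -0.1])
groups = [slice(0, 3), slice(3, 6)]     # two groups of three coefficients
print(sparse_group_lasso_prox(beta, groups, lam1=1.0, lam2=1.0))
```

The L1 step zeroes small individual coefficients; the group step can zero a whole group at once, which is exactly the simultaneous element-wise and group-wise sparsity the abstract describes.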
Yuchen Zhou, Anru R Zhang, Lili Zheng et al.
This paper studies a general framework for high-order tensor SVD. We propose a new computationally efficient algorithm, tensor-train orthogonal iteration (TTOI), that aims to estimate the low tensor-train rank structure from the noisy high-...