Global convergence of ADMM in nonconvex nonsmooth optimization

Y Wang, W Yin, J Zeng - Journal of Scientific Computing, 2019 - Springer
In this paper, we analyze the convergence of the alternating direction method of multipliers
(ADMM) for minimizing a nonconvex and possibly nonsmooth objective function ϕ(x_0, …
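The snippet stops mid-formula, but the setup it describes makes the iteration easy to illustrate. Below is a minimal sketch of the two-block, scaled-form ADMM loop on a simple lasso instance; the paper's analysis covers far more general nonconvex, nonsmooth objectives and multi-block structure, so this only makes the update pattern concrete. All names and problem data are illustrative.

```python
# Minimal two-block scaled-form ADMM sketch on a lasso instance:
#   min_x 0.5*||A x - b||^2 + lam*||z||_1   s.t.   x - z = 0
# The paper studies much more general (nonconvex, nonsmooth) objectives.
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))        # factor once, reuse
    for _ in range(iters):
        # x-update: minimize 0.5||Ax-b||^2 + (rho/2)||x - z + u||^2
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal operator of (lam/rho)*||.||_1 (soft-thresholding)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual ascent on the constraint residual x - z
        u = u + x - z
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(admm_lasso(A, b), 2))
```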

Accelerating ADMM for efficient simulation and optimization

J Zhang, Y Peng, W Ouyang, B Deng - ACM Transactions on Graphics …, 2019 - dl.acm.org
The alternating direction method of multipliers (ADMM) is a popular approach for solving
optimization problems that are potentially non-smooth and subject to hard constraints. It has been …
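As I read the title and abstract, the paper treats an ADMM sweep as a fixed-point map and accelerates it (with safeguards) using Anderson acceleration. The sketch below shows plain Anderson acceleration on a generic fixed-point map g, which is the core device but not the paper's full method; which variables form the fixed point, and how the fallback works, are details from the paper not reproduced here.

```python
# Generic Anderson acceleration (memory m, no safeguarding) of x <- g(x).
# Plugging an ADMM sweep in as g is the acceleration idea; all names are illustrative.
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []              # history of g(x) values and residuals g(x) - x
    for _ in range(iters):
        gx = g(x)
        f = gx - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        G_hist.append(gx); F_hist.append(f)
        if len(F_hist) > m + 1:          # keep a sliding window of m+1 iterates
            G_hist.pop(0); F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                       # plain iteration until history exists
            continue
        dF = np.column_stack([F_hist[i+1] - F_hist[i] for i in range(len(F_hist) - 1)])
        dG = np.column_stack([G_hist[i+1] - G_hist[i] for i in range(len(G_hist) - 1)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = gx - dG @ gamma              # Anderson-mixed update
    return x

if __name__ == "__main__":
    # toy contraction: g(x) = cos(x) component-wise; fixed point near 0.739
    print(anderson(np.cos, np.ones(3)))
```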

Fast ADMM algorithm for distributed optimization with adaptive penalty

C Song, S Yoon, V Pavlovic - Proceedings of the AAAI Conference on …, 2016 - ojs.aaai.org
We propose new methods to speed up convergence of the Alternating Direction Method of
Multipliers (ADMM), a common optimization tool in the context of large scale and distributed …
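The paper proposes its own adaptive-penalty scheme for the distributed setting, which is not reproduced here. To show what "adaptive penalty" means operationally, the sketch below gives the classic residual-balancing heuristic (He et al. / Boyd et al.): grow ρ when the primal residual dominates, shrink it when the dual residual does.

```python
# Classic residual-balancing rule for ADMM's penalty rho (illustrative, not the
# paper's scheme). primal_res and dual_res are the current residual norms.
def update_rho(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    if primal_res > mu * dual_res:
        return rho * tau          # push harder on feasibility
    if dual_res > mu * primal_res:
        return rho / tau          # let the iterates move more freely
    return rho
```

Note that in scaled form, changing ρ requires rescaling the scaled dual variable (u *= old_rho / new_rho) so the underlying multiplier is unchanged.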

Blockchain in IoT security: a survey

F Alkurdi, I Elgendi, KS Munasinghe… - 2018 28th …, 2018 - ieeexplore.ieee.org
Blockchain shows huge promise for the near future. It is a technology that makes it
possible to generate and share tamper-proof transaction ledgers. Use …

Graph structured autoencoder

A Majumdar - Neural Networks, 2018 - Elsevier
In this work, we introduce the graph regularized autoencoder and propose three variants:
the first is the unsupervised version; the second is tailored for clustering, by …
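The shared ingredient across such variants is typically a graph-Laplacian penalty on the latent codes. The sketch below shows that common formulation (reconstruction error plus tr(Zᵀ L Z), which pulls codes of adjacent samples together); the paper's specific unsupervised, clustering, and supervised variants are not reproduced, and all shapes and names are illustrative.

```python
# Minimal graph-regularized autoencoder loss sketch (common formulation, assumed).
import torch
import torch.nn as nn

class GraphRegularizedAE(nn.Module):
    def __init__(self, dim_in=64, dim_hidden=16):
        super().__init__()
        self.enc = nn.Linear(dim_in, dim_hidden)
        self.dec = nn.Linear(dim_hidden, dim_in)

    def forward(self, x):
        z = torch.relu(self.enc(x))
        return self.dec(z), z

def loss_fn(model, x, laplacian, lam=0.1):
    """x: (n, dim_in) batch; laplacian: (n, n) graph Laplacian over the batch."""
    x_hat, z = model(x)
    recon = ((x - x_hat) ** 2).mean()
    # tr(Z^T L Z) is small when connected samples have similar latent codes
    graph_reg = torch.trace(z.T @ laplacian @ z) / x.shape[0]
    return recon + lam * graph_reg
```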

Discretely-constrained deep network for weakly supervised segmentation

J Peng, H Kervadec, J Dolz, IB Ayed, M Pedersoli… - Neural Networks, 2020 - Elsevier
An efficient strategy for weakly-supervised segmentation is to impose constraints or
regularization priors on target regions. Recent efforts have focused on incorporating such …
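One standard way to impose such priors is a differentiable penalty on the predicted region size, in the spirit of constrained-CNN losses; the paper itself handles discrete constraints with an ADMM-style splitting, which the sketch below does not reproduce. Bounds and names are illustrative.

```python
# Penalty-based size prior for weakly supervised segmentation (illustrative only).
import torch

def size_constraint_penalty(probs, lower, upper):
    """probs: (B, H, W) foreground probabilities; lower/upper: size bounds in pixels."""
    size = probs.sum(dim=(1, 2))                      # soft size of the predicted region
    below = torch.clamp(lower - size, min=0.0) ** 2   # penalize sizes under the bound
    above = torch.clamp(size - upper, min=0.0) ** 2   # penalize sizes over the bound
    return (below + above).mean()
```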

Beyond gradient descent for regularized segmentation losses

D Marin, M Tang, IB Ayed… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
The simplicity of gradient descent (GD) has made it the default method for training ever deeper
and more complex neural networks. Both loss functions and architectures are often explicitly tuned …

Energy efficiency optimization: Joint antenna-subcarrier-power allocation in OFDM-DASs

X Li, X Ge, X Wang, J Cheng… - IEEE Transactions on …, 2016 - ieeexplore.ieee.org
Due to environmental concerns over the rising energy consumption caused by the explosive growth
in demand for wireless multimedia services, energy efficiency has become an important …

Towards understanding the adversarial vulnerability of skeleton-based action recognition

T Zheng, S Liu, C Chen, J Yuan, B Li, K Ren - arXiv preprint arXiv …, 2020 - arxiv.org
Skeleton-based action recognition has attracted increasing attention due to its strong
adaptability to dynamic circumstances and potential for broad applications such as …

An empirical study of ADMM for nonconvex problems

Z Xu, S De, M Figueiredo, C Studer… - arXiv preprint arXiv …, 2016 - arxiv.org
The alternating direction method of multipliers (ADMM) is a common optimization tool for
solving constrained and non-differentiable problems. We provide an empirical study of the …
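To make "ADMM on a nonconvex problem" concrete, here is a sketch of ADMM applied to ℓ0-regularized least squares, min 0.5‖Ax−b‖² + λ‖z‖₀ s.t. x = z, where the z-update is hard-thresholding. This is a standard nonconvex test case, not necessarily one of the paper's benchmarks, and convergence is not guaranteed in general, which is precisely what an empirical study probes.

```python
# ADMM for l0-regularized least squares (nonconvex z-update via hard-thresholding).
import numpy as np

def admm_l0(A, b, lam=0.05, rho=1.0, iters=300):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = AtA + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))
        v = x + u
        z = np.where(v**2 > 2.0 * lam / rho, v, 0.0)   # hard-thresholding prox of the l0 term
        u = u + x - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 15))
    x_true = np.zeros(15); x_true[[2, 7]] = [1.5, -1.0]
    b = A @ x_true
    print(np.round(admm_l0(A, b), 2))
```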