A review on dropout regularization approaches for deep neural networks within the scholarly domain

I Salehin, DK Kang - Electronics, 2023 - mdpi.com
Dropout is one of the most popular regularization methods in the scholarly domain for
preventing a neural network model from overfitting in the training phase. Developing an …
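The mechanism under review is simple to state: during training each unit is kept with probability 1-p, and in the common "inverted" variant the surviving activations are rescaled by 1/(1-p) so that no adjustment is needed at inference. A minimal NumPy sketch of that variant (the array sizes and the rate p=0.5 are illustrative, not taken from the paper):

```python
import numpy as np

def inverted_dropout(h, p, rng):
    """Zero each activation with probability p and rescale the
    survivors by 1/(1-p), so the expected activation is unchanged
    and inference can use the network as-is."""
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))                 # a batch of hidden activations
h_train = inverted_dropout(h, p=0.5, rng=rng)   # training: stochastic masking
h_test = h                                      # inference: identity
```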

Do not have enough data? Deep learning to the rescue!

A Anaby-Tavor, B Carmeli, E Goldbraich… - Proceedings of the AAAI …, 2020 - aaai.org
Based on recent advances in natural language modeling and text generation, we propose a novel data augmentation method for text classification tasks. We …

Survey of dropout methods for deep neural networks

A Labach, H Salehinejad, S Valaee - arXiv preprint arXiv:1904.13310, 2019 - arxiv.org
Dropout methods are a family of stochastic techniques used in neural network training or
inference that have generated significant research interest and are widely used in practice …
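One well-known inference-time member of this family is Monte Carlo dropout, where the stochastic masks stay active at test time and predictions are averaged over several passes to estimate uncertainty. A hedged sketch of that idea (the toy two-layer model and the 50-sample count are assumptions, not from the survey):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.standard_normal((8, 16)), rng.standard_normal((16, 3))

def forward(x, p=0.5):
    """One stochastic pass: dropout remains active at inference."""
    h = np.maximum(x @ W1, 0.0)                    # ReLU hidden layer
    h = h * (rng.random(h.shape) >= p) / (1 - p)   # inverted dropout
    return h @ W2

x = rng.standard_normal((1, 8))
samples = np.stack([forward(x) for _ in range(50)])    # 50 MC passes
mean, std = samples.mean(axis=0), samples.std(axis=0)  # prediction + uncertainty
```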

Gmail smart compose: Real-time assisted writing

MX Chen, BN Lee, G Bansal, Y Cao, S Zhang… - Proceedings of the 25th …, 2019 - dl.acm.org
In this paper, we present Smart Compose, a novel system for generating interactive, real-time suggestions in Gmail that assists users in writing emails by reducing repetitive typing. In …

Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs

RL Murphy, B Srinivasan, V Rao, B Ribeiro - arXiv preprint arXiv …, 2018 - arxiv.org
We consider a simple and overarching representation for permutation-invariant functions of
sequences (or multiset functions). Our approach, which we call Janossy pooling, expresses …
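The core construction is easy to state: a permutation-invariant function is obtained by applying a permutation-sensitive function to every ordering of the input and averaging the results (the paper also develops tractable approximations such as permutation sampling). A toy exhaustive version, with an assumed order-sensitive scoring function standing in for, e.g., an RNN:

```python
import numpy as np
from itertools import permutations

def phi(seq):
    """A deliberately order-sensitive function of a sequence:
    a position-weighted sum (a stand-in for an RNN or similar)."""
    w = np.arange(1, len(seq) + 1)
    return float(np.dot(w, seq))

def janossy_pool(items):
    """Average phi over all orderings of the multiset; the result is
    permutation-invariant by construction. Exhaustive enumeration is
    only tractable for small inputs, hence the paper's approximations."""
    perms = list(permutations(items))
    return sum(phi(p) for p in perms) / len(perms)

print(janossy_pool([1.0, 2.0, 3.0]))  # identical output...
print(janossy_pool([3.0, 1.0, 2.0]))  # ...for any input ordering
```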

Co-regularized alignment for unsupervised domain adaptation

A Kumar, P Sattigeri, K Wadhawan… - Advances in neural …, 2018 - proceedings.neurips.cc
Deep neural networks, trained with large amounts of labeled data, can fail to generalize well when tested on examples from a target domain whose distribution differs from the training …

Evolutionary stochastic gradient descent for optimization of deep neural networks

X Cui, W Zhang, Z Tüske… - Advances in neural …, 2018 - proceedings.neurips.cc
We propose a population-based Evolutionary Stochastic Gradient Descent (ESGD)
framework for optimizing deep neural networks. ESGD combines SGD and gradient-free …
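The abstract's pairing of SGD with gradient-free search suggests the familiar population pattern: each candidate trains with a few gradient steps, the population is then ranked by fitness, and the best members seed the next generation. A generic sketch of that pattern on a toy quadratic objective (population size, step counts, and mutation noise are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
target = rng.standard_normal(10)

def loss(w):                        # toy objective standing in for a network's loss
    return float(np.sum((w - target) ** 2))

def sgd_steps(w, n=5, lr=0.05):     # gradient phase: a few plain SGD updates
    for _ in range(n):
        w = w - lr * 2.0 * (w - target)
    return w

pop = [rng.standard_normal(10) for _ in range(8)]   # population of candidates
for gen in range(20):
    pop = [sgd_steps(w) for w in pop]               # each member trains with SGD
    pop.sort(key=loss)                              # gradient-free phase: rank by fitness
    parents = pop[:4]                               # keep the fittest members
    children = [p + 0.1 * rng.standard_normal(10) for p in parents]  # mutate
    pop = parents + children

print(loss(pop[0]))   # best candidate after interleaved SGD + evolution
```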

IAUnet: Global context-aware feature learning for person reidentification

R Hou, B Ma, H Chang, X Gu, S Shan… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Person reidentification (reID) with convolutional neural network (CNN)-based models has achieved favorable performance in recent years. However, most existing CNN-based …

SCL-CVD: Supervised contrastive learning for code vulnerability detection via GraphCodeBERT

R Wang, S Xu, Y Tian, X Ji, X Sun, S Jiang - Computers & Security, 2024 - Elsevier
Detecting vulnerabilities in source code is crucial for protecting software systems from
cyberattacks. Pre-trained language models such as CodeBERT and GraphCodeBERT have …
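The supervised contrastive objective named in the title (Khosla et al.'s formulation, here applied to code embeddings) pulls same-label representations together and pushes different-label ones apart. A minimal NumPy version of that loss; the embedding size, temperature, and labels are made-up, and this is the generic loss rather than SCL-CVD's full pipeline:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: for each anchor, positives are the
    other samples sharing its label; all non-anchor samples appear in
    the denominator."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize embeddings
    sim = z @ z.T / tau
    n = len(labels)
    total = 0.0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        pos = [j for j in others if labels[j] == labels[i]]
        if not pos:
            continue
        denom = np.logaddexp.reduce([sim[i, j] for j in others])  # log-sum-exp
        total += -np.mean([sim[i, p] - denom for p in pos])
    return total / n

rng = np.random.default_rng(3)
z = rng.standard_normal((6, 4))          # e.g., pooled code embeddings
labels = np.array([0, 0, 1, 1, 0, 1])    # e.g., vulnerable / non-vulnerable
print(supcon_loss(z, labels))
```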

Simplified multilayer graph convolutional networks with dropout

F Yang, H Zhang, S Tao - Applied Intelligence, 2022 - Springer
Graph convolutional networks (GCNs) and their variants are excellent deep learning
methods for graph-structured data. Moreover, multilayer GCNs can perform feature …
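A GCN layer propagates node features through the normalized adjacency, H' = sigma(A_hat H W), and dropout is typically applied to the features between layers. A two-layer sketch in NumPy; the toy graph and layer sizes are assumptions, and this is the standard GCN rule rather than the paper's specific simplification:

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0, 1, 1, 0],   # toy undirected graph (adjacency matrix)
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                     # add self-loops
d = A_hat.sum(axis=1)
A_hat = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization D^-1/2 A D^-1/2

X = rng.standard_normal((4, 5))           # node features
W1, W2 = rng.standard_normal((5, 8)), rng.standard_normal((8, 2))

def dropout(h, p=0.5):
    return h * (rng.random(h.shape) >= p) / (1 - p)

H = np.maximum(A_hat @ X @ W1, 0.0)       # layer 1: relu(A_hat X W1)
H = dropout(H)                            # dropout between the two layers
logits = A_hat @ H @ W2                   # layer 2: A_hat H W2
```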