When brain-inspired AI meets AGI

L Zhao, L Zhang, Z Wu, Y Chen, H Dai, X Yu, Z Liu… - Meta-Radiology, 2023 - Elsevier
Abstract Artificial General Intelligence (AGI) has been a long-standing goal of humanity, with
the aim of creating machines capable of performing any intellectual task that humans can …

Engineering a less artificial intelligence

FH Sinz, X Pitkow, J Reimer, M Bethge, AS Tolias - Neuron, 2019 - cell.com
Despite enormous progress in machine learning, artificial neural networks still lag behind
brains in their ability to generalize to new situations. Given identical training data …

Graph neural networks: foundation, frontiers and applications

L Wu, P Cui, J Pei, L Zhao, X Guo - … of the 28th ACM SIGKDD Conference …, 2022 - dl.acm.org
The field of graph neural networks (GNNs) has seen rapid and incredible strides over the
recent years. Graph neural networks, also known as deep learning on graphs, graph …

Involution: Inverting the inherence of convolution for visual recognition

D Li, J Hu, C Wang, X Li, Q She, L Zhu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Convolution has been the core ingredient of modern neural networks, triggering the surge of
deep learning in vision. In this work, we rethink the inherent principles of standard …

Evolutionary deep learning: A survey

ZH Zhan, JY Li, J Zhang - Neurocomputing, 2022 - Elsevier
As an advanced artificial intelligence technique for solving learning problems, deep learning
(DL) has achieved great success in many real-world applications and attracted increasing …

Masked language modeling and the distributional hypothesis: Order word matters pre-training for little

K Sinha, R Jia, D Hupkes, J Pineau, A Williams… - arXiv preprint arXiv …, 2021 - arxiv.org
A possible explanation for the impressive performance of masked language model (MLM)
pre-training is that such models have learned to represent the syntactic structures prevalent …

Single path one-shot neural architecture search with uniform sampling

Z Guo, X Zhang, H Mu, W Heng, Z Liu, Y Wei… - Computer Vision–ECCV …, 2020 - Springer
We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its
advantages over existing NAS approaches. Existing one-shot method, however, is hard to …

What's hidden in a randomly weighted neural network?

V Ramanujan, M Wortsman… - Proceedings of the …, 2020 - openaccess.thecvf.com
Training a neural network is synonymous with learning the values of the weights. By
contrast, we demonstrate that randomly weighted neural networks contain subnetworks …

Efficient graph generation with graph recurrent attention networks

R Liao, Y Li, Y Song, S Wang… - Advances in neural …, 2019 - proceedings.neurips.cc
We propose a new family of efficient and expressive deep generative models of graphs,
called Graph Recurrent Attention Networks (GRANs). Our model generates graphs one …

DARTS+: Improved differentiable architecture search with early stopping

H Liang, S Zhang, J Sun, X He, W Huang… - arXiv preprint arXiv …, 2019 - arxiv.org
Recently, there has been a growing interest in automating the process of neural architecture
design, and the Differentiable Architecture Search (DARTS) method makes the process …