| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Large language models are zero-shot reasoners | T Kojima, SS Gu, M Reid, Y Matsuo, Y Iwasawa | Advances in Neural Information Processing Systems 35, 22199-22213, 2022 | 2516 | 2022 |
| Robustifying Vision Transformer without Retraining from Scratch by Test-Time Class-Conditional Feature Alignment | T Kojima, Y Matsuo, Y Iwasawa | Proceedings of the 31st International Joint Conference on Artificial …, 2022 | 25 | 2022 |
| Unnatural error correction: GPT-4 can almost perfectly handle unnatural scrambled text | Q Cao, T Kojima, Y Matsuo, Y Iwasawa | Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023 | 6 | 2023 |
| Making use of latent space in language GANs for generating diverse text without pre-training | T Kojima, Y Iwasawa, Y Matsuo | Proceedings of the 16th Conference of the European Chapter of the …, 2021 | 3 | 2021 |
| On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons | T Kojima, I Okimura, Y Iwasawa, H Yanaka, Y Matsuo | arXiv preprint arXiv:2404.02431, 2024 | 2 | 2024 |
| Robustifying Vision Transformer Without Retraining from Scratch Using Attention-Based Test-Time Adaptation | T Kojima, Y Iwasawa, Y Matsuo | New Generation Computing 41 (1), 5-24, 2023 | 1 | 2023 |
| Cycle Sketch GAN: Unpaired Sketch to Sketch Translation Based on Cycle GAN Algorithm | T Kojima | Proceedings of the 33rd Annual Conference of JSAI, 3B3E203, 2019 | | 2019 |