Do LLMs Exhibit Human-Like Reasoning? Evaluating Theory of Mind in LLMs for Open-Ended Responses. M Amirizaniani, E Martin, M Sivachenko, A Mashhadi, C Shah. arXiv preprint arXiv:2406.05659, 2024. Cited by 3.

Can LLMs Reason Like Humans? Assessing Theory of Mind Reasoning in LLMs for Open-Ended Questions. M Amirizaniani, E Martin, M Sivachenko, A Mashhadi, C Shah. Proceedings of the 33rd ACM International Conference on Information and …, 2024. Cited by 1.

E2T2: Emote Embedding for Twitch Toxicity Detection. K Moosavi, E Martin, MA Ahmad, A Mashhadi. Companion Publication of the 2024 Conference on Computer-Supported …, 2024.

AuditLLM: A Tool for Auditing Large Language Models Using Multiprobe Approach. M Amirizaniani, E Martin, T Roosta, A Chadha, C Shah. arXiv preprint arXiv:2402.09334, 2024.