Fine-tuning pre-trained models has proven effective across a wide range of NLP tasks. However, fine-tuning the whole model is parameter-inefficient, as it always …
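The parameter-efficiency point can be made concrete with a small sketch: rather than updating every weight of a pretrained network, one freezes the backbone and trains only a lightweight task head. The encoder below is a hypothetical stand-in module (not any particular pretrained model), used only to illustrate the trainable-parameter count; this is a minimal sketch, not the method of any paper listed here.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained encoder; a real PLM would be loaded here.
class Encoder(nn.Module):
    def __init__(self, d_model: int = 768):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        return self.layers(x)

encoder = Encoder()

# Freeze every backbone parameter; full fine-tuning would update all of these.
for p in encoder.parameters():
    p.requires_grad = False

# Only this small classification head is trained.
head = nn.Linear(768, 2)

total = sum(p.numel() for p in encoder.parameters()) + sum(p.numel() for p in head.parameters())
trainable = sum(p.numel() for p in head.parameters())
print(f"trainable: {trainable:,} / total: {total:,}")  # the head is a tiny fraction of the model
```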
With the rapid growth of online video platforms and the escalating volume of video content, the demand for capable video understanding tools has intensified markedly. Given …
Recent progress in large-scale pre-training has led to the development of advanced vision-language models (VLMs) with remarkable proficiency in comprehending and generating …
HG Souto, A Moradi - Expert Systems with Applications, 2024 - Elsevier
This paper investigates the application of neural basis expansion analysis with exogenous variables (NBEATSx) in the prediction of daily stock realized volatility for various time steps …
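For context, daily realized volatility is commonly estimated as the square root of the sum of squared intraday log returns. The sketch below computes it from an intraday price series with pandas; the 5-minute sampling frequency, synthetic data, and function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import pandas as pd

def daily_realized_volatility(prices: pd.Series) -> pd.Series:
    """Daily realized volatility from an intraday price series.

    `prices` is assumed to be indexed by timestamp (e.g. 5-minute bars);
    RV_t = sqrt(sum_i r_{t,i}^2) over the intraday log returns of day t.
    """
    log_returns = np.log(prices).diff().dropna()
    rv = log_returns.pow(2).groupby(log_returns.index.date).sum()
    return np.sqrt(rv)

# Example with synthetic 5-minute prices spanning a few calendar days.
idx = pd.date_range("2024-01-02 09:30", periods=500, freq="5min")
prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0, 1e-3, len(idx)))), index=idx)
print(daily_realized_volatility(prices))
```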
Graph neural networks are widely used tools for graph prediction tasks. Motivated by their empirical performance, prior works have developed generalization bounds for graph neural …
Video summarization aims to distill the most important information from a source video into either an abridged video clip or a textual narrative. Existing methods often treat the …
S Shen, Y Zhou, B Wei, EI Chang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Existing fine-tuning methods for computer vision tasks primarily focus on re-weighting the knowledge learned from the source domain during pre-training. They aim to retain beneficial …
The advent of large-scale pretrained language models (PLMs) has contributed greatly to progress in natural language processing (NLP). Despite their recent success and wide …
Video summarization aims to create short, accurate, and cohesive summaries of longer videos. Despite the existence of various video summarization datasets, a notable limitation …