PlinyCompute: A platform for high-performance, distributed, data-intensive tool development

J Zou, RM Barnett, T Lorido-Botran, S Luo… - Proceedings of the …, 2018 - dl.acm.org
This paper describes PlinyCompute, a system for development of high-performance, data-
intensive, distributed computing tools and libraries. In the large, PlinyCompute …

Bringing geospatial data closer to mobile users: A caching approach based on vector tiles for wireless multihop scenarios

C Li, H Lu, Y Xiang, Z Liu, W Yang… - Mobile Information …, 2018 - Wiley Online Library
Mobile applications based on geospatial data are now extensively used to support
people's daily activities. Despite the potential overlap among nearby users' geospatial data …

A high-performance processing system for monitoring stock market data stream

K Li, D Fernandez, D Klingler, Y Gao, J Rivera… - Proceedings of the 16th …, 2022 - dl.acm.org
High-performance real-time monitoring of the stock market data stream is one of the
challenging use cases of stream processing systems. By monitoring real-time stock price …

Declarative Relational Machine Learning Systems

D Jankov - 2023 - search.proquest.com
Several systems, most notably TensorFlow and PyTorch, have revolutionized how we
practice machine learning (ML). They allow an ML practitioner to create complex models …

Elastic cocoa: Scaling in to improve convergence

M Kaufmann, T Parnell, K Kourtis - arXiv preprint arXiv:1811.02322, 2018 - arxiv.org
In this paper we experimentally analyze the convergence behavior of CoCoA and show that
the number of workers required to achieve the highest convergence rate at any point in time …

[PDF] Αναστασιος Κυριλλιδης

CA Uribe - 2023 - repository.rice.edu
In recent years, ML models have been growing rapidly in terms of the number of parameters. For
example, the large transformer model GPT-1 debuted with an impressive 117 million …

Parallel training of machine learning models

N Ioannou, C Duenner, T Parnell - US Patent 11,573,803, 2023 - Google Patents
Parallel training of a machine learning model on a computerized system is described.
Computing tasks of a system can be assigned to multiple workers of the system. Training …

Elastic training of machine learning models via re-partitioning based on feedback from the training algorithm

M Kaufmann, T Parnell, AK Kourtis - US Patent 11,886,960, 2024 - Google Patents
Parallel training of a machine learning model on a computerized system may be provided.
Computing tasks can be assigned to multiple workers of a system. A method may include …

[PDF] Performance of serialized object processing in Java

H Hagberg - 2019 - aaltodoc.aalto.fi
Distributed systems have become increasingly common. In these systems, multiple
nodes communicate with each other, typically over a network, which requires serialization of …

[PDF] Live Processing of a Distributed Camera Network

P MOREAU - 2019 - bu.edu
The shark crisis on the island of La Réunion requires innovative measures to secure sensitive
surfing and recreation shores. The CRA research center is developing a detection algorithm …