Abstract
<jats:p>This paper presents a structured and pragmatic analysis of several aggregation methods for asynchronous centralized federated learning: classical federated averaging (FedAvg), asynchronous gradient/parameter updates (FedAsync-style methods), robust aggregation based on coordinate-wise medians, and adaptive optimizers such as FedAdam and FedYogi. The study focuses on their suitability under heterogeneous client timing, update staleness, non-IID (not independent and identically distributed) data partitions, and the limited time budgets of distributed real-time systems. A sensitivity analysis of the methods to delayed and stale updates is performed, and the communication and computational costs at the central node are assessed. A semi-synthetic experimental study is conducted on non-IID datasets with failure modes including node shutdown and system clock drift. Comparative findings show that FedAvg degrades sharply under high staleness and skewed participation; robust aggregation can unexpectedly amplify the influence of stale but structurally consistent updates; and adaptive methods exhibit a nontrivial tension between rapid convergence and instability when delay patterns change over time.</jats:p>
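To make the contrast between the aggregation families concrete, the following is a minimal illustrative sketch (not the paper's implementation; the decay schedule, parameter names, and constants are assumptions) of a FedAsync-style staleness-weighted server update and a coordinate-wise median aggregator:

```python
# Illustrative sketch only: the mixing rule and decay schedule are assumed,
# not taken from the paper's experimental setup.
import numpy as np

def staleness_weight(tau, alpha=0.6, a=0.5):
    """FedAsync-style mixing weight that decays polynomially with staleness tau."""
    return alpha * (1.0 + tau) ** (-a)

def fedasync_step(global_params, client_params, tau):
    """Mix one (possibly stale) client model into the global model."""
    w = staleness_weight(tau)
    return (1.0 - w) * global_params + w * client_params

def coordinate_median(client_updates):
    """Robust aggregation: per-coordinate median over stacked client parameter vectors."""
    return np.median(np.stack(client_updates, axis=0), axis=0)

# Example with a hypothetical 4-parameter model: one stale client arrives with
# staleness tau=3, then three buffered client models are aggregated robustly.
g = np.zeros(4)
g = fedasync_step(g, np.ones(4), tau=3)
g_robust = coordinate_median([g + 0.1 * np.random.randn(4) for _ in range(3)])
```

In this reading, FedAvg corresponds to equal (or data-size-weighted) averaging of client models per round, while the staleness weight above down-weights delayed updates; the median aggregator illustrates how structurally similar stale updates can still dominate the per-coordinate vote, which is one interpretation of the amplification effect noted in the findings.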