April 14 – April 17

“Timely and Private Machine Learning over Networks” is a workshop (WS16) organized for the IEEE WCNC 2024 conference in Dubai, United Arab Emirates.

Burgeoning advances in machine learning and wireless technologies are forging a new paradigm for future networks, which are expected to attain higher degrees of intelligence by drawing inferences from extensive data sets and responding promptly to local events. Owing to the sheer volume of data generated by end devices and growing concerns about sharing private information, on-device machine learning frameworks such as federated learning have emerged at the intersection of artificial intelligence and edge computing. In contrast to conventional machine learning methods, this distributed learning paradigm brings the statistical models directly onto the devices for local training; only intermediate parameters (e.g., gradients) are exchanged among the participating agents for aggregation and model improvement. Keeping local copies of the model on the devices offers the dual advantages of reducing network latency and preserving data privacy.

However, realizing such a scheme entails addressing new challenges that require a fundamental departure from standard methods designed for distributed optimization. Specifically, in many applications the features and states associated with an agent vary over time, so the learned parameters must be updated accordingly to maintain the desired inference accuracy. Yet the agents in a network typically differ in processing power and local dataset size, and the time they spend on local training therefore varies. Moreover, owing to differences in communication bandwidth, the parameters delivered from one agent to the others may be outdated. In addition, inter-node communications are often affected by channel noise and fading, leading to transmission failures and/or corruption of the received messages.

On the one hand, these factors exacerbate information lag across the learning agents, which can degrade system performance in terms of convergence rate and prediction accuracy, especially in real-time applications. On the other hand, the corrupted and stale information also enhances end users’ privacy, since instantaneous, accurate information becomes inaccessible. To that end, this workshop aims to foster discussion, discovery, and dissemination of novel ideas and approaches at the interplay between timeliness and privacy in machine learning over networks. We solicit high-quality original papers on topics including, but not limited to:
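To make the setting concrete, the following is a minimal sketch (not part of the call itself) of federated averaging in which agents train locally on private data, exchange only parameters, and the server down-weights updates that arrive with staleness τ. The quadratic toy loss, the 1/(1+τ) weighting rule, and all function names are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def local_step(w, data, lr=0.1):
    """One gradient step on the toy loss ||w - mean(data)||^2 / 2 (assumed)."""
    grad = w - data.mean()
    return w - lr * grad

def staleness_weight(tau):
    """Illustrative rule: updates with larger delay tau contribute less."""
    return 1.0 / (1.0 + tau)

def aggregate(updates):
    """updates: list of (parameter, staleness); staleness-weighted average."""
    weights = np.array([staleness_weight(tau) for _, tau in updates])
    params = np.array([w for w, _ in updates])
    return float((weights * params).sum() / weights.sum())

rng = np.random.default_rng(0)
# Three agents with heterogeneous private data (different local means).
agents = [rng.normal(loc=m, scale=0.5, size=20) for m in (1.0, 2.0, 3.0)]
w_global = 0.0
for _ in range(100):
    # Each agent trains locally; simulated staleness tau_i in {0, 1, 2}.
    updates = [(local_step(w_global, d), tau) for tau, d in enumerate(agents)]
    w_global = aggregate(updates)
print(w_global)
```

In this sketch the global model converges to a staleness-weighted mixture of the agents' local optima, illustrating how information lag biases the learned model away from the fresh agents' data.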

  • Robust distributed learning algorithms against staleness in information exchange
  • Timeliness-aware distributed algorithms for networked machine learning systems
  • Fundamental limits of network parameters on the performance of distributed learning systems
  • Networking protocols to improve timeliness and privacy in distributed learning
  • Impact of network topology on the timeliness of distributed machine learning algorithms
  • Robust and private distributed reinforcement/meta/deep learning and other novel learning paradigms
  • Novel methods for distributed machine learning with limited communication resources