Abstract
This thesis investigates the dual challenges of communication efficiency and Byzantine robustness in decentralized multi-agent reinforcement learning. We develop algorithms that reduce inter-agent communication overhead while preserving convergence guarantees even in the presence of adversarial (Byzantine) agents, bridging theoretical foundations with practical protocol design for large-scale distributed systems.