Foundation Model

Introduction to "Cloud-Edge Collaborative Large Models"

In pursuit of open, intelligent, and efficient large AI models, we aim to address the challenges posed by the diverse data and resources distributed across edge devices, which can significantly limit the performance and scalability of large models.

(ICML2024) Causally Motivated Personalized Federated Invariant Learning with Shortcut-Averse Information-Theoretic Regularization

This paper introduces a novel method called FedPIN (Personalized Invariant Federated Learning with Shortcut-Averse Information-Theoretic Regularization) to address the out-of-distribution (OOD) generalization problem in personalized federated learning (PFL). By leveraging causal models and information-theoretic constraints, this approach aims to extract personalized invariant features while avoiding the pitfalls of spurious correlations.
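The sketch below is only an illustration of the general idea described above, not FedPIN's actual algorithm: a client-side personalized update whose loss combines the task objective with a regularizer that pulls the personalized model toward a shared "invariant" global model. The plain L2 penalty, the function names, and the hyperparameters are assumptions standing in for the paper's shortcut-averse information-theoretic term.

```python
# Hedged sketch of a personalized federated client update (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_update(personal_model: nn.Module,
                 global_model: nn.Module,
                 data_loader,
                 lam: float = 0.1,     # regularization strength (assumed)
                 lr: float = 1e-2,
                 epochs: int = 1) -> nn.Module:
    """One round of personalized training on a single client."""
    opt = torch.optim.SGD(personal_model.parameters(), lr=lr)
    global_params = [p.detach().clone() for p in global_model.parameters()]
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            logits = personal_model(x)
            task_loss = F.cross_entropy(logits, y)
            # Stand-in regularizer: keep the personalized model close to the
            # shared global (invariant) model, in place of the paper's
            # information-theoretic constraint.
            reg = sum((p - g).pow(2).sum()
                      for p, g in zip(personal_model.parameters(), global_params))
            (task_loss + lam * reg).backward()
            opt.step()
    return personal_model
```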

(INFOCOM2024) Tomtit: Hierarchical Federated Fine-Tuning of Giant Models based on Autonomous Synchronization

With the rapid advancement of giant models, the paradigm of pre-training followed by fine-tuning for specific downstream tasks has become increasingly popular. To address the challenge that adapter-based fine-tuning faces when local data are insufficient, as well as the scalability and inflexibility issues of existing federated fine-tuning solutions, we introduce Tomtit.
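As a rough illustration of hierarchical federated fine-tuning (not Tomtit's actual protocol or synchronization scheme), the sketch below fine-tunes only small adapter modules and aggregates their weights in two levels: clients average at an edge server, and edge servers average at the cloud. The adapter shapes, group sizes, and the unweighted averaging rule are all assumptions made for illustration.

```python
# Hedged sketch: adapter fine-tuning with two-level (edge -> cloud) aggregation.
import copy
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck adapter; only these weights are fine-tuned and shared."""
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual adapter

def average_state_dicts(state_dicts):
    """Plain (unweighted) average of a list of adapter state dicts."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# Hierarchical aggregation: clients -> edge servers -> cloud (toy sizes).
client_adapters = [[Adapter().state_dict() for _ in range(3)]   # 3 clients per edge
                   for _ in range(2)]                            # 2 edge servers
edge_models = [average_state_dicts(group) for group in client_adapters]
cloud_model = average_state_dicts(edge_models)
```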

(NeurIPS2023) SwapPrompt: Test-Time Prompt Adaptation for Vision-Language Models

In this paper, we propose SwapPrompt, a novel framework that effectively leverages self-supervised contrastive learning to facilitate test-time prompt adaptation. SwapPrompt employs a dual-prompt paradigm, i.e., an online prompt and a target prompt that is averaged from the online prompt to retain historical information.
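The snippet below sketches only the dual-prompt bookkeeping described above, not SwapPrompt's full method: an online prompt updated by gradient steps at test time, and a target prompt maintained as a moving average of the online prompt to retain historical information. The prompt shape, the EMA momentum, and the placeholder loss function are illustrative assumptions.

```python
# Hedged sketch of an online/target dual-prompt update with an EMA-style average.
import torch

prompt_len, embed_dim = 16, 512
online_prompt = torch.randn(prompt_len, embed_dim, requires_grad=True)
target_prompt = online_prompt.detach().clone()    # initialized from the online prompt
optimizer = torch.optim.SGD([online_prompt], lr=1e-3)
momentum = 0.99                                    # averaging coefficient (assumed)

def test_time_step(loss_fn):
    """One adaptation step on an unlabeled test batch (loss_fn is a placeholder,
    e.g., a self-supervised contrastive objective)."""
    optimizer.zero_grad()
    loss = loss_fn(online_prompt, target_prompt)
    loss.backward()
    optimizer.step()
    with torch.no_grad():                          # moving-average update of the target prompt
        target_prompt.mul_(momentum).add_(online_prompt.detach(), alpha=1 - momentum)
```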