• Cloud-Edge Collaborative Large Models: We focus on building open, intelligent, and efficient large AI models that accommodate the diverse data and resources distributed across edge endpoints. Our goal is to meet the varied demands of large-model training, fine-tuning, inference, and deployment, while streamlining the model construction process to improve performance. We aim to drive the adoption of AIGC in vertical application scenarios, achieving deep technology integration and creating practical value.
  • AI Computing Cyberinfrastructure: We are building a federated edge intelligence platform tailored for large models. Leveraging our integrated 'algorithm-network-intelligence' technology, we design algorithms that adapt large models to edge environments using 'hybrid expert model' (mixture-of-experts) architectures; see the sketch after this list. By harnessing edge computing network technology, we aggregate fragmented computing resources at the edge, enabling large models to run on edge devices and support a range of generative AI capabilities. This reduces hardware costs and extends the spatial and temporal reach of large-model services.
  • Trustworthy AI Governance: As large models are deployed more widely, their security risks become more pronounced. We research the security challenges these models face, including data poisoning and adversarial attacks, with the goal of building secure, trustworthy, and robust AI models and advancing trustworthy AI governance.
  • AI4Science: AI has achieved breakthroughs in challenging tasks such as weather forecasting. We focus on training data-driven, ultra-high-resolution meteorological large models (e.g., FengWu-GHR), as well as researching AI-based data assimilation algorithms and extreme-disaster prediction, contributing to the advancement of scientific research.
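The 'hybrid expert model' mentioned under AI Computing Cyberinfrastructure is, in spirit, a mixture-of-experts (MoE) layer: a lightweight router activates only a few expert sub-networks per token, so each edge device executes only a fraction of the parameters per request. The code below is a minimal illustrative sketch rather than our actual implementation; all module names, layer sizes, and the top-k routing scheme are assumptions chosen for clarity.

```python
# Minimal top-k mixture-of-experts (MoE) sketch (illustrative; sizes are arbitrary).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, d_model=256, d_hidden=512, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every expert for each token.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is a small feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        logits = self.router(x)                        # (tokens, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1) # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # dispatch tokens to their selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = TopKMoE()
    tokens = torch.randn(16, 256)
    print(layer(tokens).shape)                         # torch.Size([16, 256])
```

Because only top_k of the num_experts feed-forward blocks run per token, compute and memory per device scale with top_k rather than with the total expert count, which is what makes distributing experts across fragmented edge resources attractive.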