AI Connect Whitepaper

3.1 The AI Wave Boosts Computing Power Demand, and Achieving Scalability of Intelligent Technology R

Machine learning has entered the era of large models. The training and iteration of general-purpose large models such as ChatGPT have greatly increased the demand for intelligent computing power, and once a model is successfully deployed, substantial computing power is also needed for inference.

From the perspective of model training, the computing power used for machine learning can be roughly divided into three periods. In the first period, before 2012, training compute roughly tracked Moore's Law, doubling approximately every 20 months. With the advent of the deep learning era, the doubling time shortened to roughly six months. Around 2015-2016, the era of large models began; the growth rate slowed to a doubling time of about 10 months, but the overall training compute of these systems was 2 to 3 orders of magnitude (OOM) larger than that of deep-learning-era systems.

By the end of 2022, the success of ChatGPT had set off a new wave of AI, and general-purpose large models such as BERT, GPT-4, and Wenxin Yiyan have been released in China and abroad. These models involve hundreds of billions or even trillions of parameters and thousands of gigabytes of high-quality training data, significantly increasing the demand for intelligent computing power. In addition, as models mature and are more widely adopted, the computing power required for inference will grow steadily, accounting for an ever larger share of the total.
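
For a rough sense of how these doubling times compound, a minimal sketch follows. The 20-month, 6-month, and 10-month doubling times and the era lengths are illustrative assumptions taken from the figures above, not precise measurements.

```python
# Illustrative sketch: how different doubling times compound into
# total training-compute growth over a period.

def compute_growth(months: float, doubling_time_months: float) -> float:
    """Multiplicative growth in training compute, assuming compute
    doubles every `doubling_time_months` months."""
    return 2 ** (months / doubling_time_months)

# Pre-2012 era: doubling roughly every 20 months (Moore's-Law pace).
print(f"Pre-deep-learning, 5 years: ~{compute_growth(60, 20):.0f}x")   # ~8x

# Deep-learning era: doubling roughly every 6 months.
print(f"Deep-learning era, 4 years: ~{compute_growth(48, 6):.0f}x")    # ~256x

# Large-model era: doubling roughly every 10 months, but starting
# 2-3 orders of magnitude (100x-1000x) above deep-learning-era systems.
print(f"Large-model era, 5 years:   ~{compute_growth(60, 10):.0f}x")   # ~64x
```

Even with the slower 10-month doubling time, the large-model era starts from a base 100x-1000x higher, which is what drives the step change in demand for intelligent computing power described above.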