Why AI And Networks Must Be Scaled To Ever Match Human Intelligence

By Hannes Gredler, CTO and Founder at RtBrick

Let’s take a minute to assess the current state of AI relative to human intelligence. Some experts predict that AI could be smarter than people as soon as 2028. In fact, the newest generation of large language models (LLMs) is already in the works, potentially launching by the end of 2023. These could be 5-20 times more advanced than current GPT-4 models.

However, several developments still need to take place before AI will match or surpass human intelligence. Chief among them is reaching the point where AI has the same level of Input/Output (I/O) capacity as humans: the ability to take in vast numbers of data points and streams of sensory information at high speed and process them in real time. AI still needs extensive training to accomplish this, as processing a query today can take anywhere from seconds to minutes.

So, considering this, is the expectation that AI will eventually surpass human intelligence feasible? Yes, but to get there, two key advancements must take place.

Condition #1: Scaling AI

First, AI technology needs to scale. Advanced semiconductors are needed to process the large amounts of data involved in developing AI. As a result, expanding network, computing and storage resources in data centers and AI clouds is key – especially for applications where I/O capacity is likely to be strained.

Anticipating this requirement and aiming to get ahead in the AI race, China recently announced plans to increase the country’s computing power by 50% by 2025. As part of this initiative, it is prioritizing areas such as memory storage and networks for transmitting data, as well as planning to build more data centers.

In addition to increasing these resources, developers also need to enable AI to work in real time. This isn’t too far off. On top of the more advanced AI models currently in production, experts predict that within five years companies will be training models more than a thousand times larger than GPT-4.

Once AI is able to operate in real-time, it can help conduct essential tasks such as tracking, analyzing, and predicting weather patterns or viral outbreaks (like another pandemic). Or, on a smaller but still significant scale, it could notify customers of product issues at or near the point of discovery.

Condition #2: Scaling networks

On the other side of things, even if processing technology scales, our current network capacity is insufficient and will hold AI back from reaching its full potential. Why? The short answer is that our networks weren’t built with AI in mind. Delivering vast amounts of data at high speed requires significant capacity in our networks, especially at the network edge. That capacity is something we currently lack, and things won’t change unless carriers abandon the approach to mobile and broadband network buildouts they have taken for more than 20 years.

Instead of using monolithic systems that integrate hardware and software from a single vendor, disaggregation is the key to fully supporting AI. This decoupled approach lets carriers deploy bare-metal switches in their telecom systems and combine software and hardware from different vendors. On top of significant cost advantages, disaggregated networks offer greater capacity and faster speeds for both individuals and devices.

In terms of AI, this would provide sufficient capacity at the network edge to match the computing power of the core. The result? Optimal performance. 

No time to wait

It’s no secret that companies and individuals in the AI industry are eager to keep advancing the technology’s capabilities – and have no plans to slow down any time soon. Current projections show the AI market growing by 120% year-over-year, and 83% of companies say the technology is a top priority in their business plans.

At the moment, growing user demand and companies’ eagerness to develop are outpacing the progress being made in scaling AI technology and network capacity. Unfortunately, this means we could hit our limits sooner than expected.

So, a message to all parties: if you want AI to reach its full potential, then there’s no time to wait. Let’s start making these improvements now!

Hannes Gredler, CTO – RtBrick
As company founder and CTO, Hannes leads the vision and direction of RtBrick. He has 20+ years of expertise in engineering and supporting roles working with Alcatel (now Nokia Networks) and Juniper Networks. Hannes is also a co-author and contributor to multiple Internet Engineering Task Force (IETF) drafts and is a regular speaker at industry events and conferences. He holds 20+ patents in the IP multi-protocol label switching (IP/MPLS) space.
