AI: The New Internet for Your Computer Networks?
Hey everyone, John here! Welcome back to the blog where we untangle the sometimes-knotty world of AI and tech. Today, we’re diving into some exciting news from Cisco Live, a big tech conference. Cisco’s CEO, Chuck Robbins, said something that really grabbed my attention: he compared the impact AI is having on computer networks to how the internet felt way back in the 1990s!
Imagine a time before everyone was online, when computers mostly talked to themselves or a few others in the same building. Then, boom! The internet, with something called TCP/IP, came along and smashed those walls down, connecting everything to everything. It was a revolution! Robbins believes AI is about to do something just as big for the networks that run our digital lives.
He says AI is forcing us to make our networks much smarter – more programmable (meaning we can tell them what to do more easily), more observable (so we can see what’s happening inside them), and better at optimizing themselves (making sure everything runs smoothly and fast). And here’s the kicker: what starts as a big deal for the network gurus and platform teams will eventually make its way to the folks who build our apps and software – the developers.
From Dumb Pipes to Smart Platforms: Your Network Gear is Getting an Upgrade!
Lila, my trusty assistant, you know how we talk about the “cloud” and how it changed software development?
Lila: “I think so, John! You said it’s like renting computing power and tools instead of buying and managing all the hardware yourself, right? And you mentioned things like ‘containers’ and ‘APIs’ make it easier for different software pieces to work together?”
Exactly, Lila! Well, a similar change is now coming to networking. Thomas Graf, a super-smart CTO from Cisco (and the brain behind a cool open-source tool called Cilium), explained that old-school network gear like routers and switches are evolving. They’re not just going to be simple traffic cops anymore; they’re becoming programmable platforms themselves.
Think about it: traditionally, if you wanted a firewall (to block bad internet traffic) or a load balancer (to spread out website visitors so one computer doesn’t get overwhelmed), you’d need separate, clunky boxes. Graf said that with new technology, like switches boosted with something called DPUs, these functions can be built right into the switch itself! It’s like your Wi-Fi router suddenly also becoming your home security system and your traffic manager, all in one neat package.
Lila: “Hold on, John. What’s a DPU? That sounds a bit technical.”
Great question, Lila! Think of a DPU, or Data Processing Unit, as a special mini-computer or a co-processor that lives inside network equipment, like a switch. Its job is to take over some of the heavy lifting that the main computer brain (the CPU) of the switch or server would normally do, especially tasks related to networking, security, and storage. It’s like having a dedicated assistant for your network gear, making it faster and more efficient at its specific jobs, and even allowing it to run its own little software programs directly on the network device.
So, instead of needing separate virtual machines or ‘sidecar containers’ (think of these as little helper programs running alongside your main application) to handle these network tasks, the DPU-enhanced switch can do it directly. Combine that with other clever technologies like eBPF (which lets developers safely run custom programs deep inside the operating system, such as the Linux kernel, to see and control what’s happening) and tools like Tetragon for security, and suddenly firewalls, network segmentation (dividing your network into secure zones), and even just seeing what’s going on can all be managed through code. That lets teams move from ‘ticket ops’ to ‘GitOps.’ It’s much more flexible!
Lila: “Okay, eBPF sounds a bit like giving developers special x-ray glasses to see inside the computer’s core operations and even tweak them safely? And what was that about ‘ticket ops’ versus ‘GitOps’?”
You’re spot on with eBPF, Lila! It’s a powerful way to get visibility and control. As for ‘ticket ops’ versus ‘GitOps’:
- Ticket ops is the old way. If a developer needed a network change, they’d file a ticket, wait for a network engineer to manually make the change, then wait for confirmation. It could be slow and error-prone. Imagine ordering a pizza by sending a letter and waiting for the chef to write back before they start cooking.
- GitOps is the new, cooler way. It uses the same tools software developers use to manage their code (like Git, a version control system) to manage network configurations. Changes are made in code, reviewed, and then automatically applied. It’s faster, more reliable, and everything is tracked. Think of it like ordering your pizza through an app where you customize everything, hit ‘order,’ and the kitchen gets the exact instructions instantly.
This shift means networking is becoming more like software development – more automated and agile.
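To make the GitOps idea concrete, here’s a tiny sketch of the “reconciliation loop” at its heart: the desired network configuration lives in version control, and an agent compares it with the live state and applies only the differences. The config format, VLAN names, and values below are all invented for illustration; real tools work on actual device configs.

```python
# Toy GitOps reconciliation: compare desired state (from Git) with
# actual state (from the devices) and compute the changes to apply.

desired = {  # what the Git repo says the network should look like
    "vlan10": {"subnet": "10.0.10.0/24", "firewall": "strict"},
    "vlan20": {"subnet": "10.0.20.0/24", "firewall": "open"},
}

actual = {  # what the devices currently report
    "vlan10": {"subnet": "10.0.10.0/24", "firewall": "open"},
}

def reconcile(desired, actual):
    """Return the list of changes needed to make actual match desired."""
    changes = []
    for name, want in desired.items():
        have = actual.get(name)
        if have is None:
            changes.append(("create", name, want))
        elif have != want:
            changes.append(("update", name, want))
    for name in actual:
        if name not in desired:
            changes.append(("delete", name, None))
    return changes

for action, name, config in reconcile(desired, actual):
    print(action, name, config)
```

Because every change flows through this loop, the Git history becomes a complete, reviewable audit trail of the network, which is exactly the reliability win over ticket ops.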
AI to the Rescue! Fixing Network Problems Smarter, Not Harder
Now, let’s talk about how AI itself is jumping into the networking game. Cisco announced some cool AI tools at the event. One is a special AI model built just for security, trained on millions of security-related pieces of information. Another is called the “Deep Network Model,” designed to help with network operations, and it even comes with what Cisco calls an ‘agentic UI experience’ named AI Canvas.
Lila: “An ‘agentic UI experience’ with AI Canvas? What does ‘agentic’ mean in this context, John?”
Good catch, Lila! An ‘agentic UI’ means the user interface, or the way you interact with the system, has an AI agent working with you. Think of it like having a super-smart assistant built into the software. Instead of just clicking buttons and looking at charts, you might be able to ask the AI Canvas questions in plain English like, “Why is this part of the network slow?” or “Show me any unusual security events from last night.” The AI agent then goes off, analyzes data, and presents you with answers or suggestions. It’s more like having a conversation with your network tools.
David Zacks from Cisco put it nicely: the AI isn’t necessarily smarter than a human network engineer, but it has access to way more data and can process it incredibly fast. Imagine trying to read every single status update from every piece of equipment in a giant network – impossible for a human! But an AI can sift through all that telemetry (that’s the data streaming from all the network devices) and spot patterns or problems that a human might miss.
Lila: “So, ‘telemetry’ is just a fancy word for all the data and measurements that network devices send out to tell us how they’re doing?”
Precisely! It’s like the vital signs your doctor checks – heart rate, blood pressure – but for network equipment. AI can watch these vital signs at a massive scale, use what Cisco calls “machine reasoning” (which is like the AI thinking through the data), and then give engineers clear, actionable insights. This is becoming super important for keeping networks reliable.
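Here’s a minimal sketch of the kind of pattern-spotting this enables: compare a device’s latest reading against its recent history and flag anything that sits far outside the normal range. Real AI-ops systems use far richer models than this simple z-score, and the latency numbers below are invented for illustration.

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    away from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Simulated round-trip latencies (ms) reported by one switch port.
history = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 12.0]
print(is_anomalous(history, 12.1))  # → False (a normal reading)
print(is_anomalous(history, 45.0))  # → True (a spike worth alerting on)
```

The point isn’t the math; it’s the scale. A human can’t run even this simple check on millions of telemetry streams at once, but an AI-ops platform can, and then surface only the anomalies.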
And it gets even more interesting: the folks at Cisco think that soon, developers will start using these AI-powered systems before their new AI applications go live. They’ll be able to simulate how their AI apps will behave under heavy network load, automatically finding any performance problems or bottlenecks before they affect real users. It’s like stress-testing a bridge design on a computer before you actually build the bridge.
Building a New Foundation: How AI is Reshaping Our Digital Roads
A big theme at Cisco Live was that AI isn’t just another application running on the network; it’s forcing a complete redesign of how applications and the underlying infrastructure (the network, the computers, everything) work together. The lines are blurring.
Jeetu Patel from Cisco explained that AI models (the ‘brains’ of AI applications) are getting smaller and more specialized, while the computer chips (the silicon) are becoming more programmable. This means new AI features can be developed and rolled out much faster. The AI model is becoming an integral part of the application itself. When you update the app, you’re often updating the AI model too.
Lila: “Silicon? Is that just another word for computer chips, John?”
You got it, Lila! Silicon is the primary material used to make microchips, the tiny electronic circuits that are the brains of computers, smartphones, and yes, even advanced networking gear. So when they say “silicon is becoming more programmable,” it means the chips themselves are becoming more flexible and adaptable to different tasks, especially for AI.
This close link between the app’s logic and the AI hardware (like specialized chips for AI calculations, known as inference hardware) means we need to rethink how everything is architected. For AI applications to work well, especially those using Large Language Models (LLMs – think of things like ChatGPT that understand and generate human-like text), developers need to see how their AI model design, the network’s capacity (bandwidth), and where the AI calculations happen (inference placement) all affect each other. These applications are especially sensitive to network latency and congestion.
Lila: “LLMs, those are the really chatty AIs, right? And you said they are sensitive to latency and congestion. Can you break those down?”
Absolutely. Large Language Models (LLMs) are indeed the AIs that are very good with language.
- Latency is just a techy term for delay. Imagine you ask an LLM a question. Latency is the time it takes for your question to travel to the AI, for the AI to figure out an answer, and for the answer to travel back to you. If there’s high latency, it feels slow and laggy. LLMs often need to process a lot of data, so minimizing this delay is crucial.
- Congestion is like a traffic jam on the internet. If too much data is trying to flow through a part of the network at once, it gets clogged up, leading to… you guessed it, more latency and slower performance!
These issues can be invisible until your AI application suddenly starts performing poorly because the network can’t keep up.
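To see why congestion turns into latency so suddenly, here’s a back-of-the-envelope sketch using the classic M/M/1 queueing approximation (my own illustrative addition, not something from the Cisco talks): as a link’s utilization approaches 100%, queuing delay blows up non-linearly. The 1 ms base service time is an invented example value.

```python
# M/M/1 average queuing delay: service_time * u / (1 - u).
# As utilization u approaches 1, the delay explodes — that's a
# congested link suddenly feeling "slow" to an AI application.

def queuing_delay_ms(service_time_ms, utilization):
    """Average queuing wait for an M/M/1 queue at the given utilization."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms * utilization / (1 - utilization)

for u in (0.5, 0.8, 0.95, 0.99):
    total = 1.0 + queuing_delay_ms(1.0, u)  # 1 ms base service time
    print(f"link {u:.0%} busy -> ~{total:.1f} ms per packet")
```

Notice the cliff: going from 50% to 99% busy takes per-packet delay from about 2 ms to about 100 ms. That’s why a network that looks “mostly fine” can still wreck an AI app’s performance.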
So, at Cisco Live, they were talking about strategies like:
- Mapping AI tasks directly to how the network is laid out (the network topology).
- Spreading out the AI processing work across different paths in the network (pipeline parallelism).
- Choosing the best place to run AI calculations based on network conditions.
- Even pre-loading bits of AI models (called model shards) closer to where they’ll be needed.
Lila: “Model shards? Are those like breaking a big AI brain into smaller pieces and storing them in different spots for quicker access?”
That’s a perfect analogy, Lila! A big AI model can be huge. ‘Sharding’ it means breaking it into smaller, more manageable pieces (shards) that can be distributed across different servers or locations. This way, when a request comes in, it might only need to access a nearby shard, or different shards can work in parallel, speeding things up and reducing network strain. It’s all about making the data flow as efficiently as possible because performance isn’t just about how powerful your computer chip is anymore; it’s about where, how, and how fast the data moves through the network.
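Here’s a toy sketch of that placement decision: given the network latency from a client to each site, and which shards each site hosts, pick the lowest-latency site that holds the shard a request needs. All the site names, latencies, and shard labels are invented for illustration.

```python
# Toy shard placement: route each request to the nearest site
# (by measured latency) that actually hosts the needed shard.

latency_ms = {  # measured from the requesting client to each site
    "datacenter-east": 5,
    "datacenter-west": 42,
    "edge-pop": 2,
}

shards_at = {
    "datacenter-east": {"shard-0", "shard-1", "shard-2"},
    "datacenter-west": {"shard-0", "shard-1", "shard-2"},
    "edge-pop": {"shard-0"},  # the edge caches only the hottest shard
}

def best_site(shard):
    """Cheapest site (by latency) that hosts the requested shard."""
    candidates = [s for s, held in shards_at.items() if shard in held]
    if not candidates:
        raise LookupError(f"no site hosts {shard}")
    return min(candidates, key=latency_ms.__getitem__)

print(best_site("shard-0"))  # → edge-pop
print(best_site("shard-2"))  # → datacenter-east
```

Pre-loading the hot shard at the edge means the common request never crosses the slow path at all, which is exactly the “move the data closer” strategy described above.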
Developers in the Driver’s Seat: Controlling the Network with Code
So, does all this mean that software developers will soon be able to directly tell the network what to do, using code?
Jim Frey, an analyst from Enterprise Strategy Group, pointed out that giving developers more control over the network (something they even have a term for: NetDevOps) has been a dream for years. There’s even a community, the Network Automation Forum, dedicated to making it happen. But it’s been super tricky because different network equipment makers have their own closed systems and ways of doing things, with no common standards.
Lila: “NetDevOps? Is that like DevOps, but specifically for networking, John? Combining network operations with software development practices?”
Exactly right, Lila! DevOps is all about bringing development and IT operations teams closer together, automating processes, and speeding up software delivery. NetDevOps applies those same principles to networking – using code, automation, and collaboration to manage and configure networks more efficiently.
But here’s the exciting part: Frey says AI is changing the game. Network teams and the companies that make network gear are having to adapt to this new AI-driven world. They need to find a way to make networks programmable in a way that fits with how other parts of the digital infrastructure (like computing and storage) are managed.
Cisco seems to believe that a future ‘control plane’ (think of this as the network’s brain or management system) could give AI developers declarative access to things like network bandwidth, specific latency requirements, or even how the network handles traffic at a very detailed level (what they call Layer 7 behavior, which looks at the actual application data).
Lila: “Declarative access? Does that mean developers just declare what they want the network to do, like ‘I need this connection to be super fast and low latency,’ and the network figures out how to make it happen?”
Spot on again, Lila! That’s the essence of declarative. Instead of giving step-by-step instructions (imperative), you declare the desired end state. It’s like telling a self-driving car “take me to the library” (declarative) versus telling it “turn left, drive 2 miles, turn right…” (imperative). This would make it much simpler for developers to ensure their AI applications get the network performance they need.
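Here’s a minimal sketch of what declarative network access could look like from a developer’s side: you state requirements, and a (hypothetical) control plane picks any path that satisfies them. This is invented to illustrate the idea, not a real Cisco API; the path names and numbers are made up.

```python
# Toy declarative placement: the developer declares the outcome
# ("under 10 ms, at least 5 Gbps"); the control plane finds a path.

paths = [  # candidate paths the control plane knows about
    {"name": "path-a", "latency_ms": 4, "bandwidth_gbps": 10},
    {"name": "path-b", "latency_ms": 25, "bandwidth_gbps": 40},
    {"name": "path-c", "latency_ms": 8, "bandwidth_gbps": 2},
]

def place(intent):
    """Return the name of the first path meeting the declared needs."""
    for path in paths:
        if (path["latency_ms"] <= intent["max_latency_ms"]
                and path["bandwidth_gbps"] >= intent["min_bandwidth_gbps"]):
            return path["name"]
    return None  # the intent cannot be satisfied right now

# Imperative would be "route my traffic via path-a, then ...".
# Declarative just states the requirement:
intent = {"max_latency_ms": 10, "min_bandwidth_gbps": 5}
print(place(intent))  # → path-a
```

The win is the same as with the self-driving-car analogy: the developer never needs to know the topology, and the network is free to re-route as conditions change, as long as the declared intent keeps being met.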
Patrick LeMaistre from Cisco summed it up powerfully: “We’re building for AI not as a workload, but as the platform. That changes everything.” This means AI isn’t just one more thing the network has to support; the network itself is being fundamentally redesigned around the needs of AI.
My Thoughts on All This
John: It’s truly fascinating to see this shift. I remember when programming network devices was a very specialized, almost arcane art. The idea of AI not only running on networks but actively reshaping them, and developers getting more direct control, feels like a massive leap. It reminds me of the early days of the web – full of challenges, but also bursting with potential. We’re on the cusp of networks becoming truly intelligent partners in our digital endeavors.
Lila: Wow, John! It sounds like the internet is getting a super-brain upgrade. The thought of AI helping to build and manage the very pathways our information travels on is a bit mind-boggling, but also really exciting! If it makes things faster, more reliable, and helps new AI applications work even better, I’m all for it. It feels like we’re just scratching the surface of what’s possible!
This article is based on the following original source, summarized from the author’s perspective:
Cisco Live: AI will bring developer workflow closer to the network