Network configuration is essential to the reliable and secure operation of computer systems. The task spans hardware setup (routers, switches, and firewalls), protocol design, IP addressing schemes, and the definition of routing and security policies, all aimed at ensuring that the network behaves as expected.
Historically, network configuration has been performed manually, requiring specialized expertise and the development of domain-specific languages. However, as networks grow in size and complexity, manual configuration becomes increasingly inefficient and error-prone.
To address the challenge of complexity, network configuration has become a central pillar in automation frameworks. The most recent major proposal for network automation, as Nelson Fonseca explains, is the Zero-touch Network & Service Management (ZSM) paradigm. It encompasses a set of advanced capabilities, including self-healing, self-monitoring, self-optimization, and, crucially, self-configuration. These automation concepts are part of a broader field that includes autonomic networks and cognitive networks, which have been discussed for over 20 years and are closely related to zero-touch networking.
Within the network management framework, the path toward self-configuration involves specifying the network through intents (intent-driven networks). These intents are expressed in natural language, so a network administrator, or even a non-expert, can simply state what they want implemented in the network.
This relates to the concept of programmable networks, where high-level APIs let users program their networks by specifying how they should behave to support applications. The major challenge in this context is translating natural-language intents into syntactically and semantically valid network configuration specifications.
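The translation challenge can be made concrete with a deliberately tiny sketch. Everything below (the function name, the intent pattern, and the policy schema) is an illustrative assumption, not part of any described system; a real pipeline would use a language model rather than regex rules, but the need to validate the output both syntactically and semantically is the same.

```python
import ipaddress
import re

def translate_intent(intent: str) -> dict:
    """Toy rule-based stand-in for a language model: maps one narrow class
    of intents ("block ... from X to Y") to a structured policy object."""
    m = re.search(r"block .* from (\S+) to (\S+)", intent, re.IGNORECASE)
    if m is None:
        raise ValueError(f"intent not understood: {intent!r}")
    src, dst = m.groups()
    # Semantic validation: the generated policy must reference well-formed
    # prefixes; a syntactically plausible but invalid address is rejected.
    ipaddress.ip_network(src)
    ipaddress.ip_network(dst)
    return {"action": "deny", "src": src, "dst": dst}

print(translate_intent("block traffic from 10.0.0.0/24 to 192.168.1.0/24"))
# → {'action': 'deny', 'src': '10.0.0.0/24', 'dst': '192.168.1.0/24'}
```

Even in this toy form, the two failure modes the text describes are visible: an intent the translator cannot parse (syntax) and a generated policy referencing a malformed prefix (semantics).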
Intent-driven self-configuration based on Large Language Models (LLMs) has demonstrated significant potential. However, the use of traditional LLMs presents barriers such as high computational cost and intensive resource consumption. LLMs are typically cloud-based, requiring substantial infrastructure and energy usage.
To overcome these limitations, a lightweight approach based on a Small Language Model (SLM) has been proposed. Unlike LLMs, SLMs can be deployed locally (on-premise) with lower energy consumption.
The approach employs a fine-tuned SLM within an agent-based architecture. Fine-tuning is essential to give the SLM domain-specific knowledge so that it can generate appropriate network configurations.
By leveraging parameter-efficient fine-tuning techniques, this framework enables the rapid translation of natural-language configuration requests into valid network configurations. Operating entirely on-premise keeps data private while preserving efficiency and accuracy.
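To illustrate why parameter-efficient fine-tuning keeps SLM adaptation cheap, the sketch below shows the arithmetic behind a LoRA-style low-rank adapter on a single layer. The dimensions and rank are arbitrary assumptions for illustration; the point is that only the two small adapter matrices are trained while the pretrained weight stays frozen.

```python
import numpy as np

# Hypothetical dimensions for one projection layer in a small model.
d_in, d_out, rank = 768, 768, 8

# Frozen pretrained weight: never updated during fine-tuning.
W = np.random.randn(d_out, d_in) * 0.02

# Low-rank adapters: only A and B receive gradient updates.
A = np.random.randn(rank, d_in) * 0.01   # down-projection
B = np.zeros((d_out, rank))              # up-projection, zero-initialized

def forward(x):
    # Adapted layer: frozen path plus the low-rank update (B @ A) @ x.
    return W @ x + B @ (A @ x)

full_params = W.size                  # 768 * 768 = 589,824
lora_params = A.size + B.size         # 8 * 768 + 768 * 8 = 12,288
print(f"trainable fraction: {lora_params / full_params:.4f}")
# → trainable fraction: 0.0208
```

Zero-initializing B means the adapted layer starts out exactly equal to the pretrained one, so fine-tuning perturbs the frozen model gradually; and training roughly 2% of the parameters per layer is what makes on-premise adaptation on modest hardware plausible.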
This strategy points to a safe and practical path toward automated, intent-driven self-configuration in next-generation systems.