In Focus Network

From prompt to provisioning: AI as your new Network Orchestration assistant

The path to Network Automation

Full automation of network services is not a trivial task. Even something routine, such as the provisioning of a Layer 2 circuit, requires putting together a detailed orchestration pipeline. The workflow steps involve reading from the single source of truth, selecting the right resources, sequencing multiple API calls, handling errors, and ensuring compliance with internal service models. In mature R&E networks, where usually each environment is a blend of legacy systems and modern platforms, this process becomes even more demanding.

Traditionally, service automation means that engineers develop orchestration workflows by hand, relying on deep system and network knowledge and hours of scripting, coding and testing. Nowadays, we are on the verge of a shift in approach, driven by a new kind of coding opportunity. Vibe coding is a software development approach that uses AI to generate functional code from natural language prompts. In essence, it lets you use an AI assistant to build and maintain automation pipelines. Given the increasing service demands managers face, vibe coding might be the game-changer that helps all NRENs, especially the smaller ones.

Vibe coding!

We recently tested the idea of using vibe coding for service design and workflow development from scratch in the GP4L testbed. The task was straightforward on the surface: provision a Layer 2 service across the network using existing systems (Maat as the single source of truth, LSO and Ansible for network configuration changes) and their corresponding APIs. But instead of writing the workflow in Airflow from scratch, we asked an AI assistant, ChatGPT (GPT-4o), to help us build the orchestration logic. The only input we gave was the high-level service description, information about the available systems and their APIs, and a request to make the solution TM Forum ODA compliant.

From there, the process became a dialogue. We described what the workflow should do, and the AI responded by generating building blocks of code. The blocks in this case were an Airflow workflow (i.e. a DAG) and its tasks, which implement the orchestration sequence. At first, the AI handled the structure: setting up task dependencies, determining where to fetch topology data, where to apply constraints, and where to insert decision points. Then, with a few more prompts, it began to fill in the rest of the details, suggesting API calls, parameter formats, and error handling logic. The results were not perfect on the first try, but they were quite close. With a number of corrections and iterations, we were able to obtain a working automation flow.
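To illustrate the shape of what the assistant scaffolded, the sketch below shows the orchestration sequence as plain Python functions so the control flow is easy to follow; in the actual experiment each step was an Airflow task with the same dependency order. All function names, payload fields and data shapes here are illustrative assumptions, not the code produced in the test.

```python
# Sketch of the orchestration sequence: fetch topology from the single
# source of truth, select resources that satisfy the service request
# (a decision point), then translate the choice into per-device changes.
# Every name and field below is an illustrative assumption.

def fetch_topology(source_of_truth):
    """Read candidate links from the single source of truth (Maat's role)."""
    return source_of_truth["topology"]

def select_resources(topology, endpoints):
    """Pick a link whose ends match the requested endpoints, or fail."""
    candidates = [link for link in topology["links"]
                  if {link["a_end"], link["z_end"]} == set(endpoints)]
    if not candidates:
        raise ValueError(f"no link between {endpoints}")
    return candidates[0]

def build_config(link, vlan):
    """Turn the selected link into per-device change requests
    (the step LSO/Ansible would execute)."""
    return [{"device": end, "vlan": vlan}
            for end in (link["a_end"], link["z_end"])]

def provision_l2_circuit(source_of_truth, endpoints, vlan):
    """Top-level sequence, mirroring the DAG's task order."""
    topology = fetch_topology(source_of_truth)
    link = select_resources(topology, endpoints)
    return build_config(link, vlan)
```

In Airflow, each of these functions would become a task and the top-level call order would become the DAG's dependency chain.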

One of the most impressive outcomes was the AI assistant’s performance at the high-level service modelling stage. The AI proved especially adept at producing TM Forum–compliant service definitions that align with standardised APIs and resource structures. This allowed us to establish a consistent, standards-aligned design framework from the beginning. The assistant not only understood the TM Forum design patterns but used them to guide the structure of the workflow and the relationships between services and resources. For teams already working with TM Forum Open APIs, this capability adds enormous value.
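As a rough idea of what such a service definition looks like, the fragment below follows the general TM Forum pattern of characteristics and supporting resources (as in the TMF638 Service Inventory / TMF641 Service Ordering APIs). The specific field values, resource identifiers and helper function are assumptions for illustration, not the definitions generated in the experiment.

```python
# Illustrative TM Forum-style service representation: a service type,
# a list of name/value characteristics, and references to supporting
# resources. Concrete values here are invented for the example.
l2_service = {
    "serviceType": "l2-circuit",
    "state": "feasibilityChecked",
    "serviceCharacteristic": [
        {"name": "vlanId", "value": 100},
        {"name": "bandwidthMbps", "value": 1000},
    ],
    "supportingResource": [
        {"id": "port-ams-1", "@referredType": "LogicalResource"},
        {"id": "port-lon-1", "@referredType": "LogicalResource"},
    ],
}

def characteristic(service, name):
    """Look up a characteristic value by name, as an Open API client might."""
    return next(c["value"] for c in service["serviceCharacteristic"]
                if c["name"] == name)
```

Structuring services this way is what lets the workflow and the resource relationships follow the same standardised pattern end to end.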

Speed and adaptability

The experiment is especially valuable because it showcases the speed and adaptability of the approach. Traditionally, creating such workflows from scratch takes days of effort, especially if the orchestration logic needs to comply with TM Forum service models and Open APIs. In our test, we completed the initial scaffolding in under a day and had a functional prototype several times faster than with manual development. Furthermore, the AI assistant's suggestions were right on track for making the solution production-ready, prompting us to add parameter validation, fallback logic, and error handling. Over the course of the experiments, we found that the AI was also particularly good at handling the "boring but necessary" parts of the job that developers often overlook and don't want to spend too much time on, such as setting up the correct structure, repeating standard validation steps, and remembering the exact format of parameters. This helps engineers focus on the tricky parts: how to translate service intent into something the infrastructure can actually execute.
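The kind of hardening the assistant kept suggesting can be sketched as two small building blocks: validate parameters before touching the network, and retry transient API failures before giving up. The function names, limits and retry policy below are illustrative assumptions.

```python
import time

def validate_params(vlan, endpoints):
    """Fail fast on malformed input before any configuration change."""
    if not (1 <= vlan <= 4094):                      # valid 802.1Q VLAN range
        raise ValueError(f"VLAN {vlan} outside 1-4094")
    if len(endpoints) != 2:
        raise ValueError("an L2 circuit needs exactly two endpoints")

def call_with_retry(api_call, attempts=3, delay=0.1):
    """Retry a flaky provisioning call; re-raise after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return api_call()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay)                        # simple fixed backoff
```

In an Airflow setting, much of the retry behaviour can also be delegated to task-level retry settings; wrapping individual API calls, as above, gives finer control over which failures are worth retrying.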

The AI’s initial outputs were over 80% correct when generating the high-level scaffolding and around 50% accurate when producing detailed implementation logic. We also found that it quickly improved with feedback. Because the assistant works interactively, we didn’t have to start from scratch when something was off. Instead, we simply pointed out the issue and asked for a fix. In many cases, the AI was able to correct itself immediately. This kind of interaction felt like collaborating with a junior engineer who is always happy to oblige and improve, and who never forgets what you told them five minutes ago.

Over the full span of the test, the AI-assisted approach cut development time by more than half compared to our traditional process. The even more important benefit is that the produced workflows are easier to review and maintain, thanks to the standardised, readable code the assistant generates.

AI coding is real

Hence, vibe coding isn’t science fiction; it is an (imperfect) reality. AI is now capable of contributing significantly to the design process by translating structured intent into executable orchestration steps. The key is to treat the assistant as a collaborator. It won’t get everything right immediately, but it can generate results fast, and those results improve with every prompt. For teams working in environments where service onboarding speed is critical and automation needs are growing fast, this approach offers a real advantage.

Of course, using AI in this way also means adapting ourselves. Engineers need to shift their thinking from scripts to prompts. Managers need to ensure that intent models and data sources are clearly defined and well documented, so that the AI receives standard input that is easy to parse. In this way, teams can reuse and quickly adapt AI-suggested templates.

More information on the topic is available at:

https://geant-netdev.gitlab-pages.pcss.pl/gp4ldocs/guides/playground/ai_workflows/idea/

About the author

Roman Łapacz

About the author

Sonja Filiposka

Prof. Sonja Filiposka is a full professor at the Faculty of Computer Science and Engineering (FINKI), part of the Ss. Cyril and Methodius University. She is actively involved in the GÉANT projects since GN3+ working on network services development, automation and orchestration.

About the author

Karl Meyer

Product Marketing Manager for GÉANT
