vExpertAI

Technology Stack & Infrastructure

Core Technologies

  • LLM Inference: HuggingFace dedicated endpoints (sketched below)
  • Messaging: Redis for A2A queues + cache (sketched below)
  • Automation: Scrapli for async SSH/NETCONF
  • UI: Streamlit dashboard (port 8501)
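
As a minimal sketch of the inference path, the snippet below calls a dedicated HuggingFace endpoint with the huggingface_hub client; the endpoint URL and token are placeholders, not the project's real values.

    from huggingface_hub import InferenceClient

    # Placeholder endpoint URL and token for a dedicated HF endpoint.
    client = InferenceClient(
        model="https://vexpertai-llm.endpoints.huggingface.cloud",
        token="hf_...",
    )
    answer = client.text_generation(
        "Summarize the likely cause of an OSPF adjacency stuck in EXSTART.",
        max_new_tokens=256,
    )
    print(answer)

The A2A messaging pattern might look like the following with redis-py: one agent pushes a task onto a list, another blocks on it, and the same Redis instance doubles as a TTL cache. Queue and key names are assumptions for illustration.

    import json
    import redis

    r = redis.Redis(host="redis", port=6379, decode_responses=True)

    # Producer: one agent hands a task to another (queue name assumed).
    r.rpush("a2a:triage", json.dumps({"task": "check_bgp", "device": "r1"}))

    # Consumer: the receiving agent blocks until work arrives.
    _, raw = r.blpop("a2a:triage")
    task = json.loads(raw)

    # Cache side: memoize an expensive device lookup for 5 minutes.
    r.setex("cache:r1:show_version", 300, "Cisco IOS XE ...")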

Infrastructure

  • Orchestration: Docker (7 services)
  • Network Lab: Azure EVE-NG (3 Cisco routers)
  • Parsing: TextFSM + Genie for CLI extraction (sketched below)
  • Observability: Grafana (metrics + traces)
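
A hedged sketch of the collection-plus-parsing flow: Scrapli's async IOS-XE driver fetches CLI output, and the response's built-in TextFSM hook (backed by ntc-templates) turns it into structured rows. Host, port, and credentials are placeholders.

    import asyncio
    from scrapli.driver.core import AsyncIOSXEDriver

    # Placeholder lab credentials; the host/port assume the SSH tunnel
    # described under Deployment below.
    DEVICE = {
        "host": "127.0.0.1",
        "port": 2222,
        "auth_username": "labadmin",
        "auth_password": "secret",
        "auth_strict_key": False,
        "transport": "asyncssh",
    }

    async def interface_table() -> list[dict]:
        async with AsyncIOSXEDriver(**DEVICE) as conn:
            resp = await conn.send_command("show ip interface brief")
        # TextFSM parsing via ntc-templates; returns one dict per row.
        return resp.textfsm_parse_output()

    print(asyncio.run(interface_table()))

Swapping resp.genie_parse_output() in for the TextFSM call would exercise the Genie path instead.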

Performance

Deployment

  • Docker on single host
  • GPU for LLM inference
  • SSH tunnel to Azure (sketched below)
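
The tunnel itself could be held open from Python with the sshtunnel package; the hostnames, key path, and ports below are placeholders for the Azure jump host and a lab router behind it.

    from sshtunnel import SSHTunnelForwarder

    tunnel = SSHTunnelForwarder(
        "azure-jump.example.com",                 # placeholder jump host
        ssh_username="labadmin",
        ssh_pkey="~/.ssh/id_ed25519",
        remote_bind_address=("10.10.20.1", 22),   # router inside EVE-NG
        local_bind_address=("127.0.0.1", 2222),   # what Scrapli dials
    )
    tunnel.start()
    # ... run automation against 127.0.0.1:2222 ...
    tunnel.stop()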

Targets

  • LLM: <3s (p95)
  • MCP: <5s (p95)
  • MTTR: <5min
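
As a sketch of how the p95 targets above might be verified, the check below computes the 95th percentile over latency samples; the numbers are made up, and in practice they would come from the Grafana metrics/trace pipeline.

    import numpy as np

    # Made-up LLM latency samples in seconds.
    llm_latencies = [2.1, 1.9, 2.4, 2.2, 2.8, 2.9, 2.3, 2.6]

    p95 = float(np.percentile(llm_latencies, 95))
    status = "OK" if p95 < 3.0 else "MISS"
    print(f"LLM p95 = {p95:.2f}s (target < 3s) -> {status}")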