Telecommunication service providers face the challenge of maximizing return on investment (ROI) on massive 5G investments. Efficient infrastructure deployment is particularly difficult in areas with real-time demand fluctuations, such as stadiums and tourist spots. One key challenge lies in understanding usage patterns and connectivity performance from the end user's perspective.

A new paradigm is emerging: customer and AI-driven network investments. This paradigm is being realized as a TM Forum catalyst project to build an AI-driven platform for network investments. Imagine every customer's device acting as a real-time sensor for a service provider's network. This is the promise of decentralized physical infrastructure networks (DePIN), an approach that puts customer experience at the center of every network decision.

A new model for network intelligence

Red Hat, along with other industry leaders, is collaborating on this TM Forum catalyst project to develop a solution that combines the power of crowdsourced data with advanced analytics and artificial intelligence (AI) to deliver actionable insights.

TM Forum is a global industry association that helps its members digitally transform through collaboration, innovation, and the development of standards and best practices. It provides resources like the open digital architecture, open APIs, and catalyst innovation programs to help members reduce costs, accelerate time-to-market, and deliver new services.

The platform is built on three core pillars:

- User insights: With user consent, a lightweight application on customer devices collects quality-of-experience (QoE) and radio metrics to create a high-resolution, low-cost view of network performance. Following the DePIN approach, users are rewarded with on-chain tokens to create a transparent and verifiable incentive system.
- AI-powered analytics platform: Crowdsourced data from end-user devices is fed into an open source data lake (Red Hat OpenShift Data Foundation running on Red Hat OpenShift) in Apache Iceberg table format. By deploying the Trino operator and the Trino Model Context Protocol (MCP) server on Red Hat OpenShift, AI agents (built using Red Hat AI and the llama-stack agentic framework) can use natural language to query data stored in OpenShift Data Foundation and obtain specific reports on network performance.

- Network evolution: Machine learning models analyze the data, predict future congestion, and simulate scenarios to recommend targeted network upgrades based on customer impact and cost efficiency.

Industry collaboration drives innovation

This project is a joint effort by several key players in the telecommunications and technology sectors, and it demonstrates how open source technology and industry collaboration solve complex, real-world challenges to unlock new value.

- NTT Group, Spark New Zealand, Ooredoo Kuwait, and the other champions listed in the TM Forum catalyst project are leading proof of value (PoV) implementations to demonstrate tangible business benefits and technical feasibility. They are responsible for sourcing internal testers who will use the Netradar application to activate data collection.

- Netradar provides the technology for collecting, storing, and analyzing 24/7 near real-time data from end-user devices. Its predictive AI delivers the critical customer and network insights that serve as the basis for the larger solution.

- Red Hat provides the cloud-native application and AI platform for the solution. Using Red Hat OpenShift and Red Hat OpenShift AI, the platform leverages a large language model (LLM) to process the raw data and generate insightful reports. Network insights give service providers the ability to anticipate future needs rather than react to network issues.
The Red Hat application and AI platform also provides the security mechanisms needed to mitigate the risks associated with AI models throughout their lifecycle.

- Mlnetworks provides an AI-powered data platform that transforms network operations for service providers, solving the challenges of traditional, reactive, manual processes that lead to inefficient capital expenditure (CapEx) and high operational expenditure (OpEx). By leveraging Netradar's crowdsourced raw data and a service provider's operational support system data, the platform enables proactive, data-driven decision making.

The path to business value through customer and AI-driven network investments

Immediate, direct insight into user experience, crowdsourced from end-user devices, is a compelling business benefit. The ability to analyze data in human language, benchmark user experience directly against competitors, and use closed-loop feedback improves customer experience and reduces churn.

These capabilities translate into optimized CapEx and an improved customer experience that directly boosts revenue and net promoter scores (NPS):

- Targeted capacity additions result in a 10% decrease in CapEx while keeping customer-perceived quality at the same level.

- Proactively addressing customer issues through quick end-to-end root cause analysis improves customer retention by 10-15% and lifts NPS.

There are also operational benefits, unlocked through investment in training and in developing new culture and skills. When service provider personnel have the skills to manage AI-driven network workflows and to integrate new tools safely into their procedures, they can deliver measurable results, including higher network availability and performance. With a clear people strategy tied to business outcomes, service providers can transition to more proactive and strategic operations.
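The network evolution pillar described earlier recommends targeted upgrades by weighing predicted congestion against customer impact and cost, which is what makes the "targeted capacity additions" savings above possible. A minimal sketch of that kind of ranking logic follows; the data, field names, and scoring formula here are illustrative assumptions, not the project's actual model:

```python
from dataclasses import dataclass


@dataclass
class CellSite:
    """One radio cell, with hypothetical crowdsourced and planning inputs."""
    name: str
    predicted_congestion: float  # 0..1, from an ML congestion forecast
    affected_users: int          # users whose QoE degrades when congested
    upgrade_cost: float          # estimated CapEx for a capacity upgrade


def rank_upgrades(cells: list[CellSite]) -> list[CellSite]:
    """Rank candidate upgrades by customer impact per unit of cost.

    Impact is modeled (simplistically) as predicted congestion times the
    number of affected users; dividing by cost favors cheap, high-impact
    upgrades over expensive ones with similar benefit.
    """
    def score(cell: CellSite) -> float:
        return cell.predicted_congestion * cell.affected_users / cell.upgrade_cost

    return sorted(cells, key=score, reverse=True)


cells = [
    CellSite("stadium-east", predicted_congestion=0.9, affected_users=12000, upgrade_cost=300_000),
    CellSite("suburb-7", predicted_congestion=0.4, affected_users=800, upgrade_cost=120_000),
    CellSite("old-town", predicted_congestion=0.7, affected_users=5000, upgrade_cost=90_000),
]

for cell in rank_upgrades(cells):
    print(cell.name)  # old-town first: modest congestion, but cheap to fix
```

In this toy data, the moderately congested but inexpensive "old-town" cell outranks the heavily congested stadium, illustrating why impact-per-cost, not raw congestion, drives the recommendation.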
Continuous improvement

To be most effective, service providers must move beyond outdated methods of measuring network performance. Expensive, time-consuming drive tests and siloed data lead to misdirected capital expenditure and delayed issue resolution. This reactive approach is inefficient and can degrade customer experience. The solution is meaningful insight into user experience, crowdsourced directly from end users' devices. Through industry collaboration, Red Hat is helping make this pragmatic model of quality assurance a reality.
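To make the agentic analytics layer described above more concrete: a natural-language question such as "which cells had the worst streaming experience last week?" would ultimately be translated into SQL over the Iceberg tables and executed through the Trino MCP server. The sketch below only builds such a statement as a string; the table and column names are hypothetical, and the allow-list guard is one possible safeguard, not the project's actual implementation:

```python
def worst_cells_query(metric: str = "video_stall_ratio",
                      days: int = 7,
                      limit: int = 10) -> str:
    """Build a Trino-style SQL report over a hypothetical table of
    crowdsourced QoE samples, as an agent tool might before submitting
    it for execution."""
    allowed = {"video_stall_ratio", "latency_ms", "throughput_mbps"}
    if metric not in allowed:
        # Reject column names an agent might be tricked into requesting.
        raise ValueError(f"unsupported metric: {metric}")
    return (
        "SELECT cell_id, avg({m}) AS avg_{m}, count(*) AS samples\n"
        "FROM qoe_samples\n"
        "WHERE sampled_at >= current_timestamp - INTERVAL '{d}' DAY\n"
        "GROUP BY cell_id\n"
        "ORDER BY avg_{m} DESC\n"
        "LIMIT {n}"
    ).format(m=metric, d=days, n=limit)


print(worst_cells_query())
```

Keeping the query templated and allow-listed, rather than letting the LLM emit arbitrary SQL, is one way to pair the natural-language interface with the security posture the platform calls for.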