The clock is ticking for Communications Service Providers (CSPs). To survive in the telecommunications (telco) industry, every CSP must transition to a TechCo model and beyond. StarHub’s journey shows how it can be done, focusing on 3 non-negotiable pillars: polycloud agility, secure agentic AI, and digital trust.
In this blog, we’ll discuss the massive technological transformation that StarHub, and the telco industry as a whole, is currently undergoing. You’ll also learn why the days of the traditional CSP are behind us, and why the future belongs to the TechCo and beyond.
Polycloud agility: The engine for real-time OpEx and QoE optimization
StarHub has been implementing a major technology transformation for the last 3 years, and we are now well into the second phase. One of the most significant shifts is the move from a hybrid cloud model to a polycloud architecture.
Polycloud is more than just using multiple public clouds. It means having a substantial number of applications and workloads running across various cloud environments, similar to having a horizontal cloud but with the additional intelligence to select the optimum compute environment for each workload in real time. We believe this creates a readiness to adopt AI, especially agentic AI frameworks that can operate from the core to the edge, with multiple cloud providers in between.
The selection criterion is no longer simply "on-premise" or "public cloud," with decisions driven mostly by cost. Now, it's about how we can deliver the best QoE to our users in real time by selecting the best compute, hyperscaler, or edge platform while simultaneously controlling cost. Workloads have diverse demands. Our goal is to move to an architecture that can select the best environment to run these workloads for our consumer and business clients, in real time or near real time. Examples include moving latency-sensitive workloads to the edge, data-analytics-heavy apps to a public cloud, and compliance-driven apps to a private cloud.
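To make this placement logic concrete, here is a minimal sketch of how such a real-time scheduler could reason about workloads. The environment names, thresholds, and workload attributes are illustrative assumptions, not StarHub's actual policy engine:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float   # end-to-end latency budget for acceptable QoE
    compliance: str         # "public-ok" or "private-only"

# Hypothetical candidate environments and rough characteristics
ENVIRONMENTS = {
    "edge":          {"latency_ms": 5,  "compliance": "public-ok",    "cost_per_hour": 3.0},
    "public-cloud":  {"latency_ms": 60, "compliance": "public-ok",    "cost_per_hour": 1.0},
    "private-cloud": {"latency_ms": 25, "compliance": "private-only", "cost_per_hour": 2.0},
}

def place(workload: Workload) -> str:
    """Pick the cheapest environment that satisfies latency and compliance constraints."""
    candidates = []
    for name, env in ENVIRONMENTS.items():
        if env["latency_ms"] > workload.max_latency_ms:
            continue  # cannot meet the QoE (latency) target
        if workload.compliance == "private-only" and env["compliance"] != "private-only":
            continue  # data must stay on a private/on-premise footprint
        candidates.append((env["cost_per_hour"], name))
    if not candidates:
        raise ValueError(f"No environment satisfies {workload.name}")
    return min(candidates)[1]  # lowest OpEx among compliant options

print(place(Workload("ar-gaming", max_latency_ms=10, compliance="public-ok")))        # edge
print(place(Workload("churn-analytics", max_latency_ms=500, compliance="public-ok")))  # public-cloud
print(place(Workload("cii-forensics", max_latency_ms=200, compliance="private-only"))) # private-cloud
```

In production, the same decision would be driven by live telemetry (measured latency, utilization, and pricing) rather than the static tables used here for illustration.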
The AI platform provides comprehensive security mechanisms to mitigate the risks associated with AI models throughout their lifecycle, laying the groundwork for truly autonomous networks. To support a security-focused ecosystem end to end, supply chain security for models and AI applications is being explored, applying existing cloud-native approaches while accounting for AI-specific deviations.
The core motivation? To intelligently manage and reduce OpEx while delivering superior QoE across mobile, broadband, and cloud networks.
From CSP to Trusted Service Provider: The power of trust and identity
For StarHub, a critical national information infrastructure provider, data sovereignty is paramount. Its network assets produce over 700TB of Critical Information Infrastructure (CII) data every day. This data must be stored, processed, and available to government authorities for forensic and threat management purposes, all while adhering to the strictest data governance and security policies.
For certain enterprise clients, local, on-premise computing is a must. StarHub meets this need by deploying its polycloud platform directly on the customer's premises, which StarHub then manages and monitors, ensuring the highest level of service assurance, orchestration, and cluster automation.
StarHub builds trust and transparency by opening its work to scrutiny and democratizing access to safety tools; accelerating innovation and adoption through collaboration; establishing standards, guidelines, and procedures; addressing unique AI security challenges; and taking advantage of its community engagement model and open standards.
This focus on trust and security is what leads to the ultimate evolution: from a CSP to a TechCo and, eventually, to a Trusted Service Provider (TSP).
Strategic and diverse approaches as the design principles
When it comes to security, there are multiple considerations and use cases to understand and manage. As an industry, we must focus our investments and ideas on creating a resilient ecosystem. Red Hat remains a pillar of this innovation, bringing enterprise-grade security to the open source community to ensure these advancements benefit everyone globally. Our approach balances technological innovation with commercial freedom, individual rights, and regulatory and government compliance.
To understand the future of device security, consider a smart home equipped with webcams and mobile surveillance.
- The home uses a digital lock with your personal codes.
- In an emergency, the lock can be overridden by a security provider or designated authorities.
- Importantly, every digital lock must be certified by the security provider before installation to ensure it meets safety standards.
If we apply this analogy to the current technological landscape, the roles shift as follows:
- The house is a mobile or any internet of things (IoT) device.
- The digital lock is an eSIM-enabled device or AI agent.
- The authorities are government regulators.
- The security provider is a telco.
In this model, a telco-approved AI agent monitors the device. It has the authority to intervene or overrule processes if a rogue program attempts to take control of any connected device or the network.
While app stores, such as Google Play, currently certify software before an app is allowed to run, a critical vulnerability remains: runtime updates. Most modern scams occur because platforms lose oversight once an app or browser is running and updated in real time.
eSIM for AI models
Our current SIM cards already act as a powerful foundation of trust. Because they are tied to verified identities and addresses, they act as a natural deterrent to malicious activities. We are now working with regulatory authorities to extend this proven identity-based trust model to the world of AI.
The current AI landscape is expanding rapidly, often fraught with security concerns, especially with the increasing number of new and unknown AI models. To address this, we propose an eSIM-like identification mechanism for every AI model:
- Model ID: We will create an ID that specifies the manufacturer, the software developer, and the model's history.
- Trust score: We will assign a trust score, noting any past incidents or verification status.
When a model connects to our 5G network, it must undergo a rigorous authentication process to receive its unique, trusted ID. This enables us to uphold a higher standard of security through active, real-time monitoring. If a model exhibits suspicious behavior—such as communicating with unapproved IP addresses—we can terminate the connection immediately without disrupting other services and notify the user.
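As a minimal sketch of how such an identity and trust-score check could work, consider the following. The record fields, threshold, and IP allow-list are illustrative assumptions, not the actual registry design:

```python
from dataclasses import dataclass, field

@dataclass
class ModelIdentity:
    """eSIM-like identity record for an AI model (illustrative fields)."""
    model_id: str                                  # unique ID issued after authentication
    manufacturer: str
    developer: str
    history: list = field(default_factory=list)    # past incidents, version changes
    trust_score: float = 0.0                       # 0.0 (untrusted) .. 1.0 (fully verified)

APPROVED_IPS = {"203.0.113.10", "203.0.113.11"}    # example allow-list (TEST-NET addresses)
MIN_TRUST_SCORE = 0.7                              # hypothetical admission threshold

def authenticate(identity: ModelIdentity) -> bool:
    """Admit a model onto the network only if its trust score clears the threshold."""
    return identity.trust_score >= MIN_TRUST_SCORE

def monitor_flow(identity: ModelIdentity, destination_ip: str) -> str:
    """Terminate the model's connection if it talks to an unapproved destination."""
    if destination_ip not in APPROVED_IPS:
        identity.history.append(f"blocked egress to {destination_ip}")
        return "terminate-and-notify"   # cut this session only; other services unaffected
    return "allow"

model = ModelIdentity("mdl-001", manufacturer="Acme AI", developer="Acme Labs", trust_score=0.9)
assert authenticate(model)
print(monitor_flow(model, "198.51.100.42"))   # -> terminate-and-notify
```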
By taking advantage of StarHub's trusted, regulator-vetted AI Service Registry, we secure the platform at the connectivity layer. This allows customers to subscribe to a Verified AI Catalog without having to worry about model security. The StarHub platform becomes the essential layer of trust, enabling a security-hardened digital lifestyle.
Building a supply chain of security-hardened microservices
Software supply chain attacks cost close to US$4.91 million per breach on average and rank as the number two attack vector. These breaches also take the longest to identify and contain, and they have become the most common cause of AI security incidents.
To combat these risks, the Secure Software Development Framework (SSDF) should be used to manage identity and access control. By applying SSDF principles, we can safeguard the entire lifecycle (generation, signing, updating, and distribution) of software bills of materials (SBOMs) for specific products, including eSIM, data, integration, and microservices.
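As a hedged illustration of that lifecycle, a build pipeline could generate, sign, and verify an SBOM for each microservice image using the open source tools syft and cosign. The image reference and key paths below are placeholders, and the exact steps will vary by pipeline:

```python
import subprocess

IMAGE = "registry.example.com/esim-service:1.2.3"   # placeholder image reference
SBOM_FILE = "sbom.cdx.json"
KEY_FILE = "cosign.key"                             # placeholder signing key

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Generation: produce a CycloneDX SBOM for the container image with syft
run(["syft", IMAGE, "-o", f"cyclonedx-json={SBOM_FILE}"])

# 2. Signing: attach the SBOM to the image as a signed attestation with cosign
run(["cosign", "attest", "--key", KEY_FILE, "--type", "cyclonedx",
     "--predicate", SBOM_FILE, IMAGE])

# 3. Verification (for example, at deploy time): check the attestation before admitting the workload
run(["cosign", "verify-attestation", "--key", "cosign.pub", "--type", "cyclonedx", IMAGE])
```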
Compliance must be continuous, built into day-to-day IT operations and user activities. This approach protects the entire organization and ensures that customer data privacy remains the top priority within our trusted eSIM AI models.
Quantum in mind
Our quantum-safe initiative integrates post-quantum cryptography (PQC) into core products using NIST-approved algorithms (e.g., ML-KEM and ML-DSA). This protects data against the “harvest now, decrypt later” threat, while enabling a gradual transition through hybrid modes and ensuring crypto-agility as standards evolve.
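Here is a minimal sketch of such a hybrid mode, assuming the liboqs-python (oqs) and cryptography packages are available: a classical X25519 exchange is combined with an ML-KEM-768 encapsulation, and both shared secrets feed a single key derivation, so the session remains protected as long as either scheme holds.

```python
import oqs  # liboqs-python; assumes liboqs is built with ML-KEM support
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Classical part: X25519 ephemeral key agreement
client_x = X25519PrivateKey.generate()
server_x = X25519PrivateKey.generate()
classical_secret = client_x.exchange(server_x.public_key())

# Post-quantum part: ML-KEM-768 encapsulation (the name may be "Kyber768" on older liboqs builds)
with oqs.KeyEncapsulation("ML-KEM-768") as server_kem:
    server_pq_public = server_kem.generate_keypair()
    with oqs.KeyEncapsulation("ML-KEM-768") as client_kem:
        ciphertext, client_pq_secret = client_kem.encap_secret(server_pq_public)
    server_pq_secret = server_kem.decap_secret(ciphertext)

assert client_pq_secret == server_pq_secret

# Hybrid: derive the session key from both secrets, so breaking one scheme alone is not enough
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-pqc-demo",
).derive(classical_secret + server_pq_secret)
print(session_key.hex())
```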
Blind computing
Lastly, blind computing ensures data is never exposed during storage, transfer, or processing, using 2 key components:
- Crypto/blockchain (the vault) to manage data at rest: It ensures your data is stored immutably and effectively shredded (encrypted and sharded) across a decentralized network, so no single entity holds the complete file.
- Confidential Containers (a CNCF sandbox project), the safe room to handle data in use: It provides a security-focused environment where that data can be decrypted and processed without the host machine seeing it.
Combining crypto/blockchain and Confidential Containers (CoCo) yields the following workflow (see the sketch after this list):
- Store (crypto): Encrypted data is uploaded to a decentralized network. Only the mathematical "fingerprint" (hash) is recorded on the blockchain for proof of ownership.
- Fetch (CoCo): Your CoCo, running inside a security-hardened hardware enclave, pulls the encrypted data from the storage network.
- Process (TEE): The trusted execution environment (TEE) retrieves the decryption key, decrypts the data only inside the CPU memory, performs the computation (e.g., AI training), and immediately re-encrypts the result.
- Save: The encrypted result is sent back to the decentralized storage.
- The storage provider sees only encrypted shards.
- The cloud compute provider sees only encrypted memory.
- You get the results without building your own datacenter.
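To illustrate the shape of this store/fetch/process/save loop, here is a toy Python sketch. The TEE boundary is simulated as an ordinary function, and the decentralized storage and blockchain ledger are represented as dictionaries; a real deployment would use Confidential Containers on TEE-capable hardware and an actual storage network, and all names here are illustrative:

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# --- Store (crypto): encrypt client-side, upload ciphertext, record only the hash on-chain ---
data_key = AESGCM.generate_key(bit_length=256)   # stays with the data owner / key broker
nonce = os.urandom(12)
plaintext = b"subscriber usage records for AI training"
ciphertext = nonce + AESGCM(data_key).encrypt(nonce, plaintext, None)

decentralized_storage = {"blob-001": ciphertext}                           # stand-in for the storage network
blockchain_ledger = {"blob-001": hashlib.sha256(ciphertext).hexdigest()}   # proof of ownership only

# --- Fetch (CoCo) + Process (TEE): decrypt and compute only inside the enclave boundary ---
def run_inside_tee(blob_id: str, key: bytes) -> bytes:
    """Simulates work done inside a trusted execution environment.
    The host only ever sees the encrypted inputs and outputs."""
    blob = decentralized_storage[blob_id]
    assert hashlib.sha256(blob).hexdigest() == blockchain_ledger[blob_id]  # integrity check
    inner_nonce, ct = blob[:12], blob[12:]
    cleartext = AESGCM(key).decrypt(inner_nonce, ct, None)   # exists only in enclave memory
    result = cleartext.upper()                               # placeholder for the real computation
    out_nonce = os.urandom(12)
    return out_nonce + AESGCM(key).encrypt(out_nonce, result, None)  # re-encrypt before leaving

# --- Save: only the encrypted result goes back to decentralized storage ---
decentralized_storage["result-001"] = run_inside_tee("blob-001", data_key)
print("stored encrypted result:", decentralized_storage["result-001"][:16].hex(), "...")
```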
The transition from a Communications Service Provider to a TechCo, and then to a Trusted Service Provider, is no longer optional; it's a strategic imperative. StarHub's journey shows that success is defined by the convergence of 3 pillars: polycloud agility for real-time service optimization, security-focused agentic AI built with non-negotiable guardrails, and cross-domain automation for maximum OpEx reduction. Achieving this level of hardened, automated convergence requires a unified, open hybrid cloud platform. This ability to safeguard and streamline the entire digital lifecycle will be the most critical KPI for sovereign TechCos in the years ahead.
Explore the full story of StarHub’s evolution with these video resources:
The adaptable enterprise: Why AI readiness is disruption readiness