AI-native telecom: where AI brings money and where it stays a toy
AI works well in three telecom categories and badly in two. A breakdown across seven real use cases with the economics of each — from support deflection and fraud detection to pilots with low ROI.
Honesty test
Before any AI project there is one question to ask: if you took the best experts and paid them what the AI project costs, what result would they deliver? If the result is comparable, you need experts, not AI. If AI is faster or cheaper, you need AI.
This test eliminates around 60% of AI initiatives at an early stage. Much of what is launched today as an “AI project” is either automation that plain programming has handled for years, or analyst work that does not need ML. Relabelling it as AI only makes it more expensive and harder to deliver, with no ROI to show for it.
The remaining 40% are scenarios where AI genuinely produces a meaningful advantage. In telecom they group into seven use cases with different economics. Three of them pay back within 6-12 months and scale. Two deliver only with long-term investment in data. Two more usually stay in pilot and never reach production.
Three use cases where AI pays back fast
Support deflection. A large share of contact-centre call volume is repeat questions (balance, tariff, how to top up, how to add a package). These calls can be moved to a chatbot with NLU. With a properly built model, the deflection rate reaches 30-50% of repeat-question volume. One contact-centre interaction costs several thousand UZS, so cutting a category's volume by 30% is a significant effect.
Conditions for success. A clean FAQ database and a 12+ month contact history for training. A clear boundary: the chatbot does X, the human does Y, with handover including context. Willingness to invest in continuous training (the model degrades within 6-9 months without updates). Payback — 6-9 months when scoped correctly.
What does not work. A “universal AI assistant” that handles everything. It will do everything mediocrely. Better: specialised bots for the 5-10 most frequent scenarios, with guaranteed handover to a human on any deviation.
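The scoping and handover rule above can be sketched in a few lines. The intent names, confidence threshold, and routing fields here are invented for illustration; a production bot would sit behind a trained NLU model rather than a hard-coded list.

```python
# Illustrative handover rule for a scoped support bot.
# SCOPED_INTENTS and HANDOVER_THRESHOLD are assumptions for this sketch.

SCOPED_INTENTS = {"balance", "tariff", "top_up", "add_package"}
HANDOVER_THRESHOLD = 0.75  # assumed NLU confidence floor

def route(intent: str, confidence: float, context: dict) -> dict:
    """Let the bot answer only in-scope intents at high confidence;
    otherwise hand over to a human, passing the dialogue context along."""
    if intent in SCOPED_INTENTS and confidence >= HANDOVER_THRESHOLD:
        return {"handler": "bot", "intent": intent}
    # Handover keeps the context so the agent does not restart the dialogue.
    return {"handler": "human", "intent": intent, "context": context}

print(route("balance", 0.92, {}))                    # bot handles it
print(route("roaming_dispute", 0.91, {"msgs": 3}))   # out of scope -> human
```

The point of the sketch is the explicit boundary: the bot never improvises outside its scenario list, and the human always receives the accumulated context.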
Fraud detection. Telecom has several fraud patterns where AI catches more than rule-based systems. Dealer fraud (fictitious activations for commission). SIM swap attacks followed by money withdrawal. Bonus abuse (repeated promo code use through manipulation). Subscription fraud (services activated without customer consent). Anomaly detection with ML catches patterns that are hard to describe with rules.
Conditions for success. A large historically labelled set of fraud cases. Case management — detection is half the work, a process is needed afterwards. Continuous learning — fraudsters adapt.
Payback depends on current losses. If the operator spends 1-3% of revenue on fraud, AI cuts that by 30-60%. At a large operator this is a major saving.
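To make the dealer-fraud example concrete, here is a deliberately minimal anomaly flag: a z-score over peer dealers' daily activation counts. Real deployments use multivariate ML models over many signals; the dealer IDs, counts, and threshold below are invented for illustration.

```python
# Toy anomaly flag on dealer activation counts (z-score vs peer dealers).
# A single extreme dealer inflates the sample stdev, so the cut-off is
# kept moderate; this is a sketch, not a production detector.
from statistics import mean, stdev

def flag_anomalies(activations: dict[str, int], z_cut: float = 2.5) -> list[str]:
    """Return dealer IDs whose daily activation count deviates from the
    peer mean by more than z_cut standard deviations."""
    values = list(activations.values())
    mu, sigma = mean(values), stdev(values)
    return [d for d, v in activations.items() if sigma and abs(v - mu) / sigma > z_cut]

daily = {"d01": 42, "d02": 39, "d03": 45, "d04": 41, "d05": 40,
         "d06": 38, "d07": 44, "d08": 43, "d09": 40, "d10": 430}
print(flag_anomalies(daily))  # ['d10']: ten times the peer volume
```

A rule like “flag above N activations” breaks as soon as volumes shift; the statistical version adapts to the peer group, which is the core argument for ML over static rules in this category.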
Churn prediction for retention. A model predicting churn probability in the next 30-60 days hands the retention team a prioritised customer list. Without the model the team works reactively (the customer has already announced intent to leave). With the model — proactively.
Conditions for success. Good signals (see the churn war room article). An operating model for action, because without one the model produces forecasts without consequence. Continuous validation, because the model degrades when customer behaviour changes or MNP (mobile number portability) arrives.
Payback — 9-12 months with proper integration into retention processes. The model itself does not generate money; the saving from retained customers exceeds model cost.
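The step from scores to a work queue is where most churn projects succeed or fail. One common choice, sketched below with invented figures, is to rank by expected revenue at risk (churn probability times monthly margin) rather than raw probability, and cap the list at the retention team's daily capacity.

```python
# Turning churn scores into a prioritised retention queue.
# Customer IDs, probabilities, and margins are invented for this sketch.

def retention_queue(scores: list[dict], capacity: int) -> list[str]:
    """Return customer IDs to contact, highest expected loss first,
    capped at the team's daily capacity."""
    ranked = sorted(scores, key=lambda c: c["p_churn"] * c["margin"], reverse=True)
    return [c["id"] for c in ranked[:capacity]]

scored = [
    {"id": "c1", "p_churn": 0.80, "margin": 10_000},  # risk 8_000
    {"id": "c2", "p_churn": 0.30, "margin": 90_000},  # risk 27_000
    {"id": "c3", "p_churn": 0.60, "margin": 20_000},  # risk 12_000
]
print(retention_queue(scored, capacity=2))  # ['c2', 'c3']
```

Note that the highest-probability customer (c1) is not the highest priority: a moderate-risk, high-value customer carries more revenue at risk. This is exactly the prioritisation the model adds over reactive retention.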
Two use cases with long ROI
Next Best Action / offer optimisation. ML decides which offer to show to which customer in which channel at which moment. Theoretically the most attractive AI application in telecom. In practice the hardest.
Difficulties. Historical data on thousands of offer combinations is needed. Experimental infrastructure (A/B tests with control group) is needed. Tolerance for experiments with a long feedback loop (often results visible after 2-3 months). Integration with the decision engine. An experimentation culture — not “let’s send to everyone”.
If all conditions are present, conversion grows by 15-30% relative to rule-based campaigns. That matters. But the conditions rarely all hold together. More often the model is built but used only partially, and the effect stays at 5-10%.
Payback — 18-24 months with the right experimental culture. Without it, it may never pay back.
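The experimental infrastructure mentioned above comes down to one honest measurement: the model arm against a rule-based control arm. The conversion counts below are invented, chosen to land on the low end of the 15-30% uplift range cited above.

```python
# Measuring an NBA model against a rule-based control group.
# Counts are invented for illustration.

def uplift(conv_model: int, n_model: int, conv_ctrl: int, n_ctrl: int) -> float:
    """Relative conversion uplift of the model arm over the control arm."""
    rate_model = conv_model / n_model
    rate_ctrl = conv_ctrl / n_ctrl
    return (rate_model - rate_ctrl) / rate_ctrl

# e.g. 575 conversions out of 10k with the model vs 500 of 10k rule-based
print(f"{uplift(575, 10_000, 500, 10_000):.0%}")  # 15%
```

Without a held-out control group there is nothing to divide by, which is why “let’s send to everyone” makes the model's effect unmeasurable regardless of how good it is.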
Network experience prediction. A model predicting where network degradation will occur in the coming hours and turning that into proactive maintenance or customer notifications.
Difficulties. Per-cell per-minute network data is needed. Integration with physical operations. Accuracy must be high enough that proactive actions are justified. It must connect to commercial actions.
When mature, a meaningful reduction in customer-impacting incidents and effective maintenance planning. Payback 18-30 months.
Two use cases that often stay in pilot
Voice biometrics for authentication. Using voice as part of multi-factor authentication. Technically works. Practically — fragile. Telephone call quality varies. The user gets frustrated when not recognised. Conversion to an alternative method (PIN, OTP) is high. In mass telecom, voice biometrics rarely produces ROI greater than a simple password manager or risk-based authentication.
When it works. Premium customer service where stakes are high and friction tolerance exists. In mass scenarios — rarely.
Generative AI for marketing content. Generation of SMS, push, and email content through an LLM. Technically simple. In practice it gives a marginal lift with significant operational risk (the text may be inaccurate, off-brand, or legally problematic).
When it works. Fast iteration on ad copy at large volume. In standard offer campaigns the marketer’s time saving is small relative to risk.
What is needed before AI
Before launching an AI project the data foundation has to be checked.
Customer master data. One customer — one ID. If billing, CRM and app have different IDs for the same customer, the model trains on errors. MDM first, then ML.
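A quick data-readiness check before any model work: what share of customers resolve to a single ID across systems? The sketch below compares billing and CRM mappings keyed by phone number; the field names, MSISDNs, and IDs are invented for illustration.

```python
# Crude MDM health metric: share of phone numbers that map to the same
# customer ID in billing and CRM. All identifiers here are invented.

def id_match_rate(billing: dict[str, str], crm: dict[str, str]) -> float:
    """Fraction of phone numbers present in both systems whose customer
    IDs agree."""
    shared = billing.keys() & crm.keys()
    matched = sum(1 for msisdn in shared if billing[msisdn] == crm[msisdn])
    return matched / len(shared)

billing_ids = {"998901112233": "A17", "998901112244": "A18", "998901112255": "A19"}
crm_ids     = {"998901112233": "A17", "998901112244": "B07", "998901112255": "A19"}
print(f"{id_match_rate(billing_ids, crm_ids):.0%}")  # 67%
```

If this number is well below 100%, the model will train on contradictory histories for the same person, which is the concrete meaning of “MDM first, then ML”.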
Event collection. AI on batch data works poorly. Real-time or near-real-time events are mandatory. With nightly extracts only, every model reaction lags by a day.
Data labelling. Supervised learning needs labels. If the operator has no process for labelling fraud or churn cases — process first, model second.
Compliance and governance. The personal data law applies to AI as to any data processing. Consent must cover data use for personalisation. The audit log must capture automated decisions. Without these systems AI creates regulatory risk.
Decision authority. AI recommends. Who takes the final decision and bears responsibility for it? Without a human owner, AI produces forecasts that lead nowhere.
Discussion points for the committee
How many AI projects have been launched in the last 18 months and how many are in production? If the ratio is worse than 1 in 3, the issue is in the pilot-to-production process, not in use case selection.
In which three operator processes is most human time being spent today on repetitive tasks with structured data? These are the first AI candidates.
Which data is not in the right quality to run AI on the priority use case? That is the first investment, not a model.
Who in the team has both ML understanding and a deep grasp of the operator’s business? If nobody — a hybrid role is needed (head of AI or chief data officer), not an extra ML engineer.
When AI is not a priority
If the operator’s core processes run on manual approvals and spreadsheets, AI does not fix that. Basic workflow automation first, then AI.
If data quality is poor and the team is not ready to invest 12-18 months in data foundation, AI will produce results the team cannot trust.
If the company has no experimentation culture and every decision is taken “the way it always was”, AI recommendations will be ignored. Technology without culture delivers zero ROI.
If the budget covers a one-time AI project without continuing costs (training data updates, model retraining, monitoring) — the model will go stale in 6-12 months and the project will stall.
How SamaraliSoft can help
AI Use Case Selection & Operating Model — analysis of AI portfolio applicability to the operator. Selection of 1-2 use cases for a 90-day pilot. Measurement framework for honestly evaluating ROI. A 12-18 month roadmap with go/no-go gates. An honest answer on which use cases are not yet ready for the operator and why.
Related reading
- /en/insights/telecom-subscriber-intelligence-operating-model/ — operating model around data
- /en/use-cases/telecom-churn-war-room-mnp/ — retention in the MNP era
- /en/insights/telecom-ai-first-use-case/ — choosing the first AI use case
- /en/insights/telecom-anti-scam-layer/ — anti-scam at network level