The story so far: To stamp out the widespread use of fraudulently obtained SIM cards in financial and other cyber scams across the country, the Department of Telecommunications (DoT) has begun rolling out an artificial intelligence-based facial recognition tool called the "Artificial Intelligence and Facial Recognition solution for verification of telecom SIM subscribers" (or ASTR, pronounced "astra", the Hindi word for weapon). ASTR has already been deployed in several States, including Haryana, Gujarat, Maharashtra, Tamil Nadu and Kerala. It is worth noting that while the DoT has publicised successful takedowns of fraudulent cellular connections using ASTR this year, India still has neither a personal data protection regime nor any AI-specific regulation.
Why is artificial intelligence used to detect telecom fraud?
On May 25, Punjab Police said they had blocked 1.8 million SIM cards allegedly activated with fake identities. Of these, 500 connections had been obtained using a single person's photograph but with different accompanying KYC (Know Your Customer) details, such as names, proofs of address and so on. Haryana's Nuh (formerly Mewat) district was described as the "new Jamtara" (the Jharkhand region notorious for such scams) when police arrested 66 suspects for allegedly cheating around 28,000 people across the country of ₹100 crore using 99 fake SIM cards. Meanwhile, Karnataka lost ₹363 crore to cyber fraud in 2022, an average of nearly ₹1 crore per day. According to the DoT, a large proportion of financial cyber frauds are carried out using fraudulent cellular connections, exploiting the anonymity they offer.
With around 117 crore (1.17 billion) subscribers, India has the second largest telecommunications ecosystem in the world. Manually identifying and comparing this volume of subscriber verification documents, such as photographs and proofs of identity, is a daunting task, which is why the DoT says it intends to use ASTR, a facial recognition-based "Indigenous and NextGen Platform", to analyse the entire subscriber base of all telecom service providers (TSPs). The department also notes that the traditional text-based analysis currently available is limited to finding similarities between proofs of identity or address and verifying that the information is correct; it cannot search through photographic data to identify similar faces.
What is ASTR and how does it detect fake SIM connections?
Facial recognition is an algorithm-based technology that creates a digital map of a face by identifying and mapping a person's facial features, and then matches that map against a database to which it has access.
In 2012, the DoT instructed all TSPs to share their subscriber databases, including users' photographs, with the department. ASTR analyses the subscriber images in this database and uses facial recognition technology (FRT) to sort them into groups of similar-looking faces. In the next step, it compares the textual subscriber details associated with those images, using an approximate string-matching technique described as "fuzzy logic" to identify roughly similar user names or other KYC information and group them together. The final step is to determine whether the same face (person) has purchased SIM cards under multiple names, dates of birth, bank accounts, proofs of address or other KYC documents. ASTR also detects when more than eight SIM connections have been taken in one person's name, which DoT rules do not allow. ASTR's facial recognition works by mapping 68 landmark features on the front of the face, and it characterises two faces as similar when there is a match of at least 97.5%.
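As a rough illustration only (the DoT has not published ASTR's code, models or internal thresholds), the Python sketch below mimics the pipeline as described: a generic cosine-similarity comparison over precomputed face-embedding vectors stands in for the 68-landmark face matcher, Python's standard difflib plays the role of the "fuzzy" string matcher, and the fuzzy-match cutoff is an invented parameter. Only the 97.5% face-match threshold and the eight-SIM limit come from the reported details.

```python
# Illustrative sketch only: ASTR's actual implementation is not public.
from difflib import SequenceMatcher
from itertools import combinations

import numpy as np

FACE_MATCH_THRESHOLD = 0.975  # 97.5% match, as reported for ASTR
MAX_SIMS_PER_PERSON = 8       # limit cited in the article

def faces_match(emb_a: np.ndarray, emb_b: np.ndarray) -> bool:
    """Cosine similarity over face-embedding vectors; a stand-in for
    ASTR's (unpublished) 68-landmark comparison."""
    sim = float(np.dot(emb_a, emb_b) /
                (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return sim >= FACE_MATCH_THRESHOLD

def names_roughly_match(a: str, b: str, cutoff: float = 0.8) -> bool:
    """Approximate ("fuzzy") match on KYC text; the cutoff is invented."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff

def flag_same_face_different_kyc(subscribers: list[dict]) -> list[tuple[int, int]]:
    """Flag pairs of records where the faces match but the KYC names do
    not, i.e. one face appearing under multiple identities."""
    flagged = []
    for (i, s1), (j, s2) in combinations(enumerate(subscribers), 2):
        if (faces_match(s1["embedding"], s2["embedding"])
                and not names_roughly_match(s1["name"], s2["name"])):
            flagged.append((i, j))
    return flagged

def over_sim_limit(sims_per_person: dict[str, int]) -> list[str]:
    """Identities holding more connections than the permitted number."""
    return [p for p, n in sims_per_person.items() if n > MAX_SIMS_PER_PERSON]
```

A production system would, of course, group faces with a trained deep-learning embedding model and cluster at scale rather than compare every pair; the sketch only makes the described logic concrete.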
What concerns are associated with the use of facial recognition AI?
The use of FRT raises problems of misidentification due to the inaccuracy of the technology. An FRT algorithm trained on certain datasets is limited by that training, i.e. it can make technical errors because of occlusion (a partial or complete obstruction of the face in an image), poor lighting, facial expression, ageing and so on. FRT errors are also linked to the under-representation of certain groups of people in training datasets. Studies of FRT systems in India indicate a disparity in error rates when identifying Indian men versus Indian women. Extensive research worldwide has found that accuracy rates drop sharply depending on race, gender, skin colour and so on. This in turn can lead to a false positive, where an individual is misidentified as someone else, or a false negative, where an individual is not verified as themselves.
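To make the two error types concrete, here is a toy illustration (all scores invented): any matcher that declares "same person" above a fixed similarity cutoff will produce a false positive when an impostor happens to score high, and a false negative when a genuine user scores low because of occlusion, lighting or other factors.

```python
# Toy illustration (all scores invented): one fixed similarity threshold
# produces both kinds of verification error.
THRESHOLD = 0.975

# (is the claimed identity genuine?, similarity score from the matcher)
trials = [
    (True, 0.99),    # genuine user, clear photo      -> true positive
    (True, 0.91),    # genuine user, poor lighting    -> false negative
    (False, 0.98),   # impostor with a similar face   -> false positive
    (False, 0.40),   # unrelated person               -> true negative
]

for genuine, score in trials:
    accepted = score >= THRESHOLD
    if accepted and not genuine:
        outcome = "false positive: misidentified as someone else"
    elif not accepted and genuine:
        outcome = "false negative: not verified as themselves"
    else:
        outcome = "correct decision"
    print(f"genuine={genuine}, score={score:.2f} -> {outcome}")
```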
Other ethical concerns about FRT relate to privacy, consent and mass surveillance. In the Puttaswamy case, the Supreme Court recognised informational autonomy as an important facet of the right to privacy guaranteed under Article 21. FRT systems consume and compute large amounts of facial biometric data, both for training and in operation. In many cases, an individual may not have control over the processing of their data, or may not even be aware of it. This can lead, and has led, to unlawful arrests and exclusion from social security schemes.
In the case of ASTR, MediaNama, a digital policy news organisation, noted in its reporting that users were given no public notice, at the time of obtaining a connection, that the data they provided to TSPs would be run through ASTR. An RTI application filed by the publication with the DoT did not reveal any information about how ASTR protects data or how long customer data is retained, and the DoT also declined to provide a copy of the contract signed with the technology's developer. This raises questions about privacy and consent, even if ASTR is said to rest on the principle of presumed consent mentioned in the now-withdrawn data protection Bill.
What legal framework governs this technology in India?
In several jurisdictions around the world where FRT is used, by governments as well as private players, operators must comply with local personal data protection regulations in the absence of AI-specific rules.
For example, in the European Union, where a dedicated AI law is currently being drafted, FRT tools must comply with the strict privacy and security rules of the General Data Protection Regulation, in force since 2018. The same holds in Canada.
India, however, has no data protection law: the government withdrew the Personal Data Protection Bill, 2019, last year after a Joint Parliamentary Committee recommended sweeping changes. The Centre released a new draft of the Bill this year, but it has not yet been tabled in Parliament. Nor does India have any FRT-specific regulation.
However, NITI Aayog has published several papers outlining a national strategy for harnessing the potential of AI responsibly. These state that the use of FRT should be consensual and voluntary, and should never become mandatory; that it should be limited to cases where the public interest and constitutional morality can be reconciled; and that improved efficiency through automation should not, by itself, be considered sufficient justification for deploying FRT. It remains to be seen whether ASTR meets these standards and the tests laid down in the 2017 Puttaswamy ruling.