Curated by THEOUTPOST
On Thu, 14 Nov, 4:03 PM UTC
5 Sources
[1]
Event Report: SEBI Platform and Influencer Regulations
Disclaimer: This content generated by AI & may have errors or hallucinations. Edit before use. Read our Terms of use

On November 8, MediaNama conducted a virtual discussion on 'SEBI Platform and Influencer Regulations'. The discussion focused on what the recently proposed framework for recognising 'Specified Digital Platforms' could mean for SEBI's jurisdictional scope, the ad and content space, safe harbour for platforms, and more. The discussion also explored the various ways the proposed AI/ML filters could affect content and ads.

The Securities and Exchange Board of India has proposed to recognise 'specified digital platforms' (SDPs) to streamline the dissemination of financial advice. Experts at MediaNama's discussion, however, pointed out that the vague language of the proposed regulation not only leaves room for regulatory overreach, but the regulation itself might also be repetitive and redundant in the context of existing regulations. The proposed 'specified digital platforms' offer SEBI a structured channel to authorise financial content; however, this also centralises significant regulatory power, potentially limiting the diversity of financial viewpoints online. Discussants warned of a chilling effect, as content creators may self-censor to avoid inadvertent regulatory violations, which could reduce innovation and restrict financial education. Furthermore, SEBI's assumption that registration ensures fraud prevention may overlook instances where regulated institutions themselves have participated in misleading practices. SEBI also seems to be following 'disguised legislation', a tactic becoming increasingly popular with government bodies, where regulations are introduced through circulars rather than formal rules, thus bypassing standard legislative scrutiny. The framework's stipulations risk limiting diverse voices in financial commentary, as independent advisors face increased regulatory burdens compared to established media.
Additionally, SEBI's control over financial education content, where it suggests inducement, further blurs the lines, potentially stifling open financial discourse and innovation across digital platforms. Speaking about the effectiveness of AI/ML-based solutions for financial misinformation, discussants pointed out that these solutions can work by identifying behavioural patterns rather than relying solely on content analysis. However, AI's limitations in distinguishing between educational and advisory content, as seen with SEBI's proposed regulations, mean that securities experts are needed to help develop algorithms that track securities-related content accurately. Further, SEBI's prescriptive approach also places a strain on platforms. For instance, self-declared advertising poses a detection issue, especially when influencers classify their own content as ads, raising questions about the efficacy of keyword- and signal-based AI in identifying securities ads. One of the discussants argued that the belief in a technological solution for everything, especially by regulators like the IT Ministry, may lead to overly simplistic approaches that overlook the broader, cascading impacts of deploying AI to filter complex financial content. There was concern that SEBI's proposed regulatory framework for monitoring financial content could set a precedent for other sectoral regulators to implement similar speech restrictions. This might lead health or legal regulators, for example, to require that only qualified professionals make public statements in their fields, limiting broader participation in public discourse. Additionally, the increasing number of prescriptive regulations across sectors could impose a significant compliance burden on platforms, making affordable advertising difficult and potentially hindering access to economically viable marketing, especially for smaller businesses and users.
The broad and somewhat vague requirements set by SEBI create additional challenges. Platforms typically operate on a "best effort" basis for moderating content, but SEBI's regulations would require compliance "to the satisfaction of SEBI," leaving much up to interpretation and causing confusion. This broad mandate, coupled with the pressure to comply within a 24-hour turnaround, could force platforms to add multiple layers of moderation. If content does not clearly violate securities regulations, it may require extra review steps, making it challenging for platforms to meet the strict time limits without risking the removal of legitimate content.

India's regulatory landscape for digital content, especially in the financial sector, faces challenges as various entities and ministries propose overlapping frameworks. While SEBI's regulations aim to address misinformation and fraud, existing laws like the Information Technology Act, 2000, and the misleading-advertisement guidelines under the Consumer Protection Act already cover many of these areas. Some argue that additional regulations from SEBI create redundancies and could benefit from better inter-regulatory coordination. SEBI's draft regulations also raise free speech concerns by imposing broad monitoring requirements and content moderation obligations on platforms. This approach could lead to pre-censorship, undermining the intermediary status of platforms, as they would need to pre-approve financial content before publication, discussants argued. Such proactive moderation risks stifling legitimate discourse, particularly around personal financial commentary, which may appear as advice but falls under free speech. Additionally, the rules may inadvertently favour large, registered entities, like banks, over independent voices on platforms such as YouTube, which often provide diverse and knowledgeable perspectives.
By requiring SEBI registration for financial content, the regulations might restrict access to a wide array of financial insights, limiting open discussion and potentially harming consumer education in the long run.

The discussion saw participation from Google, Amazon, Info Edge, HSBC, Snap Inc., Wipro, KPMG, Takshashila Institution, Koan Advisory Group, CUTS, Deloitte, Shardul Amarchand Mangaldas, Saraf and Partners, IndusLaw, Ikigai Law, Broadband India Forum, Apollo 24/7, Nagarro, Jagran New Media, Spice Route Legal, The Asia Group, The Quantum Hub, The Dialogue, IT for Change, NLSIU, The Caravan, Apco Worldwide, Sarvada Legal, DeepStrat, The Agni Inc., Mettle Legal, Genesys International Corporation Ltd, Quinte Financial Technologies, WebOSTA Services, JNU, Department of Law (Central University of Kashmir), Malayala Manorama, Aakhya India, Australian High Commission, University of Queensland, Evoc and more. MediaNama hosted this discussion with support from Meta.
[2]
On SEBI, regulating online content and free speech #Nama
In October this year, the Securities and Exchange Board of India (SEBI) released a consultation paper aimed at regulating unregistered "finfluencers." The proposal suggests recognising certain digital platforms as "Specified Digital Platforms" (SDPs). It outlines measures these platforms must take to prevent illegal activities, such as offering financial advice or making performance claims about securities without proper registration. SEBI recommends the use of AI and machine learning tools to detect unlawful content, the issuance of verification badges for registered entities, and the creation of reporting and action mechanisms for illegal content within defined time limits. Additionally, SEBI expects platforms to share data with it to promote transparency in addressing securities-related violations.

Speaking at MediaNama's recent discussion, "SEBI Platform and Influencer Regulations", Natasha Agarwal, Senior Research Fellow, TrustBridge Rule of Law Foundation; Deepak Shenoy, Founder and CEO, Capital Mind; Bhargavi Zaveri-Shah, Researcher, Financial Regulation, National University of Singapore; Vakasha Sachdev, Senior Manager, Government Affairs and Communications at Logically; and Puneeth Nagaraj, Partner, Public Policy and Regulatory Affairs, Shardul Amarchand Mangaldas & Co, engaged in a broader conversation about balancing the regulation of financial content with the protection of free speech. Nikhil Pahwa, founder of MediaNama, introduced the challenge of controlling online speech for a massive population such as India's. The key issue is that speech regulation often gets delegated to platforms, which are left to decide what should be allowed or removed. Deepak Shenoy said that, at some level, this battle has been lost even by governments.
He states, "This funda (idea) that somebody will decide for you what you need to hear, I think, should be something of the past. We should not be trying to regulate what people should hear. That is basically what dictatorships do, and we are not one." Shenoy emphasised that regulation should be limited to harmful or illegal speech, rather than blanket censorship. Pahwa read out a comment by an anonymous attendee, which stated: "Unlike political discourse which falls under the ambit of free speech, financial advice carries tangible economic risks and can result in significant monetary losses for individuals acting on such advice. The lack of accountability and regulatory oversight in the promotion of investment products has led to real-world financial harm as many retail investors, often lacking the expertise to scrutinise these recommendations, have suffered due to inaccurate or biased information. This comparison fails to recognize that the financial industry imposes stringent requirements for a reason. Anyone offering financial products or investment advice must register and be held to professional and legal standards to ensure the protection of public funds. Allowing unqualified individuals to recommend stocks without oversight would compromise investor protection mechanisms and erode trust." Shenoy rejected the comparison between political and financial disinformation. He observed that political misguidance poses a far greater threat to society than financial fraud, arguing that freedom of speech should extend to both, with people making their own decisions. Shenoy stated, "I disagree with absolutely every single line of that entire paragraph that you read out. If we were to choose the wrong people to lead us, that is way worse than economic damage that is done by any standard... I think you choose the wrong leaders, you will lose not just your money, you will lose your peace of mind, probably your freedom, and way more than anything else."
Bhargavi Zaveri-Shah stated that financial markets thrive on diverse information, suggesting that even misleading information can aid price discovery. She added that existing regulations, such as those enforced by SEBI, already cover fraudulent financial practices and can be applied to online spaces: "One public good that the financial markets generate is a price. The process of price discovery. The process of price discovery benefits from more information rather than less information. For every piece of wrong information, there will be counter views that will be coming in. This is well published in academic research in the financial market space that actually more information helps improve price discovery, not degrade it." Puneeth Nagaraj questioned the effectiveness and fairness of using AI tools to monitor content, especially when it comes to preemptively censoring financial advice. He pointed out that such measures could infringe on free speech and blur the line between platforms as intermediaries and as content regulators. "Pre-censorship is not allowed in India. Many Supreme Court cases have held this. By mandating questionable AI/ML solutions to proactively monitor content, and given the broad wording of the regulation, I can't speak to the technology part of it, but it would essentially mean you look through possibly all content, or all content that relates to financial sector information, and then sift through it before permitting it on your platform," he said. It essentially means platforms would need to review all content, or at least all finance-related content, before allowing it on their site. This approach would not only undermine the platform's role as an intermediary (since intermediaries are not supposed to control the content posted by users), but it would also infringe upon the free speech rights of individuals, as they would not expect their content to be filtered out in advance.
Nagaraj explains that India recognizes commercial speech as part of free speech under Article 19, but with certain conditions, such as the requirement for public interest. He notes that commercial speech is usually subject to more scrutiny than other types of speech. However, he argues that the issue lies not in the regulation of unregistered content but in the proactive monitoring and content-blocking measures being proposed; these actions are being suggested for a problem that could be addressed through other means. To justify censorship, a much higher threshold is needed, as outlined in Article 19, which covers grounds like public order, morality, and state security. Nagaraj argues that the proposed measures do not meet these criteria, and even if they did, they would require a more narrowly tailored approach. Vakasha Sachdev critiques the inconsistency in the treatment of financial advice across platforms. He questions why unregistered advice on social media is considered a problem while the same advice in traditional media like newspapers is acceptable. If someone writes a financial advice column in a newspaper, even without proper registration, it is not considered investment advice under the regulations. This creates a contradiction: the same advice is problematic when offered on a social media platform but allowed when presented through more conventional mass media outlets. Sachdev states, "It's a very weird thing where you're saying, okay, when it's done online on a social media platform, then it is a problem. But if it's done through any other traditional mass public consumption system, it's fine." Zaveri-Shah raises concerns about AI models being used to censor content, pointing out that these models often "hallucinate" (make mistakes), which could lead to the incorrect removal of legitimate content. She questions the reliability of AI for such sensitive tasks, especially without a clear level of accuracy.
She asks, "AI models hallucinate, okay? And you want to pre-censor speech on the basis of AI models that actually hallucinate. I think that's a very high threshold for you to meet, that the scale of the problem has reached to the proportion that I'm going to use AI models which are known to hallucinate. What is the level of accuracy you want on this?" Natasha Agarwal highlights a significant issue in the current regulatory approach: influencers and content creators have no direct recourse to engage with SEBI if their content is removed. Instead, they can only address grievances through platform-specific processes, which may not be sufficient. "The SEBI Draft Circular doesn't foresee any complaints by the influencer themselves. They are only talking about disputes or complaints on the platform. Arguably, you can see that the influencers themselves can only take action against the intermediary under the intermediary guidelines and then approach the grievance officer under those guidelines. But there's no way for them to engage with SEBI," she says. Sachdev points out the potential legal risk of over-censorship. He refers to Section 230 in the United States, which was enacted to prevent platforms from excessively censoring content out of fear of legal repercussions, and suggests that a similar problem could occur in India with stricter financial content regulation. He explains that fundamental rights under Articles 19 and 21 can be enforced not just against the state but also against private entities, raising concerns that platforms, acting out of caution, would tend towards over-censorship rather than under-censorship.
[3]
Is SEBI's Digital Platforms Regulation Justified Under the SEBI Act?
In October this year, the Securities and Exchange Board of India (SEBI) released a consultation paper to regulate unregistered "finfluencers" by recognizing certain digital platforms as "Specified Digital Platforms" (SDPs). The paper outlines preventive and curative measures these platforms must adopt to prevent illegal activities, such as unregistered entities offering financial advice or making performance claims related to securities. SEBI suggests using AI and machine learning tools to identify illegal content, providing verification badges to registered entities, and implementing mechanisms to report and act on unlawful content within specified timeframes. Platforms must also collaborate with SEBI to share data and ensure transparency in tackling securities-related violations.

Speaking at MediaNama's recent discussion titled "SEBI Platform and Influencer Regulations", Natasha Agarwal, Senior Research Fellow, TrustBridge Rule of Law Foundation; Deepak Shenoy, Founder and CEO, Capital Mind; and Bhargavi Zaveri-Shah, Researcher, Financial Regulation, National University of Singapore, weighed in on the scope of regulation under the SEBI Act. One of the key questions emerging from this regulation is whether SEBI's regulation of digital platforms, especially influencers in the financial sector, is within the scope of the SEBI Act, considering its indirect control over content outside its traditional jurisdiction, including paid content such as advertising. Agarwal states, "The scope of SEBI's power under the SEBI Act is fairly wide ... But we do need to question the manner in which they are trying to regulate digital platforms.
They have proposed issuing this circular under Section 11(1) of the SEBI Act, which gives SEBI the power to protect the interests of investors and promote regulation of the stock market by such measures as it thinks fit." She further elaborated, "Under 11(2), there is some guidance on the types of measures SEBI can implement. They can register and regulate intermediaries or depositories, promote investor education, and prohibit fraudulent trade practices. While other jurisdictions also issue guidelines on content regulation, SEBI's approach seems to bypass the procedure in the SEBI Act, where regulations must be approved by parliament. Instead, SEBI is utilising Section 11(1) to expand its powers without parliamentary approval." Agarwal adds, "Yes, I think SEBI is cognizant of the fact that it has wide powers under the SEBI Act. Of late, there has been a trend towards expanding its scope." This mirrors, she says, how SEBI has been widening the definition of 'connected persons' under its insider trading regulations. "SEBI is looking to expand its scope of powers and is using vague language under Section 11(1) to achieve that," she states. Deepak Shenoy, Founder and CEO, Capital Mind, elaborates, "Technically, this means I cannot hire an influencer who gives securities advice. For example, if I hire someone who has videos giving trading tips, like options trades, I cannot associate with them in a commercial capacity." However, he states that SEBI operates in a way that almost everyone is painted as a violator under the law, and enforcement will be based on SEBI's discretion. "But in general, the way SEBI operates is that almost everybody will be painted as a criminal in the wordings of the law. It will be up to them to enforce," he states.
The real challenge comes with the platforms themselves, especially since they can't effectively monitor all content on services like Telegram. Shenoy further adds that the law could even impact platforms like WhatsApp, where one might forward videos or give trading advice. "The way the rules are written, it basically encompasses anything..." he states. "Right now, the way the rules are written, you can't do anything. You can't operate in any meaningful way at all," Shenoy adds. Shenoy explains that this method is often used to promote securities by giving trading advice, like suggesting specific options trades, and offering this advice for free on platforms like YouTube. However, this practice isn't much different from what happens on TV channels, where similar advice is given. "This is no different from what happens on TV channels. But TV channels don't get affiliate revenue. They get advertising revenue. Anybody going on TV and giving advice on buying and selling, which happens on TV all the time, would technically be an advisor." Therefore, associating with those individuals would constitute a problem. Furthermore, he says that SEBI has been pressuring platforms like YouTube to take down videos from unregistered influencers giving investment advice. "YouTube has recently said yes and taken down a bunch of videos from people. Those people are complaining: why are you taking down my videos? TV channel videos do the same thing, which is tell people to buy and sell certain securities, and why are you taking down my videos, which are both free and I don't get any affiliate revenue. I just make ad revenue from YouTube..." Shenoy states that the line between advice and education is quite nuanced. For example, if someone says, 'Here's what I did with my money,' it could be seen as education, but if they say, 'I made a profit doing this, you should try it,' that becomes more of an inducement.
SEBI seems to be stricter on content that promotes financial actions or gives specific advice. "All these laws, unfortunately, create the situation of extreme amounts of power that resides inside of the regulation or the regulator. In that sense, it creates that layer of arbitrariness, which we should not," states Shenoy. Bhargavi Zaveri-Shah, Researcher, Financial Regulation, National University of Singapore, raises concerns about SEBI's increasing powers: "Let's not forget that under the SEBI Act, every violation of a SEBI directive is a criminal violation. So SEBI chooses not to prosecute? It's benevolence. But actually it can choose to prosecute. So it's not just about a monetary penalty and they'll ban you from the markets and so on and so forth. It can choose to initiate criminal proceedings, and that's not something to be taken lightly." She adds that regulations already exist to tackle misinformation, such as the Investment Advisers Regulations, 2013 and the Research Analysts Regulations, 2014; there are existing frameworks to deal with fraudulent or misleading advice in the securities market. SEBI's proposals further widen its existing powers: the scope of what SEBI is banning keeps increasing, and the number of people who are allowed to give investment advice keeps getting narrower. Zaveri-Shah adds that SEBI's approach could potentially reduce competition by narrowing the pool of influencers and advisors. "If you see the trajectory, the powers have been only growing. This means that the competition in this space is likely to reduce," she states. Nikhil Pahwa, Founder-Editor of MediaNama, also shared an anonymous comment stating that SEBI's guidelines are a much-needed step to protect investor interest, particularly with the influx of new retail investors. Many first-time investors are drawn to financial products promoted by influencers on social media, which increases the risk of misinformation, misleading claims, and conflicts of interest.
SEBI's efforts to regulate influencer activities by enforcing clear disclosures, ensuring compliance, and restricting unqualified advice are, in the commenter's view, a needed step in the right direction, ensuring transparency and accountability. "SEBI should come down very hard on these platforms so that they can't continue to allow content without any moderation. There is a need to reduce these instances as retail investors may get swayed by very high return promises, and these companies will claim safe harbor," the commenter stated.
[4]
With Proposed SEBI Regulations, Platforms Must Use AI to Detect Securities Content, But Will It Work? #Nama
In October this year, the Securities and Exchange Board of India (SEBI) came out with a consultation paper that instructs platforms on how they can prevent fraud, impersonation, claims by unregistered entities, and the presence of unregistered entities by relying on AI/ML-based solutions. Platforms also have to identify content (including advertisements) that is related to the securities market, created by SEBI-registered entities or their agents, provides education in the securities market, or directs people to other mediums such as Telegram, WhatsApp, phone, e-mail, etc. They must implement said solutions to become a specified digital platform (SDP) under the regulator. One of the key questions emerging from this regulation is whether AI/ML-based solutions can effectively identify fraud and misinformation pertaining to the securities market.

Speaking at MediaNama's recent discussion about SEBI regulations, Vakasha Sachdev, Senior Manager, Government Affairs and Communications at Logically, said that while it is possible to track misinformation and disinformation using AI, the fact that SEBI is asking platforms to identify all sorts of other securities-related content could prove challenging. "We did a bit of research on this at Logically; we were trying to understand the scope of financial misinformation and disinformation in India. What we found is that, yes, there are a lot of patterns that these people follow. But now this is the interesting thing. What we looked at were the cases where there were clear attempts to mislead people, clear attempts to deceive people, the really fraudulent activity. Now, for that stuff, you can track it in different ways," he said, adding that the solution doesn't even have to look at the content itself but rather at patterns of behavior to identify such individuals.
Then the platform can have an expert in place to look at content. SEBI's framework asks platforms to go beyond this, he said, explaining that while people could build models to identify all sorts of securities content, accuracy would be the sticking point. "You could build a model that would look at securities content. But the problem is in terms of that model being able to accurately find this distinction between education, [advisory] and what a regular person can say about this securities market. That, I think, will be a little bit difficult," Sachdev added. He mentioned that AI models would only be able to provide a probabilistic rating of whether something is financial advisory or educational content, and relying on the AI's rating 100% could be dangerous. Similarly, within advertising content as well, AI can only assess the distinction between advisory and education with human intervention, Sachdev said. "You're going to have people employed by the platforms, who are like SEBI experts, regulation experts, to manage this process. They'll have to then help build the algorithms that the platforms are going to use to track this stuff," he explained. Sachdev explained that Logically has seen coordinated behavior to mislead people around Meta advertisements. He mentioned that these bad actors make accounts and groups with very minimal followers. "They'll put a post which will be talking about, 'Okay, you can use Bitcoin to invest in this and you can use that'. They put out a case for what they're basically suggesting you can do. Then they refer you to a closed room," he said, adding that the majority of the deceptive practices actually happen on encrypted channels on platforms like Telegram. However, to get people to those encrypted channels, these bad actors will first lure them in with content elsewhere. "You'll see advertisements, bought advertisements on Meta, which you'll see in the ad library, and they'll be then targeted at a small town.
It'll be targeted in a particular area where they know there isn't that understanding of retail investment," Sachdev mentioned. To stop these bad actors from defrauding people, one has to track suspicious ads from these accounts with limited followers that make references to the securities market in tier 2 and tier 3 cities, he said. MediaNama's editor Nikhil Pahwa suggested that legitimate social media influencers also use advertising and other methods to influence people. To this, Sachdev responded that platforms can tell apart legitimate and deceptive influence operations because the latter use tactics like bots and referrals to other platforms. When asked how equipped platforms currently are to handle this deceptive content, Sachdev explained that with Meta, fact-checkers signed up with Meta's third-party fact-checking program (3PFC) see all the content that users flag as misinformation, as well as content flagged by Meta's algorithms. Pahwa mentioned that in the early years of the internet, people avoided platform censorship using leet speak, where numbers or special characters replace letters in a word; so, for instance, a word like "need" would become something like "n33d". "I think in a sense, the AI/ML is one of the ways in which you can address that problem. Because earlier you were relying on a non-adaptive system that can only look at specific words, specific phrases, and then you moderate based on that," Sachdev said, adding that with AI, you can build the model to encompass words which use alternative spellings. "The problem is, of course, it will keep changing. There are people who will find newer and newer ways to get around this," he explained. He said that this can make it harder for platforms to take action against content within SEBI's prescribed timeline, which requires platforms to identify content within 24 hours and block or take down problematic content within 72 hours.
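Pahwa's leet-speak example lends itself to a concrete illustration. The sketch below shows the older, non-adaptive approach Sachdev describes: normalise common character substitutions, then match keywords. Everything here (the substitution map, the blocklisted phrases, the function names) is an illustrative assumption, not any platform's actual ruleset.

```python
# Hypothetical leet-speak normaliser: map common character substitutions
# back to letters before keyword matching, so "n33d" matches "need".
LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "8": "b", "@": "a", "$": "s",
})

def normalise(text: str) -> str:
    """Lowercase the text and undo common leet-speak substitutions."""
    return text.lower().translate(LEET_MAP)

def contains_keyword(text: str, keywords: set[str]) -> bool:
    """Check whether any blocklisted phrase survives normalisation."""
    norm = normalise(text)
    return any(k in norm for k in keywords)

# Illustrative phrases only; a real blocklist would be far larger.
BLOCKLIST = {"guaranteed returns", "sure-shot tips"}

assert normalise("N33D") == "need"
assert contains_keyword("Gu4r4nt33d r3turn$, DM now!", BLOCKLIST)
```

As Sachdev notes, any fixed map like this is brittle: each new substitution requires a new entry, which is why he argues that adaptive models and behavioural signals hold up better against evolving evasions.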
For ads, SEBI requires platforms to identify problematic content within 24 hours and block it before it goes live. Sachdev said that ideally, companies should be able to build their AI systems to go through short links and identify what the link points to. "It is a little bit spotty. Sometimes it will work, sometimes it won't. [However] you can, I think, build that capability for sure. But again, there will be challenges with that. I think in terms of the false positives, you are in a situation where all your tech on this will only be able to really give you a probabilistic rating. The wider you make the net, the more false positives you're going to get out of it," he pointed out. To this, Pahwa asked him whether the model could identify links in screenshots or images. "I can create a short URL or a t.me link in an image and upload it. When it's a mix of text and image, where there might be a URL and an image, how feasible is it to be able to screen for that as well?" he questioned. Sachdev responded that companies could build OCR (optical character recognition) into their AI models, which should be able to scan such content. However, the fact that the content Pahwa described would be a mix of image and text could make it trickier to tackle, he said. "We've seen some amount of success with being able to do OCR reading of text on images. That can be done, and that can be done on an AI/ML system as well. But I mean, it is a big challenge. It's again going to raise so many issues, with people trying to find more and more ways to get around those things," he said, emphasising that platforms are better off trying to identify patterns of deceptive behavior rather than looking at specific posts.
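The short-link problem Sachdev describes has a simple deterministic core, separate from the harder OCR step. Assuming text has already been extracted from a post or an image, a pattern-matching pass can at least surface referral-style links for review. A minimal sketch; the domain list is an illustrative assumption, not an official or complete one.

```python
import re

# Illustrative list of shortener and messaging-app link domains that
# referral-style posts commonly use; a real system would maintain a
# much larger, regularly updated list.
REFERRAL_DOMAINS = ("t.me", "bit.ly", "tinyurl.com", "wa.me")

LINK_RE = re.compile(
    r"(?:https?://)?(?:"
    + "|".join(re.escape(d) for d in REFERRAL_DOMAINS)
    + r")/\S+",
    re.IGNORECASE,
)

def find_referral_links(text: str) -> list[str]:
    """Return referral-style links found in (possibly OCR-extracted) text."""
    return [m.group(0) for m in LINK_RE.finditer(text)]

assert find_referral_links("Join our premium calls: t.me/quickprofits123 now") == ["t.me/quickprofits123"]
assert find_referral_links("a plain sentence with no links") == []
```

This catches only what the extraction step recovered cleanly; as Sachdev says, results are "spotty", and evasions (links split across images, obscure shorteners) fall through, so the output is best treated as a review queue rather than a verdict.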
"If someone is just posting images of particular kinds, and then you then cross-link that with their other posts and the people who they're sharing and the people who they're connected with, you could come up with a TTP framework -- a Techniques Tactics Procedures framework, which you could try to identify using an AI system," he said. However, you would have to spend a lot of time training these systems, and not all platforms would be able to do that in-house, he explained. Tamoghna Goswami, Head of Policy at ShareChat mentioned that the way SEBI defines advertising in its consultation paper is fairly broad. "How the word advertisement is defined here is that that is intended to promote the sale and in addition, can include a content that is identified or classified," she said, adding that this definition could also cover self-declared advertisements by influencers. She questioned that it was unclear how a platform would be able to identify such a self-declared ad. Goswami pointed out that as per the consultation paper, SEBI prescribes methods for how platforms need to identify content, stating that this may not work well with a technological solution like AI/ML.
[5]
Can SEBI's rules to use AI to detect securities content work? #Nama
In October this year, the Securities and Exchange Board of India (SEBI) came out with a consultation paper that instructs platforms on how they can use AI/ML-based solutions to prevent fraud, impersonation, claims by unregistered entities, and the presence of unregistered entities. Platforms also have to identify content (including advertisements) that is related to the securities market, is created by SEBI-registered entities or their agents, provides education about the securities market, or directs people to other mediums such as Telegram, WhatsApp, phone, email, etc. They must implement these solutions to become a specified digital platform (SDP) under the regulator. One of the key questions emerging from this regulation is whether AI/ML-based solutions can effectively identify fraud and misinformation pertaining to the securities market. Speaking at MediaNama's recent discussion about SEBI regulations, Vakasha Sachdev, Senior Manager, Government Affairs and Communications at Logically, said that while it is possible to track misinformation and disinformation using AI, the fact that SEBI is asking platforms to identify all sorts of other securities-related content could prove challenging. "We did a bit of research on this at Logically; we were trying to understand the scope of financial misinformation and disinformation in India. What we found is that, yes, there are a lot of patterns that these people follow. But now this is the interesting thing. What we looked at were the cases where this was clearly attempts to mislead people, clear attempts to deceive people, the really fraudulent activity. Now, for that stuff, you can track it in different ways," he said, adding that the solution doesn't even have to look at the content itself but rather at patterns of behavior to identify such individuals. 
The platform can then have an expert in place to look at the content. SEBI's framework asks platforms to go beyond this, he said, explaining that while people could build models to identify all sorts of securities content, accurately distinguishing between categories would be difficult. "You could build a model that would look at securities content. But the problem is in terms of that model being able to accurately find this distinction between education, [advisory] and what a regular person can say about this securities market. That, I think, will be a little bit difficult," Sachdev added. He mentioned that AI models would only be able to provide a probabilistic rating of whether something is financial advisory or educational content, and relying on the AI's rating 100% could be dangerous. Similarly, within advertising content as well, AI can only assess the distinction between advisory and education with human intervention, Sachdev said. "You're going to have people employed by the platforms, who are like SEBI experts, regulation experts, to manage this process. They'll have to then help build the algorithms that the platforms are going to use to track this stuff," he explained. Sachdev said that Logically has seen coordinated behavior to mislead people around Meta advertisements. He mentioned that these bad actors make accounts and groups with very minimal followers. "They'll put a post which will be talking about, 'Okay, you can use Bitcoin to invest in this and you can use that'. They put out a case for what they're basically suggesting you can do. Then they refer you to a closed room," he said, adding that the majority of the deceptive practices actually happen on encrypted channels on platforms like Telegram. However, to get people to those encrypted channels, these bad actors will first lure them in with content elsewhere. "You'll see advertisements, bought advertisements on Meta, which you'll see in the ad library, and they'll be then targeted at a small town. 
It'll be targeted in a particular area where they know there isn't that understanding of retail investment," Sachdev mentioned. To stop these bad actors from defrauding people, one has to track suspicious ads from these low-follower accounts that make references to the securities market in tier 2 and tier 3 cities, he said. MediaNama's editor Nikhil Pahwa pointed out that legitimate social media influencers also use advertising and other methods to influence people. To this, Sachdev responded that platforms can tell apart legitimate and deceptive influence operations because the latter use tactics like bots and referrals to other platforms. When asked how equipped platforms currently are to handle this deceptive content, Sachdev explained that with Meta, fact-checkers signed up with Meta's third-party fact-checking program (3PFC) see all the content that users flag as misinformation, as well as content flagged by Meta's algorithms. Pahwa mentioned that in the early years of the internet, people avoided platform censorship using leet speak, where numbers or special characters replace letters in a word. So, for instance, a word like "need" would become something like "n33d". "I think in a sense, the AI/ML is one of the ways in which you can address that problem. Because earlier you were relying on a non-adaptive system that can only look at specific words, specific phrases, and then you moderate based on that," Sachdev said, adding that with AI, you can build the model to encompass words which use alternative spellings. "The problem is, of course, it will keep changing. There are people who will find newer and newer ways to get around this," he explained. He said that this can make it harder for platforms to act within SEBI's prescribed timeline, which requires platforms to identify content within 24 hours and block/take down problematic content within 72 hours. 
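The leet-speak evasion Pahwa describes is exactly why static keyword filters break down. A minimal sketch of the normalisation idea is below; the substitution table and the blocklist terms are invented for illustration, and a real adaptive AI/ML system would go far beyond such a fixed mapping.

```python
# Illustrative leet-speak normalisation before keyword matching.
# The character map and blocklist are hypothetical examples, not any
# platform's actual rules; an adaptive model would learn new variants.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

BLOCKLIST = {"guaranteed returns", "sure shot tips"}  # hypothetical terms

def matches_blocklist(text: str) -> bool:
    """Normalise common character substitutions, then keyword-match."""
    normalised = text.lower().translate(LEET_MAP)
    return any(term in normalised for term in BLOCKLIST)

print(matches_blocklist("Gu4r4nt33d returns on this stock!"))  # True
print(matches_blocklist("A normal market update"))             # False
```

A plain substring filter would miss "Gu4r4nt33d"; the normalisation pass catches this round of evasion, but, as Sachdev notes, evaders then move to substitutions the table does not cover, which is the cat-and-mouse dynamic that motivates adaptive models.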
For ads, SEBI requires platforms to identify problematic content within 24 hours and block it before it goes live. Sachdev said that, ideally, companies should be able to build their AI systems to follow short links and identify what the linked page says. "It is a little bit spotty. Sometimes it will work, sometimes it won't. [However] you can, I think, build that capability for sure. But again, there will be challenges with that. In terms of the false positives, you are in a situation where all your tech on this will only be able to really give you a probabilistic rating. The wider you make the net, the more false positives you're going to get out of it," he pointed out. To this, Pahwa asked him whether the model could identify links in screenshots or images. "I can create a short URL or a t.me link in an image and upload it. When it's a mix of text and image, where there might be a URL and an image, how feasible is it to be able to screen for that as well?" he questioned. Sachdev responded that companies could build OCR (optical character recognition) into their AI models, which should be able to scan such content. However, the fact that the content Pahwa described would be a mix of image and text could make it trickier to tackle, he said. "We've seen some amount of success with being able to do OCR reading of text on images. That can be done, and that can be done on an AI/ML system as well. But it is a big challenge. It's again going to raise so many issues, with people trying to find more and more ways to get around those things," he said, emphasising that platforms are better off trying to identify patterns of deceptive behavior rather than looking at specific posts. 
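Once an upstream OCR engine (such as Tesseract) has recovered text from an image, spotting the referral links Pahwa describes reduces to pattern matching. A rough sketch, under stated assumptions: the list of referral domains is hypothetical, and a production system would also resolve shorteners to their destination, which is the "spotty" part Sachdev flags.

```python
import re

# Hypothetical shorteners and off-platform referral domains used in the
# Telegram/WhatsApp funnel pattern described in the discussion.
REFERRAL_DOMAINS = ("t.me", "bit.ly", "tinyurl.com", "wa.me")

# Matches bare or schemed URLs in free text recovered by OCR.
URL_RE = re.compile(r"(?:https?://)?([\w.-]+\.[a-z]{2,})(/\S*)?", re.IGNORECASE)

def referral_links(ocr_text: str) -> list:
    """Return referral-style links found in text recovered from an image."""
    hits = []
    for match in URL_RE.finditer(ocr_text):
        domain = match.group(1).lower()
        if domain in REFERRAL_DOMAINS:
            hits.append(match.group(0))
    return hits

print(referral_links("Join our VIP tips group: t.me/stocktips123"))
```

This only covers text the OCR step actually recovers; links rendered in distorted fonts, split across lines, or embedded in busy imagery are exactly the mixed image-and-text cases Sachdev says remain hard.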
"If someone is just posting images of particular kinds, and then you then cross-link that with their other posts and the people who they're sharing and the people who they're connected with, you could come up with a TTP framework -- a Techniques Tactics Procedures framework, which you could try to identify using an AI system," he said. However, you would have to spend a lot of time training these systems, and not all platforms would be able to do that in-house, he explained. Tamoghna Goswami, Head of Policy at ShareChat mentioned that the way SEBI defines advertising in its consultation paper is fairly broad. "How the word advertisement is defined here is that that is intended to promote the sale and in addition, can include a content that is identified or classified," she said, adding that this definition could also cover self-declared advertisements by influencers. She questioned that it was unclear how a platform would be able to identify such a self-declared ad. Goswami pointed out that as per the consultation paper, SEBI prescribes methods for how platforms need to identify content, stating that this may not work well with a technological solution like AI/ML.
SEBI's new regulations for digital platforms aim to control financial misinformation but raise concerns about free speech, AI-based content moderation, and regulatory overreach.
The Securities and Exchange Board of India (SEBI) has proposed a new regulatory framework aimed at controlling the dissemination of financial advice on digital platforms. This move comes in response to the growing influence of unregistered "finfluencers" and the potential risks they pose to investors [1].
SEBI's proposal introduces the concept of "Specified Digital Platforms" (SDPs) and outlines several measures, including AI/ML-based detection of fraud, impersonation, and content from unregistered entities, identification of problematic content within 24 hours, and blocking or takedown within 72 hours.
Critics argue that SEBI's approach may infringe on free speech rights and lead to over-censorship. The broad language of the regulations could potentially limit diverse voices in financial commentary and stifle open financial discourse [1].
While AI tools are proposed for content monitoring, experts question their effectiveness in distinguishing between educational and advisory content. The limitations of AI in accurately identifying securities-related content raise concerns about potential over-censorship or missed violations [4].
Some experts question whether SEBI's regulation of digital platforms falls within the scope of the SEBI Act. The use of Section 11 of the Act to expand SEBI's powers without parliamentary approval has been criticized as a form of "disguised legislation" [3].
The regulations could significantly impact how platforms operate and how content creators share financial information, with discussants warning that creators may self-censor to avoid inadvertent regulatory violations.
While SEBI aims to protect investors from fraudulent practices, there are concerns that overly strict regulations could hinder innovation in financial education and commentary. The challenge lies in striking a balance between investor protection and maintaining a diverse, open financial discourse online.
As the debate continues, stakeholders from various sectors are calling for a more nuanced approach that addresses the legitimate concerns of investor protection while preserving the benefits of open financial discussion in the digital age.
© 2025 TheOutpost.AI All rights reserved