Around 2012, there was a furore in Indian capital markets over the increasing use of algo trading, where trades are executed by algorithms fed into computers. But by the time the public at large became aware of it, exchanges had already set up colocation facilities on their premises and had allowed large institutional investors to run their trading algos directly through these facilities.

Proximity as well as automation improved speed, giving larger institutional players and brokerages the upper hand in trade execution. In high-frequency trading, the algorithms flooded exchange servers with orders, crowding out orders punched in by individual investors.

Regulators were rather slow to realise the growing clout of machine-driven trading and the risks it posed. The first circular from SEBI on algo programs was issued in 2012, almost three years after direct market access was provided to foreign portfolio investors (FPIs) and large players, and two years after colocation facilities were set up by exchanges.

By 2012, algo trading already accounted for one-third of exchange turnover in India and almost 70 per cent of turnover in the US markets. Its usage has continued to grow, with algo trading currently accounting for more than 60 per cent of turnover in India's cash and futures and options markets. Algos, however, do have benefits. They enhance market liquidity, aid price discovery and, to a large extent, reduce volatility.

Will the advent of artificial intelligence disrupt capital markets in a similar way? A chapter in the IMF’s recent Financial Stability Report, on advances in AI and their implications for capital market activities, has some answers. It finds that capital market players are proceeding cautiously with the adoption of AI. But once it catches on, trading turnover and volatility will increase, and so will the concentration of trading among larger players. Regulators need to be ready with a framework to ensure a level playing field.

Slow progress in AI adoption

While AI and GenAI have excited many industries in the last few years, machine learning and AI-based computations have been used by financial markets since the first decade of this millennium.

But the AI tools currently used in capital markets deploy simple algorithms whose outcomes are predictable. Investment managers are proceeding slowly in using GenAI where large amounts of money are involved. While AI-generated inputs are being used in taking investment decisions, doing away with human intervention completely is still some way off.

An experiment is under way in the US with AI-driven ETFs (exchange traded funds), where investment selection and adjustments are purely AI-driven. But total assets garnered by these ETFs are just a little over $800 million, a paltry 0.002 per cent of US equity market capitalisation. This implies a cautious stance among investors as well as investment managers.

There are many use-cases where advanced AI tools can be employed. They can be used to analyse and interpret content from social media and other public forums where people give their views, and to generate trading signals based on these views. AI can help discover the price of illiquid assets by using the price movements of other asset classes and instruments. Forward-looking indicators can also be developed and tested with the help of these tools.
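To make the first use-case concrete, here is a minimal, hypothetical sketch in Python of how posts about a stock might be scored for sentiment and aggregated into a crude buy/sell/hold signal. The keyword-based scorer, the Post structure and the threshold are illustrative assumptions only; a production system would rely on trained language models and far richer, cleaner data.

```python
# A simplified, hypothetical sketch of turning public posts into a trading signal.
# The sentiment scorer is a toy keyword counter; real systems would use trained
# language models and much more careful data handling.

from dataclasses import dataclass
from typing import List

POSITIVE = {"beat", "upgrade", "bullish", "strong", "buy"}
NEGATIVE = {"miss", "downgrade", "bearish", "weak", "sell"}

@dataclass
class Post:
    ticker: str
    text: str

def sentiment_score(text: str) -> int:
    """Crude sentiment: +1 per positive keyword, -1 per negative keyword."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def trading_signal(posts: List[Post], ticker: str, threshold: int = 2) -> str:
    """Aggregate post sentiment for one ticker into a BUY/SELL/HOLD signal."""
    total = sum(sentiment_score(p.text) for p in posts if p.ticker == ticker)
    if total >= threshold:
        return "BUY"
    if total <= -threshold:
        return "SELL"
    return "HOLD"

if __name__ == "__main__":
    sample = [
        Post("ABC", "Quarterly results beat estimates, analysts turn bullish"),
        Post("ABC", "Brokerage upgrade after strong order book"),
        Post("XYZ", "Margins miss expectations, outlook weak"),
    ]
    print(trading_signal(sample, "ABC"))  # BUY
    print(trading_signal(sample, "XYZ"))  # SELL
```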

Risks from adoption

There are, however, many risks from advanced AI. The main risk is that AI-driven programs can spot trading opportunities and react much faster, driving up volumes on exchanges. The IMF research shows that portfolio turnover grew much faster in AI-powered ETFs than in other ETFs.

Higher volumes can be a boon as well as a bane. Higher liquidity is good, but if most of these transactions are spoofs (orders not intended to be executed), they can clutter exchange servers, making it more difficult for the orders of small investors to go through.

There is also the risk of AI-driven algos colluding with each other and manipulating the system if they have access to price-sensitive information or enjoy a latency advantage by being housed in a colocation facility.

There could be a risk of herding or market concentration if AI models developed by a few vendors become popular and are used extensively. With individual investors unable to afford these systems, they will be unable to execute trades as fast, making it almost impossible for their orders to go through.

There is also the possibility of market manipulation through deepfakes or misinformation generated by AI programs. “Some participants mentioned market fragility issues — including the drying up of market liquidity, excess volatility, and flash crashes — arising from fast-paced decision making and ineffectiveness of guardrails,” says the Financial Stability Report.

Regulatory oversight

Regulators need to watch this space closely and be ready with regulations to prevent market manipulation, trading disruption or inequity in trading access.

Availability of trading data is critical for testing AI-driven models, and companies with access to such data can be ahead of others in creating trading systems that gain popularity. In the colo scam, it was found that Ajay Shah accessed data from the NSE to test an algo trading system. Rules for the provision of trading data by stock exchanges need to ensure that all players have access to similar data.

Regulators should keep in mind the warning in the IMF report that, though the unit cost of training AI models has declined, the 10 largest models are getting more complex, leading to higher training costs. This creates the risk of concentration, as a few private sector developers can dominate the market. Companies that have access to non-public trading and client data have a significant advantage, as this data is essential for testing the models.

Circuit breakers and trading margins should be reviewed at more frequent intervals once AI-based trading grows, to check rapid unidirectional moves that can disrupt markets. Regular stress tests and risk mapping should be carried out on the dependency on data, AI models and the exchange and clearing infrastructure. A regulatory framework needs to be drawn up for third-party AI service providers, and cyberattack protocols need to be strengthened.