The Untested Assumptions in SEC Chair Gensler’s Pivot to AI

Jack Solowey

Crypto startups and venture capitalists are not the only ones pivoting to artificial intelligence (AI). Recently, SEC Chair Gary Gensler delivered remarks to the National Press Club outlining his concerns about AI’s role in the future of finance.

In those high‐​level remarks, Gensler shared his anxiety that AI could threaten macro‐​level financial stability, positing that “AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator.”

This fear largely rests on a pair of debatable assumptions: one, that the market for AI models will be highly concentrated, and two, that this concentration will cause financial groupthink. There are important reasons to doubt both premises. Before the SEC, or any other regulator, puts forward an AI policy agenda, the assumptions on which it rests must be closely scrutinized and validated.

Assumption 1: Foundation Model Market Concentration

Chair Gensler’s assessment assumes that the market for AI foundation models will be highly concentrated. Foundation models, like OpenAI’s GPT‑4 or Meta’s Llama 2, are pre‐​trained on reams of data to establish predictive capabilities and can serve as bases for “downstream” applications that further refine the models to better perform specific tasks.
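To make the upstream/downstream distinction concrete, here is a minimal, purely illustrative Python sketch of a downstream application loading an open base model and using it out of the box. The specific model name, prompt, and Hugging Face tooling are assumptions chosen for illustration, not details drawn from Gensler’s remarks.

```python
# Illustrative sketch: a downstream application building on an open,
# pre-trained foundation model (requires the Hugging Face "transformers"
# library and, for this particular model, acceptance of Meta's license).
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # an upstream, pre-trained base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# The generic base model can now be prompted (or further refined)
# for a specific downstream task.
prompt = "Summarize the main risks disclosed in this 10-K excerpt:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```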

Because upstream foundation models are data‐​intensive and have the potential to leverage downstream data for their own benefit, Gensler is concerned that one or a few model providers will be able to corner the market. It’s understandable that one might assume this, but there are plenty of reasons to doubt the assumption.

The best arguments for the market concentration assumption are that natural barriers to entry, economies of scale, and network effects will produce a small number of clear market leaders in foundation models. For instance, pre-training can require vast amounts of data, computing power, and money, potentially advantaging a small number of well-resourced players. In addition, network effects (i.e., platforms with more users are more valuable to those users) could further entrench incumbents, either because big-tech leaders already have access to more training data from their user networks, because the model providers that attract the most users will gain access to more data with which to further improve their models, or because of some combination of the two.

But the assumption that the market for foundation models inevitably will be concentrated is readily vulnerable to counterarguments. For one, the recent AI surge has punctured theories about the perpetual dearth of tech platform competition. With the launch of ChatGPT, OpenAI—a company with fewer than 400 full-time employees earlier this year—became a household name and provoked typically best-in-class firms to scramble in response. And while it’s true that OpenAI has entered into a strategic partnership with Microsoft, OpenAI’s rise undermined the conventional wisdom that the same five technology incumbents would enjoy unalloyed dominance everywhere forever. The emergence of additional players, like Anthropic, Inflection, and Stability AI, to name just a few, provides further reason to question the idea of a competition-free future for AI models.

In addition, the availability of high‐​quality foundation models with open‐​source (or other relatively permissive) licenses runs counter to the assumed future of monopoly control. Open‐​source licenses typically grant others the right to use, copy, and modify software for their own purposes (commercial or otherwise) free of charge. The AI tool builder Hugging Face currently lists tens of thousands of open‐​source models. And other major players are providing their own models with open‐​source licenses (e.g., Stability AI’s new language model) or relatively permissive “source available” licenses (e.g., Meta’s latest Llama 2). Open‐​source model availability could have a material impact on competitive dynamics. A reportedly leaked document from Google put it starkly:

[T]he uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch. I’m talking, of course, about open source.

Lastly, Gensler’s vision of a concentrated foundation model market itself rests in large part on the assumption that model providers will continuously improve their models with the data provided to them by downstream third-party applications. But this, too, should not be taken as a given. Such arrangements are a possible feature of a model provider’s terms, but not an unavoidable one. For example, OpenAI’s data usage policies (as of March 2023) for those accessing its models through an application programming interface (API), as opposed to through OpenAI’s own applications (like ChatGPT), limit OpenAI’s use of downstream data to improve its models:

By default, OpenAI will not use API data to train OpenAI models or improve OpenAI’s service offering. Data submitted by the user for fine‐​tuning will only be used to fine‐​tune the customer’s model.

Indeed, providers of base models may not always benefit from downstream data, as fine-tuning a model for better performance in one domain could undermine its performance in others (a dramatic form of this phenomenon is known as “catastrophic forgetting”).

Again, this is not to say that foundation model market concentration is impossible. The point is simply that there also are plenty of reasons the concentrated market Gensler envisions may not come to pass. Indeed, a source Gensler cited put it well: “It is too early to tell if the supply of base AI models will be highly competitive or concentrated by only a few big players.” Any SEC regulatory intervention premised on the idea of a non‐​competitive foundation model market would similarly be too early.

Assumption 2: Foundation Model Market Concentration Will Cause Risky Capital Market Participant Groupthink

The second assumption underpinning Gensler’s financial fragility fear is that a limited number of model providers will lead to dangerous uniformity in the behavior of market participants using those models. As Gensler put it, “This could encourage monocultures.”

Even if one accepts for argument’s sake a future of foundation model market concentration, there are reasons to doubt the added assumption that this will encourage monocultures or herd behavior among financial market participants.

While foundation models can be used as generic tools out of the box, they also can be further customized to users’ unique needs and expertise. Fine-tuning—further training a model on a smaller subset of domain-specific data to improve performance in that area—can allow users to tailor base models to firm-specific knowledge and maintain a degree of differentiation from their competitors. This complicates the groupthink assumption. Indeed, Morgan Stanley has leveraged OpenAI’s GPT‑4 to synthesize the wealth manager’s own institutional knowledge.
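As a rough, hypothetical sketch of how that differentiation might look in code, the snippet below fine-tunes an open base model on a handful of invented “in-house” research notes using Hugging Face’s Trainer. The model name, data, and training settings are placeholders, not a description of any firm’s actual pipeline.

```python
# Hypothetical sketch: fine-tuning a shared base model on firm-specific text
# so the resulting checkpoint diverges from what competitors are using.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # illustrative open base model
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Invented stand-ins for a firm's proprietary research notes.
notes = Dataset.from_dict({"text": [
    "House view: we underweight long-duration credit when curve inversion deepens.",
    "Desk note: our small-cap screen favors firms with net cash positions.",
]})

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length",
                    max_length=128)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the next token
    return out

train_data = notes.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="firm-tuned-model",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train_data,
)
trainer.train()  # the fine-tuned weights now encode firm-specific knowledge
```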

Taking a step back, is it more likely that financial firms with coveted caches of proprietary data and know‐​how will forfeit their competitive advantages, or that they will look to capitalize on them with new tools? Beyond training and finetuning models around firm‐​specific data, firms also can maintain their edge simply by prompting models in a manner consistent with their unique approaches. In addition, firms almost certainly will continue to interpret results based on their specific strategies, cultures, and philosophies. Lastly, because there are profits to be made from identifying mispriced assets, firms would be incentivized to spot others’ inefficient herding behavior and diverge from the “monoculture”; they may even devise ways to leverage models for this purpose.
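Even without fine-tuning, prompting alone can inject a firm’s house view. Below is a hypothetical sketch using the OpenAI chat API (pre-1.0 Python library syntax); the firm, strategy, and prompt text are invented for illustration.

```python
# Hypothetical sketch: steering a shared base model with a firm-specific
# system prompt (OpenAI Python library, pre-1.0 "openai<1" syntax).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # An invented "house view": two firms querying the same base model
        # with different system prompts can reach different conclusions.
        {"role": "system", "content": (
            "You are an analyst at a value-oriented fund that avoids momentum "
            "trades and weights downside scenarios heavily.")},
        {"role": "user", "content": (
            "Assess the attractiveness of long-duration Treasuries over the "
            "next quarter.")},
    ],
)
print(response["choices"][0]["message"]["content"])
```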

At the very least, as with model market concentration, more time and research are needed before the impact of the latest generation of AI on financial market participant herding behavior can be assessed with enough confidence to provide a sound basis for regulatory intervention.

Conclusion

Emerging technologies can, of course, be disruptive. But before regulators assume novel technologies present novel risks, they should test and validate their assumptions. Otherwise, one can reasonably doubt regulators when they proclaim themselves “technology neutral.” As SEC Commissioner Hester Peirce noted last week regarding the SEC’s proposed rules tackling a separate AI‐​related concern—conflict-of-interest risks from broker‐​dealers’ and investment advisers’ use of “predictive data analytics”—singling out a specific technology for “uniquely onerous review” is tantamount to “regulatory hazing.”

Another word of caution is warranted: even where regulators do perceive bona fide evidence of enhanced risks, they should be wary of counterproductive interventions. To name just one example, heightened regulatory barriers to entry could worsen the very concentration in the market for AI models that Gensler fears.
