Saving Dividends for the Likely Flotation of OpenAI.

Reserving the cash dividends from investments that have experienced temporary declines permits the occasional acquisition of a new position.

In blog posts past, I have referenced a policy of thoughtful and careful equity selection, likening each and every possible purchase to the punching of a very finite punch card. Dividends from portfolio investments that increase in price are reinvested. Periodic declines, however, even in superlative selections, tend to result in the accumulation of small, almost minute, sums of cash for a time.

It is these small sums that permitted the acquisition of companies such as Taiwan Semiconductor and Palantir in years past. No matter how minuscule the initial amount placed into an absolute outperformer, once the temptation to trade has been removed from the picture, tiny sums can compound into impressive amounts.

Recent interviews given by Sam Altman of OpenAI have telegraphed, for those willing to listen, his ultimate goal: building out the value of the still-private company over the longer term.

Several recent media events featured the CEO opining on his vision for OpenAI. The public at large and analysts have focused their attention upon the modeling of “AI as a Service” revenues. This business intends to generate recurring and reliable revenue streams by providing varying AI suites, including ChatGPT, to private businesses, government agencies and the public. That is where the revenue models cluster.

An even larger revenue opportunity may arise from the monetization of novel discoveries based upon the AI output, potentially dwarfing recurring subscriptions. Sam Altman speaks, with great conviction, of the potential for AI to produce many incremental pharmaceutical compounds, and outright breakthroughs, for diseases and afflictions that the current system has found impossible to crack.

What is the value to the world, and to a private valuation, should OpenAI ultimately produce the data-crunching output that cures, not just treats, one or more cancers?

Is it $1 trillion US? $2 trillion? $5 trillion?

Consider that the combined market cap of the two leading players in the diabetes and weight-loss space currently exceeds $1.4 trillion US. Would not the value conferred upon the discovery of a more useful weight-loss compound by OpenAI exceed that valuation?

Inferentially, any potential valuation should, mathematically, surpass by some margin the net amount of capital being amortized and expended on the data centers being built, the storehouses of knowledge where AI will sift, sort and learn. The most recent sum mentioned, for a single data center, post the Trump victory, was a $500+ billion commitment from divisions of SoftBank and Oracle. Several other clusters are presently under construction in the Western Hemisphere.
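To make that floor concrete, here is a minimal back-of-envelope sketch in Python. The only input taken from the post is the $500+ billion single-site commitment; the figure for the other clusters and the margin multiple are illustrative assumptions of mine, not reported numbers.

```python
# Back-of-envelope sketch: any plausible OpenAI valuation should exceed,
# by some margin, the capital being sunk into its data-center buildout.
# All figures are illustrative assumptions except the $500B+ commitment cited above.

datacenter_capex_bn = {
    "softbank_oracle_commitment": 500,  # the $500+ billion figure cited above (USD billions)
    "other_clusters_assumed": 300,      # hypothetical placeholder for the other clusters under construction
}

total_capex_bn = sum(datacenter_capex_bn.values())

assumed_margin = 2.0  # hypothetical multiple by which a valuation should surpass sunk capex
implied_floor_tn = total_capex_bn * assumed_margin / 1_000

print(f"Assumed total data-center capex: ${total_capex_bn} billion")
print(f"Implied valuation floor at {assumed_margin}x capex: roughly ${implied_floor_tn:.1f} trillion")
```

Even with conservative placeholders, the logic lands the valuation discussion in trillion-dollar territory, which is the point of the paragraph above.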

We are no longer extrapolating values in the tens of billions or hundreds of billions; the modeling now moves into the trillions.

The working pharma R&D model is to take the profits generated from existing suites of pharmaceutical products and purchase biotech or research-oriented pharma entities that have identified a promising compound.

It is a crude, brute-force system that benefits the mergers-and-acquisitions departments of investment houses and often results in bidding wars for highly sketchy players in pharma. Post-acquisition, most big pharma digs into the compounds, only to determine that it has been had. Some 99% of the purchases, give or take, tend to fail and are quietly written off over time on income statements. Yet, despite the sorry record, a hundred billion dollars or more of annual capital is deployed globally on small acquisitions by bigger pharma and venture capitalists. The process is highly inefficient and goes a long way towards explaining why big pharma earns so much on drug sales while confounding investors with lackluster results.

This “greater fool” proposition involved in the purchase of tiny pharma research shops cries out for a better mousetrap. For lack of a superior alternative, big pharma fruitlessly expends capital on what is deemed (for tax purposes) an acquisition; in fact, it is simply purchasing somebody else’s research and adding it to inventory.

What if ALL that upfront amortization and expense were removed from the drug development system? What if the skeevy actors were completely removed from the equation, so that bad data was debunked before ever reaching phase trials? What if only successful drugs were brought to the portfolios of big pharma? Just how much would a major global pharmaceutical company pay for an identified compound that had almost no risk of failing phase testing and FDA review and could be turned to sales within just a year or so? Pharmaceutical producers would, in all likelihood, pay far more for compounds that were almost assured to be beneficial, because the chaff would have been removed by the AI crunching beforehand.

This, analysts believe, represents the promise of OpenAI. Once all the medical and pharmaceutical research data on the planet has been acquired, agglomerated and crunched, discoveries will be made. Meaningful improvements will be found to existing compounds.

Herein lies the business risk for those hoping to employ OpenAI: rather than serving as a growth accelerant and expense reducer, a supposed collaboration could, to the chagrin of big pharma investors, just as readily represent a 21st-century “Trojan Horse”.

Each and every major and minor pharmaceutical and biotech company possesses a proprietary body of accumulated knowledge in the form of research papers, working theses and failed phase-trial data. They consider this their competitive edge over peers, even when it is unproductive. Peer-reviewed publications and final phase-trial output represent the very smallest fraction of the work done. Nobody wants to give away intellectual property to a competitor who might combine it with its own research line and, “voilà”, crack the code on a seemingly fruitless line of inquiry. So they archive and silo all this data, some of it going back a century.

The AI pitch, by Sam Altman, to pharma and medical companies of all stripes is that all this siloed data may be sifted, sorted and machine-learned, at a presumably steep fee, for the benefit of the corporation that entrusts OpenAI (or a competing artificial-intelligence business) with that data. As with any other Veblen good, the steeper the fee, the greater the degree of implicit trust in the system and the greater the number of users who will sign up to have their data aggregated.

The problem is that, much as the Chinese government forces every meaningful company doing business in China to license out its intellectual property, for the government to use as it sees fit after a requisite time lapse, OpenAI will, through its machine-learning algorithms, end up in possession of the solutions. And the desired or optimal solutions might not exist in the data of a single pharmaceutical company, or even several, but perhaps in a dozen or more working on similar lines. It will, almost assuredly, be OpenAI that cracks the problems previously assumed to be intractable, using disparate data sourced from across the planet.

I believe the entire pharmaceutical industry is making a Faustian bargain in permitting OpenAI to gain access to, and control of, all this research.

Given the potential sums involved, why would OpenAI choose to bequeath to any single private pharma a formulation or cure easily worth billions, tens of billions, hundreds of billions or even trillions, all for a monthly license fee? It is a ridiculous notion on its face. After all, it will be the AI algorithms that ultimately do ALL of the work of coming up with the discoveries. The only contribution from big and small pharma alike will be the feeding of their prior data into the systems, for a time.

Prior to its shift to a for-profit corporation, the mantra of OpenAI was the good of the world.

There is always a sum of money that gives pause to altruistic thinking. Trillions, or tens of trillions, seems about the right set of numbers to shift an already fluid set of principles.

In recognition of the potential sums involved, my thinking regarding the investment potential of OpenAI is moving beyond a simple “artificial intelligence as a service” model. I now glimpse, through the dense fog, the likelihood of multiple sources of revenue. Subscription sales represent that Trojan horse, permitting OpenAI to access and harvest the data and knowledge of every licensee. OpenAI has the potential to be THE global repository of the planet’s collected knowledge, and it will do with that knowledge what it will, for the benefit of OpenAI and not the licensees.

The only variable in the revenue model will be the financial split. Will OpenAI choose to keep ALL of the financial benefit of discoveries, most of it or some of it?

If investors think that the concentration of capital on the planet is accelerating, just wait until OpenAI gains momentum with a first major discovery.

The combined market cap of the S&P 500 and the NASDAQ 100 exceeds $84 trillion, more or less. Five knowledge-based/tech companies collectively account for 15%+ of that weighting, and they all represent widgets, componentry, for the OpenAI proposition.
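The arithmetic behind that concentration claim is simple enough to sketch; the $84 trillion figure and the 15% weighting are the round numbers used above, not precise market data.

```python
# Quick arithmetic behind the concentration claim above.
combined_market_cap_tn = 84   # combined S&P 500 + NASDAQ 100 market cap, USD trillions (round figure from the post)
top5_weighting = 0.15         # the "15%+" weighting attributed to five knowledge-based/tech companies

top5_value_tn = combined_market_cap_tn * top5_weighting
print(f"Five companies at a 15% weighting represent roughly ${top5_value_tn:.1f} trillion of market value")
# ~ $12.6 trillion of widgets and componentry tied to the OpenAI proposition
```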

For my portfolio, an acquisition of OpenAI will represent both an offensive and a defensive purchase.

By offensive, I mean the upside potential of a myriad of revenue streams that should ensue once all of the world’s information has been deposited into the processing systems of OpenAI.

As for the defensive side, entire industries could be laid to waste once discoveries are made, and I require a “CYA” (cover your backside) hedge.

Conceivably, multiple investments in my portfolio could be negatively impacted if/as/when an OpenAI discovery is announced. There will likely be no telegraphing of developments, and we probably will not even know which sector is affected until after the fact, so hedging individual equities is both expensive and theoretically pointless. Once investors realize that the impairment is permanent rather than transitory, most of the market cap of my impacted investment will flow away from what I now own and towards OpenAI.

Therefore, for my purposes, an investment in OpenAI represents a form of equity disaster insurance, ironically, insurance needed against OpenAI itself.

I referenced the potential value of a better diabetes drug discovery from an OpenAI division. What damage would it bring to any investor who holds shares in one, or both, of the existing duopoly, should OpenAI keep that discovery and monetize it separately from the existing publicly traded pairing? This is just one example, plucked out of a hat. Any pharma, any medical company, any biotech: they all bear the same risk of becoming immediately obsolete upon an AI breakthrough.

What damage could OpenAI do to the market caps of EVERY knowledge-based industry participant it chooses to compete against? Unlike humans, who require sleep and sustenance, who are prone to error, and who have other things in their lives that take them away from the lab and the computer, artificial-intelligence algorithms have no such constraints.

The potential for industry upheaval goes well beyond pharma; it should extend into any and every knowledge-based industry. And unlike humans, AI should be a superlative multitasker.

Make no mistake: AI is coming not only for knowledge-based jobs; it has the potential to wipe many knowledge-based, publicly traded companies right off the map.

OpenAI might be the apex predator within capital markets. Knowledge-based companies hiring OpenAI likely haven’t fully thought it through. Almost nobody has. Could this be the equivalent of a knowledge-based “industrial revolution”, one in which most existing knowledge-based businesses are cast aside in an evolutionary capital cycle?

Corporations are paying an apex predator to produce superior versions of themselves, potentially leading to their own extinction.

Forewarned is forearmed.
