May 02, 2023 Equities

Artificial Intelligence – The Journey Begins

The first quarter of 2023 will likely go down in history as the time when the world became aware that the “cognitive” abilities of information machines hit a tipping point.

  • Machine learning, for very specific tasks, has been with us for a while
  • Large Language Models (LLMs) powering Generative AI are what’s new
  • The utility and diversity of uses of LLMs are genuinely astonishing
  • Far from perfect
  • Regulation?
  • Too much enthusiasm?
  • Recent progress has been rapid and will likely continue
  • What could this mean for industries, stocks and portfolios?

The first quarter of 2023 will likely go down in history as the time when the world became aware that the “cognitive” abilities of information machines hit a tipping point. This is when a handful of Generative Artificial Intelligence (AI) services that can seemingly converse like a human, instantaneously drawing on vast pools of knowledge and crafting a response in seconds, became available to the world. While the known capabilities of Generative AI are quite impressive, there also appears to be a large set of abilities yet to be discovered, as new ones keep surfacing at a rapid pace. We are ever mindful of the risk of getting caught up in the moment, and we do see some risk that the near-term potential is overstated, but it also seems likely that the long-term potential of this technology is not yet understood and is very likely understated. In this piece we attempt to tackle where we are in the machine learning journey (accelerating), the technology’s capabilities and use cases (rapidly evolving), and the implications for companies, industries, and the positioning of portfolios.


Machine learning, for very specific tasks, has been with us for a while

Many of the online services we use, particularly the big ones, are powered by some form of machine learning. Recommendation engines are prolific in our online interactions. We can talk to our devices (“Alexa, turn on the living room light”) because of natural language processing. An array of behind-the-scenes applications of machine learning automates and speeds up much of what we do digitally. These rely on several different approaches that generally fall into the category of “narrow artificial intelligence”: systems that are good at one specific task, often far better than humans, but that is all they can do. The holy grail of AI is Artificial General Intelligence, systems that can operate across broad knowledge domains. To be sure, we still appear to be far away from Artificial General Intelligence, but the ascent of more recent innovations, particularly Generative AI, has advanced the state of the art toward models with more general applications. As the name implies, Generative AI can generate new content (text, software code, pictures, video, sound); it can “create” based on a user prompt, effectively drawing on connections embedded in its training data.


Large Language Models, or LLMs, powering Generative AI are what’s new

Large Language Models, or LLMs, powering Generative AI are what’s new, ushered in by the private company OpenAI in partnership with Microsoft. There are two important advancements: 1) this AI technology can effectively converse in text, sounding very human-like; this is new, and it is a very effective natural user interface, a way of interacting with the data and information in the world’s vast computing infrastructure; and, more profoundly, 2) unlike narrow AI, these LLMs can, in effect, make more general connections across knowledge domains. By looking at vast quantities of text, these LLMs are trained to calculate the probability of the next word in a sequence. Somehow, this reveals structure in the data and surfaces patterns of humans’ collective reasoning and creativity. How this works is admittedly hard to conceptualize. This is true of most forms of machine learning: the responses are not based on logic or causality. As it seems to us, these algorithms don’t “reason”; they simply find statistical connections in the data on which they are trained. By then applying these learned “behaviors” to new data sets, they can “infer” what should come next in a sequence, thought, image, etc. To the surprise of most, this technology is proving effective far beyond what might be expected. It would have been hard to predict that LLMs could ace the bar exam, along with many other tests, but they can. Bill Gates, who is not prone to hyperbole, recently wrote an essay titled “The Age of AI has begun”[1] with the summary line “AI is as revolutionary as mobile phones and the Internet”. The last big proclamation like this he made was in his November 2010 essay “The Economics of the Cloud”[2], arguing the virtues and profound impact of cloud computing well before they were truly appreciated.
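
For readers who want to make the “probability of the next word” idea a bit more concrete, the short Python sketch below is a purely illustrative toy: the candidate words and their scores are invented for illustration, whereas a real LLM computes such scores over a vocabulary of tens of thousands of tokens using billions of learned parameters.

```python
import math

# Purely illustrative: hypothetical scores ("logits") a trained model might
# assign to candidate next words for the prompt "The cat sat on the".
candidate_scores = {"mat": 4.1, "roof": 2.8, "moon": 0.3, "equation": -1.5}

# A softmax turns raw scores into a probability distribution over next words.
max_score = max(candidate_scores.values())
exp_scores = {w: math.exp(s - max_score) for w, s in candidate_scores.items()}
total = sum(exp_scores.values())
next_word_probs = {w: e / total for w, e in exp_scores.items()}

for word, prob in sorted(next_word_probs.items(), key=lambda kv: -kv[1]):
    print(f"P(next word = {word!r}) = {prob:.3f}")

# Text generation simply repeats this step: pick a word from the distribution,
# append it to the prompt, and predict again.
```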


The utility and diversity of uses of LLMs are genuinely astonishing

It is not an exaggeration to say that the ways this technology can be applied are still not even close to being fully understood. In this case we think of text as language, but to the machine the definition of “language” can be broader, including math, chemistry, physics, imagery, sound/music, and very likely other bodies of knowledge. The capability of having, in effect, a knowledge graph across truly immense bodies of data can be applied in multiple ways. It is already known that LLMs can discover novel compounds and new drugs far faster than our current approaches. A daily scan of news articles, academic journals, Twitter/Substack/Medium, etc. shows new uses being discovered at a very rapid pace.


Far from perfect

Very unlike “computers”, LLMs can be inaccurate, or even relay completely fabricated ideas, presented in very persuasive ways. The program is simply connecting words and phrases that statistically go together, but that does not ensure an accurate picture of reality. A large part of developing, or “training”, these large models relates to bias, safety and accuracy: putting guardrails in place. The technique of Reinforcement Learning from Human Feedback (RLHF) is a critical part of the model training process. RLHF has helped to greatly reduce errors, but these models are far from perfect. Even so, it seems likely that the integration of this technology across large parts of our personal and professional lives is an inevitability.
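
As a rough illustration of the idea behind RLHF, the sketch below assumes hypothetical reward scores for two candidate answers to the same prompt. In the commonly described setup, a reward model is trained on human preference comparisons of this kind, and the language model is then tuned to produce answers that the reward model scores highly. The numbers and names here are invented for illustration only.

```python
import math

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Logistic (Bradley-Terry style) probability that the 'chosen' response
    is preferred over the 'rejected' one, given reward-model scores."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

# Hypothetical reward-model scores for two candidate answers to one prompt.
reward_helpful_answer = 2.3      # the answer human labelers preferred
reward_fabricated_answer = -0.7  # a fluent but fabricated answer

p = preference_probability(reward_helpful_answer, reward_fabricated_answer)
print(f"Implied probability the helpful answer is preferred: {p:.2f}")

# Training nudges the reward model to assign higher scores to human-preferred
# answers on many such labeled pairs; the language model is then adjusted
# (e.g. with reinforcement learning) to produce responses that score highly.
```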


Regulation?

Unlike with social media, governments around the world have not waited; they seem keenly focused on the risks. Italy temporarily banned ChatGPT, the leading Generative AI service, and others could follow. However, it seems less likely that regulation will truly slow progress, for two main reasons: definition and enforcement. Defining what exactly “it” is becomes a challenge. We call them LLMs, but they include a range of other related technologies. And how exactly would this be policed? Would (could?) regulators check all the code running in all data centers somehow? Will each country erect a Great Firewall as China has (which has proven to be porous)? And even if they could “ban” LLMs and Generative AI, would countries risk falling behind? Would the US really want to slow the progress of AI and let China, and Russia, pull ahead at some point? Copyright is another tricky question, as highlighted by the recent viral “Fake Drake” song, which was created not by the artist but by AI. These are thorny issues. But none of this is likely to truly halt the progress.


Too much enthusiasm?

Our guess is that the impact of this technology is overestimated near term but underestimated long term; we have more conviction in the second part of that statement. The impact of most truly profound innovations, of which machine learning is one, has proven to be overhyped in the near term while being underappreciated longer term. The hope surrounding the “Internet” in the late 1990s illustrates this well: the ways in which it now supports our daily lives are far greater than most truly understood back in those early days. The same is likely true of LLMs. What is so impressive is how well this technology can do many tasks and how many new uses are being discovered. It is fair to say that the really big AI applications and companies have yet to be discovered and started.


Recent progress has been rapid and will likely continue

GPT-4, which is likely the leading LLM, reportedly has 1 trillion parameters, or characteristics defining the model. To date, making models larger has made them far more useful; the scaling benefits have been substantial. However, there is a growing belief in some parts of the AI community that the marginal benefits of larger models are diminishing. There is clearly the potential that progress slows and that the extrapolation of current trends proves optimistic. It is also relevant to note that prior concerns about diminishing scaling benefits have proved incorrect, in part because a host of other technologies and techniques, beyond pure size, add materially to the utility of these models, and also because there is still much utility to discover in what we currently have. We would point out that, in semiconductors, one of the essential operating principles behind “Moore’s Law”, Dennard scaling (which addresses power use as transistors shrink), hit a wall around 2007, yet semiconductors have since become far more powerful due to the optimization of many other drivers. Innovation finds a way.


What could this mean for industries, stocks and portfolios?

In the near term we are likely to see an arms race to build the biggest and best LLMs, and the beginnings of applications that leverage the power of this new AI. There are real barriers to building a state-of-the-art LLM: the requisite AI expertise is in short supply, it can take years to develop a model, and doing so requires a lot of capital. These LLMs could therefore confer significant long-term advantage, as they are the knowledge foundation on which many others can build specific applications. In the medium term we are likely to see a large range of companies building use-case- and industry-vertical-specific solutions on top of these LLMs, and this is where there could be a significant shake-up of industry power structures, the ascent of impressive new companies, and the disruption of some of today’s leaders. LLMs provide the potential for significant automation, and, as with other big inflections, a range of companies will be at risk; understanding these risks early, as they unfold, will be an essential skill in active management. In the long term, there is real potential that we all have our own personal AIs, or maybe our own personal copilots for different functions. With this backdrop, this is our current framework for understanding change, identifying new opportunities, and assessing risk:

  1. The arms dealers, those providing the technology and infrastructure to develop AI. This is among the clearest areas of opportunity now. The most obvious area is the silicon, particularly graphics processing unit (GPU) based systems. There will likely be other areas that become increasingly obvious. One potential area is the owners of data/images/videos/content that is useful for training these massive models. Some owners are already starting to license the use of this data for model training.
  2. Change in the coding ecosystem. As it turns out, these models seem to be very good at writing software code. There are already several “copilots” (which write the easier but time-consuming parts of code) widely available. A first-order impact is that the quality and quantity of code will increase. Those that provide the infrastructure for more software, particularly things like security and observability, stand to benefit. Conversely, those companies that provide no-code and low-code solutions may need to pivot.
  3. Customer service/interface: One of the biggest benefits of cloud, mobile and digital transformation is the ability to eliminate intermediaries and digitally interact directly with customers. LLMs/Generative AI take this to a new level, given the ability to automate that interaction. Contact centers, and the companies providing the requisite software, are a very obvious area to benefit given the emergence of very capable and intelligent chatbots.
  4. Back office automation. Robotic process automation, or RPA, is the process of automating repetitive IT functions historically done by humans. Like customer service/interface, LLMs/GenAI have the potential to turbocharge this trend.
  5. Productivity Solutions: Imagine having a legion of highly capable interns that can work 24/7, ready to compose presentations, take notes and summarize meetings, send emails, research new areas, and so on. We are seeing the beginnings of this with Microsoft 365 Copilot.
  6. New Software solutions/companies that leverage LLMs: Those that embrace the trend early have the potential to create substantial differentiation. Consider how much more useful customer relationship management software could be with truly smart machine assistants. Or education technology, which presents significant opportunity for some but risk for others. There are also likely new applications, only possible with this technology, that we will be learning about in the coming months and years. In short, this technology has the potential to create significant differentiation. It is a bit early to be calling winners and losers here, but we think not for long; this could develop quickly.
  7. Online Advertising and Marketing. Many parts of the process of advertising and marketing online, including developing materials and strategy, can be vastly aided by Generative AI. Those companies with the assets and resources (very likely the very big platforms) could have an increasing advantage.
  8. Search: A key milestone in Generative AI was Microsoft’s demonstration of Bing with ChatGPT, a more conversational and potentially much more useful way to search the world’s information. While there is very valid debate as to whether Bing could see market share gains versus the very dominant Google, perhaps the even bigger implication is the possibility of improving conversion. Less than 3% of current internet searches monetize. If a more useful and conversational interface can improve that by just 100bps, that could be roughly a one-third uplift in monetization (a simple back-of-the-envelope calculation follows this list).
  9. The owners of the LLMs, which include many of the very largest technology companies. The barriers to entry are significant and the utility of the models is very open ended. While it stands to reason that the value of the models is substantial, the owners all have substantial existing businesses, some of which may also be negatively impacted to the extent that these models allow others to enter new markets; search is one example. The mix for each company is different, and getting into the pros and cons for each of the major companies is beyond the scope of this piece. But the effect in aggregate is more likely to be a net positive for them: a basic tenet behind the success of many of these companies is applying information and digital technologies to corners of the global economy not previously as affected by IT (retail, advertising, entertainment, etc.), and the use of greater “intelligence” stands to further accelerate that trend.
  10. Scientific Innovation. Better knowledge systems have the potential to meaningfully increase our level of scientific understanding, the true foundational layer that has powered much of the world’s progress. Over the last several years, AI-based solutions have discovered new patterns in math, physics, and chemistry. Google’s DeepMind has developed the ability to predict how proteins fold. Imagine if solutions to some of the hardest problems in biology, physics and other fields, challenges such as nuclear fusion and room-temperature superconductors (just to name two), suddenly became more within reach. This trend is admittedly not investable at the moment, but as students of innovation, we view this as an area where the opportunity set is likely to widen considerably.
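
To make the search monetization arithmetic in point 8 explicit, here is a simple back-of-the-envelope calculation using the roughly 3% figure cited above; the inputs are the article’s illustrative assumptions, not forecasts.

```python
# Back-of-the-envelope check of the search monetization point (item 8 above).
current_monetization_rate = 0.03  # assumption from the text: ~3% of searches monetize
improvement = 0.01                # 100 basis points = 1 percentage point

new_rate = current_monetization_rate + improvement
uplift = new_rate / current_monetization_rate - 1
print(f"New monetization rate: {new_rate:.0%}")  # 4%
print(f"Relative uplift:       {uplift:.0%}")    # ~33%, i.e. about one third
```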

Machine learning has already been an important part of technology and, more broadly, our lives, but LLMs and Generative AI are turbocharging the trend. While we cannot really know where this journey will lead, it looks big, and we are only at the very beginning. The above list is just a starting point; it will change many times as the picture develops. As this happens, new industry leaders will be born and some existing ones will fade in a cloud of disruption, with the overall effect likely to be a net positive for the Science & Technology universe and beyond, as has been the case with all of the other really big inflection points in technology.

 


All opinions and claims are based upon data on 05/02/2023 and may not come to pass.

This information is subject to change at any time, based upon economic, market and other considerations and should not be construed as a recommendation.
