Five Things GenAI Can and Can’t Do | by David Hundley | Oct, 2023

An introductory guide for business leaders on what Generative AI can and cannot do

Cover photo created by the author

It’s hard to believe that not even a year has passed since ChatGPT’s launch, yet we have already seen Generative AI (GenAI) take the world by storm. From large language models (LLMs) to diffusion models for image generation, what this new technology can do is quite remarkable. A friend described it to me as the first time AI has felt tangible, as if what we once only dreamed about in science fiction has become reality.

Naturally, this has prompted business leaders to wonder what GenAI can and cannot do to transform their business processes. There are certainly many impressive things you can do with GenAI, but there are also misconceptions floating around that business leaders should be wary of. The focus of this post is to share some of the core things GenAI can do, while also tempering expectations about what it cannot.

Perhaps the most common use case I’m hearing across industries is using LLMs to condense a large amount of information into something far more digestible. For example, you can take a transcribed dialogue from a meeting and have GenAI summarize it into a few key bullet points, or take a lengthy legal document and have an LLM pull out the most relevant pieces of information. Of course, you should always verify that the LLM’s output is correct, but this can save a great deal of time in many business contexts. I fully expect this use case to continue gaining traction across industries.
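As a rough illustration of what this looks like in practice, a summarization request usually amounts to wrapping the source material in an instruction prompt and sending it to whichever LLM you use. The helper below is hypothetical (not any vendor’s real API); it only sketches how such a prompt might be framed:

```python
def build_summary_prompt(transcript: str, max_bullets: int = 5) -> str:
    """Frame a meeting transcript as an LLM summarization request.

    This is an illustrative, hypothetical helper; the resulting string
    would be sent to whatever LLM service you actually use.
    """
    return (
        f"Summarize the following meeting transcript into at most "
        f"{max_bullets} concise bullet points. Only include information "
        f"explicitly stated in the transcript.\n\n"
        f"Transcript:\n{transcript}"
    )

# Example usage with a tiny made-up transcript
prompt = build_summary_prompt("Alice: The Q3 launch slips to November. Bob: Agreed.")
print(prompt)
```

Note the constraint asking the model to stick to information explicitly stated in the transcript; as mentioned above, you should still review the model’s summary against the source material before relying on it.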

Perhaps the greatest misconception about LLMs is that they can think. In reality, LLMs are simply word prediction machines, albeit ones so strikingly capable that they can appear to emulate true consciousness. Because an LLM acts on probabilities between words, it can never be truly certain of its final output…
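To make the “word prediction machine” point concrete, here is a toy sketch (not a real model, and the probabilities are invented) of what happens at each step of generation: the model assigns a probability to each candidate next token and samples one, which is why the output is inherently uncertain rather than a reasoned conclusion:

```python
import random

# Invented probabilities for the next token after "The sky is" —
# a real LLM computes a distribution like this over its whole vocabulary.
next_token_probs = {"blue": 0.62, "gray": 0.21, "cloudy": 0.17}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    """Sample one token according to the model's probability distribution."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed just to make this demo repeatable
samples = [sample_next_token(next_token_probs, rng) for _ in range(10)]
print(samples)
```

Running this repeatedly (with different seeds) yields different continuations, which mirrors why the same prompt can produce different answers from an LLM: the model is choosing among probable words, not asserting a verified fact.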
