A love of the good, a love of hard work, and a love of creating


In my last post, I confessed my admiration for Julia Child and my love of tweaking recipes in the kitchen. I’m pretty sure she would approve. She once said, “To be a good cook you have to have a love of the good, a love of hard work, and a love of creating.”

Those three traits – a love of the good, a love of hard work, and a love of creating – are also required to build a Middle Layer AI model.

If Foundation models are the original recipes, then Middle Layer AI models are those recipes tweaked further. Think about bread. There’s a basic recipe for bread built on four common ingredients: flour, yeast, salt, and water. You can make a plain loaf of bread using those ingredients. That’s roughly the equivalent of a Foundation model. However, the recipe for bread can be tweaked in endless ways to create everything from a baguette for a sandwich to a brioche for dessert to a pretzel to wash down with a beer. Those tweaked recipes are the equivalent of Middle Layer AI models. These Middle Layer models are better for the specific purposes for which they were designed than the Foundation models they were created from. (A salty, chewy pretzel pairs way better with a Hefeweizen than a slice of Wonder Bread!)

The Importance of Middle Layer AI Models

Foundation AI models like OpenAI’s GPT are already incredibly powerful, so why bother building a Middle Layer model?

As powerful as Foundation AI models such as GPT-4 are, they have limitations. Large Language Models (LLMs) in the Foundation Layer are generally trained on large public datasets, which lets them generate responses across a broad range of topics. However, a Foundation Layer LLM can’t generate an accurate response when it lacks the underlying data. In other words, while it knows a little about a lot of things, it doesn’t know what it doesn’t know.

Therefore, as the bread example illustrates, a Middle Layer model is better than a Foundation Layer model for the purpose for which it was trained. By incorporating additional data into a Foundation model, the Middle Layer model enables the Foundation model to do things it wouldn’t otherwise be able to do, while retaining the powerful capabilities of the pre-trained Foundation model. (This also makes a Middle Layer model built on a pre-trained large language model more versatile and useful than a model trained from scratch on only proprietary data.)
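To make the idea concrete, here is a minimal sketch of one common way a Middle Layer can ground a pre-trained model in proprietary data: look up the relevant data, inject it into the prompt, and let the Foundation model do the generating. This is not boodleGPT’s actual implementation; every name and data value below (`fetch_insights`, `foundation_generate`, the sample insight text) is a hypothetical stand-in.

```python
# Hypothetical sketch of a "Middle Layer" wrapper around a foundation model.
# The proprietary data store and the model call are both stand-ins.

PROPRIETARY_INSIGHTS = {
    # Invented example data -- not real boodleAI insights.
    "22314": "Roughly 1,200 households scored as highly likely donors to children's causes.",
}

def fetch_insights(zip_code: str) -> str:
    """Look up proprietary data the foundation model was never trained on."""
    return PROPRIETARY_INSIGHTS.get(zip_code, "No proprietary data available.")

def foundation_generate(prompt: str) -> str:
    """Stand-in for a call to a pre-trained LLM (e.g., a hosted GPT API)."""
    return f"[LLM response conditioned on]: {prompt}"

def middle_layer_answer(question: str, zip_code: str) -> str:
    """The Middle Layer: retrieve insights, then hand them to the LLM as context."""
    context = fetch_insights(zip_code)
    prompt = f"Using this data: {context}\nAnswer this question: {question}"
    return foundation_generate(prompt)
```

The key design point is that the pre-trained model is untouched: the Middle Layer only controls what data reaches it, which is why it keeps the Foundation model’s broad capabilities while adding knowledge the base model lacks.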

How boodleGPT is Gonna Spice Things Up

OpenAI’s GPT models, like other Foundation Layer large language models, are amazing in their ability to generate a wide range of responses. Yet they have their limitations.

Let’s ask ChatGPT how many people in Zip Code 22314 are highly likely to donate to children’s causes:

ChatGPT admits its limitations and attempts to be helpful, but it doesn’t answer the question because it can’t. It doesn’t have specific data it can use to generate an accurate response, so it provides a generic response instead.

Let’s run the same request through a very, very early version of boodleGPT (aka boodleGPT Alpha):

That’s much better! The answer is specific and directly answers the question asked. The underlying GPT LLM is able to incorporate boodleAI’s insights to generate a detailed, precise response.

Now let’s ask ChatGPT for a donor profile of the people who live in Zip Code 22314 and are highly likely to donate to children’s causes:

Once again, ChatGPT admits its limitations and is unable to answer the question because it lacks the necessary data. It tries to be helpful by providing general information, but that response is not based on actual data about likely donors in Zip Code 22314.

Let’s run the same request through boodleGPT Alpha:

By providing the model access to boodleAI’s insights, the LLM is able to generate a specific, detailed answer that is far more useful to a fundraiser.
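A donor profile like this is typically built by aggregating per-person insight records into a compact summary before handing it to the LLM as context. The sketch below illustrates that aggregation step only; the fields and records are invented for illustration and do not reflect boodleAI’s actual data model.

```python
# Hypothetical sketch: summarize per-person records into a donor profile
# that a Middle Layer could pass to an LLM as context. All data is invented.
from collections import Counter

records = [
    {"age_band": "35-44", "channel": "email",       "likely_donor": True},
    {"age_band": "35-44", "channel": "email",       "likely_donor": True},
    {"age_band": "25-34", "channel": "social",      "likely_donor": True},
    {"age_band": "55-64", "channel": "direct mail", "likely_donor": False},
]

def donor_profile(records: list[dict]) -> dict:
    """Aggregate likely donors into a summary the LLM can narrate from."""
    donors = [r for r in records if r["likely_donor"]]
    return {
        "count": len(donors),
        "top_age_band": Counter(r["age_band"] for r in donors).most_common(1)[0][0],
        "top_channel": Counter(r["channel"] for r in donors).most_common(1)[0][0],
    }
```

Feeding a structured summary like `donor_profile(records)` into the prompt, rather than raw records, keeps the context small while still giving the LLM specific facts to ground its answer in.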

While the above examples are philanthropy-focused, boodleGPT will be just as useful for commercial applications:


boodleGPT Alpha:

Note: future versions of boodleGPT will generate answers that are far more verbose and explanatory. Right now we’re focused on making sure boodleGPT generates relevant and accurate answers.

Now imagine the possibilities when GPT can incorporate boodleAI’s 35 BILLION insights about 250 million adult Americans. That’s the power of boodleGPT and what we’re excited about putting in the hands of nonprofits and businesses. 

However, the above examples make this look way easier than it actually is. In the next post, we’ll talk about the challenges in creating Middle Layer AI models and why it takes “a love of the good, a love of hard work, and a love of creating.”



Co-Founder and Co-CEO of boodleAI, France has been on the founding team of multiple successful companies across the fields of law, aerospace, defense, government services, and technology. His favorite startups are aged 8, 5, and 3 and love ice cream.

Connect with France on LinkedIn. He welcomes messages and feedback.