Today I am debuting V1.0 of The Journey Sage Finder. It uses Retrieval-Augmented Generation (RAG), an AI technique that searches across all of an influencer’s content and pinpoints the exact moments in videos where specific topics are addressed. While this search is specific to The Journey Sage, the underlying system can easily be applied to many other sets of videos and text.

The Journey Sage is focused on MDMA therapy for PTSD. This search will go on Jill’s website. You can ask it questions specific to her subject such as “How did Jill find this therapy?”, “What is a Hippie Flip?” or “What is integration?”.

I’ve been delving deeply into AI research and its latest developments, of late focusing more on high-level aspects such as legal implications and cutting-edge advancements, rather than direct, hands-on implementation. But, to use an analogy, I’m well aware that there’s a huge difference between being a food critic and a chef. Both are important, but the further away a food critic is from being a chef, the less they truly understand what goes into the product. Hands-on preparation gives a level of clarity and depth that the food critic can never get otherwise. Recently, I was given a golden opportunity to dive back into the AI kitchen and start to whip up some dishes of my own.

A few short weeks ago I had lunch with a friend who is the author of The Journey Sage book and website, as well as the creator of numerous The Journey Sage videos. She mentioned how influencers are pushed to ‘be everywhere’ and post content on YouTube, Instagram, TikTok, X, etc. This lets them meet their potential audience ‘where they are’, but it also scatters the content across various sites. She said people would ask her questions that had been answered in videos, but she often could not remember which social media site the video in question was on, let alone which video.

This seemed like an interesting task for RAG, a newer frontier in AI in which a model searches and answers questions over a limited set of data instead of over the entire internet. Achieving the desired outcome required several steps:

  1. Gather the videos and other data from the different social media providers (Note: V1.0 focuses on YouTube, her book and website. Other social media will be added soon).
  2. Format the data so the Large Language Model (LLM), a key part of the AI, could handle it more easily.
  3. Create indices so the search is quicker and more accurate.
  4. Write the RAG search, combining the data with whatever the user asks, and send it to the LLM to process.
  5. Design a User Interface to take questions and display results.
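The retrieval at the heart of steps 3 and 4 can be illustrated with a toy sketch: embed each chunk of content, rank chunks by similarity to the question, and send only the best matches to the LLM along with the question. This is purely illustrative, not the actual implementation; `embed`, `cosine`, and `retrieve` are hypothetical names, and the bag-of-words "embedding" stands in for the learned dense vectors a real system (such as LlamaIndex's defaults) would use.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term count. Production systems
    # use learned dense vectors from an embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question, chunks, top_k=1):
    # Rank the stored chunks by similarity to the question and return
    # the best matches; these, not the whole corpus, go to the LLM.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = [
    "Integration is the process of making sense of a journey afterward.",
    "Jill found MDMA therapy after reading about clinical trials for PTSD.",
]
best = retrieve("How did Jill find this therapy?", chunks)
```

The same ranking idea scales up once the chunks come from video transcripts with timestamps, which is what lets the app point to the exact moment in the right video.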

One of the axioms of projects is that no matter how much you attempt to plan it out, there are always additional steps or things that don’t go as planned. That’s true in the best of projects, but here I was diving into depths that were completely new to me or that I had not been hands-on with for a decade or more.

Things I knew about but that turned out to be harder than expected:

  1. Working with different RAG vector indexes.
  2. Refining the prompt that tells the LLM, in natural language, exactly what to do.
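Prompt refinement largely comes down to iterating on a template that wraps the retrieved context around the user's question. The sketch below is illustrative only; `PROMPT_TEMPLATE` and `build_prompt` are hypothetical names, and the wording is not the actual prompt the app uses.

```python
PROMPT_TEMPLATE = """You are an assistant answering questions about The Journey Sage.
Answer using ONLY the context below. If the answer is not in the
context, say you don't know. Cite the video and timestamp when available.

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(question, retrieved_chunks):
    # Join the retrieved chunks into one context block, then fill in
    # the template; the resulting string is what gets sent to the LLM.
    context = "\n---\n".join(retrieved_chunks)
    return PROMPT_TEMPLATE.format(context=context, question=question)
```

Small wording changes here (what to do when the answer is missing, whether to cite sources, how terse to be) turn out to have outsized effects on the quality of the answers.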

Luckily, very early in the project I talked to my friend Siddha, who is a true AI expert. She recommended I look into LlamaIndex to help with the AI components and Streamlit to handle the UI. These tools provide such a solid foundation that I was able to focus on the tasks needed for my desired outcome rather than fighting with the tools. The learning curve was extensive, to be sure, but LlamaIndex and Streamlit are both straightforward and powerful tools with extensive documentation. I had used neither before but came up to speed with them quickly.
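To give a sense of why Streamlit makes the UI side so approachable, here is a minimal sketch of what the wiring might look like. The Streamlit calls (`st.title`, `st.text_input`, `st.write`) are the real API, but everything else is a stand-in: `render_answer` is a hypothetical helper, and the actual app answers with its RAG pipeline rather than a placeholder.

```python
def render_answer(answer, sources):
    # Hypothetical helper: format the answer plus the video moments
    # it came from for display.
    lines = [answer, "", "Sources:"]
    lines += [f"- {title} @ {timestamp}" for title, timestamp in sources]
    return "\n".join(lines)

def main():
    # Launch with `streamlit run app.py`. Imported lazily here so the
    # helper above can be used (and tested) without Streamlit installed.
    import streamlit as st

    st.title("The Journey Sage Finder")
    question = st.text_input("Ask a question about The Journey Sage")
    if question:
        # In the real app, a RAG query engine would produce the answer
        # and its sources here; this stand-in just shows the shape.
        answer = "(answer from the RAG pipeline goes here)"
        st.write(render_answer(answer, [("Example video", "12:45")]))
```

Streamlit reruns the whole script on every interaction, which is what lets a plain top-to-bottom script behave like an interactive app with no explicit event handling.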

There is a big difference between building a prototype and building a production-ready application. To continue the chef analogy, it’s one thing to create the dish; it’s another thing entirely to make sure that all the dishes for a table are ready at the same time and can be served together. With code, deploying a production system means making sure all of the right code gets to the right place at the right time. Unlike my blog, where a typo is just embarrassing, one misspelling in code can make the entire project fail to work.

This requires a few things I had not been hands-on with in decades.

  1. Source control. Always knowing exactly which version of code you have in different places at any time. The best code in the world is no good if you don’t know exactly what is where.
  2. Deployment. Moving what you create to where it will run. This is nowhere near as simple as it sounds and worthy of its own series of blog posts.

Also worth its own post: some of the things I thought would be trivial turned out to be where most of the time was spent. It was a stark reminder that inflexibility on preferences makes the cost of building a project skyrocket.

What I Learned

I learned a huge amount by pulling together and deploying this product myself. First and foremost: ideas are easy; execution is the hard part. In fact, it’s 10% about great ideas and 90% about great execution. This was a fairly complex idea: taking data from disparate sources, making it searchable, and guiding people to the relevant spot in the right video.

If the idea was complex, the implementation was far more complex. In addition to having to learn more hands-on details about RAG AI and mastering LlamaIndex, advancing the prototype to the production-ready version presented here necessitated my immersion in no fewer than five technologies or concepts that were either new to me or that I hadn’t used for over a decade. These included Streamlit, WordPress, embedding techniques, managing static server files, and Git.

On top of this, these tasks must be repeatable and automated. Changes I make on my machine must move seamlessly and quickly to the website where you see them here. But only changes I have tested can move, and I need to be able to revert to past versions if needed. I got to see first-hand why a number of DevOps ideas, such as continuous integration, are so important to build in from the start. Trying to add them later is like trying to expand a building’s foundation after the building is fully built: possible, but the longer you wait, the harder it becomes and the more likely it is to impact the entire structure.

No doubt I will blog more in the coming days about what I’ve learned, but I cherish having been given the opportunity to dive deep and build this from scratch. It reminded me of how complex a project is, even after you have all the technical needs worked out. Having the idea is only one very small piece. Good execution is the hard part.


Discover more from Lowry On Leadership

Subscribe to get the latest posts sent to your email.



