Data augmentation with LlamaIndex
Large Language Models (LLMs) continue to amaze us with their capabilities. However, using LLMs in production AI applications requires integrating private data. Join us for a captivating conversation with Jerry Liu from LlamaIndex, who shares valuable insights into data ingestion, indexing, and querying tailored for LLM applications. Along the way, we explore different query patterns and venture beyond the realm of vector databases.
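The ingest → index → query loop Jerry describes can be sketched with a toy in-memory index. This is an illustrative stand-in, not the LlamaIndex API: it uses plain term-frequency vectors and cosine similarity in place of real model-based embeddings and a production vector store.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a term-frequency vector.
    # Real applications use a model-based embedding instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyIndex:
    """Ingest documents, store their vectors, answer queries by similarity."""
    def __init__(self):
        self.docs = []  # (text, vector) pairs — the "vector store"

    def ingest(self, text):
        self.docs.append((text, embed(text)))

    def query(self, question, top_k=1):
        qv = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]

idx = ToyIndex()
idx.ingest("LlamaIndex connects private data to LLMs")
idx.ingest("Fastly is a content delivery network")
print(idx.query("How do I connect my data to an LLM?"))
# → ['LlamaIndex connects private data to LLMs']
```

In a real LLM application, the retrieved text would then be stuffed into the model's prompt as context — the "data augmentation" the episode title refers to.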
Changelog++ members save 1 minute on this episode because they made the ads disappear. Join today!
- Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com
- Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
- Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
- Jerry Liu – Twitter, GitHub
- Chris Benson – Twitter, GitHub, LinkedIn, Website
- Daniel Whitenack – Twitter, GitHub, Website
Something missing or broken? PRs welcome!
(00:00) - Welcome to Practical AI
(00:43) - Jerry Liu
(04:28) - LlamaIndex
(08:12) - What do I get?
(11:23) - More power, less work
(13:26) - Fitting the pieces together
(16:49) - 3 levels of integrating
(19:13) - How to think about indexing
(21:44) - Defining embedding
(23:09) - Index vs vector storage
(25:07) - Alternatives
(30:41) - Query scheme workflow
(35:22) - Evaluating responses
(40:03) - Awesome new stuff
(43:23) - Links and show notes
(44:05) - Outro