LLM Testing, Monitoring, and Continuous Improvement
In Chapter 9, you learned how to deploy an AI application to production and use the LangGraph Platform to host and debug it. While your
So far, we’ve covered the core concepts and tools needed to build an AI application. You’ve learned how to use LangChain and LangGraph to generate
Large Language Models offer incredible opportunities for building innovative applications, but they also come with significant limitations. The key to success isn’t avoiding these constraints,
In the previous chapter, we explored agent architecture—the most advanced LLM architecture we’ve encountered so far. It combines chain-of-thought prompting, tool use, and looping into
In AI, the concept of agents traces back to long-established principles. As defined by Stuart Russell and Peter Norvig in their textbook Artificial Intelligence (Pearson,
We’ve covered the core features of LLM applications. Now, the pressing question is: How do we integrate these components into a cohesive application
In Chapter 3, you learned how to give an AI chatbot up-to-date, relevant context so it can respond accurately. However, a production-ready chatbot must also
In the previous chapter, we learned how to process data and store embeddings in a vector store. This chapter focuses on efficiently retrieving the most
In the previous chapter, you learned the core building blocks of an LLM application with LangChain and built a simple chatbot that sends a prompt
The Preface introduced the power of LLM prompting and showed how different techniques—especially when combined—can dramatically change model outputs. The core challenge in building effective