Building LLM Powered Applications: Create intelligent apps and agents with large language models
By Valentina Alto
- Publisher: Packt Publishing
- Year: 2024
- Language: English
- Pages: 343
- Category: Library
Synopsis
Get hands-on with GPT-3.5, GPT-4, LangChain, Llama 2, Falcon LLM, and more to build sophisticated LLM-powered AI applications.

Key Features
Embed LLMs into real-world applications
Use LangChain to orchestrate LLMs and their components within applications
Grasp basic and advanced techniques of prompt engineering
Book Description

Building LLM Powered Applications delves into the fundamental concepts, cutting-edge technologies, and practical applications that LLMs offer, ultimately paving the way for the emergence of large foundation models (LFMs) that extend the boundaries of AI capabilities. The book begins with an in-depth introduction to LLMs. We then explore various mainstream architectural frameworks, including both proprietary models (GPT 3.5/4) and open-source models (Falcon LLM), and analyze their unique strengths and differences. Moving ahead, with a focus on the Python-based, lightweight framework called LangChain, we guide you through the process of creating intelligent agents capable of retrieving information from unstructured data and engaging with structured data using LLMs and powerful toolkits. Furthermore, the book ventures into the realm of LFMs, which transcend language modeling to encompass various AI tasks and modalities, such as vision and audio. Whether you are a seasoned AI expert or a newcomer to the field, this book is your roadmap to unlock the full potential of LLMs and forge a new era of intelligent machines.

What you will learn
Explore the core components of LLM architecture, including encoder-decoder blocks and embeddings
Understand the unique features of LLMs like GPT-3.5/4, Llama 2, and Falcon LLM
Use AI orchestrators like LangChain, with Streamlit for the frontend
Get familiar with LLM components such as memory, prompts, and tools
Learn how to use non-parametric knowledge and vector databases
Understand the implications of LFMs for AI research and industry applications
Customize your LLMs with fine tuning
Learn about the ethical implications of LLM-powered applications
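Two of the prompt engineering techniques listed above, delimiters and the few-shot approach, can be sketched in plain Python. This is an illustrative example of how such a prompt might be assembled, not code from the book; the template wording and example reviews are hypothetical.

```python
# Sketch of two prompt-engineering techniques: delimiters (to mark where
# untrusted input begins and ends) and few-shot prompting (worked examples
# shown before the real input). Plain Python, no external dependencies.

FEW_SHOT_EXAMPLES = [
    {"review": "The battery dies within an hour.", "sentiment": "negative"},
    {"review": "Setup took two minutes and it just works.", "sentiment": "positive"},
]

def build_prompt(review: str) -> str:
    """Assemble a sentiment-classification prompt with delimiters and few-shot examples."""
    lines = [
        "Classify the sentiment of the review enclosed in triple backticks",
        "as 'positive' or 'negative'.",
        "",
    ]
    # Few-shot: demonstrate the expected input/output format first.
    for ex in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: ```{ex['review']}```")
        lines.append(f"Sentiment: {ex['sentiment']}")
        lines.append("")
    # Delimiters around the actual input keep instructions and data separate.
    lines.append(f"Review: ```{review}```")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt("Great screen, terrible keyboard.")
print(prompt)
```

The resulting string would be sent as the prompt to whichever LLM the application uses; the same structure works whether the call goes through an orchestrator like LangChain or directly against a model API.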
Who this book is for

Software engineers and data scientists who want hands-on guidance for applying LLMs to build applications. The book will also appeal to technical leaders, students, and researchers interested in applied LLM topics. Prior experience with LLMs is not assumed, but readers should have core ML and software engineering fundamentals to understand and apply the content.
Table of Contents
Cover
Copyright
Contributors
Table of Contents
Preface
Chapter 1: Introduction to Large Language Models
What are large foundation models and LLMs?
AI paradigm shift – an introduction to foundation models
Under the hood of an LLM
Most popular LLM transformers-based architectures
Early experiments
Introducing the transformer architecture
Training and evaluating LLMs
Training an LLM
Model evaluation
Base models versus customized models
How to customize your model
Summary
References
Chapter 2: LLMs for AI-Powered Applications
How LLMs are changing software development
The copilot system
Introducing AI orchestrators to embed LLMs into applications
The main components of AI orchestrators
LangChain
Haystack
Semantic Kernel
How to choose a framework
Summary
References
Chapter 3: Choosing an LLM for Your Application
The most promising LLMs in the market
Proprietary models
GPT-4
Gemini 1.5
Claude 2
Open-source models
LLaMA-2
Falcon LLM
Mistral
Beyond language models
A decision framework to pick the right LLM
Considerations
Case study
Summary
References
Chapter 4: Prompt Engineering
Technical requirements
What is prompt engineering?
Principles of prompt engineering
Clear instructions
Split complex tasks into subtasks
Ask for justification
Generate many outputs, then use the model to pick the best one
Repeat instructions at the end
Use delimiters
Advanced techniques
Few-shot approach
Chain of thought
ReAct
Summary
References
Chapter 5: Embedding LLMs within Your Applications
Technical requirements
A brief note about LangChain
Getting started with LangChain
Models and prompts
Data connections
Memory
Chains
Agents
Working with LLMs via the Hugging Face Hub
Create a Hugging Face user access token
Storing your secrets in an .env file
Start using open-source LLMs
Summary
References
Chapter 6: Building Conversational Applications
Technical requirements
Getting started with conversational applications
Creating a plain vanilla bot
Adding memory
Adding non-parametric knowledge
Adding external tools
Developing the front-end with Streamlit
Summary
References
Chapter 7: Search and Recommendation Engines with LLMs
Technical requirements
Introduction to recommendation systems
Existing recommendation systems
K-nearest neighbors
Matrix factorization
Neural networks
How LLMs are changing recommendation systems
Implementing an LLM-powered recommendation system
Data preprocessing
Building a QA recommendation chatbot in a cold-start scenario
Building a content-based system
Developing the front-end with Streamlit
Summary
References
Chapter 8: Using LLMs with Structured Data
Technical requirements
What is structured data?
Getting started with relational databases
Introduction to relational databases
Overview of the Chinook database
How to work with relational databases in Python
Implementing the DBCopilot with LangChain
LangChain agents and SQL Agent
Prompt engineering
Adding further tools
Developing the front-end with Streamlit
Summary
References
Chapter 9: Working with Code
Technical requirements
Choosing the right LLM for code
Code understanding and generation
Falcon LLM
CodeLlama
StarCoder
Act as an algorithm
Leveraging Code Interpreter
Summary
References
Chapter 10: Building Multimodal Applications with LLMs
Technical requirements
Why multimodality?
Building a multimodal agent with LangChain
Option 1: Using an out-of-the-box toolkit for Azure AI Services
Getting Started with AzureCognitiveServicesToolkit
Setting up the toolkit
Leveraging a single tool
Leveraging multiple tools
Building an end-to-end application for invoice analysis
Option 2: Combining single tools into one agent
YouTube tools and Whisper
DALL·E and text generation
Putting it all together
Option 3: Hard-coded approach with a sequential chain
Comparing the three options
Developing the front-end with Streamlit
Summary
References
Chapter 11: Fine-Tuning Large Language Models
Technical requirements
What is fine-tuning?
When is fine-tuning necessary?
Getting started with fine-tuning
Obtaining the dataset
Tokenizing the data
Fine-tuning the model
Using evaluation metrics
Training and saving
Summary
References
Chapter 12: Responsible AI
What is Responsible AI and why do we need it?
Responsible AI architecture
Model level
Metaprompt level
User interface level
Regulations surrounding Responsible AI
Summary
References
Chapter 13: Emerging Trends and Innovations
The latest trends in language models and generative AI
GPT-4V(ision)
DALL-E 3
AutoGen
Small language models
Companies embracing generative AI
Coca-Cola
Notion
Malbek
Microsoft
Summary
References
Packt Page
Other Books You May Enjoy
Index