Large Language Models (LLMs) are reshaping how software applications are built. This intensive one-day course shows you how to go beyond experimenting with chatbots and instead build real-world AI applications powered by LLMs.
The training is practical from start to finish. You will begin with prompt engineering basics, progress to advanced techniques for structured outputs and reasoning, and then learn how to connect LLMs to external data sources using Retrieval-Augmented Generation (RAG). The course also introduces the Hugging Face ecosystem, giving you the skills to apply pre-trained models for tasks such as text classification and sentiment analysis.
By the end of the course, you will have written working Python code, implemented multiple AI-powered applications, and developed the confidence to adapt these methods to your own projects.
Target Audience:
This course is designed for software developers, data scientists and technical professionals who want to build practical applications with LLMs. It is well suited for teams planning to add AI features to existing products, as well as individuals looking to understand how to implement large language models in production environments.
Prerequisites:
To get the most from this course, participants should have:
- Working knowledge of Python programming
- Basic understanding of REST APIs and JSON
- Familiarity with command-line environments
- A grounding in software development principles
The 5 lessons cover:
Foundations of Prompt Engineering
Learn the structure of effective prompts, how system, user and assistant messages work together, and how parameters such as temperature affect responses. You will create and refine a customer support prompt that delivers consistent, high-quality answers.
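To give a flavour of this lesson, here is a minimal sketch of a chat request with separate system and user messages and an explicit temperature setting, using the OpenAI Python client. The model name and the support-desk prompt are illustrative assumptions, not course materials.

```python
# Minimal sketch: system/user message roles and temperature (illustrative only).
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use any chat model you have access to
    messages=[
        # The system message sets persistent behaviour for the assistant.
        {"role": "system", "content": "You are a concise, friendly customer support agent."},
        # The user message carries the actual request.
        {"role": "user", "content": "My order arrived damaged. What should I do?"},
    ],
    temperature=0.2,  # lower temperature -> more consistent, less creative answers
)

print(response.choices[0].message.content)
```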
Working with the OpenAI API
Set up and configure the Python client, understand authentication and rate limits, and implement API calls. You will write a question-answering system with proper error handling and response management.
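The sketch below shows the kind of error handling this lesson works towards: a question-answering call that retries on rate limits with exponential backoff. The retry policy and model name are assumptions chosen for illustration.

```python
# Sketch of an API call with basic error handling and retries (illustrative).
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
import time

from openai import OpenAI, RateLimitError, APIError

client = OpenAI()

def ask(question: str, retries: int = 3) -> str:
    """Send a question to the chat API, retrying on rate-limit errors."""
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model name
                messages=[{"role": "user", "content": question}],
            )
            return response.choices[0].message.content
        except RateLimitError:
            # Back off exponentially before retrying: 1s, 2s, 4s, ...
            time.sleep(2 ** attempt)
        except APIError as exc:
            # Surface non-retryable API errors to the caller.
            raise RuntimeError(f"API call failed: {exc}") from exc
    raise RuntimeError("Exceeded retry budget for rate-limited requests")

print(ask("What is prompt engineering?"))
```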
Advanced Prompt Engineering Techniques
Move beyond simple prompts by implementing chain-of-thought reasoning, designing structured outputs and building templates for consistency. You will extract structured data from unstructured text using JSON-formatted outputs.
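As a hedged sketch of structured extraction, the example below asks the model to return a JSON object and parses it in Python. The field names and the invoice text are invented for illustration; the JSON response format shown is one way to request machine-readable output, not the only approach covered.

```python
# Sketch: extracting structured data as JSON from unstructured text (illustrative).
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
import json

from openai import OpenAI

client = OpenAI()

text = "Invoice 1042 from Acme Ltd, dated 3 March 2024, totals 249.99 EUR."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    # JSON mode asks the model to return a single valid JSON object.
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": (
                "Extract details from the user's text and respond with a JSON object "
                "using exactly these keys: invoice_number, vendor, date, total."
            ),
        },
        {"role": "user", "content": text},
    ],
    temperature=0,  # keep extraction output as deterministic as possible
)

invoice = json.loads(response.choices[0].message.content)
print(invoice["vendor"], invoice["total"])
```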
Retrieval-Augmented Generation (RAG)
Understand how RAG enhances LLM responses by connecting them with your own data. You will create embeddings, work with vector databases, and build a complete RAG pipeline to answer questions based on uploaded documents.
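The following is a deliberately tiny RAG sketch: it embeds a couple of documents, retrieves the closest match with plain cosine similarity, and feeds that context into the chat model. A production pipeline would use a vector database as taught in the lesson; the in-memory search, document snippets, and model names here are simplifying assumptions.

```python
# Sketch of a tiny RAG loop: embed documents, retrieve the best match, answer with it.
# Assumes openai (v1+) and numpy are installed and OPENAI_API_KEY is set.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days of delivery.",
    "Support is available by email Monday to Friday, 9am to 5pm.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)

question = "How long do I have to return an item?"
query_vector = embed([question])[0]

# Cosine similarity against every document; the best match becomes the context.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
context = documents[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": f"Answer using only this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```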
Introduction to Hugging Face
Discover the Hugging Face ecosystem and its tools for natural language processing. You will work with pre-trained models for sentiment analysis and classification, comparing results across architectures to select the most effective solution.
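As a small taste of this lesson, the sketch below runs sentiment analysis with a pre-trained model through the transformers pipeline API. The review texts are invented, and the default model downloaded by the pipeline is an assumption; the lesson compares several architectures rather than relying on one.

```python
# Sketch: sentiment analysis with a pre-trained Hugging Face pipeline (illustrative).
# Assumes the transformers package (and a backend such as PyTorch) is installed;
# the first call downloads a default sentiment model from the Hugging Face Hub.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "The onboarding was smooth and the support team was brilliant.",
    "The app keeps crashing and nobody replies to my tickets.",
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result contains a predicted label and a confidence score.
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```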