PEMCO – Prompt Engineering For Microsoft Copilot

  • Code: PEMCO
  • Duration: 1 Day
  • Price per delegate: £795.00 +VAT

Trained over 60,000 delegates

Course delivered by industry expert instructors

Highly competitive pricing


Course Description

This one-day masterclass teaches effective prompt engineering for Microsoft Copilot and ChatGPT.
Learn the RICE FACT framework for crafting prompts, understand LLM limitations, and master conversation management techniques.
Through hands-on exercises, you'll develop skills in advanced prompting patterns and build reusable templates for writing, research, data analysis, and coding assistance.

By the end of the day, you'll have the skills to consistently get high-quality, relevant responses from AI tools, dramatically improving your productivity and the value you extract from these powerful technologies.

Prerequisites

  • An appreciation of technology and an interest in AI/GenAI

Course Content

Introduction to Prompt Engineering

  • What is Prompt Engineering and why it matters
  • How Large Language Models work (simplified overview)
  • Limitations of tools like Microsoft Copilot: hallucinations, knowledge cutoffs, biases
  • Overview of different prompting approaches

The RICE FACT Framework

  • Introduction to the RICE FACT framework for effective prompting
    • Role: Defining the AI's role or expertise
    • Input: Providing clear input and questions
    • Context: Setting the background and situation
    • Expectation: Specifying what you want as output
    • Format: Defining output structure (lists, tables, paragraphs, etc.)
    • Audience: Identifying who the response is for
    • Constraints: Setting boundaries and limitations
    • Tone: Defining the style and voice of the response
  • Practical exercises applying RICE FACT
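As a concrete illustration, the eight RICE FACT elements can be assembled into a single prompt. The wording below is our own sketch, not official course material:

```python
# Sketch of a prompt built from the eight RICE FACT elements.
# The field names follow the framework; the example content is illustrative.
rice_fact = {
    "Role": "You are an experienced marketing copywriter.",
    "Input": "Rewrite the product description below for our website.",
    "Context": "We sell handmade leather notebooks to small businesses.",
    "Expectation": "Produce one polished paragraph of around 80 words.",
    "Format": "Plain prose, no bullet points or headings.",
    "Audience": "Busy office managers with no design background.",
    "Constraints": "Do not mention price or make delivery promises.",
    "Tone": "Warm, professional, and concise.",
}

# Join the elements into one prompt, one labelled line per element.
prompt = "\n".join(f"{key}: {value}" for key, value in rice_fact.items())
print(prompt)
```

In practice you would rarely label every line so explicitly, but spelling the elements out this way makes it easy to check that none of the eight has been forgotten.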

Understanding Model Parameters and Behavior

  • What is temperature and how it affects responses
  • Understanding creativity vs. consistency in outputs
  • When to use different temperature settings
  • Other parameters that influence model behavior
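Under the hood, temperature rescales the model's raw scores (logits) before they are turned into probabilities. This small self-contained sketch illustrates the effect; the numbers are invented and this is not Copilot's internal code:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    top = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - top) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens

low = softmax_with_temperature(logits, 0.2)   # near-deterministic output
high = softmax_with_temperature(logits, 2.0)  # flatter, more varied output

# At low temperature the top-scoring token dominates; at high temperature
# probability mass spreads across the alternatives.
print(low, high)
```

This is why low temperatures suit tasks needing consistency (summaries, extraction) while higher temperatures suit brainstorming and creative writing.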

Context Management and Token Limits

  • Understanding token limits in ChatGPT and other LLMs
  • How context windows work
  • Strategies for managing long conversations
  • Breaking down complex tasks to fit context limits
  • What happens when you exceed token limits
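One common strategy from this section, trimming the oldest turns so a conversation fits a fixed context window, can be sketched as follows. The four-characters-per-token estimate is a rough rule of thumb for English text, not an exact tokenizer:

```python
def estimate_tokens(text):
    # Very rough heuristic: roughly 4 characters per token for English.
    return max(1, len(text) // 4)

def trim_history(messages, max_tokens):
    """Drop the oldest messages until the total fits the token budget.

    `messages` is a list of (role, text) tuples, oldest first.
    """
    kept = []
    budget = max_tokens
    for role, text in reversed(messages):  # keep the newest turns first
        cost = estimate_tokens(text)
        if cost > budget:
            break
        kept.append((role, text))
        budget -= cost
    kept.reverse()  # restore oldest-first order
    return kept

history = [
    ("user", "Summarise our Q1 sales figures." * 10),
    ("assistant", "Here is the summary you asked for..." * 10),
    ("user", "Now turn that summary into three bullet points."),
]
trimmed = trim_history(history, max_tokens=30)
```

With a budget of 30 tokens, only the most recent user turn survives, which is exactly why long conversations gradually "forget" their beginnings.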

Conversation Management Techniques

  • Starting effective conversations with good initial prompts
  • Continuing existing conversations and maintaining context
  • Branching conversations: when and how to start fresh threads
  • Using conversation history effectively
  • When to reset and start a new conversation
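Branching a conversation, as covered above, amounts to copying the shared history into a new thread so each line of enquiry keeps its own context. This is a minimal sketch of that idea using our own data model, not any vendor's API:

```python
import copy

class Conversation:
    """Minimal conversation thread: an ordered list of role/content turns."""

    def __init__(self, messages=None):
        self.messages = list(messages or [])

    def add(self, role, text):
        self.messages.append({"role": role, "content": text})

    def branch(self):
        # Start a fresh thread that inherits the history so far.
        return Conversation(copy.deepcopy(self.messages))

main = Conversation()
main.add("user", "Draft an outline for a report on remote working.")
main.add("assistant", "1. Introduction 2. Benefits 3. Challenges ...")

# Branch to explore one section without cluttering the main thread.
side = main.branch()
side.add("user", "Expand only the 'Challenges' section.")
```

The main thread stays untouched while the branch carries the shared history forward, mirroring the "start a fresh thread" technique taught in this section.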

Reusable Prompts and Templates

  • Creating prompt templates for repeated tasks
  • Building a personal prompt library
  • Customizing and adapting existing prompts
  • Sharing and collaborating on prompts with teams
  • Using custom instructions and system prompts
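A reusable prompt template, as described above, can be as simple as a parameterised string. This sketch uses Python's standard `string.Template`; the field names and wording are illustrative, not prescribed by the course:

```python
from string import Template

# One entry in a personal prompt library: a fill-in-the-blanks prompt.
summarise = Template(
    "You are a $role. Summarise the following $doc_type for $audience "
    "in no more than $word_limit words:\n\n$text"
)

# Reuse the same template for different tasks by swapping the values.
prompt = summarise.substitute(
    role="senior analyst",
    doc_type="meeting transcript",
    audience="the executive team",
    word_limit="150",
    text="(paste transcript here)",
)
```

Keeping a small library of such templates means the structure of a good prompt is written once and only the variable details change per task.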

Advanced Prompting Patterns

  • Chain-of-thought prompting for complex reasoning
  • Few-shot prompting with examples
  • Zero-shot vs. few-shot approaches
  • Multi-step prompting for complex tasks
  • Iterative refinement: improving responses through follow-up prompts
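Few-shot prompting, listed above, simply means showing the model worked examples before the real task. One way to assemble such a prompt programmatically (our own sketch, with invented example data):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, worked examples, and the real question."""
    parts = [instruction, ""]
    for given, expected in examples:
        parts.append(f"Input: {given}")
        parts.append(f"Output: {expected}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # the model continues from here
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as Positive or Negative.",
    [
        ("The battery lasts all day, brilliant!", "Positive"),
        ("Stopped working after a week.", "Negative"),
    ],
    "Setup was painless and the screen is gorgeous.",
)
```

A zero-shot version would send only the instruction and the query; adding even two worked examples, as here, typically makes the expected output format far more predictable.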

Practical Applications and Best Practices

  • Common use cases: writing, research, analysis, coding assistance
  • Verifying and fact-checking AI responses
  • Combining ChatGPT with other tools and workflows
  • Privacy and security considerations
  • Ethical use of AI tools
  • Troubleshooting common prompting problems
   