diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/001_module2_prompt_no_Markdown.png b/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/001_module2_prompt_no_Markdown.png
deleted file mode 100644
index 608d68b1..00000000
Binary files a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/001_module2_prompt_no_Markdown.png and /dev/null differ
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/002_module2_summarize_reason_and_reccomend.png b/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/002_module2_summarize_reason_and_reccomend.png
deleted file mode 100644
index 76cb8bb9..00000000
Binary files a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/002_module2_summarize_reason_and_reccomend.png and /dev/null differ
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/003_module2_creating_links_from_alert_IDs.png b/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/003_module2_creating_links_from_alert_IDs.png
deleted file mode 100644
index ae978833..00000000
Binary files a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/003_module2_creating_links_from_alert_IDs.png and /dev/null differ
diff --git a/Technical Workshops/Markdown Workshop/readme.md b/Technical Workshops/Markdown Workshop/readme.md
deleted file mode 100644
index 31daba26..00000000
--- a/Technical Workshops/Markdown Workshop/readme.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Welcome to Microsoft Copilot for Security Labs!
-
-
-
-## Introduction
-
-This workshop is designed to help you get up to speed with Microsoft Security Copilot. It offers hands-on experience in using Markdown, which can significantly enhance how our Large Language Models (LLMs) reason through data and format the results you receive.
-
-## Recommendations
-
-It's highly recommended that you review the following two pages prior to starting these modules.
- - [Prompting in Microsoft Security Copilot](https://learn.microsoft.com/en-us/copilot/security/prompting-security-copilot)
- - [Create effective prompts](https://learn.microsoft.com/en-us/copilot/security/prompting-tips)
-
-## What is markdown
-
-Markdown is a lightweight markup language developed by John Gruber in 2004 that is used to format plain text. It allows you to easily add formatting elements such as headers, lists, links, images, bold or italic text, and more, using simple symbols or characters.
-
-## Why use markdown?
-Markdown improves human readability and acts as a delimiter, helping Large Language Models (LLMs) better understand user intent and desired output by providing structure and clarity.
-
-Using Markdown provides:
- - Clear separation of content
- - Enhancing context recognition
- - Distinguishing between input and output
- - Highlighting intent with emphasis
- - Prompting LLMs with specific formatting
- - Facilitating conditional output
-
-## Workshop Modules
-
-- [**Module 1 – Using Markdown for formatting**](./Module%201%20-%20Formatting%20with%20markdown)
-- [**Module 2 – Enhancing reasoning and formatting with markdown**](./Module%202%20-%20Enhancing%20reasoning%20and%20formatting%20with%20markdown)
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 1 - Basics of Prompt Engineering/images/001_module1_basic_elements.png b/Technical Workshops/Prompt Engineering Workshop/Module 1 - Basics of Prompt Engineering/images/001_module1_basic_elements.png
new file mode 100644
index 00000000..a91e6ed3
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 1 - Basics of Prompt Engineering/images/001_module1_basic_elements.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 1 - Basics of Prompt Engineering/readme.md b/Technical Workshops/Prompt Engineering Workshop/Module 1 - Basics of Prompt Engineering/readme.md
new file mode 100644
index 00000000..c625c696
--- /dev/null
+++ b/Technical Workshops/Prompt Engineering Workshop/Module 1 - Basics of Prompt Engineering/readme.md
@@ -0,0 +1,64 @@
+## Module 1 - Basics of Prompt Engineering - Security Copilot
+
+
+
+Author: Rick Kotlarz
+Updated: 2025-April-4
+
+#### ⌛ Estimated time to complete this lab: 15 minutes
+#### 🎓 Level: 100 (Beginner)
+
+The following module demonstrates effective prompt engineering for those just getting started with Security Copilot.
+
+1. [How Security Copilot works](#how-security-copilot-works)
+2. [Prompting basics](#prompting-basics)
+3. [Bad prompting](#bad-prompting)
+4. [Good prompting](#good-prompting)
+
+
+### How Security Copilot works
+
+Regardless of whether you're using the embedded or standalone experience, Security Copilot prompts are evaluated by the Orchestrator. The Orchestrator’s primary role is to interpret the prompt, check enabled plugins, and map values from the prompt to the appropriate fields within one or more skills. As you might expect, prompts that lack sufficient detail often result in poor or incomplete responses or no response at all.
+
+In the embedded experience, prompts are tied to a specific plugin based on the context. For example, if you're in a Purview DLP experience, asking questions about Intune or Defender will likely return no results or incomplete answers.
+
+In contrast, the standalone experience supports all enabled plugins and allows you to pivot freely across plugins and skills. All prompts operate within the security context of your current role. If your role lacks the necessary permissions, prompts requesting that data will not be processed.
+
+It's strongly recommended that you review the following two links before starting these modules.
+ - [Prompting in Microsoft Security Copilot](https://learn.microsoft.com/en-us/copilot/security/prompting-security-copilot)
+ - [Create effective prompts](https://learn.microsoft.com/en-us/copilot/security/prompting-tips)
+
+### Prompting basics
+
+While the order of these elements isn’t critical, including them in your prompts significantly improves the quality of the response.
+
+
+
+### Bad prompting
+
+Bad prompts contain vague or highly subjective elements related to **Goal, Context, Source, or Expectation**.
+
+| Bad prompt examples | Reasoning why they're bad |
+|--------|--------|
+| Show me important alerts. | Using the word "important" without clearly defining what it means to you can lead to widely varying results. |
+| How is my security posture from Defender looking today? Show results in a table. | Security posture can refer to many different resources across the Security and Compliance stack. Since there is no single plugin or skill that provides a complete review, asking about overall security posture will result in a response based on incomplete information. |
+| What's the compliance status of this entity? | Compliance could refer to areas within Intune, Purview, Entra, or other services, making the term too broad without additional context. |
+| Tell me the MFA status for device ASH-U2746 | Devices do not have an MFA status. Asking about this, instead of the MFA status of the user currently or most recently logged in, will result in an inappropriate or failed response. |
+| What's the MFA status for that user | Not referring to a named entity by a unique identifier in a prompt will almost always result in errors. It’s best to use a UPN, FQDN, Resource Object ID, or another identifier that is guaranteed to be unique within your organization.|
+
+### Good prompting
+
+Good prompts contain specific elements related to **Goal, Context, Source, and Expectation**.
+
+| Good prompt examples |
+|--------|
+| Using the Defender XDR plugin, provide an SOC manager summary of all Defender incidents over the last 7 days |
+| Using NL2KQL for Defender, show me a list of alerts with 'phish' in the title for the last 30 days |
+| Using Intune, provide a table showing the last 3 devices that were enrolled and their Operating System |
+| Using Entra, what is the MFA enrollment status for user: john.smith@contoso.com |
+| Using Purview, show the last 30 DLP alerts. For each user listed, provide a count of how many times they were included. |
+| Using Purview, provide a list of all users that triggered DLP alerts over the last 30 days. For each user, provide their UPN and a count of how many alerts they were associated with. |
+
+---
+
+✈️ Continue to [Module 2 - Standardizing Responses with Markdown](../Module%202%20-%20Standardizing%20Responses%20with%20Markdown)
diff --git a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/001_prompt_no_Markdown.png b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/001_prompt_no_Markdown.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/001_prompt_no_Markdown.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/001_prompt_no_Markdown.png
diff --git a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/002_AskGPT_Markdown_formatting.png b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/002_module2_AskGPT_Markdown_formatting.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/002_AskGPT_Markdown_formatting.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/002_module2_AskGPT_Markdown_formatting.png
diff --git a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/003_AskGPT_formatting_as_a_table_prompt.png b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/003_module2_AskGPT_formatting_as_a_table_prompt.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/003_AskGPT_formatting_as_a_table_prompt.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/003_module2_AskGPT_formatting_as_a_table_prompt.png
diff --git a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/004_AskGPT_formatting_as_a_list_prompt.png b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/004_module2_AskGPT_formatting_as_a_list_prompt.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/004_AskGPT_formatting_as_a_list_prompt.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/004_module2_AskGPT_formatting_as_a_list_prompt.png
diff --git a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/005_AskGPT_formatting_as_a_list_result.png b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/005_module2_AskGPT_formatting_as_a_list_result.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/005_AskGPT_formatting_as_a_list_result.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/005_module2_AskGPT_formatting_as_a_list_result.png
diff --git a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/006_prompt_that_includes_AskGPT_formatting.png b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/006_module2_prompt_that_includes_AskGPT_formatting.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/images/006_prompt_that_includes_AskGPT_formatting.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/images/006_module2_prompt_that_includes_AskGPT_formatting.png
diff --git a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/readme.md b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/readme.md
similarity index 62%
rename from Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/readme.md
rename to Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/readme.md
index eee97186..2ec27e3a 100644
--- a/Technical Workshops/Markdown Workshop/Module 1 - Formatting with markdown/readme.md
+++ b/Technical Workshops/Prompt Engineering Workshop/Module 2 - Standardizing Responses with Markdown/readme.md
@@ -1,19 +1,39 @@
-## Module 1 - Formatting with Markdown in Microsoft Security Copilot
+## Module 2 - Standardizing Responses with Markdown - Security Copilot
-
+
-#### ⌛ Estimated time to complete this lab: 15 minutes
-#### 🎓 Level: 100 (Beginner)
+Author: Rick Kotlarz
+Updated: 2025-April-4
+
+#### ⌛ Estimated time to complete this lab: 30 minutes
+#### 🎓 Level: 200 (Intermediate)
+
+1. [Introduction](#introduction)
+2. [What is Markdown and why use it](#what-is-markdown-and-why-use-it)
+3. [Initial prompt](#initial-prompt)
+4. [Formatting as a table](#formatting-as-a-table)
+5. [Formatting as a list](#formatting-as-a-list)
+6. [Combining a prompt with Markdown formatting instructions](#combining-a-prompt-with-markdown-formatting-instructions)
+7. [Available Markdown](#available-markdown)
+8. [Increasing efficiency](#increasing-efficiency)
+
+## Introduction
The following example prompts demonstrate how users can modify the output from a plugin skill using Markdown. Large Language Models (LLMs) interpret context and follow instructions more effectively when delimiters and Markdown are included in prompts. Although natural language can be used, it often requires more detailed explanations than most users are willing to provide. By offering clear instructions and utilizing Markdown, as covered in this module, you can reduce the likelihood of output variance.
-1. [Initial prompt](#initial-prompt)
-2. [Formatting as a table with AskGPT](#formatting-as-a-table-with-askgpt)
-3. [Formatting as a list with AskGPT](#formatting-as-a-list-with-askgpt)
-4. [Combining a prompt with Markdown formatting instructions](#combining-a-prompt-with-Markdown-formatting-instructions)
-5. [Available Markdown](#available-markdown)
-6. [Increasing efficiency](#increasing-efficiency)
+## What is Markdown and why use it
+[Markdown](https://commonmark.org/) is a lightweight markup language developed by John Gruber in 2004 that is used to format plain text. It allows you to easily add formatting elements such as headers, lists, links, images, bold or italic text, and more, using simple symbols or characters. Markdown enhances human readability, gives content a clear structure, and provides delimiters that help Large Language Models (LLMs) better interpret user intent, instructions, and expected output.
+
+Using Markdown provides:
+ - Clear separation of content
+ - Enhanced context recognition
+ - A clear distinction between input and output
+ - Emphasis that highlights intent
+ - Specific formatting instructions for the LLM
+ - Support for conditional output
+
+While this workshop doesn't cover Markdown syntax in depth, you can easily find that information [here](https://commonmark.org/help/). The same site also offers a [10 minute interactive tutorial](https://commonmark.org/help/tutorial/) and lets you [play with the reference CommonMark implementation](https://spec.commonmark.org/dingus/).
### Initial prompt
@@ -22,11 +42,15 @@ Running a prompt without specifying output expectations can lead to inconsistent
```
List the last 3 incidents from Defender.
```
+

-### Formatting as a table with AskGPT
+### Formatting as a table
+
+The /AskGPT skill bypasses plugins and interacts directly with the underlying LLM. We can use it to provide instructions that modify the default output format. Since we'll be submitting follow-up prompts, start by instructing it to take no action except to read the instructions. Then, specify that all subsequent outputs should follow the provided Markdown format.
+
+Note that this will not reformat existing results; the instruction must be given **before** the prompt you want to affect.
-To modify the default output, we can use the /AskGPT skill and instruct the model to take no action other than reading the instructions. Then, specify that subsequent outputs should follow the provided Markdown format. Keep in mind that this will not reformat existing results; the instruction must be set **before** the prompt for which you want to change the output format.
```
/AskGPT No action is needed at this time, simply review the following instructions and respond with 'Ready!'. Instructions: All additional responses will be formatted to conform to the following Markdown example.
## Markdown example
@@ -34,13 +58,18 @@ To modify the default output, we can use the /AskGPT skill and instruct the mode
|-------------------------|-----------------|------------|-------------------|----------------------|--------------|-----------------|--------------------|-------------------|------------------------------|
| 2025-01-08T12:09:40.47Z | 1234 | Active | https://12.aka.ms | Multi-stage incident | High | John.Doe | Malware | True Positive | 2025-01-22T23:33:21.1733333Z |
```
-
+
+
+
+```
+List the last 3 incidents from Defender.
+```
Notice that re-running the [Initial prompt](#initial-prompt) now results with the first column of "Created Date," followed by "Incident ID," and then "Status," instead of "Incident ID," "Display Name," and "Severity." **Be sure to apply these instructions before formatting other prompts.** For better organization and easier access, consider saving this prompt in a promptbook.
-
+
-### Formatting as a list with AskGPT
+### Formatting as a list
Another example of Markdown formatting is shown below, using bullets, indentations, and a horizontal bar after each incident. In this example, the "Assigned To," "Classification," and "Determination" fields have been excluded from the formatted output by removing them from the Markdown example.
```
@@ -56,10 +85,13 @@ Another example of Markdown formatting is shown below, using bullets, indentatio
```
-
+
+```
+List the last 3 incidents from Defender.
+```
-
+
---
@@ -75,10 +107,11 @@ List the last 3 incidents from Defender. Ensure the output is formatted to confo
|-----------------|------------|-------------------|----------------------|--------------|-----------------|--------------------|-------------------|--------------------------|-----------------------------|
| 1234 | Active | https://12.aka.ms | Multi-stage incident | High | John.Doe | Malware | True Positive | 2025-01-08T12:09:40.47Z | 2025-01-22T23:33:21.1733333Z |
```
-
+
---
+
### Available Markdown
Given the speed of change within Security Copilot, the best way to visualize which Markdown syntax elements are available to you is by using the following prompt.
@@ -88,7 +121,6 @@ Please note that the standalone instance of Security Copilot currently does not
```
/AskGPT Assume the role of a Markdown syntax expert. Provide a comprehensive list of Markdown syntax elements, including tables, numbered lists, task lists, and horizontal rules. For each element, provide instructions on how I can replicate them myself and what it would look like when rendered.
```
-For the official Markdown specification visit [CommonMark](https://commonmark.org/help/)
### Increasing efficiency
@@ -98,4 +130,8 @@ All the Markdown formatting methods above include extra characters to help users
| **Incident ID** | **Status** | **Incident URL** |
| --- | --- | --- |
| 1234 | Active | https://12.aka.ms |
-```
\ No newline at end of file
+```
+
+---
+
+✈️ Continue to [Module 3 - Enhancing Reasoning and Responses with Markdown](../Module%203%20-%20Enhancing%20Reasoning%20and%20Responses%20with%20Markdown)
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/001_module3_prompt_no_Markdown.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/001_module3_prompt_no_Markdown.png
new file mode 100644
index 00000000..153d9691
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/001_module3_prompt_no_Markdown.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/002_module3_summarize_reason_and_recommend.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/002_module3_summarize_reason_and_recommend.png
new file mode 100644
index 00000000..1c0bb86e
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/002_module3_summarize_reason_and_recommend.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/003_module3_creating_links_from_alert_IDs.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/003_module3_creating_links_from_alert_IDs.png
new file mode 100644
index 00000000..df122df1
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/003_module3_creating_links_from_alert_IDs.png differ
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/004_module2_final_prompt_with_Markdown.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/004_module3_final_prompt_with_Markdown.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/004_module2_final_prompt_with_Markdown.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/004_module3_final_prompt_with_Markdown.png
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_1.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_1.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_1.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_1.png
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_2.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_2.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_2.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_2.png
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_3.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_3.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_3.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_3.png
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_4.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_4.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_4.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_4.png
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_5.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_5.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_5.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_5.png
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_6.png b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_6.png
similarity index 100%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/images/005_module2_promptbook_step_6.png
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/images/005_module3_promptbook_step_6.png
diff --git a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/readme.md b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/readme.md
similarity index 66%
rename from Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/readme.md
rename to Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/readme.md
index 4ece09dc..d2852ddf 100644
--- a/Technical Workshops/Markdown Workshop/Module 2 - Enhancing reasoning and formatting with markdown/readme.md
+++ b/Technical Workshops/Prompt Engineering Workshop/Module 3 - Enhancing Reasoning and Responses with Markdown/readme.md
@@ -1,25 +1,26 @@
-
-
+## Module 3 - Enhancing Reasoning and Responses with Markdown - Security Copilot
-## Module 2 - Enhancing reasoning and formatting with Markdown
+
-
+Authors: Rick Kotlarz & Craig Freyman
+Updated: 2025-April-4
-#### ⌛ Estimated time to complete this lab: 15 minutes
-#### 🎓 Level: 200 (Intermediate)
+#### ⌛ Estimated time to complete this lab: 30 minutes
+#### 🎓 Level: 300 (Advanced)
-The following example prompts demonstrate how users can enhance both the output and the reasoning behind that output by using Markdown. Large Language Models (LLMs) interpret context and follow instructions more effectively when prompts include delimiters and Markdown. While natural language can be used, it often requires more detailed explanations, which many users may not be willing to provide. By offering clear instructions and utilizing Markdown, as outlined in this module, users can reduce the likelihood of output inconsistencies.
+1. [Introduction](#introduction)
+2. [Large Language Model prompt engineering best practices](#large-language-model-prompt-engineering-best-practices)
+3. [Initial prompt](#initial-prompt)
+4. [Summarizing, reasoning, and recommendations](#summarizing-reasoning-and-recommendations)
+5. [Creating links from alert IDs](#creating-links-from-alert-ids)
+6. [Ensuring consistent formatting](#ensuring-consistent-formatting)
+7. [Leveraging Promptbooks](#leveraging-promptbooks)
-LLMs often aren't fine-tuned to fully understand the specific context of a user's query. While platforms like Security Copilot help improve prompting by including system prompts that emphasize Responsible AI (RAI) and the intended use of plugins or skills, these might not always meet the user's expectations. To bridge this gap, users should provide additional context and direction, aligning with both the [Security Copilot prompt engineering best practices](https://learn.microsoft.com/en-us/copilot/security/prompting-tips) and general LLM prompt engineering best practices.
+## Introduction
-This module illustrates how users can combine the referenced Security Copilot prompt engineering best practices with general LLM prompt engineering best practices.
+The following example prompts demonstrate how users can enhance both the output and the reasoning behind that output by using Markdown. Large Language Models (LLMs) interpret context and follow instructions more effectively when prompts include delimiters and Markdown. While natural language can be used, it often requires more detailed explanations, which many users may not be willing to provide. By offering clear instructions and utilizing Markdown, as outlined in this module, users can reduce the likelihood of output inconsistencies.
-1. [Large Language Model prompt engineering best practices](#large-language-model-prompt-engineering-best-practices)
-2. [Initial prompt](#initial-prompt)
-3. [Summarizing, reasoning, and recommendations](#summarizing-reasoning-and-recommendations)
-4. [Creating links from alert IDs](#creating-links-from-alert-ids)
-5. [Ensuring consistent formatting](#ensuring-consistent-formatting)
-6. [Leveraging Promptbooks](#leveraging-promptbooks)
+LLMs often aren't fine-tuned to fully understand the specific context of a user's query. While platforms like Security Copilot help improve prompting by including system prompts that emphasize Responsible AI (RAI) and the intended use of plugins or skills, these might not always meet the user's expectations. To bridge this gap, users should provide additional context and direction, aligning with both the [Security Copilot prompt engineering best practices](https://learn.microsoft.com/en-us/copilot/security/prompting-tips) and general prompt engineering best practices that apply to most LLMs.
### Large Language Model prompt engineering best practices
@@ -31,26 +32,23 @@ Effective prompting is key to obtaining accurate, relevant, and useful responses
4. **Include Examples** - Provide one or more examples to show the desired pattern or style. This helps the model infer and replicate the expected response format.
5. **Be Clear and Specific** - Craft precise, unambiguous prompts, supplying enough context to help the model understand and fulfill the request accurately.
6. **Specify Tone** - If tone is important, specify it in the prompt. Review the model’s outputs iteratively and refine your prompts to improve the quality of responses.
-7. **Iterative Refinement** - Continuously review the model's outputs and adjust your prompts as needed to improve response quality.
+7. **Iterative Refinement** - Continuously review the model's outputs, adjust your prompts, and iterate as needed to improve response quality.
### Initial prompt
-Running a prompt without specifying output expectations can lead to inconsistent formatting, such as alternating between tables and bullet points. When users don't provide detailed instructions on the desired output format, the skill will return all available data.
-
-In this example, I prompt Security Copilot to use the Purview plugin and retrieve the 10 DLP alerts with a severity of high over the past 30 days:
+Running a prompt without specifying output expectations can lead to inconsistent formatting, such as output alternating between tables and bullet points. In this example, I prompt Security Copilot to use the Purview plugin and retrieve the 10 DLP alerts with a severity of high over the past 30 days:
```
Using the Purview plugin, get the last 10 DLP alerts with a severity of high over the past 30 days. Format the output in a table.
```
I can download the response output by selecting **"Export to Excel"** or simply view it from within the browser by selecting the icon to the right of the "Export to Excel" button.
-
-
+
### Summarizing, reasoning, and recommendations
-Looking over these alerts, I can see that it includes a few of the same users. Rather than viewing each alert line-by-line in a table and having to mentally group alerts, I'll use the **/AskGPT** skill, as illustrated in [Module 1 - Formatting with markdown](https://github.com/Azure/Security-Copilot/tree/main/Technical%20Workshops/Markdown%20Workshop/Module%201%20-%20Formatting%20with%20markdown), to instruct Security Copilot to:
+Looking over these alerts, I can see that a few of the same users appear multiple times. Rather than viewing each alert line-by-line in a table and having to mentally group alerts, I'll reprompt Security Copilot and instruct it to:
- Summarize all data, grouped by users.
- Highlight significant trends, behaviors, or issues observed.
@@ -58,9 +56,10 @@ Looking over these alerts, I can see that it includes a few of the same users. R
- Provide a summary with actionable recommendations to mitigate risk, prioritize incidents, and improve compliance.
```
-/AskGPT For each user, write a concise, actionable summary addressed to their manager. Begin the summary with the user's User Principal Name (UPN). Highlight significant trends, behaviors, or issues observed in their DLP alerts using chain-of-thought reasoning to identify patterns, similarities, or contributing factors across the alerts. Provide clear, actionable recommendations to mitigate risk, prioritize incidents, and improve compliance.
+Using the Purview plugin, get the last 10 DLP alerts with a severity of high over the past 30 days. For each user, write a concise, actionable summary addressed to their manager. Begin the summary with the user's User Principal Name (UPN). Highlight significant trends, behaviors, or issues observed in their DLP alerts using chain-of-thought reasoning to identify patterns, similarities, or contributing factors across the alerts. Provide clear, actionable recommendations to mitigate risk, prioritize incidents, and improve compliance.
```
-
+
+
### Creating links from alert IDs
Being satisfied with the summarizing, reasoning, and recommendations output, I'd like to include clickable links that make it easier to view the actual alert. While the initial output does not include a clickable link, it does include the Alert ID.
@@ -75,35 +74,39 @@ To create a link, I simply need to ensure that the data required to support a gi
Now that I have the Tenant ID, I can hardcode that part of the URL. From my experience working in Defender XDR and Purview, I know that alert IDs associated with Purview include a preceding 'dl' that I'll need to remove if I want to view these alerts from the Purview portal.
-To generate this URL, I will again use the **/AskGPT** skill and prompt Security Copilot to:
+To generate this URL, I will prompt Security Copilot to perform the following:
- Create a link for the alerts.
- Provide it with the root URL that contains my Tenant ID.
- Ask it to remove the first two characters of the alert ID.
-⚠️ Please read next prompt before executing and don't forget to insert your Tenant ID in the URL before pasting.
+⚠️ Don't forget to insert your Tenant ID in the prompt before pasting. Additionally, only use this prompt if you prefer Purview portal links.
```
-/AskGPT For each user include a direct link to the alert in Microsoft Purview portal using the following URL format:
+Using the Purview plugin, get the last 10 DLP alerts with a severity of high over the past 30 days. For each user, write a concise, actionable summary addressed to their manager. Begin the summary with the user's User Principal Name (UPN). Highlight significant trends, behaviors, or issues observed in their DLP alerts using chain-of-thought reasoning to identify patterns, similarities, or contributing factors across the alerts. Provide clear, actionable recommendations to mitigate risk, prioritize incidents, and improve compliance.
+
+For each user include a direct link to the alert in Microsoft Purview portal using the following URL format:
Purview: [{Alert Title}](https://purview.microsoft.com/datalossprevention/alertspage/fullpage?tid={Replace-with-your-tenant-ID-and-remove-curly-brackets}&alertsviewid=overview&id={alertId with the first two 'dl' characters of the AlertID removed})
```
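The alert-ID transformation this prompt describes can be sanity-checked outside Security Copilot. Below is a minimal Python sketch of the same logic; the function name and sample values are hypothetical, and the tenant ID is a placeholder:

```python
# Hypothetical helper mirroring the transformation the prompt asks the LLM
# to perform: strip the leading 'dl' from a Purview alert ID and build a
# Markdown link to the Purview portal.
TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder: use your Tenant ID

def purview_alert_link(alert_title: str, alert_id: str) -> str:
    # Purview alert IDs carry a leading 'dl' that the portal URL does not expect
    portal_id = alert_id[2:] if alert_id.startswith("dl") else alert_id
    return (
        f"[{alert_title}](https://purview.microsoft.com/datalossprevention/"
        f"alertspage/fullpage?tid={TENANT_ID}&alertsviewid=overview&id={portal_id})"
    )

print(purview_alert_link("Sensitive data shared externally", "dl1a2b3c"))
```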
⚠️ If I'd prefer working out of Defender XDR instead of Purview, I can leave the preceding 'dl' characters in the Alert ID and use the Defender XDR URL `https://security.microsoft.com/alerts/{alert ID}`, modifying the prompt accordingly. To avoid confusing the LLM, I'll choose either Purview or Defender XDR before pasting the prompt.
```
-/AskGPT For each user include a direct link to the alert in Defender XDR portal using the following URL format:
+Using the Purview plugin, get the last 10 DLP alerts with a severity of high over the past 30 days. For each user, write a concise, actionable summary addressed to their manager. Begin the summary with the user's User Principal Name (UPN). Highlight significant trends, behaviors, or issues observed in their DLP alerts using chain-of-thought reasoning to identify patterns, similarities, or contributing factors across the alerts. Provide clear, actionable recommendations to mitigate risk, prioritize incidents, and improve compliance.
+
+For each user include a direct link to the alert in Defender XDR portal using the following URL format:
[{Alert Title}](https://security.microsoft.com/alerts/{alert Id})
```
If the above prompt fails, verify that you updated the Tenant ID.
-
+
### Ensuring consistent formatting
While I value the friendly, conversational, and non-repetitive nature that Generative AI provides, I need the output to be consistently formatted each time. To achieve this, I'll provide clear instructions that integrate my previous prompts, URL modifications, and additional format guidelines.
-Since large language models (LLMs) work best when using specific delimiters, and Markdown is a well-established language for such delimiters, I will combine both the [Security Copilot prompt engineering best practices](https://learn.microsoft.com/en-us/copilot/security/prompting-tips) and the [general best practices for effective prompting of LLMs](#general-best-practices-for-effective-prompting-llms) outlined at the top of this page. By leveraging both of these resources, I’ve not only harnessed the power of Security Copilot, but I will also ensure that the reasoning output is consistently formatted each time I run this prompt.
+Because large language models (LLMs) perform best with defined delimiters, and Markdown is a widely accepted standard for them, I’ll apply both the [Security Copilot prompt engineering best practices](https://learn.microsoft.com/en-us/copilot/security/prompting-tips) and the [general best practices for effective prompting of LLMs](#general-best-practices-for-effective-prompting-llms) outlined at the top of this page. By combining these approaches, I can fully leverage Security Copilot while ensuring the reasoning and output formatting remain consistent with each run.
```
Using the Purview plugin, get the last 10 DLP alerts with a severity of high. Assume the role of an expert **Data Loss Prevention (DLP) engineer** tasked with generating a **professional, manager-focused summary** of a DLP triage workflow for DLP alerts.
@@ -136,7 +139,10 @@ https://purview.microsoft.com/datalossprevention/alertspage/fullpage?tid=0527ecb
---
```
-
+
+
+
+⚠️ Note: This prompt was created by applying the [Large Language Model prompt engineering best practices](#large-language-model-prompt-engineering-best-practices) discussed earlier in this module. It was refined through multiple iterations to achieve the desired format and output.
---
@@ -146,27 +152,28 @@ To make repeating the execution of this prompt easier, I'll create a promptbook.
To create a promptbook from within a session, I simply need to click the checkbox on the top left corner of the prompt, and select the "Create promptbook" icon at the top. However, it's important to note that promptbooks use angle brackets `< >` to denote variables. As such, angle brackets should only be used for variables, and variable names must not contain spaces.
-
+
Next, I'll provide a name, relevant tags (hit ENTER after each tag), and a description. After this, I'll hover my mouse over the prompt box and click the pencil icon to edit the prompt.
-
+
Since I don't always want "10" alerts exactly, I'll make this a variable by replacing the number "10" with `` and surrounding it with angle brackets (like this ``), followed by the checkbox icon located where the pencil icon was previously.
-
+
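Promptbook variable substitution behaves like simple angle-bracket templating. As a rough Python analogy (the `render_promptbook` helper and variable names are hypothetical, not part of Security Copilot):

```python
import re

def render_promptbook(template: str, **variables) -> str:
    # Replace each <name> placeholder with its value; names contain no
    # spaces, matching the promptbook angle-bracket convention above.
    # Unknown placeholders are left untouched.
    return re.sub(
        r"<(\w+)>",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = "Using the Purview plugin, get the last <count> DLP alerts with a severity of high."
print(render_promptbook(prompt, count=25))
# → Using the Purview plugin, get the last 25 DLP alerts with a severity of high.
```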
After scrolling down, I can see that the variable was accepted. Additionally, if I want to share this promptbook with others, I can do so here.
-
+
Finally, I'll save it by selecting "Create" at the bottom left of the promptbook.
-
+
I can now view and execute the promptbook from the Home screen by filtering on "Promptbooks" or from the Promptbook Library.
-
+
+
---
-For the official Markdown specification, visit [CommonMark](https://commonmark.org/help/)
\ No newline at end of file
+✈️ Continue to [Module 4 - Refining Reasoning and Response with Markdown](.././Module%204%20-%20Refining%20Reasoning%20and%20Response%20with%20Markdown)
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/Get_mailbox_rules.yaml b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/Get_mailbox_rules.yaml
new file mode 100644
index 00000000..71dff911
--- /dev/null
+++ b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/Get_mailbox_rules.yaml
@@ -0,0 +1,49 @@
+Descriptor:
+ Name: Get mailbox rules
+ DisplayName: Get mailbox rules
+ Description: This plugin provides a KQL query to assist with mailbox rule investigations.
+ DescriptionForModel: |
+ This query analyzes user mailbox activities within the past 30 days, focusing on potentially malicious mailbox operations which are commonly associated with unauthorized access and attacker behavior.
+ It filters out system-generated events to reduce false positives and helps identify abnormal mailbox permission and rule changes, often indicative of malicious activity.
+
+ SupportedAuthTypes:
+ - None
+
+SkillGroups:
+ - Format: KQL
+ Skills:
+ - Name: GetMailboxRulesForAllUsers
+ DisplayName: Gets mailbox rules for all users within the last 30 days (GetMailboxRulesForAllUsers)
+ Description: Gets mailbox rules for all users within the last 30 days
+ DescriptionForModel: |-
+ This query analyzes user mailbox activities within the past 30 days, focusing on potentially malicious mailbox operations which are commonly associated with unauthorized access and attacker behavior. It filters out system-generated events to reduce false positives and helps identify abnormal mailbox permission and rule changes, often indicative of malicious activity.
+ ExamplePrompts:
+ - Get all mailbox rules over the past 30 days
+ - Get all mailbox rules for every user recently
+ - Show me all mailbox rules in the last 30 days
+ - Show me everyone's mailbox rules in the last month
+ Settings:
+ Target: Defender
+ Template: |-
+ let TimePeriod = 30d;
+ OfficeActivity
+ | where TimeGenerated >= ago(TimePeriod)
+ | where UserId !contains "NT AUTHORITY\\SYSTEM"
+ // The above line excludes 'NT AUTHORITY\SYSTEM' due to the high number of false positives from tooling actions such as eDiscovery. Note that when investigating advanced threat actors you will want to include these records.
+ | extend EST = datetime_utc_to_local(TimeGenerated, "US/Eastern")
+ | where Operation in (
+ "Add-MailboxPermission",
+ "New-InboxRule",
+ "Set-InboxRule",
+ "Set-Mailbox",
+ "New-TransportRule",
+ "Set-TransportRule",
+ "Add-MailboxFolderPermission",
+ "New-ManagementRoleAssignment"
+ )
+ // Operations more commonly used by attackers: "Add-MailboxPermission", "New-InboxRule", "Set-InboxRule"
+ // Noisy Operations less commonly used by attackers: "Set-Mailbox", "New-TransportRule", "Set-TransportRule"
+ // Noisy Operations rarely used by attackers: "Add-MailboxFolderPermission", "New-ManagementRoleAssignment"
+ | extend Parameters_reformatted = replace(@"\[|\]", "", tostring(Parameters)) // Remove square brackets from Parameters field to ensure proper JSON formatting
+ | extend ClientIP_reformatted = replace(@"\[|\]", "", tostring(extract("^(.*):.*$", 1, ClientIP))) // Drop everything after the last colon and remove square brackets on IPv6 addresses
+ | project TimeGenerated, UserId, Operation, Parameters_reformatted, ClientIP_reformatted
\ No newline at end of file
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/001_module4_Defender_Advanced_hunting.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/001_module4_Defender_Advanced_hunting.png
new file mode 100644
index 00000000..c4e08bb9
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/001_module4_Defender_Advanced_hunting.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/002_module4_plugin_upload_part_1.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/002_module4_plugin_upload_part_1.png
new file mode 100644
index 00000000..44b07c1c
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/002_module4_plugin_upload_part_1.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/003_module4_plugin_upload_part_2.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/003_module4_plugin_upload_part_2.png
new file mode 100644
index 00000000..2a5b28bd
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/003_module4_plugin_upload_part_2.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/004_module4_plugin_upload_part_3.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/004_module4_plugin_upload_part_3.png
new file mode 100644
index 00000000..f84450e1
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/004_module4_plugin_upload_part_3.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/005_module4_plugin_skill.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/005_module4_plugin_skill.png
new file mode 100644
index 00000000..e93e3dc1
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/005_module4_plugin_skill.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/006_module4_GetMailboxRulesForAllUsers_skill_execution.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/006_module4_GetMailboxRulesForAllUsers_skill_execution.png
new file mode 100644
index 00000000..72752419
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/006_module4_GetMailboxRulesForAllUsers_skill_execution.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/007_module4_GetMailboxRulesForAllUsers_skill_execution_expanded.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/007_module4_GetMailboxRulesForAllUsers_skill_execution_expanded.png
new file mode 100644
index 00000000..90889a05
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/007_module4_GetMailboxRulesForAllUsers_skill_execution_expanded.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/008_module4_GetMailboxRulesForAllUsers_skill_with_prompt_eng.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/008_module4_GetMailboxRulesForAllUsers_skill_with_prompt_eng.png
new file mode 100644
index 00000000..75d4c3b3
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/008_module4_GetMailboxRulesForAllUsers_skill_with_prompt_eng.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/009_module4_create_promptbook_part_1.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/009_module4_create_promptbook_part_1.png
new file mode 100644
index 00000000..4b69f805
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/009_module4_create_promptbook_part_1.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/010_module4_create_promptbook_part_2.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/010_module4_create_promptbook_part_2.png
new file mode 100644
index 00000000..22e46c01
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/010_module4_create_promptbook_part_2.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/011_module4_create_promptbook_part_3.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/011_module4_create_promptbook_part_3.png
new file mode 100644
index 00000000..4bbc832f
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/011_module4_create_promptbook_part_3.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/012_module4_create_promptbook_part_4.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/012_module4_create_promptbook_part_4.png
new file mode 100644
index 00000000..a7b3cd55
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/012_module4_create_promptbook_part_4.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/013_module4_create_promptbook_part_5.png b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/013_module4_create_promptbook_part_5.png
new file mode 100644
index 00000000..269d9397
Binary files /dev/null and b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/images/013_module4_create_promptbook_part_5.png differ
diff --git a/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/readme.md b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/readme.md
new file mode 100644
index 00000000..8481f297
--- /dev/null
+++ b/Technical Workshops/Prompt Engineering Workshop/Module 4 - Refining Reasoning and Response with Markdown/readme.md
@@ -0,0 +1,308 @@
+## Module 4 - Refining Reasoning and Response with Markdown - Security Copilot
+
+
+
+Authors: Rick Kotlarz
+Updated: 2025-April-4
+
+#### ⌛ Estimated time to complete this lab: 30 minutes
+#### 🎓 Level: 300 (Advanced)
+
+1. [Prerequisites](#prerequisites)
+2. [Introduction](#introduction)
+3. [Mailbox rule KQL query](#mailbox-rule-kql-query)
+4. [Converting the KQL query to a plugin](#converting-the-kql-query-to-a-plugin)
+5. [Refining reasoning and response](#refining-reasoning-and-response)
+6. [Creating a promptbook that uses a defined skill](#creating-a-promptbook-that-uses-a-defined-skill)
+
+
+## Prerequisites
+
+To fully execute all items outlined in this module, you must have permissions to the Microsoft Defender Advanced Hunting pane and the OfficeActivity KQL table.
+
+## Introduction
+
+Large Language Models (LLMs) are highly effective at following instructions; however, humans often omit key details in their prompts, assuming they are understood. This often results in unsatisfactory responses when the model misinterprets the intent. To prevent this, you should explicitly include all relevant details, following the prompt engineering best practices covered in the previous module. Doing so enables the model to reason more effectively and respond with greater accuracy and consistency.
+
+
+### Mailbox rule KQL query
+
+As a SOC analyst, a common task involves investigating mailbox rules, which may be created by legitimate users or, in some cases, by threat actors. One typical method for this investigation is using the Microsoft Defender Advanced Hunting pane to run KQL queries that retrieve detailed information about each rule. In this module, the necessary KQL query for hunting has already been provided. While the query results are helpful, I still need to manually examine each rule's behavior and review the JSON data found in the `Parameters_reformatted` column. This manual process is often time-consuming and prone to errors, especially due to the way the JSON is formatted in the output.
+
+KQL query:
+
+```
+let TimePeriod = 30d;
+OfficeActivity
+| where TimeGenerated >= ago(TimePeriod)
+| where UserId !contains "NT AUTHORITY\\SYSTEM"
+// The above line excludes 'NT AUTHORITY\SYSTEM' due to the high number of false positives from tooling actions such as eDiscovery. Note that when investigating advanced threat actors you will want to include these records.
+| extend EST = datetime_utc_to_local(TimeGenerated, "US/Eastern")
+| where Operation in (
+ "Add-MailboxPermission",
+ "New-InboxRule",
+ "Set-InboxRule",
+ "Set-Mailbox",
+ "New-TransportRule",
+ "Set-TransportRule",
+ "Add-MailboxFolderPermission",
+ "New-ManagementRoleAssignment"
+)
+// Operations more commonly used by attackers: "Add-MailboxPermission", "New-InboxRule", "Set-InboxRule"
+// Noisy Operations less commonly used by attackers: "Set-Mailbox", "New-TransportRule", "Set-TransportRule"
+// Noisy Operations rarely used by attackers: "Add-MailboxFolderPermission", "New-ManagementRoleAssignment"
+| extend Parameters_reformatted = replace(@"\[|\]", "", tostring(Parameters)) // Remove square brackets from Parameters field to ensure proper JSON formatting
+| extend ClientIP_reformatted = replace(@"\[|\]", "", tostring(extract("^(.*):.*$", 1, ClientIP))) // Drop everything after the last colon and remove square brackets on IPv6 addresses
+| project TimeGenerated, UserId, Operation, Parameters_reformatted, ClientIP_reformatted
+```
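The query's final `extend` steps can be spot-checked outside KQL. Here is a small Python equivalent of the ClientIP cleanup, using illustrative sample addresses:

```python
import re

def reformat_client_ip(raw: str) -> str:
    # KQL: extract("^(.*):.*$", 1, ClientIP). The greedy .* makes the
    # split happen at the LAST colon, so IPv6 literals survive intact.
    m = re.match(r"^(.*):.*$", raw)
    host = m.group(1) if m else raw
    # KQL: replace(@"\[|\]", "", ...) strips the square brackets that
    # wrap IPv6 addresses.
    return re.sub(r"\[|\]", "", host)

print(reformat_client_ip("203.0.113.7:443"))    # → 203.0.113.7
print(reformat_client_ip("[2001:db8::1]:443"))  # → 2001:db8::1
```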
+
+
+
+To leverage Security Copilot's ability to reason over the data, I can either execute this KQL query from a Logic App and pass the results over to Security Copilot, or simply convert the KQL to a plugin.
+
+### Converting the KQL query to a plugin
+
+Converting KQL queries into a KQL-based plugin is fairly easy. To aid in this process, I recommend using the [KQL Combined Defender and Sentinel example](https://github.com/Azure/Security-Copilot/blob/main/Plugins/MSFT_Plugin_Samples/KQL/KQL_Combined_Defender_and_Sentinel_Example.yaml) plugin from the plugin samples folder on GitHub. To streamline this part of the workshop, I've modified the template to reflect what a plugin would look like. You can either copy and save the following content as `Get_mailbox_rules.yaml` using a plain text editor like Notepad, or simply download an already created version of the plugin [here](./Get_mailbox_rules.yaml).
+
+**Note:** The file extension must be lowercase (`.yaml` or `.yml`). Plugin uploads will fail if the extension is uppercase. If you encounter issues saving the file, wrap the name in double quotes `"Get_mailbox_rules.yaml"` to ensure the correct name and extension.
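The extension requirement in the note above can be checked before uploading. This is a convenience sketch, not part of the lab, and the function name is hypothetical:

```python
from pathlib import Path

def has_valid_plugin_extension(filename: str) -> bool:
    # Plugin uploads require a lowercase .yaml or .yml extension;
    # Path.suffix preserves case, so uppercase variants fail the check.
    return Path(filename).suffix in {".yaml", ".yml"}

print(has_valid_plugin_extension("Get_mailbox_rules.yaml"))  # True
print(has_valid_plugin_extension("Get_mailbox_rules.YAML"))  # False
```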
+
+```
+Descriptor:
+ Name: Get mailbox rules
+ DisplayName: Get mailbox rules
+ Description: This plugin provides a KQL query to assist with mailbox rule investigations.
+ DescriptionForModel: |
+ This query analyzes user mailbox activities within the past 30 days, focusing on potentially malicious mailbox operations which are commonly associated with unauthorized access and attacker behavior.
+ It filters out system-generated events to reduce false positives and helps identify abnormal mailbox permission and rule changes, often indicative of malicious activity.
+
+ SupportedAuthTypes:
+ - None
+
+SkillGroups:
+ - Format: KQL
+ Skills:
+ - Name: GetMailboxRulesForAllUsers
+ DisplayName: Gets mailbox rules for all users within the last 30 days (GetMailboxRulesForAllUsers)
+ Description: Gets mailbox rules for all users within the last 30 days
+ DescriptionForModel: |-
+ This query analyzes user mailbox activities within the past 30 days, focusing on potentially malicious mailbox operations which are commonly associated with unauthorized access and attacker behavior. It filters out system-generated events to reduce false positives and helps identify abnormal mailbox permission and rule changes, often indicative of malicious activity.
+ ExamplePrompts:
+ - Get all mailbox rules over the past 30 days
+ - Get all mailbox rules for every user recently
+ - Show me all mailbox rules in the last 30 days
+ - Show me everyone's mailbox rules in the last month
+ Settings:
+ Target: Defender
+ Template: |-
+ let TimePeriod = 30d;
+ OfficeActivity
+ | where TimeGenerated >= ago(TimePeriod)
+ | where UserId !contains "NT AUTHORITY\\SYSTEM"
+ // The above line excludes 'NT AUTHORITY\SYSTEM' due to the high number of false positives from tooling actions such as eDiscovery. Note that when investigating advanced threat actors you will want to include these records.
+ | extend EST = datetime_utc_to_local(TimeGenerated, "US/Eastern")
+ | where Operation in (
+ "Add-MailboxPermission",
+ "New-InboxRule",
+ "Set-InboxRule",
+ "Set-Mailbox",
+ "New-TransportRule",
+ "Set-TransportRule",
+ "Add-MailboxFolderPermission",
+ "New-ManagementRoleAssignment"
+ )
+ // Operations more commonly used by attackers: "Add-MailboxPermission", "New-InboxRule", "Set-InboxRule"
+ // Noisy Operations less commonly used by attackers: "Set-Mailbox", "New-TransportRule", "Set-TransportRule"
+ // Noisy Operations rarely used by attackers: "Add-MailboxFolderPermission", "New-ManagementRoleAssignment"
+ | extend Parameters_reformatted = replace(@"\[|\]", "", tostring(Parameters)) // Remove square brackets from Parameters field to ensure proper JSON formatting
+ | extend ClientIP_reformatted = replace(@"\[|\]", "", tostring(extract("^(.*):.*$", 1, ClientIP))) // Drop everything after the last colon and remove square brackets on IPv6 addresses
+ | project TimeGenerated, UserId, Operation, Parameters_reformatted, ClientIP_reformatted
+```
+
+To upload the plugin, select the `Sources` icon in the prompt bar and scroll down until you see "Custom" with another `Sources` icon.
+
+
+
+Next, select whether you want to share this plugin with everyone in your organization or just yourself, then select the `Security Copilot plugin` button on the left and upload the `Get_mailbox_rules.yaml` file.
+
+
+
+After upload, ensure the plugin is enabled.
+
+
+
+Next, search the skills menu for `GetMailboxRulesForAllUsers` and execute it.
+
+
+
+Executing the `GetMailboxRulesForAllUsers` skill returns the same data I normally see when I run this query in the Microsoft Defender Advanced Hunting pane.
+
+
+
+Expanding this window, I can see additional rows and columns.
+
+
+
+
+### Refining reasoning and response
+
+I could easily prompt Security Copilot to check for any issues, but the response will likely vary each time. These variations are part of what makes LLM responses feel natural. To ensure the reasoning matches or exceeds human analysis, I need to be specific about my expectations.
+
+To do this, I’ll provide a list of mailbox operations and mailbox rule actions I typically investigate. Essentially, I’m giving the same level of detail I would to a newly hired analyst assigned to analyze mailbox activity and rules. While it's not necessary to include every minor detail, the more context I provide, the better Security Copilot can match my expectations. I’ll also include clear guidance on the exact output format I want.
+
+For human visual review, the prompt is shown below between two horizontal bars. It follows best practices for prompt engineering outlined in the previous module. It includes a defined persona, uses Markdown as delimiters, provides detailed context and instructions, and shows an example of the desired output format.
+
+⚠️ Do not copy the rendered version of the prompt below, as it lacks the Markdown delimiters. Use the version that follows instead.
+
+---
+---
+/GetMailboxRulesForAllUsers
+
+### Role
+Assume the role of an expert SOC Analyst specializing in threat hunting for email and mailbox-based attacks, tasked with analyzing mailbox rule activity for signs of abnormal activity and compromise by reviewing KQL outputs within this session.
+
+### Adversarial Techniques
+A common technique leveraged by adversaries involves creating or modifying mailbox rules that:
+- Redirect or forward emails to external addresses, including RSS feeds or other email accounts, to exfiltrate information or monitor communications.
+- Modify or delete evidence of credential changes, configuration updates, or security alerts to avoid detection.
+- Flag emails with specific keywords or from security-related senders, such as password resets or MFA alerts, for deletion or move them to hidden folders to suppress detection.
+- Create false filters to categorize malicious emails as safe or important, or modify subject lines and content to disguise their malicious intent.
+- Re-route emails intended for one user to another internal account to gain unauthorized access.
+
+#### Mailbox Operations and Attacker Logic
+**Commonly Used by Attackers:**
+- `Add-MailboxPermission`: When `FullAccess` or `SendAs` permissions are indicated.
+- `New-InboxRule`: When used to exfiltrate sensitive information, hide malicious activities, or manipulate email flow to suppress alerts or maintain persistence in compromised accounts.
+- `Set-InboxRule`: When used to exfiltrate sensitive information, hide malicious activities, or manipulate email flow to suppress alerts or maintain persistence in compromised accounts.
+**Less Commonly Used by Attackers:**
+- `Set-Mailbox`: Attackers may use this operation to exfiltrate sensitive information, disable security features, modify permissions like 'FullAccess' or 'SendAs', and adjust quota limits to facilitate data exfiltration and avoid detection. Note that this operation can also be legitimately triggered when mailboxes are shared.
+- `New-TransportRule` or `Set-TransportRule`: Attackers may create transport rules to intercept or redirect emails across the organization, particularly in multi-user compromises. It's important to note that these rules are also used by Exchange administrators to enforce compliance policies, enhance security, manage email flow, apply organizational communication standards, and ensure the proper handling of sensitive or important information.
+**Rarely Used by Attackers:**
+- `Add-MailboxFolderPermission`: When `Reviewer` permissions are indicated.
+- `New-ManagementRoleAssignment`: Attackers may exploit this to assign elevated roles to themselves or to their tools.
+
+#### Mailbox Rule Actions and Attacker Logic
+**Commonly Used by Attackers:**
+- `RedirectToRecipients`: Ensures attackers receive copies of emails without leaving traces in the original mailbox.
+- `PermanentDelete`: Used to hide traces by permanently deleting emails.
+- `MoveToFolder`: Used to hide emails in obscure folders to avoid detection.
+- `ForwardAsAttachmentToRecipients`: Frequently used to exfiltrate emails to external addresses.
+- `ForwardToRecipients`: Allows attackers to send email contents externally for further exploitation.
+**Less Commonly Used by Attackers:**
+- `StopProcessingRules`: Can be used to disable legitimate mailbox rules, though this is less common than direct exfiltration.
+- `Delete`: Sometimes used to clean up traces, but the "Deleted Items" folder could still reveal activity.
+
+#### Mailbox Keywords That May Indicate a Potential Compromise
+- `.bat`, `.exe`, `.iso`, `.ps1`, `.rar`, `.scr`, `.vbs`, `.zip`, `Account`, `ACH`, `Action Required`, `Admin`, `Agreement`, `alert`, `Attachment`, `Attorney`, `Audit`, `Bank`, `Billing`, `CEO`, `CFO`, `Clinical`, `Compliance`, `Confidential`, `Contract`, `Credentials`, `daemon`, `did you`, `Doc`, `License`, `Employee`, `File`, `Hack`, `Helpdesk`, `HIPAA`, `HR`, `Identification`, `Information`, `Internal`, `Invoice`, `IT`, `Key`, `Legal`, `Litigation`, `Locked`, `Manager`, `Medical`, `Passport`, `Password`, `Patient`, `Payment`, `Payment Confirmation`, `Payroll`, `PDF`, `Phish`, `PIN`, `Proposal`, `Reset`, `Restricted`, `Resume`, `RSS`, `Salary`, `Scam`, `Secret`, `Secure`, `Security`, `SSN`, `suspicious`, `Tax`, `Token`, `Transaction`, `Unusual`, `Urgent`, `Verify`, `Wire`
+
+### Task
+You are tasked with analyzing the provided mailbox rule output for potential compromise. Focus on:
+1. **Mailbox operations and actions configured** and consider the type of behavior associated with threat actors.
+2. **Keywords or patterns** that may represent Indicators of Compromise (IoCs) based on current or evolving threats.
+
+**Deliverables:**
+1. **List mailbox rules** for each user, sorted by the `TimeGenerated` field and grouped by the count of mailbox rules. Use a horizontal bar (`---`) between each user and indent each new rule to help readability.
+2. **Assess each mailbox rule** to determine if its operations and actions likely indicate compromise.
+3. **Provide a risk confidence score** for each action using the following levels: `Low`, `Medium`, `High`, `Critical`.
+4. **Explain each confidence score**, citing specific keywords, patterns, or behaviors observed in the data.
+
+### Format For Each User
+- **User ID:** [User email address, denoted as UserId]
+ - **Rule Number:** [Rule number, denoted as 1 of 1, 1 of 2, etc.]
+ - **Date and Time:** [Timestamp from "TimeGenerated" field]
+ - **Risk Confidence Level:** [Low/Medium/High/Critical]
+ - **Mailbox Rule Summary:** [Summarize the actions being taken within the "Parameters_reformated" field]
+ - **Analysis Reasoning:** [Summary of the entire mailbox rule, including identified patterns, whether the mailbox operations are commonly, less commonly, or rarely used by attackers, and any matched keywords or anomalous behaviors that support the "Risk Confidence Level"]
+ - **Client IP:** [IP address, denoted as "ClientIP_reformated"]
+
+---
+---
+
+While the explanation above is written for human readers, Large Language Models (LLMs) respond more effectively to prompts formatted in Markdown. For convenience, the same prompt is provided below in a Markdown code block and should be copied and pasted into Security Copilot. When executed, it will trigger the `GetMailboxRulesForAllUsers` skill in the `Get mailbox rules` plugin you previously uploaded.
+
+```
+/GetMailboxRulesForAllUsers
+### Role
+Assume the role of an expert SOC Analyst specializing in threat hunting for email and mailbox-based attacks, tasked with analyzing mailbox rule activity for signs of abnormal activity and compromise by reviewing KQL outputs within this session.
+
+### Adversarial Techniques
+A common technique leveraged by adversaries involves creating or modifying mailbox rules that:
+- Redirect or forward emails to external addresses, including RSS feeds or other email accounts, to exfiltrate information or monitor communications.
+- Modify or delete evidence of credential changes, configuration updates, or security alerts to avoid detection.
+- Flag emails with specific keywords or from security-related senders, such as password resets or MFA alerts, for deletion or movement to hidden folders to suppress detection.
+- Create false filters that categorize malicious emails as safe or important, or modify subject lines and content to disguise malicious intent.
+- Re-route emails intended for one user to another internal account to gain unauthorized access.
+
+#### Mailbox Operations and Attacker Logic
+**Commonly Used by Attackers:**
+- `Add-MailboxPermission`: When `FullAccess` or `SendAs` permissions are indicated.
+- `New-InboxRule`: When used to exfiltrate sensitive information, hide malicious activities, or manipulate email flow to suppress alerts or maintain persistence in compromised accounts.
+- `Set-InboxRule`: When used to exfiltrate sensitive information, hide malicious activities, or manipulate email flow to suppress alerts or maintain persistence in compromised accounts.
+**Less Commonly Used by Attackers:**
+- `Set-Mailbox`: Attackers may use this operation to exfiltrate sensitive information, disable security features, modify permissions like 'FullAccess' or 'SendAs', and adjust quota limits to facilitate data exfiltration and avoid detection. Note that this operation can also be legitimately triggered when mailboxes are shared.
+- `New-TransportRule` or `Set-TransportRule`: Attackers may create transport rules to intercept or redirect emails across the organization, particularly in multi-user compromises. It's important to note that these rules are also used by Exchange administrators to enforce compliance policies, enhance security, manage email flow, apply organizational communication standards, and ensure the proper handling of sensitive or important information.
+**Rarely Used by Attackers:**
+- `Add-MailboxFolderPermission`: When `Reviewer` permissions are indicated.
+- `New-ManagementRoleAssignment`: Attackers may exploit this to assign elevated roles to themselves or to their tools.
+
+#### Mailbox Rule Actions and Attacker Logic
+**Commonly Used by Attackers:**
+- `RedirectToRecipients`: Ensures attackers receive copies of emails without leaving traces in the original mailbox.
+- `PermanentDelete`: Used to hide traces by permanently deleting emails.
+- `MoveToFolder`: Used to hide emails in obscure folders to avoid detection.
+- `ForwardAsAttachmentToRecipients`: Frequently used to exfiltrate emails to external addresses.
+- `ForwardToRecipients`: Allows attackers to send email contents externally for further exploitation.
+**Less Commonly Used by Attackers:**
+- `StopProcessingRules`: Can be used to disable legitimate mailbox rules, though this is less common than direct exfiltration.
+- `Delete`: Sometimes used to clean up traces, but the "Deleted Items" folder could still reveal activity.
+
+#### Mailbox Keywords That May Indicate a Potential Compromise
+- `.bat`, `.exe`, `.iso`, `.ps1`, `.rar`, `.scr`, `.vbs`, `.zip`, `Account`, `ACH`, `Action Required`, `Admin`, `Agreement`, `alert`, `Attachment`, `Attorney`, `Audit`, `Bank`, `Billing`, `CEO`, `CFO`, `Clinical`, `Compliance`, `Confidential`, `Contract`, `Credentials`, `daemon`, `did you`, `Doc`, `License`, `Employee`, `File`, `Hack`, `Helpdesk`, `HIPAA`, `HR`, `Identification`, `Information`, `Internal`, `Invoice`, `IT`, `Key`, `Legal`, `Litigation`, `Locked`, `Manager`, `Medical`, `Passport`, `Password`, `Patient`, `Payment`, `Payment Confirmation`, `Payroll`, `PDF`, `Phish`, `PIN`, `Proposal`, `Reset`, `Restricted`, `Resume`, `RSS`, `Salary`, `Scam`, `Secret`, `Secure`, `Security`, `SSN`, `suspicious`, `Tax`, `Token`, `Transaction`, `Unusual`, `Urgent`, `Verify`, `Wire`
+
+### Task
+You are tasked with analyzing the provided mailbox rule output for potential compromise. Focus on:
+1. **Mailbox operations and actions configured** and consider the type of behavior associated with threat actors.
+2. **Keywords or patterns** that may represent Indicators of Compromise (IoCs) based on current or evolving threats.
+
+**Deliverables:**
+1. **List mailbox rules** for each user, sorted by the `TimeGenerated` field and grouped by the count of mailbox rules. Use a horizontal bar (`---`) between each user and indent each new rule to help readability.
+2. **Assess each mailbox rule** to determine if its operations and actions likely indicate compromise.
+3. **Provide a risk confidence score** for each action using the following levels: `Low`, `Medium`, `High`, `Critical`.
+4. **Explain each confidence score**, citing specific keywords, patterns, or behaviors observed in the data.
+
+### Format For Each User
+- **User ID:** [User email address, denoted as UserId]
+ - **Rule Number:** [Rule number, denoted as 1 of 1, 1 of 2, etc.]
+ - **Date and Time:** [Timestamp from "TimeGenerated" field]
+ - **Risk Confidence Level:** [Low/Medium/High/Critical]
+ - **Mailbox Rule Summary:** [Summarize the actions being taken within the "Parameters_reformated" field]
+ - **Analysis Reasoning:** [Summary of the entire mailbox rule, including identified patterns, whether the mailbox operations are commonly, less commonly, or rarely used by attackers, and any matched keywords or anomalous behaviors that support the "Risk Confidence Level"]
+ - **Client IP:** [IP address, denoted as "ClientIP_reformated"]
+```
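The operation, action, and keyword tiers in the prompt amount to a triage heuristic, and it can help to see that logic written out explicitly. The cmdlet and action names below come directly from the prompt; the record shape, the keyword subset, and the point thresholds are illustrative assumptions, not part of the workshop:

```python
# Sketch of the triage heuristic the prompt describes. Cmdlet and action names
# are from the prompt; thresholds, the keyword subset, and the function
# signature are illustrative assumptions.
COMMON_OPS = {"Add-MailboxPermission", "New-InboxRule", "Set-InboxRule"}
LESS_COMMON_OPS = {"Set-Mailbox", "New-TransportRule", "Set-TransportRule"}
RARE_OPS = {"Add-MailboxFolderPermission", "New-ManagementRoleAssignment"}
COMMON_ACTIONS = {"RedirectToRecipients", "PermanentDelete", "MoveToFolder",
                  "ForwardAsAttachmentToRecipients", "ForwardToRecipients"}
KEYWORDS = {"password", "invoice", "payroll", "mfa", "wire", "reset"}  # small subset

def score_rule(operation: str, actions: set, parameters: str) -> str:
    """Map one mailbox-rule event to a Low/Medium/High/Critical level."""
    points = 0
    if operation in COMMON_OPS:
        points += 2
    elif operation in LESS_COMMON_OPS or operation in RARE_OPS:
        points += 1
    points += 2 * len(actions & COMMON_ACTIONS)        # risky rule actions
    text = parameters.lower()
    points += sum(1 for kw in KEYWORDS if kw in text)  # watchlist keyword hits
    if points >= 5:
        return "Critical"
    if points >= 3:
        return "High"
    return "Medium" if points >= 1 else "Low"
```

Under these assumed thresholds, a lone `Set-Mailbox` quota change lands at `Medium`, while a `New-InboxRule` that both forwards and permanently deletes mail climbs to `Critical` — roughly the graded reasoning the prompt asks Security Copilot to justify in its confidence explanations.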
+
+
+
+### Creating a promptbook that uses a defined skill
+
+To make this process easily repeatable, I’ll create a Promptbook and ensure the new skill is selected as part of it. As shown in the previous module, I’ll scroll to the top of the last prompt and click the "Create Promptbook" icon.
+
+
+
+Once the Promptbook window appears, I’ll enter an appropriate name, tags, and description. Notice that the `Plugins` section lists the `Get mailbox rules` plugin that was invoked when this prompt ran.
+
+Before saving, I’ll edit the prompt by clicking the pencil icon in the top right corner of the prompt.
+
+
+
+Now I’ll copy `/GetMailboxRulesForAllUsers`, remove it from the top of the prompt, and then select the `Skills` menu icon.
+
+
+
+I can either search for the `GetMailboxRulesForAllUsers` skill or simply scroll until I find it. Once selected, the skill will be added to the Promptbook.
+
+
+
+Finally, I’ll click **Create** to save the Promptbook.
+
+
+
+Now I can easily call this Promptbook in the future to execute the KQL within the skill and perform analysis on the returned data.
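As a closing illustration, the `Format For Each User` section of the prompt is concrete enough to reproduce mechanically, which shows why a well-specified output format yields consistent results. The field names (`UserId`, `TimeGenerated`, `Parameters_reformated`, `ClientIP_reformated`) come from the prompt; the input rows and rendering details below are assumptions made for this sketch:

```python
# Render grouped, time-sorted mailbox rules per user, roughly following the
# "Format For Each User" spec in the prompt. Input rows are hypothetical dicts
# keyed by the field names the prompt references.
from collections import defaultdict

def render(rows):
    by_user = defaultdict(list)
    for row in rows:
        by_user[row["UserId"]].append(row)
    sections = []
    for user, rules in by_user.items():
        rules.sort(key=lambda r: r["TimeGenerated"])  # sort by TimeGenerated
        lines = [f"- **User ID:** {user}"]
        for i, r in enumerate(rules, start=1):
            lines += [
                f"  - **Rule Number:** {i} of {len(rules)}",
                f"  - **Date and Time:** {r['TimeGenerated']}",
                f"  - **Mailbox Rule Summary:** {r['Parameters_reformated']}",
                f"  - **Client IP:** {r['ClientIP_reformated']}",
            ]
        sections.append("\n".join(lines))
    return "\n\n---\n\n".join(sections)  # horizontal bar between users
```

The point is not that anyone would run this instead of Security Copilot, but that a format specification this precise leaves the model little room to improvise — which is exactly what makes the Promptbook repeatable.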
diff --git a/Technical Workshops/Prompt Engineering Workshop/Readme.md b/Technical Workshops/Prompt Engineering Workshop/Readme.md
new file mode 100644
index 00000000..a5b59076
--- /dev/null
+++ b/Technical Workshops/Prompt Engineering Workshop/Readme.md
@@ -0,0 +1,18 @@
+# Welcome to the Security Copilot Prompt Engineering Workshop!
+
+
+
+Authors: Rick Kotlarz <br>
+Updated: 2025-April-4
+
+## Introduction
+
+These workshops are designed to help you quickly get up to speed with prompt engineering in Microsoft Security Copilot. Through hands-on examples, you’ll learn best practices and how to create effective prompts that improve how Security Copilot interprets instructions, analyzes data, and formats responses.
+
+
+## Workshop Modules
+
+- [Module 1 - Basics of Prompt Engineering](./Module%201%20-%20Basics%20of%20Prompt%20Engineering)
+- [Module 2 - Standardizing Responses with Markdown](./Module%202%20-%20Standardizing%20Responses%20with%20Markdown)
+- [Module 3 - Enhancing Reasoning and Responses with Markdown](./Module%203%20-%20Enhancing%20Reasoning%20and%20Responses%20with%20Markdown)
+- [Module 4 - Refining Reasoning and Response with Markdown](./Module%204%20-%20Refining%20Reasoning%20and%20Response%20with%20Markdown)
diff --git a/Technical Workshops/readme.md b/Technical Workshops/readme.md
index 14f870ec..483b3bc5 100644
--- a/Technical Workshops/readme.md
+++ b/Technical Workshops/readme.md
@@ -6,11 +6,12 @@ Welcome to the series of technical workshops designed to enhance your skills and
## Workshop Topics
-Below is a table summarizing the focus areas of our workshop series:
+Below is a table summarizing the focus areas of our Security Copilot workshop series:
| Workshop Topic | Brief Description |
|-----------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
-|[Security Copilot: Knowledge Base Workshop](https://github.com/Azure/Copilot-For-Security/tree/main/Technical%20Workshops/Knowledge%20base%20Workshop)| Learn how to integrate various knowledge bases with Security Copilot to enhance its ability to provide accurate, contextually relevant security insights. |
-| [Security Copilot: Custom Plugin Workshop](https://github.com/Azure/Copilot-For-Security/tree/main/Technical%20Workshops/Custom%20Plugin%20Workshop)| Dive into custom plugin creation to extend Security Copilot's capabilities, enabling tailored solutions for your specific security needs. |
-|[Security Copilot: Automation Workshop](https://github.com/Azure/Copilot-For-Security/tree/main/Technical%20Workshops/Automation%20Workshop).| Discover how to use Microsoft Logic Apps to automate workflows and security responses, leveraging Security Copilot for enhanced security management. |
-
+| [Prompt Engineering Workshop](./Prompt%20Engineering%20Workshop) | Explore core concepts of prompt engineering and learn how to use Markdown to standardize, enhance, and refine reasoning and responses across four progressive modules. |
+| [Knowledge Base Workshop](./Knowledge%20base%20Workshop) | Learn how to integrate various knowledge bases with Security Copilot to enhance its ability to provide accurate, contextually relevant security insights. |
+| [Custom Plugin Workshop](./Custom%20Plugin%20Workshop) | Dive into custom plugin creation to extend Security Copilot's capabilities, enabling tailored solutions for your specific security needs. |
+| [Custom Plugin Calling Webservice](./Custom%20Plugin%20Calling%20Webservice) | Three plugins that send data via GET and POST to a Python/Flask-based REST API, showing how to craft prompts that guide Security Copilot to the appropriate plugin and revealing the data exchanged between Copilot and a custom web service. |
+| [Automation Workshop](./Automation%20Workshop) | Discover how to use Microsoft Logic Apps to automate workflows and security responses, leveraging Security Copilot for enhanced security management. |