## Module 1 - Basics of Prompt Engineering - Security Copilot

![Security Copilot Logo](../../.././Images/ic_fluent_copilot_64_64%402x.png)

Authors: Rick Kotlarz<br>
Updated: 2025-April-4

#### ⌛ Estimated time to complete this lab: 15 minutes
#### 🎓 Level: 100 (Beginner)

The following module demonstrates effective prompt engineering for those just getting started with Security Copilot.

1. [How Security Copilot works](#how-security-copilot-works)
2. [Prompting basics](#prompting-basics)
3. [Bad prompting](#bad-prompting)
4. [Good prompting](#good-prompting)


### How Security Copilot works

Regardless of whether you're using the embedded or standalone experience, Security Copilot prompts are evaluated by the Orchestrator. The Orchestrator’s primary role is to interpret the prompt, check enabled plugins, and map values from the prompt to the appropriate fields within one or more skills. As you might expect, prompts that lack sufficient detail often result in poor or incomplete responses or no response at all.

In the embedded experience, prompts are tied to a specific plugin based on the context. For example, if you're in a Purview DLP experience, asking questions about Intune or Defender will likely return no results or incomplete answers.

In contrast, the standalone experience supports all enabled plugins and allows you to pivot freely across plugins and skills. All prompts operate within the security context of your current role. If your role lacks the necessary permissions, prompts requesting that data will not be processed.
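
For example, a prompt that names the plugin, the entity, and the expected output gives the Orchestrator everything it needs to map your request onto a skill. A sketch, using an illustrative user account:

```
Using Entra, what is the MFA enrollment status for user: jane.doe@contoso.com? Present the result as a table with columns for UPN and MFA status.
```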

It's strongly recommended that you review the following two links before starting these modules.
- [Prompting in Microsoft Security Copilot](https://learn.microsoft.com/en-us/copilot/security/prompting-security-copilot)
- [Create effective prompts](https://learn.microsoft.com/en-us/copilot/security/prompting-tips)

### Prompting basics

While the order of these elements (Goal, Context, Source, and Expectation) isn't critical, including all of them in your prompts significantly improves the quality of the response.

![Image](./images/001_module1_basic_elements.png)
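
As a sketch, here is one way those four elements might come together in a single prompt (the plugin, timeframe, and columns are illustrative):

```
Goal: Summarize all high-severity Defender incidents.
Context: I'm preparing a daily report for the SOC manager covering the last 24 hours.
Source: Use the Defender XDR plugin.
Expectation: Present the results as a table with columns for Incident ID, Title, and Severity.
```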

### Bad prompting

Bad prompts contain vague or highly subjective elements related to **Goal, Context, Source, or Expectation**.

| Bad prompt examples | Reasoning why they're bad |
|--------|--------|
| Show me important alerts. | Using the word "important" without clearly defining what it means to you can lead to widely varying results. |
| How is my security posture from Defender looking today? Show results in a table. | Security posture can refer to many different resources across the Security and Compliance stack. Since there is no single plugin or skill that provides a complete review, asking about overall security posture will result in a response based on incomplete information. |
| What's the compliance status of this entity? | Compliance could refer to areas within Intune, Purview, Entra, or other services, making the term too broad without additional context. |
| Tell me the MFA status for device ASH-U2746 | Devices do not have an MFA status. Asking about this, instead of the MFA status of the user currently or most recently logged in, will result in an inappropriate or failed response. |
| What's the MFA status for that user | Not referring to a named entity by a unique identifier in a prompt will almost always result in errors. It’s best to use a UPN, FQDN, Resource Object ID, or another identifier that is guaranteed to be unique within your organization.|

### Good prompting

Good prompts contain specific elements related to **Goal, Context, Source, and Expectation**.

| Good prompt examples |
|--------|
| Using the Defender XDR plugin, provide an SOC manager summary of all Defender incidents over the last 7 days |
| Using NL2KQL for Defender, show me a list of alerts with 'phish' in the title for the last 30 days |
| Using Intune, provide a table showing the last 3 devices that were enrolled and their Operating System |
| Using Entra, what is the MFA enrollment status for user: john.smith@contoso.com |
| Using Purview, show the last 30 DLP alerts. For each user listed, provide a count of how many times they were included. |
| Using Purview, provide a list of all users that triggered DLP alerts over the last 30 days. For each user, provide their UPN and a count of how many alerts they were associated with. |
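
Putting the two tables together, a vague prompt can usually be repaired by filling in the missing Goal, Context, Source, and Expectation. For example (the plugin, severity, and timeframe are illustrative):

```
Bad:  Show me important alerts.
Good: Using the Defender XDR plugin, list all high-severity alerts from the last 24 hours in a table with columns for Alert ID, Title, and Severity.
```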

---

✈️ Continue to [Module 2 - Standardizing Responses with Markdown](.././Module%202%20-%20Standardizing%20Responses%20with%20Markdown)
## Module 2 - Standardizing Responses with Markdown - Security Copilot

![Security Copilot Logo](../../.././Images/ic_fluent_copilot_64_64%402x.png)

Authors: Rick Kotlarz<br>
Updated: 2025-April-4

#### ⌛ Estimated time to complete this lab: 30 minutes
#### 🎓 Level: 200 (Intermediate)

1. [Introduction](#introduction)
2. [What is Markdown and why use it](#what-is-markdown-and-why-use-it)
3. [Initial prompt](#initial-prompt)
4. [Formatting as a table](#formatting-as-a-table)
5. [Formatting as a list](#formatting-as-a-list)
6. [Combining a prompt with Markdown formatting instructions](#combining-a-prompt-with-markdown-formatting-instructions)
7. [Available Markdown](#available-markdown)
8. [Increasing efficiency](#increasing-efficiency)

## Introduction

The following example prompts demonstrate how users can modify the output from a plugin skill using Markdown. Large Language Models (LLMs) interpret context and follow instructions more effectively when delimiters and Markdown are included in prompts. Although natural language can be used, it often requires more detailed explanations than most users are willing to provide. By offering clear instructions and utilizing Markdown, as covered in this module, you can reduce the likelihood of output variance.

## What is Markdown and why use it

[Markdown](https://commonmark.org/) is a lightweight markup language developed by John Gruber in 2004 that is used to format plain text. It allows you to easily add formatting elements such as headers, lists, links, images, bold or italic text, and more, using simple symbols or characters. Markdown enhances human readability, provides clear structure, and supplies delimiters that help Large Language Models (LLMs) better interpret user intent, instructions, and expected output.

Using Markdown provides:
- Clear separation of content
- Improved context recognition
- A clear distinction between input and output
- Emphasis that highlights intent
- Explicit formatting cues for the LLM
- Support for conditional output

While this workshop doesn't cover Markdown syntax in depth, you can easily find this information [here](https://commonmark.org/help/). Additionally, the same site offers a [10-minute interactive tutorial](https://commonmark.org/help/tutorial/) and lets you [play with the reference CommonMark implementation](https://spec.commonmark.org/dingus/).
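
As a quick illustration, the snippet below shows a few of the elements this workshop leans on most (headings, emphasis, lists, and tables) written in plain CommonMark syntax:

```
## Incident summary
**Severity:** High

- Incident ID: 1234
- Status: Active

| Incident ID | Status |
| --- | --- |
| 1234 | Active |
```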

### Initial prompt

Running a prompt without specifying output expectations can lead to inconsistent results.
```
List the last 3 incidents from Defender.
```

![Image](./images/001_prompt_no_Markdown.png)

### Formatting as a table

The /AskGPT skill bypasses plugins and interacts directly with the underlying LLM. We can use it to provide instructions that modify the default output format. Since we'll be submitting follow-up prompts, start by instructing it to take no action except to read the instructions. Then, specify that all subsequent outputs should follow the provided Markdown format.

Note that this will not reformat existing results; the instruction must be given **before** the prompt you want to affect.

```
/AskGPT No action is needed at this time, simply review the following instructions and respond with 'Ready!'. Instructions: All additional responses will be formatted to conform to the following Markdown example.
## Markdown example
| **Created Date** | **Incident ID** | **Status** | **Incident URL** | **Title** | **Severity** | **Assigned To** | **Classification** | **Determination** | **Last Updated** |
|-------------------------|-----------------|------------|-------------------|----------------------|--------------|-----------------|--------------------|-------------------|------------------------------|
| 2025-01-08T12:09:40.47Z | 1234 | Active | https://12.aka.ms | Multi-stage incident | High | John.Doe | Malware | True Positive | 2025-01-22T23:33:21.1733333Z |
```

![Image](./images/002_module2_AskGPT_Markdown_formatting.png)

```
List the last 3 incidents from Defender.
```

Notice that re-running the [Initial prompt](#initial-prompt) now returns a table whose first column is "Created Date," followed by "Incident ID" and "Status," instead of "Incident ID," "Display Name," and "Severity." **Be sure to apply these instructions before running the prompts you want formatted.** For better organization and easier access, consider saving this prompt in a promptbook.

![Image](./images/003_module2_AskGPT_formatting_as_a_table_prompt.png)

### Formatting as a list

Another example of Markdown formatting is shown below, using bullets, indentations, and a horizontal bar after each incident. In this example, the "Assigned To," "Classification," and "Determination" fields have been excluded from the formatted output by removing them from the Markdown example.
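
The exact prompt used here is shown in the screenshot below; a sketch of the same idea, reusing the /AskGPT pattern from the table example with illustrative values, might look like this:

```
/AskGPT No action is needed at this time, simply review the following instructions and respond with 'Ready!'. Instructions: All additional responses will be formatted to conform to the following Markdown example.
## Markdown example
- **Created Date:** 2025-01-08T12:09:40.47Z
  - **Incident ID:** 1234
  - **Status:** Active
  - **Incident URL:** https://12.aka.ms
  - **Title:** Multi-stage incident
  - **Severity:** High
  - **Last Updated:** 2025-01-22T23:33:21.1733333Z

---
```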


![Image](./images/004_module2_AskGPT_formatting_as_a_list_prompt.png)

```
List the last 3 incidents from Defender.
```

![Image](./images/005_module2_AskGPT_formatting_as_a_list_result.png)

---

### Combining a prompt with Markdown formatting instructions

You can also combine the data request and the Markdown formatting instructions in a single prompt.

```
List the last 3 incidents from Defender. Ensure the output is formatted to conform to the following Markdown example.
## Markdown example
| **Incident ID** | **Status** | **Incident URL** | **Title** | **Severity** | **Assigned To** | **Classification** | **Determination** | **Created Date** | **Last Updated** |
|-----------------|------------|-------------------|----------------------|--------------|-----------------|--------------------|-------------------|--------------------------|-----------------------------|
| 1234 | Active | https://12.aka.ms | Multi-stage incident | High | John.Doe | Malware | True Positive | 2025-01-08T12:09:40.47Z | 2025-01-22T23:33:21.1733333Z |
```

![Image](./images/006_module2_prompt_that_includes_AskGPT_formatting.png)

---

### Available Markdown

Given the speed of change within Security Copilot, the best way to visualize which Markdown syntax elements are available to you is by using the following prompt.
Please note that the standalone instance of Security Copilot currently does not render every Markdown syntax element.
```
/AskGPT Assume the role of a Markdown syntax expert. Provide a comprehensive list of Markdown syntax elements, including tables, numbered lists, task lists, and horizontal rules. For each element, provide instructions on how I can replicate them myself and what it would look like when rendered.
```
For the official Markdown specification, visit [CommonMark](https://commonmark.org/help/).

### Increasing efficiency

All the Markdown formatting methods above include extra characters to help users read them, but those characters aren't required. A trimmed-down example such as the one below conveys the same structure with far fewer characters.

```
| **Incident ID** | **Status** | **Incident URL** |
| --- | --- | --- |
| 1234 | Active | https://12.aka.ms |
```

---

✈️ Continue to [Module 3 - Enhancing Reasoning and Responses with Markdown](.././Module%203%20-%20Enhancing%20Reasoning%20and%20Responses%20with%20Markdown)<br>