Editing & Adding

Overview

The Policy Engine feature in Protege AI allows users to create, modify, and refine policies that govern how documents and other media are evaluated. This section outlines the process for adding and editing policies within a specific Policy Group.

Adding a New Policy

To introduce a new policy into your system, follow these steps:

  1. Navigate to the Desired Policy Group: Access the Policy Group where the new policy will be housed.

  2. Add New Policy: Click on "Add New Policy" to initiate the creation of a new policy within the selected group.

Configuration of a Policy

When setting up a new policy or editing an existing one, it’s crucial to specify the mediums on which the policy should be applied. These settings ensure that the policy is appropriately targeted and effective across different types of content. The mediums currently available for Policy Application are:

  • Text: Directs the policy toward analyzing the text content in documents. Example tools/use cases: Google Doc, Microsoft Word, Figma.

  • Web Hidden Text: Applies the policy to hidden text on web pages. Example tools/use cases: Content scanner.

  • Image OCR Text: Applies the policy to the OCR text extracted from submitted image attachments. Example tools/use cases: Images and websites.

  • Image Query: Runs a prompt with a large vision model to analyze an image. Example tools/use cases: Images and websites.

  • Video Unique OCR Text: Analyzes all the deduplicated visible text in a video. Example tools/use cases: Videos, Instagram videos, TikTok videos.

  • Video Transcript: Applies the policy to the transcript derived from a video's audio. Example tools/use cases: Videos, Instagram videos, TikTok videos.
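To make the medium targeting concrete, here is a minimal sketch of the idea, assuming hypothetical structures (the medium identifiers, `applicable_content` helper, and submission shape are illustrative, not the Protege AI API): a policy lists the mediums it applies to, and only the matching content streams from a submission are evaluated against it.

```python
# Hypothetical sketch of medium targeting; not the Protege AI API.

# Mediums available for policy application (illustrative identifiers).
MEDIUMS = {
    "text",
    "web_hidden_text",
    "image_ocr_text",
    "image_query",
    "video_unique_ocr_text",
    "video_transcript",
}

def applicable_content(policy_mediums, submission):
    """Return only the content streams the policy is configured to evaluate."""
    unknown = set(policy_mediums) - MEDIUMS
    if unknown:
        raise ValueError(f"Unknown mediums: {unknown}")
    return {m: c for m, c in submission.items() if m in policy_mediums}

# Example: a policy applied to document text and video transcripts
# ignores the OCR stream from image attachments.
submission = {
    "text": "Q3 launch plan ...",
    "image_ocr_text": "50% OFF",
    "video_transcript": "Welcome to our product demo ...",
}
selected = applicable_content({"text", "video_transcript"}, submission)
```

Selecting only the relevant mediums keeps the policy appropriately targeted, as described above.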

Risk Level

The Risk Level setting allows you to classify policy feedback as High, Medium, or Low risk, based on customizable prompts. This enables granular control over how your policies interpret and prioritize different types of content, providing targeted feedback and actions depending on the potential impact or sensitivity of the detected issue.

Example Use Case: Product Comparisons

Suppose you have a policy to identify product comparisons in your documentation or communication, but you want to handle comparisons involving a specific competitor with greater scrutiny.

  • High Risk Prompt: "Identify sentences that compare us to [competitor]."

  • Fallback Risk Level: Set to Low, so comparisons involving other companies are still flagged, but with lower priority.
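The logic of the example above can be sketched as follows. This is a minimal illustration, assuming a hypothetical `classify_risk` helper in which simple predicates stand in for LLM prompt evaluations; it is not the Protege AI implementation, and "Competitor X" is a stand-in for the bracketed competitor placeholder.

```python
# Hypothetical sketch of risk-level classification with a fallback;
# not the Protege AI implementation.

def classify_risk(feedback_text, prompts, fallback="Low"):
    """Return the first risk level whose prompt check matches, else the fallback.

    `prompts` maps a risk level ("High", "Medium", "Low") to a predicate
    that stands in for an LLM prompt evaluation.
    """
    for level in ("High", "Medium", "Low"):
        check = prompts.get(level)
        if check and check(feedback_text):
            return level
    return fallback  # every piece of content still gets a classification

# High Risk Prompt: "Identify sentences that compare us to [competitor]"
prompts = {"High": lambda t: "competitor x" in t.lower()}

classify_risk("We outperform Competitor X on price.", prompts)  # "High"
classify_risk("Our product supports dark mode.", prompts)       # "Low" (fallback)
```

The fallback ensures that content matched by no prompt still receives a risk classification, which is why the best practices below recommend always setting one.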

Best Practices

  • Be Specific in Prompts: The clearer your prompts, the more accurately the system can classify risk.

  • Review and Iterate: Periodically review flagged feedback and adjust prompts to reduce false positives/negatives.

  • Use Fallbacks: Always set a fallback risk level to ensure all content receives a risk classification.

Special Policy Type

Customer-defined policies should leave this field blank. While most policies are classified as General Policies and use customizable prompts evaluated by Large Language Models (LLMs), Protege AI also supports special policies with preset logic for specific purposes.

Steps

Each Policy, except for the Special Type described above, is composed of a series of steps. Learn more about Steps.


This section provides detailed guidance on how to effectively manage and edit policies within Protege AI, ensuring that they are well-suited to the media types and specific needs of the organization. By properly configuring these settings, users can optimize the document evaluation process and enhance overall compliance and efficiency.
