Overview

Klyra provides AI-powered moderation models that analyze and flag potentially harmful content across multiple modalities: text, images, audio, and video. Our models are continually retrained on diverse datasets to maintain high accuracy and minimize bias.

Available Models

Text Moderation

Our text moderation model analyzes text in over 50 languages for harmful or inappropriate content.

Supported Categories:

  • Toxic: Hateful, aggressive, or insulting content
  • Harassment: Bullying, threats, or intimidation
  • Self-harm: Content promoting self-harm or suicide
  • Sexual: Sexually explicit or adult content
  • Violence: Graphic or violent descriptions
  • Hate speech: Content targeting protected groups
  • Spam: Unsolicited promotional content
  • Profanity: Swear words and offensive language

In request settings, category names are written in lowercase with underscores (for example, hate_speech), as in the sample below.

Sample Request:

{
  "content": "Your text content here",
  "settings": {
    "categories": ["toxic", "harassment", "hate_speech"],
    "threshold": 0.7,
    "language": "en"
  }
}
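
This page does not show the response schema, so the following is an illustrative sketch only: it assumes one confidence score per requested category (see Confidence Scores below) and an overall flagged field set when any score crosses the threshold.

{
  "flagged": true,
  "categories": {
    "toxic": 0.91,
    "harassment": 0.12,
    "hate_speech": 0.08
  }
}

In this sketch, only the toxic score exceeds the 0.7 threshold from the request, so the content is flagged.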

Model Selection

Klyra automatically selects the appropriate moderation model based on the content type you submit. You can also explicitly specify which model version to use:

{
  "content": "Your content here",
  "settings": {
    "model": "klyra-text-v2",
    "categories": ["toxic", "harassment"]
  }
}

Available Model Versions

Model Name      Content Type   Description
klyra-text-v2   Text           Latest text moderation model with improved multilingual support
klyra-image-v3  Image          High-precision image moderation with 98.7% accuracy
klyra-audio-v1  Audio          Audio transcription and moderation
klyra-video-v1  Video          Frame-by-frame video analysis
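
Only the text request format is documented on this page. If non-text content is submitted by URL in the content field (an assumption, not confirmed here), explicitly selecting the image model might look like this:

{
  "content": "https://example.com/image.jpg",
  "settings": {
    "model": "klyra-image-v3"
  }
}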

Confidence Scores

All moderation results include confidence scores between 0.0 and 1.0 for each category:

  • 0.0: No detection of the category
  • 1.0: Highest confidence that the content belongs to the category

You can set a custom threshold for flagging content in the request settings. The default threshold is 0.7 for most categories.
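
For example, raising the threshold makes flagging more conservative. With the settings below (using the same fields as the earlier samples), a toxic score of 0.82 would fall under the 0.9 threshold and the content would not be flagged:

{
  "content": "Your text content here",
  "settings": {
    "categories": ["toxic"],
    "threshold": 0.9
  }
}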

Model Ethics & Bias Prevention

Klyra is committed to providing fair and unbiased moderation systems. Our models are:

  • Trained on diverse, representative datasets
  • Regularly audited for demographic biases
  • Continuously refined based on customer feedback
  • Transparent in confidence scoring and decision making

Read more about our ethics principles and bias prevention practices.