RLHF Feedback

Overview

Reinforcement Learning from Human Feedback (RLHF) lets users grade generation quality. Arivu natively captures thumbs-up/down ratings and free-text feedback through its chat integrations and uses these signals to shape prompt optimization.
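As a rough illustration of what such a signal carries, here is a minimal sketch; the `FeedbackSignal` name and its fields are assumptions for illustration, not Arivu's documented schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape of a captured RLHF signal. Field names are
# illustrative assumptions, not Arivu's documented schema.
@dataclass
class FeedbackSignal:
    session_id: str   # conversation the rating belongs to
    query: str        # the user question that was answered
    rating: int       # +1 thumbs-up, -1 thumbs-down
    comment: str = "" # optional free-text feedback
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

signal = FeedbackSignal(
    session_id="sess-42",
    query="monthly revenue by region",
    rating=-1,
    comment="Grouped by country, not region",
)
```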

Intent Rectification

Locate queries where the LLM misunderstood the user's intent, and track how corrections accumulate over time.
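One way to surface such queries, sketched against a hypothetical SQLite feedback table (the table and column names are assumptions; an in-memory database stands in for the real backend):

```python
import sqlite3

# Hypothetical feedback table; Arivu's actual memory schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE feedback"
    " (session_id TEXT, query TEXT, rating INTEGER, comment TEXT, ts TEXT)"
)
conn.execute(
    "INSERT INTO feedback VALUES ('sess-42', 'monthly revenue by region',"
    " -1, 'Grouped by country, not region', '2024-05-01T12:00:00Z')"
)

# Surface the most recent negatively rated queries for manual review.
rows = conn.execute(
    """
    SELECT session_id, query, comment, ts
    FROM feedback
    WHERE rating < 0
    ORDER BY ts DESC
    LIMIT 20
    """
).fetchall()
for session_id, query, comment, ts in rows:
    print(f"[{ts}] {session_id}: {query!r} -> {comment}")
```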

Approval Queues

Destructive queries (UPDATE/DELETE) are parked for admin consent before execution.
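The gating step presumably resembles the following sketch; the `DESTRUCTIVE` pattern, `pending_approvals` list, and `submit_query` function are illustrative assumptions, not Arivu's implementation (the pattern is widened beyond UPDATE/DELETE to a few other mutating verbs).

```python
import re

# Statements that mutate data and therefore require admin sign-off.
DESTRUCTIVE = re.compile(
    r"^\s*(UPDATE|DELETE|DROP|TRUNCATE|ALTER)\b", re.IGNORECASE
)

pending_approvals: list[dict] = []  # stand-in for the real approval queue

def submit_query(sql: str, session_id: str) -> str:
    """Park destructive statements; pass read-only ones straight through."""
    if DESTRUCTIVE.match(sql):
        pending_approvals.append({"session_id": session_id, "sql": sql})
        return "queued for admin approval"
    return "executed"

print(submit_query("DELETE FROM orders WHERE id = 7", "sess-42"))  # queued
print(submit_query("SELECT * FROM orders", "sess-42"))             # executed
```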

How It Works

1. Signal Capture

Users provide feedback (positive/negative) via the chat UI, Telegram, or any integration adapter. The signal is attached to the query’s trace record.
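In sketch form, attaching a signal to a trace might look like this; the trace fields and the `attach_feedback` helper are hypothetical, not Arivu's trace schema.

```python
# Hypothetical trace record; Arivu's actual trace schema may differ.
trace = {
    "trace_id": "tr-9001",
    "session_id": "sess-42",
    "query": "monthly revenue by region",
    "response": "...",
    "rlhf": None,  # populated only when the user reacts
}

def attach_feedback(trace: dict, rating: int, comment: str = "") -> None:
    """Record a thumbs-up (+1) or thumbs-down (-1) on an existing trace."""
    trace["rlhf"] = {"rating": rating, "comment": comment}

attach_feedback(trace, rating=-1, comment="Wrong join on the orders table")
```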
2. Storage

RLHF signals persist in the memory backend (SQLite or Redis), indexed by session_id and timestamp.
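A minimal sketch of the Redis path, assuming a local Redis server and one sorted set per session scored by timestamp (the `rlhf:` key prefix is an assumption):

```python
import json
import time

import redis

r = redis.Redis()  # assumes a local Redis instance

def persist_signal(session_id: str, rating: int, comment: str = "") -> None:
    """Store one RLHF signal, scored by timestamp for time-range queries."""
    ts = time.time()
    payload = json.dumps({"rating": rating, "comment": comment, "ts": ts})
    # Score by timestamp so ZRANGEBYSCORE can replay feedback
    # over any time window for a given session.
    r.zadd(f"rlhf:{session_id}", {payload: ts})

persist_signal("sess-42", rating=1)
```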
3. Analysis

The dashboard aggregates feedback trends, helping you identify which types of queries consistently generate poor results.
The rlhf_feedback LangGraph node runs entirely downstream of response_generator, so capturing RLHF signals adds no blocking latency to the conversational UX.
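The node ordering described above can be wired as in this sketch; the state fields and node bodies are stubs, and only the edge order (response_generator before rlhf_feedback) mirrors the documentation.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    query: str
    response: str
    rlhf: dict

def response_generator(state: State) -> dict:
    # Produce the answer; this is the step the user actually waits on.
    return {"response": f"answer to {state['query']}"}

def rlhf_feedback(state: State) -> dict:
    # Positioned strictly downstream of response_generator, so the
    # user-facing answer is produced before feedback handling begins.
    return {"rlhf": {"rating": 0}}

builder = StateGraph(State)
builder.add_node("response_generator", response_generator)
builder.add_node("rlhf_feedback", rlhf_feedback)
builder.add_edge(START, "response_generator")
builder.add_edge("response_generator", "rlhf_feedback")
builder.add_edge("rlhf_feedback", END)
graph = builder.compile()

result = graph.invoke(
    {"query": "monthly revenue by region", "response": "", "rlhf": {}}
)
print(result["response"])
```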