
Prodigy - OpenText Core Experience Insights Integration and Automation

Integrate Prodigy and OpenText Core Experience Insights, two artificial intelligence (AI) apps, with any app in the library in just a few clicks. Create automated workflows by connecting your apps.

Common Integration Use Cases Between Prodigy and OpenText Core Experience Insights

1. Measure annotation workflow adoption and labeler productivity

Data flow: Prodigy → OpenText Core Experience Insights

Track how data scientists, annotators, and subject matter experts use Prodigy across projects, datasets, and labeling tasks. OpenText Core Experience Insights can analyze session frequency, task completion rates, time spent per labeling activity, and drop-off points to show where annotation workflows are efficient or where users struggle.

Business value: Helps AI teams identify bottlenecks in labeling operations, improve annotator training, and increase throughput for model development.
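The adoption and productivity metrics above can be sketched from Prodigy's `db-out` JSONL export, where each saved task carries an `answer` field (`accept`, `reject`, or `ignore`) and, in multi-user setups, a `_session_id`. The records below are invented toy data shaped like that export; this is a minimal illustration, not a built-in Prodigy or OpenText report.

```python
from collections import Counter

# Toy records shaped like Prodigy's `db-out` JSONL export; the `answer`
# and `_session_id` keys are standard, the values here are invented.
records = [
    {"_session_id": "ner-alice", "answer": "accept"},
    {"_session_id": "ner-alice", "answer": "reject"},
    {"_session_id": "ner-alice", "answer": "ignore"},
    {"_session_id": "ner-bob", "answer": "accept"},
    {"_session_id": "ner-bob", "answer": "accept"},
]

def completion_rate(records):
    """Share of tasks answered (accept/reject) rather than skipped."""
    answered = sum(1 for r in records if r["answer"] in ("accept", "reject"))
    return answered / len(records) if records else 0.0

def tasks_per_session(records):
    """Annotation volume per labeler session."""
    return Counter(r["_session_id"] for r in records)

print(round(completion_rate(records), 2))       # 0.8
print(tasks_per_session(records)["ner-alice"])  # 3
```

Metrics like these, pushed as events into OpenText Core Experience Insights, give the session-frequency and drop-off views described above.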

2. Optimize active learning and labeling process design

Data flow: Prodigy → OpenText Core Experience Insights

Use interaction analytics to understand how teams respond to Prodigy's active learning suggestions, including which sample types are repeatedly skipped, edited, or relabeled. This helps teams refine annotation guidelines, improve task design, and reduce rework in model training cycles.

Business value: Improves labeling quality and reduces wasted effort by aligning annotation workflows with actual user behavior.
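One way to surface "repeatedly skipped sample types" is a per-label skip rate over exported tasks. The sketch below assumes classification-style records with a `label` key (as a Prodigy classification recipe would emit) and invented values; the one-third flagging threshold is illustrative.

```python
from collections import defaultdict

# Invented Prodigy-style records; `label` mirrors a classification recipe.
records = [
    {"label": "PERSON", "answer": "accept"},
    {"label": "PERSON", "answer": "accept"},
    {"label": "ORG", "answer": "ignore"},
    {"label": "ORG", "answer": "ignore"},
    {"label": "ORG", "answer": "accept"},
]

def skip_rate_by_label(records):
    """Fraction of tasks skipped (`ignore`) per label."""
    totals, skips = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["label"]] += 1
        if r["answer"] == "ignore":
            skips[r["label"]] += 1
    return {lbl: skips[lbl] / totals[lbl] for lbl in totals}

rates = skip_rate_by_label(records)
# Flag labels skipped more than a third of the time for guideline review.
flagged = [lbl for lbl, rate in rates.items() if rate > 1 / 3]
print(flagged)  # ['ORG']
```

Flagged labels are candidates for clearer annotation guidelines or redesigned tasks before the next training cycle.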

3. Identify training and enablement needs for annotation teams

Data flow: Prodigy → OpenText Core Experience Insights

Analyze usage patterns to detect where new annotators take longer to complete tasks, make frequent corrections, or abandon sessions. These insights can trigger targeted onboarding, updated playbooks, or role-based training for domain experts contributing to model development.

Business value: Shortens ramp-up time for new contributors and improves consistency in labeled data.
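Detecting slow ramp-up can be as simple as comparing each annotator's median task time to the team median. The durations below are invented (in practice they might be derived from consecutive timestamps in a Prodigy export), and the 1.5× factor is an assumed tuning knob, not a standard.

```python
from statistics import median

# Invented per-annotator task durations in seconds.
durations = {
    "alice": [4.0, 5.0, 4.5, 6.0],      # experienced
    "bob":   [15.0, 18.0, 12.0, 20.0],  # recent hire
}

team_median = median(d for ds in durations.values() for d in ds)

def needs_coaching(durations, team_median, factor=1.5):
    """Annotators whose median task time exceeds the team median by `factor`."""
    return [name for name, ds in durations.items()
            if median(ds) > factor * team_median]

print(needs_coaching(durations, team_median))  # ['bob']
```

A flagged annotator would trigger the targeted onboarding or role-based training described above.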

4. Correlate annotation activity with downstream model development outcomes

Data flow: Prodigy → OpenText Core Experience Insights

Combine Prodigy usage data with engagement metrics to understand which teams, projects, or workflows produce the most sustained annotation activity and the least friction. This can help AI program leaders compare project performance across business units and prioritize high-value model initiatives.

Business value: Supports better portfolio management for AI programs and helps allocate resources to the most productive use cases.

5. Monitor collaboration between data scientists and domain experts

Data flow: Prodigy → OpenText Core Experience Insights

Track how often domain experts review, correct, or validate labels created in Prodigy, and how those interactions vary by project. OpenText Core Experience Insights can surface collaboration patterns that indicate whether annotation workflows are well balanced or overly dependent on a small group of experts.

Business value: Improves cross-functional collaboration and ensures labeling responsibilities are distributed effectively.
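Over-dependence on a small group of experts shows up as review concentration. This sketch uses invented review events and an assumed 50% threshold to flag when one expert carries most of the validation load.

```python
from collections import Counter

# Invented review events: which domain expert validated each label.
reviews = ["dr_chen", "dr_chen", "dr_chen", "dr_patel", "dr_chen"]

counts = Counter(reviews)
top_expert, top_count = counts.most_common(1)[0]
share = top_count / len(reviews)

# One expert handling most reviews signals an over-dependence risk.
if share > 0.5:
    print(f"{top_expert} handles {share:.0%} of reviews")
```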

6. Detect friction in custom annotation workflows

Data flow: Prodigy → OpenText Core Experience Insights

Prodigy is often used in highly customized, scriptable workflows. OpenText Core Experience Insights can reveal where users encounter repeated navigation issues, long task durations, or inconsistent engagement across different annotation interfaces. This is especially useful when teams build custom labeling apps or integrate Prodigy into broader MLOps pipelines.

Business value: Helps engineering teams simplify custom workflows and reduce operational friction in AI delivery.

7. Support continuous improvement for enterprise AI programs

Data flow: Bi-directional, with Prodigy usage data flowing into OpenText Core Experience Insights and insights feeding back to Prodigy workflow owners

Use OpenText Core Experience Insights to identify trends in annotation behavior over time, then feed those findings back to Prodigy administrators and ML teams to adjust labeling instructions, task sequencing, or active learning thresholds. This creates a feedback loop for continuous improvement across the AI lifecycle.

Business value: Enables data-driven refinement of annotation operations and improves the quality and speed of model training over time.
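The feedback loop above can be sketched as a simple controller: when annotators reject too many suggestions, tighten the model-confidence threshold that gates which examples are queued. The threshold mechanism and all parameter values here are illustrative assumptions, not built-in Prodigy settings.

```python
def adjust_threshold(current, rejection_rate, target=0.2, step=0.05,
                     lo=0.1, hi=0.9):
    """Nudge an uncertainty threshold toward a target rejection rate.

    Hypothetical controller: a high rejection rate means the model's
    suggestions are too noisy, so be more selective; a low rate means
    it is safe to surface more candidates.
    """
    if rejection_rate > target:
        current = min(hi, current + step)  # be more selective
    elif rejection_rate < target:
        current = max(lo, current - step)  # surface more candidates
    return round(current, 2)

print(adjust_threshold(0.5, rejection_rate=0.35))  # 0.55
print(adjust_threshold(0.5, rejection_rate=0.10))  # 0.45
```

Run periodically against insights from OpenText Core Experience Insights, a rule like this closes the loop between observed annotation behavior and Prodigy task configuration.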

How to integrate and automate Prodigy with OpenText Core Experience Insights using OneTeg?