Prodigy - OpenText Core Content - Metadata Integration and Automation

Integrate the Prodigy artificial intelligence (AI) annotation app and the OpenText Core Content - Metadata document management app with any app in the library in just a few clicks. Create automated workflows by integrating your apps.

Common Integration Use Cases Between Prodigy and OpenText Core Content - Metadata

1. Metadata-Guided Annotation Queue Creation

Direction: OpenText Core Content - Metadata to Prodigy

Use governed metadata from OpenText Core Content to automatically select and prioritize content for labeling in Prodigy. For example, documents tagged as "customer complaint," "invoice exception," or "product defect" can be routed into specific annotation projects for NLP or computer vision model training.

  • Improves labeling focus by using business-approved classifications
  • Ensures annotation teams work on content aligned to enterprise taxonomy
  • Reduces manual data curation effort for data science teams
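As a minimal sketch of this pattern, the snippet below converts metadata-tagged document records into Prodigy's documented JSONL task format (a `"text"` field plus an optional `"meta"` dict shown in the annotation UI). The repository field names (`document_type`, `body`, `id`) are assumptions; in practice the records would come from an OpenText Core Content export or API call.

```python
import json

# Business-approved classifications to route into annotation (assumed tags).
PRIORITY_TAGS = {"customer complaint", "invoice exception", "product defect"}

def build_annotation_queue(documents, priority_tags=PRIORITY_TAGS):
    """Select tagged documents and convert them into Prodigy task dicts."""
    tasks = []
    for doc in documents:
        tag = doc.get("document_type", "").lower()
        if tag in priority_tags:
            tasks.append({
                "text": doc["body"],
                "meta": {"source_id": doc["id"], "document_type": tag},
            })
    return tasks

# Hypothetical records as they might arrive from the repository.
docs = [
    {"id": "D-1", "document_type": "Customer Complaint",
     "body": "The unit failed after two days."},
    {"id": "D-2", "document_type": "Newsletter", "body": "Monthly update."},
]
queue = build_annotation_queue(docs)
for task in queue:
    print(json.dumps(task))  # one JSONL line per Prodigy task
```

Writing these lines to a `.jsonl` file produces a source that Prodigy recipes such as `textcat.manual` can load directly.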

2. Annotated Content Returned with Enriched Metadata

Direction: Prodigy to OpenText Core Content - Metadata

After data scientists or subject matter experts label content in Prodigy, the resulting labels can be written back as metadata fields in OpenText Core Content. This is useful when annotated documents, images, or text need to remain searchable and governed in the enterprise content repository.

  • Supports downstream search, retention, and reporting
  • Keeps training outputs traceable inside the content system of record
  • Enables business users to find content by model-derived classifications
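A sketch of the write-back direction: Prodigy's exported tasks carry an `"answer"` field (`accept`/`reject`/`ignore`) and, for choice interfaces, an `"accept"` list of selected labels. The function below maps accepted annotations to metadata update payloads; the target field names (`ml_label`, `label_source`) are assumptions about the repository schema, and the actual OpenText API call is out of scope here.

```python
def annotations_to_metadata_updates(annotations):
    """Map accepted Prodigy annotations to metadata update payloads."""
    updates = []
    for ann in annotations:
        # Skip rejected/ignored tasks and tasks with no selected label.
        if ann.get("answer") != "accept" or not ann.get("accept"):
            continue
        updates.append({
            "document_id": ann["meta"]["source_id"],
            "fields": {"ml_label": ann["accept"][0], "label_source": "prodigy"},
        })
    return updates

# Sample exported annotations (structure follows Prodigy's task format).
anns = [
    {"meta": {"source_id": "D-1"}, "accept": ["invoice_exception"], "answer": "accept"},
    {"meta": {"source_id": "D-2"}, "accept": [], "answer": "reject"},
]
updates = annotations_to_metadata_updates(anns)
```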

3. Controlled Vocabulary Enforcement for Annotation Consistency

Direction: OpenText Core Content - Metadata to Prodigy

Use controlled vocabularies and validation rules from OpenText Core Content to constrain label options in Prodigy. This is especially valuable for regulated industries where annotation categories must match approved business terms, such as document types, risk levels, or product codes.

  • Prevents label drift across annotation teams
  • Aligns ML training data with enterprise governance standards
  • Reduces rework caused by inconsistent or invalid labels
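One way to enforce this, sketched under the assumption that the controlled vocabulary has been exported as a simple list: build the comma-separated `--label` value that Prodigy recipes such as `textcat.manual` accept, and validate incoming labels against the approved terms.

```python
# Hypothetical controlled vocabulary exported from OpenText Core Content.
APPROVED_VOCABULARY = ["Contract", "Invoice", "Purchase Order", "Risk Assessment"]

def prodigy_label_argument(vocabulary):
    """Build the comma-separated --label value for a Prodigy recipe."""
    return ",".join(sorted(vocabulary))

def split_valid_labels(labels, vocabulary):
    """Separate labels into approved terms and rejects needing review."""
    approved = set(vocabulary)
    valid = [label for label in labels if label in approved]
    invalid = [label for label in labels if label not in approved]
    return valid, invalid

label_arg = prodigy_label_argument(APPROVED_VOCABULARY)
valid, invalid = split_valid_labels(["Invoice", "Memo"], APPROVED_VOCABULARY)
```

Because annotators only ever see the approved label set, drift and invalid labels are blocked at the point of entry rather than cleaned up afterwards.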

4. Active Learning on High-Value Content Sets

Direction: OpenText Core Content - Metadata to Prodigy

Use metadata filters in OpenText Core Content to identify high-value content sets, then feed those subsets into Prodigy's active learning workflow. For example, content with low confidence, missing metadata, or specific lifecycle statuses can be prioritized for human review and labeling.

  • Targets the most business-critical records first
  • Improves model performance faster with less labeling effort
  • Supports exception handling for content operations teams
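The prioritization step can be sketched as a sort key over the repository records: documents missing the most required metadata come first, ties broken by lowest model confidence. The required field names and the `confidence` attribute are assumptions about the record schema.

```python
# Assumed required metadata fields on each repository record.
REQUIRED_FIELDS = ("document_type", "retention_class")

def prioritize_for_review(documents, required_fields=REQUIRED_FIELDS):
    """Order documents for human labeling: most missing required
    metadata first, then lowest model confidence."""
    def priority(doc):
        missing = sum(1 for field in required_fields if not doc.get(field))
        return (-missing, doc.get("confidence", 0.0))
    return sorted(documents, key=priority)

docs = [
    {"id": "A", "document_type": "Invoice", "retention_class": "7y", "confidence": 0.95},
    {"id": "B", "document_type": "Invoice", "confidence": 0.40},
    {"id": "C", "confidence": 0.90},
]
ranked = prioritize_for_review(docs)
```

The top of the ranked list becomes the input stream for the Prodigy session, so annotator time goes to the records where a human label changes the most.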

5. Model-Driven Metadata Classification at Scale

Direction: Prodigy to OpenText Core Content - Metadata

Use labels generated in Prodigy to train custom classification models, then apply those models to large content repositories managed in OpenText Core Content. The predicted classifications can be stored as metadata to automate content organization, routing, and reporting.

  • Automates tagging of large unstructured content volumes
  • Improves consistency compared with manual classification
  • Supports scalable DAM and ECM operations
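A sketch of the apply-at-scale step, with a keyword rule standing in for the trained model so the example stays self-contained; a real deployment would load the spaCy/Prodigy pipeline trained on the exported labels instead. The metadata field names (`predicted_type`, `prediction_confidence`) are assumptions.

```python
def keyword_classifier(text):
    """Stand-in for a classifier trained on Prodigy labels."""
    rules = {"invoice": "Invoice", "complaint": "Customer Complaint"}
    lowered = text.lower()
    for keyword, label in rules.items():
        if keyword in lowered:
            return label, 0.90
    return "Unclassified", 0.10

def classify_and_tag(documents, predict=keyword_classifier):
    """Store each prediction as metadata on the document record."""
    for doc in documents:
        label, score = predict(doc["body"])
        doc["metadata"] = {"predicted_type": label,
                           "prediction_confidence": score}
    return documents

tagged = classify_and_tag([{"id": "D-9", "body": "Invoice total does not match PO."}])
```

Storing the confidence score alongside the predicted label lets low-confidence records feed back into the active-learning queue from use case 4.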

6. Human-in-the-Loop Metadata Quality Improvement

Direction: Bi-directional

OpenText Core Content can surface content with incomplete, conflicting, or outdated metadata to Prodigy for expert review and relabeling. Once corrected in Prodigy, the validated labels can be synchronized back to OpenText Core Content to improve metadata quality across the repository.

  • Creates a closed-loop metadata governance process
  • Helps content stewards correct classification issues efficiently
  • Improves search relevance and reporting accuracy
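The closed loop can be sketched as two small steps: flag records with incomplete metadata as Prodigy review tasks, then merge the validated corrections back onto the repository records. Field names (`document_type`, `owner`) and the correction payload shape are assumptions.

```python
def flag_for_relabeling(documents, required_fields=("document_type", "owner")):
    """Surface records with incomplete metadata as Prodigy review tasks."""
    tasks = []
    for doc in documents:
        missing = [f for f in required_fields if not doc.get(f)]
        if missing:
            tasks.append({"text": doc["body"],
                          "meta": {"source_id": doc["id"],
                                   "missing_fields": missing}})
    return tasks

def apply_corrections(documents, corrections):
    """Merge validated Prodigy labels back onto the repository records."""
    by_id = {doc["id"]: doc for doc in documents}
    for fix in corrections:
        by_id[fix["source_id"]].update(fix["fields"])
    return documents

docs = [{"id": "D-1", "body": "Payment overdue notice.", "owner": "ops"}]
tasks = flag_for_relabeling(docs)
fixed = apply_corrections(docs, [{"source_id": "D-1",
                                  "fields": {"document_type": "Invoice"}}])
```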

7. Audit-Ready Training Data Governance

Direction: Bi-directional

Use OpenText Core Content as the governed repository for source content, annotation guidelines, and approved label schemas, while Prodigy manages the labeling work itself. Final training datasets and annotation outputs can be stored back in OpenText Core Content with metadata that captures version, reviewer, and approval status.

  • Provides traceability from source content to final training data
  • Supports audit and compliance requirements
  • Improves collaboration between AI teams, records managers, and business owners
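The governance metadata stored alongside a training dataset can be sketched as a manifest payload; the field names here are illustrative, and a content checksum is included so the stored metadata can later prove which exact records were approved.

```python
import hashlib
import json

def build_dataset_manifest(records, version, reviewer, approved):
    """Build a metadata payload capturing version, reviewer, and
    approval status for a training dataset."""
    # Canonical serialization so the checksum is stable across runs.
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return {
        "dataset_version": version,
        "reviewer": reviewer,
        "approval_status": "approved" if approved else "pending",
        "record_count": len(records),
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

manifest = build_dataset_manifest(
    [{"text": "Payment overdue notice.", "label": "Invoice"}],
    version="1.2.0", reviewer="jane.doe", approved=True,
)
```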

8. Metadata-Driven Reporting on Annotation Operations

Direction: Prodigy to OpenText Core Content - Metadata

Export annotation outcomes from Prodigy into OpenText Core Content metadata fields to support operational reporting. Teams can track labeling progress, content categories, exception rates, and model-readiness status across repositories.

  • Gives business stakeholders visibility into AI data preparation
  • Supports governance dashboards and content lifecycle reporting
  • Helps prioritize future annotation and remediation work
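A minimal sketch of the reporting rollup, assuming the annotations carry Prodigy's standard `"answer"` field; the output field names are illustrative metadata fields, and treating rejects as the exception rate is one possible definition.

```python
from collections import Counter

def annotation_report(annotations):
    """Aggregate Prodigy answer counts into reporting metadata fields."""
    answers = Counter(ann.get("answer") for ann in annotations)
    total = len(annotations)
    return {
        "total_tasks": total,
        "accepted": answers["accept"],
        "rejected": answers["reject"],
        "ignored": answers["ignore"],
        "exception_rate": round(answers["reject"] / total, 3) if total else 0.0,
    }

report = annotation_report([
    {"answer": "accept"}, {"answer": "accept"},
    {"answer": "reject"}, {"answer": "ignore"},
])
```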

How to integrate and automate Prodigy with OpenText Core Content - Metadata using OneTeg?