OpenText DAM (OTMM) - Prodigy Integration and Automation

Integrate OpenText DAM (OTMM), a Digital Asset Management (DAM) platform, with Prodigy, an Artificial Intelligence (AI) annotation tool, or with any other app in the library, in just a few clicks. Create automated workflows by connecting your apps.

Common Integration Use Cases Between OpenText DAM (OTMM) and Prodigy

OpenText DAM (OTMM) and Prodigy complement each other well in organizations that manage large volumes of visual content and need to turn that content into high-quality training data for AI and machine learning. OTMM serves as the governed source for approved images, videos, and campaign assets, while Prodigy supports fast, iterative annotation and labeling for model training. Together, they enable a controlled data handoff from content operations to AI teams and back into business workflows.

1. Curated asset export from OTMM to Prodigy for computer vision training

OTMM can act as the controlled source of product images, event photos, museum collection images, or broadcast stills that need to be labeled for AI use cases. Approved assets are exported or synchronized into Prodigy for annotation tasks such as object detection, image classification, scene tagging, or bounding box creation.

  • Data flow: OpenText DAM (OTMM) to Prodigy
  • Business value: Reduces manual file gathering and ensures only approved, high-quality assets are used for model training
  • Typical users: DAM managers, data scientists, computer vision teams, subject matter experts
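The export step above can be sketched in Python. Prodigy's image recipes consume newline-delimited JSON tasks with an "image" field; the OTMM export field names used here (`asset_id`, `rendition_url`, `title`) are assumptions about the shape of the exported records, not the actual OTMM API:

```python
import json

def otmm_assets_to_prodigy_tasks(assets):
    """Convert OTMM asset export records (hypothetical field names)
    into Prodigy image-annotation tasks, one dict per asset.

    Each task carries the rendition URL in the "image" field that
    Prodigy's image recipes expect, plus the OTMM asset id in "meta"
    so every label can be traced back to its source asset.
    """
    tasks = []
    for asset in assets:
        tasks.append({
            "image": asset["rendition_url"],  # assumed export field
            "meta": {
                "otmm_asset_id": asset["asset_id"],
                "title": asset.get("title", ""),
            },
        })
    return tasks

def write_jsonl(tasks, path):
    # Prodigy loads newline-delimited JSON as a task source file.
    with open(path, "w", encoding="utf-8") as f:
        for task in tasks:
            f.write(json.dumps(task) + "\n")
```

Keeping the asset id in the task's "meta" dict is what makes the round trip possible later: annotations exported from Prodigy retain that id, so enriched metadata can be written back to the correct OTMM asset.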

2. AI assisted metadata enrichment and tagging of DAM assets

Images and videos stored in OTMM can be sent to Prodigy to create training labels for automated tagging models. Once models are trained, predicted labels can be returned to OTMM as enriched metadata, improving search, discovery, and asset governance.

  • Data flow: OpenText DAM (OTMM) to Prodigy and back to OpenText DAM (OTMM)
  • Business value: Improves asset findability and reduces manual metadata entry for large content libraries
  • Typical users: Digital asset managers, taxonomy teams, AI engineers, content operations teams
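The return leg of this round trip, turning Prodigy annotations into OTMM metadata updates, can be sketched as a pure aggregation step. The annotation dicts follow Prodigy's answer format ("answer", "label"), while the "keywords" metadata field is a placeholder name, not a confirmed OTMM field:

```python
from collections import defaultdict

def labels_to_metadata_updates(annotations):
    """Aggregate accepted Prodigy annotations into per-asset keyword
    lists suitable for a metadata update against OTMM.

    Each annotation dict carries an "answer" ("accept", "reject", or
    "ignore"), a "label", and the OTMM asset id that was stored under
    "meta" when the task was created. Only accepted labels are kept.
    """
    keywords = defaultdict(set)
    for ann in annotations:
        if ann.get("answer") == "accept":
            keywords[ann["meta"]["otmm_asset_id"]].add(ann["label"])
    # One update payload per asset; "keywords" is an assumed field name.
    return {
        asset_id: {"keywords": sorted(labels)}
        for asset_id, labels in keywords.items()
    }
```

Sorting the labels makes the payload deterministic, which simplifies diffing successive enrichment runs against what is already stored in the DAM.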

3. Product image labeling for visual search and product recognition

Retail, manufacturing, and distribution organizations can use OTMM as the repository for product imagery and send selected images to Prodigy for labeling attributes such as product category, color, packaging type, logo presence, or damage indicators. The resulting datasets support visual search, product matching, and automated catalog enrichment.

  • Data flow: OpenText DAM (OTMM) to Prodigy
  • Business value: Accelerates development of image-based product discovery and catalog automation capabilities
  • Typical users: E-commerce teams, product information teams, AI teams, merchandising operations

4. Quality control model training using approved and rejected campaign assets

Marketing teams can use OTMM to store both approved campaign assets and rejected variants. These assets can be routed to Prodigy to label quality issues such as brand guideline violations, incorrect cropping, low resolution, or unauthorized logo usage. Trained models can then help flag non-compliant assets before publication.

  • Data flow: OpenText DAM (OTMM) to Prodigy
  • Business value: Reduces brand risk and speeds up content review across marketing operations
  • Typical users: Brand governance teams, marketing operations, creative operations, AI teams

5. Museum and heritage collection annotation for archival intelligence

Museums and heritage organizations using OTMM to manage digital photos and videos of physical collections can send selected items to Prodigy for structured annotation. Labels may include artifact type, era, material, condition, exhibition status, or visual characteristics. These annotations support search, automated cataloging, and research workflows.

  • Data flow: OpenText DAM (OTMM) to Prodigy and back to OpenText DAM (OTMM)
  • Business value: Improves archival access and reduces the effort required to maintain rich collection metadata
  • Typical users: Archivists, curators, digital preservation teams, AI specialists

6. Video frame extraction and annotation for broadcast and media intelligence

Broadcast and media organizations can use OTMM as the source of short form and long form video assets, then send selected clips or extracted frames to Prodigy for annotation. Use cases include scene classification, speaker identification, logo detection, content moderation, and shot type labeling. The labeled data can train models for media indexing and automated content analysis.

  • Data flow: OpenText DAM (OTMM) to Prodigy
  • Business value: Speeds up video understanding workflows and improves content reuse across media libraries
  • Typical users: Media operations teams, video archivists, AI teams, compliance teams

7. Active learning loop for prioritizing the most valuable assets to label

Prodigy's active learning approach can be used to identify which OTMM assets should be labeled next based on model uncertainty or business priority. OTMM provides the content pool, while Prodigy selects the most informative images or videos for annotation. This reduces labeling effort and focuses human review on the assets that will improve model performance fastest.

  • Data flow: Bi-directional, with OTMM supplying assets and Prodigy returning labeling priorities or model feedback
  • Business value: Lowers annotation cost and shortens time to usable model performance
  • Typical users: Data science teams, ML engineers, content operations leads
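The selection step in this loop can be sketched with a simple uncertainty-sampling heuristic: assets whose predicted probability sits closest to 0.5 are the ones the model is least sure about, so they are labeled first. This is a generic illustration of the idea behind Prodigy's binary accept/reject workflow, not Prodigy's internal sorter:

```python
def prioritize_by_uncertainty(scored_assets, batch_size=100):
    """Rank OTMM assets for annotation by model uncertainty.

    `scored_assets` maps an asset id to the model's predicted
    probability for the positive class. Probabilities nearest 0.5
    are the most uncertain, so labeling those assets first tends to
    improve the model fastest for the same annotation budget.
    """
    ranked = sorted(scored_assets.items(), key=lambda kv: abs(kv[1] - 0.5))
    return [asset_id for asset_id, _ in ranked[:batch_size]]
```

In a full loop, the returned ids would drive the next OTMM export batch, and the resulting annotations would retrain the model before the next round of scoring.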

8. Governance controlled handoff of sensitive assets for AI training

Organizations often need to train models on restricted or sensitive visual content, such as internal events, proprietary products, or archival materials with access controls. OTMM can enforce permissions, versioning, and approval workflows before assets are shared with Prodigy. After labeling, Prodigy outputs can be stored back in OTMM as governed derivatives or training references.

  • Data flow: OpenText DAM (OTMM) to Prodigy and back to OpenText DAM (OTMM)
  • Business value: Maintains content governance while enabling AI development on controlled datasets
  • Typical users: Security teams, DAM administrators, compliance teams, AI governance teams
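A pre-handoff gate like the one described above can be sketched as a filter over exported asset records. OTMM itself would enforce the real permissions and approval workflows; the `approval_status` and `classification` field names here are assumptions for illustration:

```python
def select_shareable_assets(assets,
                            allowed=frozenset({"public", "internal"})):
    """Keep only OTMM asset records cleared for handoff to Prodigy.

    An asset passes the gate when it has been approved and its
    classification is in the allowed set; anything restricted or
    still awaiting approval is held back from the labeling pool.
    """
    return [
        a for a in assets
        if a.get("approval_status") == "approved"
        and a.get("classification") in allowed
    ]
```

Running the filter on the integration side, in addition to relying on OTMM's own access controls, gives a defense-in-depth check that restricted material never reaches the annotation environment.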

In summary, the strongest integration pattern is OTMM as the trusted content source and Prodigy as the annotation and model training layer. This combination supports better asset governance, faster dataset creation, and more efficient collaboration between content, marketing, archival, and AI teams.

How to integrate and automate OpenText DAM (OTMM) with Prodigy using OneTeg