OpenText DAM (OTMM) - Prodigy Integration and Automation
OpenText DAM (OTMM) and Prodigy complement each other well in organizations that manage large volumes of visual content and need to turn that content into high-quality training data for AI and machine learning. OTMM serves as the governed source for approved images, videos, and campaign assets, while Prodigy supports fast, iterative annotation and labeling for model training. Together, they enable a controlled data handoff from content operations to AI teams and back into business workflows.
OTMM can act as the controlled source of product images, event photos, museum collection images, or broadcast stills that need to be labeled for AI use cases. Approved assets are exported or synchronized into Prodigy for annotation tasks such as object detection, image classification, scene tagging, or bounding box creation.
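The handoff from an OTMM export to Prodigy is typically a small transformation step: Prodigy consumes newline-delimited JSON tasks, and image tasks carry an `"image"` key. The sketch below assumes a hypothetical OTMM export shape (the `asset_id`, `rendition_url`, and `title` field names are illustrative, not the actual OTMM schema) and keeps the DAM asset id in each task's `meta` so labels can be traced back later.

```python
import json

def otmm_assets_to_prodigy_tasks(assets):
    """Convert OTMM asset records (hypothetical export shape) into
    Prodigy image tasks: one JSON object per line with an "image" key."""
    tasks = []
    for asset in assets:
        tasks.append({
            "image": asset["rendition_url"],  # assumed field name in the export
            "meta": {
                "otmm_asset_id": asset["asset_id"],  # keep the DAM id for round-tripping
                "title": asset.get("title", ""),
            },
        })
    return "\n".join(json.dumps(task) for task in tasks)

# Illustrative export rows
assets = [
    {"asset_id": "A-1001", "rendition_url": "https://dam.example.com/r/A-1001.jpg",
     "title": "Blue jacket"},
    {"asset_id": "A-1002", "rendition_url": "https://dam.example.com/r/A-1002.jpg"},
]
jsonl = otmm_assets_to_prodigy_tasks(assets)
print(jsonl)
```

The resulting JSONL can be fed to a Prodigy recipe as its input source; the exact recipe depends on the annotation task.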
Images and videos stored in OTMM can be sent to Prodigy to create training labels for automated tagging models. Once models are trained, predicted labels can be returned to OTMM as enriched metadata, improving search, discovery, and asset governance.
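The return trip can be sketched the same way: group accepted Prodigy annotations by the OTMM asset id carried in each task's `meta`, then build metadata-update payloads. The payload shape below is an assumption for illustration, not the actual OTMM REST schema.

```python
import json
from collections import defaultdict

def annotations_to_metadata_updates(annotation_lines, min_score=0.8):
    """Group accepted Prodigy annotations by OTMM asset id and build
    hypothetical metadata-update payloads (shape is illustrative)."""
    labels_by_asset = defaultdict(set)
    for line in annotation_lines:
        ann = json.loads(line)
        if ann.get("answer") != "accept":      # keep human-accepted labels only
            continue
        if ann.get("score", 1.0) < min_score:  # drop low-confidence model labels
            continue
        labels_by_asset[ann["meta"]["otmm_asset_id"]].add(ann["label"])
    return [
        {"asset_id": asset_id, "metadata": {"ai_labels": sorted(labels)}}
        for asset_id, labels in sorted(labels_by_asset.items())
    ]

# Illustrative Prodigy-style output lines
lines = [
    json.dumps({"label": "jacket", "answer": "accept", "score": 0.93,
                "meta": {"otmm_asset_id": "A-1001"}}),
    json.dumps({"label": "logo", "answer": "reject",
                "meta": {"otmm_asset_id": "A-1001"}}),
]
updates = annotations_to_metadata_updates(lines)
print(updates)  # → [{'asset_id': 'A-1001', 'metadata': {'ai_labels': ['jacket']}}]
```

A real integration would post each payload to the OTMM metadata API; the filtering thresholds would be tuned per use case.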
Retail, manufacturing, and distribution organizations can use OTMM as the repository for product imagery and send selected images to Prodigy for labeling attributes such as product category, color, packaging type, logo presence, or damage indicators. The resulting datasets support visual search, product matching, and automated catalog enrichment.
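Attribute labeling of this kind maps naturally onto a multiple-choice task. The sketch below builds a Prodigy-style choice task (the `"options"` key with `id`/`text` entries is the shape used by Prodigy's choice interface); the packaging vocabulary is illustrative and would normally come from the OTMM taxonomy or a controlled vocabulary.

```python
import json

# Illustrative attribute vocabulary
PACKAGING_TYPES = ["box", "bag", "bottle", "blister pack"]

def make_choice_task(image_url, asset_id, options):
    """Build a Prodigy-style multiple-choice task for attribute labeling."""
    return {
        "image": image_url,
        "options": [{"id": opt, "text": opt} for opt in options],
        "meta": {"otmm_asset_id": asset_id},  # keep the DAM id for round-tripping
    }

task = make_choice_task("https://dam.example.com/r/A-2001.jpg", "A-2001", PACKAGING_TYPES)
print(json.dumps(task, indent=2))
```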
Marketing teams can use OTMM to store both approved campaign assets and rejected variants. These assets can be routed to Prodigy to label quality issues such as brand guideline violations, incorrect cropping, low resolution, or unauthorized logo usage. Trained models can then help flag non-compliant assets before publication.
Museums and heritage organizations using OTMM to manage digital photos and videos of physical collections can send selected items to Prodigy for structured annotation. Labels may include artifact type, era, material, condition, exhibition status, or visual characteristics. These annotations support search, automated cataloging, and research workflows.
Broadcast and media organizations can use OTMM as the source of short form and long form video assets, then send selected clips or extracted frames to Prodigy for annotation. Use cases include scene classification, speaker identification, logo detection, content moderation, and shot type labeling. The labeled data can train models for media indexing and automated content analysis.
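For video assets, annotation usually happens on extracted frames rather than whole files. A minimal sketch of the sampling step, computing which frame indices to pull given a clip's duration and frame rate (a real pipeline would hand these indices to a decoder such as ffmpeg):

```python
def frame_indices(duration_s, fps, every_s):
    """Evenly spaced frame indices to extract from a clip for annotation."""
    step = max(1, round(fps * every_s))   # frames between samples, at least 1
    total_frames = int(duration_s * fps)
    return list(range(0, total_frames, step))

# Sample one frame every 2 seconds from a 10-second clip at 25 fps
print(frame_indices(10, 25, 2))  # → [0, 50, 100, 150, 200]
```

Each sampled frame would then become its own Prodigy image task, with the source clip's OTMM asset id and timestamp kept in the task metadata.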
Prodigy's active learning approach can be used to identify which OTMM assets should be labeled next based on model uncertainty or business priority. OTMM provides the content pool, while Prodigy selects the most informative images or videos for annotation. This reduces labeling effort and focuses human review on the assets that will improve model performance fastest.
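A common way to rank assets by model uncertainty is the entropy of the predicted class distribution: flat distributions score high, confident ones low. The sketch below is a generic uncertainty-sampling illustration, not Prodigy's internal implementation; the asset ids and probabilities are made up.

```python
import math

def uncertainty(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(predictions, k=2):
    """Return the k asset ids whose predictions are most uncertain.

    predictions: {asset_id: [class probabilities]}
    """
    ranked = sorted(predictions.items(),
                    key=lambda item: uncertainty(item[1]),
                    reverse=True)
    return [asset_id for asset_id, _ in ranked[:k]]

# Illustrative model outputs for three OTMM assets
preds = {
    "A-1": [0.98, 0.01, 0.01],  # confident: skip for now
    "A-2": [0.40, 0.35, 0.25],  # very uncertain: label first
    "A-3": [0.70, 0.20, 0.10],
}
print(select_for_labeling(preds, k=2))  # → ['A-2', 'A-3']
```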
Organizations often need to train models on restricted or sensitive visual content, such as internal events, proprietary products, or archival materials with access controls. OTMM can enforce permissions, versioning, and approval workflows before assets are shared with Prodigy. After labeling, Prodigy outputs can be stored back in OTMM as governed derivatives or training references.
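The governance gate can be expressed as a simple filter applied before anything leaves OTMM: only approved assets at an allowed access level reach the annotation pool. The field names below are illustrative, not OTMM's actual metadata model.

```python
def exportable_assets(assets, allowed_levels=frozenset({"internal", "public"})):
    """Filter asset records (illustrative field names) so that only
    approved assets at an allowed access level reach Prodigy."""
    return [
        asset for asset in assets
        if asset.get("approval_status") == "approved"
        and asset.get("access_level") in allowed_levels
    ]

catalog = [
    {"asset_id": "A-1", "approval_status": "approved", "access_level": "internal"},
    {"asset_id": "A-2", "approval_status": "pending",  "access_level": "internal"},
    {"asset_id": "A-3", "approval_status": "approved", "access_level": "restricted"},
]
print([a["asset_id"] for a in exportable_assets(catalog)])  # → ['A-1']
```

In practice these checks would be enforced by OTMM's own permission and workflow features; the filter simply mirrors that policy at the integration boundary.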
In summary, the strongest integration pattern is OTMM as the trusted content source and Prodigy as the annotation and model training layer. This combination supports better asset governance, faster dataset creation, and more efficient collaboration between content, marketing, archival, and AI teams.