Google Vision AI - OpenText Core Experience Insights Integration and Automation
Flow: Google Vision AI → OpenText Core Experience Insights
Google Vision AI can automatically tag images with objects, scenes, text, and logos before they are published to a digital asset or content repository. OpenText Core Experience Insights can then track how users interact with those enriched assets, showing which image categories are most viewed, searched, reused, or ignored. This helps content teams identify which visual assets drive the most engagement and which metadata patterns improve discoverability.
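The tagging step can be sketched as a small helper that turns Vision-style label annotations into repository metadata. The `(description, score)` pair shape and the 0.7 confidence threshold are illustrative assumptions, not API defaults; real responses expose `LabelAnnotation` objects with the same fields.

```python
def labels_to_tags(label_annotations, min_score=0.7):
    """Convert Vision-style label annotations into repository metadata tags.

    `label_annotations` is assumed to be an iterable of (description, score)
    pairs, e.g. [("Mountain", 0.96), ("Logo", 0.41)]. Labels below the
    confidence threshold are dropped; the rest become lowercase tags.
    """
    return sorted(
        {desc.lower() for desc, score in label_annotations if score >= min_score}
    )

tags = labels_to_tags([("Mountain", 0.96), ("Sky", 0.88), ("Logo", 0.41)])
# tags == ["mountain", "sky"]
```

Keeping only high-confidence labels avoids polluting the repository taxonomy with noisy tags, which would otherwise skew the engagement metrics downstream.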
Flow: Google Vision AI → OpenText Core Experience Insights
Google Vision AI can detect unsafe, inappropriate, or policy-sensitive imagery during upload. OpenText Core Experience Insights can measure how often users attempt to upload, view, or reject moderated content and identify friction points in the review process. This enables compliance and content operations teams to refine moderation rules, training, and approval workflows based on actual user behavior.
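A minimal sketch of the moderation decision, assuming the SafeSearch result has been flattened to a mapping of category name to likelihood string. The likelihood levels mirror the Vision SafeSearch enum; the block/review thresholds are illustrative policy choices, not API defaults.

```python
# Likelihood levels as returned by Vision SafeSearch, ordered by severity.
LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def moderation_decision(safe_search, block_at="LIKELY", review_at="POSSIBLE"):
    """Map a SafeSearch result (category -> likelihood string) to an action.

    The thresholds are assumed policy settings: anything at or above
    `block_at` is rejected, anything at or above `review_at` is queued
    for human review, and everything else is allowed.
    """
    worst = max(LIKELIHOODS.index(v) for v in safe_search.values())
    if worst >= LIKELIHOODS.index(block_at):
        return "block"
    if worst >= LIKELIHOODS.index(review_at):
        return "review"
    return "allow"

decision = moderation_decision({"adult": "VERY_UNLIKELY", "violence": "POSSIBLE"})
# decision == "review"
```

Logging each decision alongside the upload event is what lets Experience Insights surface how often users hit the review queue and where the friction points are.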
Flow: Google Vision AI → OpenText Core Experience Insights
Google Vision AI can extract text from scanned documents, forms, receipts, and screenshots. OpenText Core Experience Insights can then analyze how users search, open, and act on those OCR-enriched documents. Organizations can determine whether extracted text improves search success, reduces time to locate information, and supports self-service document retrieval.
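The "did extracted text improve search success" question can be sketched as a session-level metric over an assumed event-log shape (dicts with a `type` of `"search"` or `"open"` and a shared `session` id); real Experience Insights exports will differ.

```python
def search_success_rate(events):
    """Share of searches whose session later opened a document.

    `events` is an assumed flat event log: each entry is a dict with
    "type" ("search" or "open") and a "session" id. A search counts as
    successful if any document was opened in the same session.
    """
    opened = {e["session"] for e in events if e["type"] == "open"}
    searches = [e for e in events if e["type"] == "search"]
    if not searches:
        return 0.0
    successes = sum(1 for e in searches if e["session"] in opened)
    return successes / len(searches)
```

Comparing this rate for OCR-enriched versus unenriched document sets is one concrete way to test whether extracted text is actually helping retrieval.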
Flow: Google Vision AI → OpenText Core Experience Insights
Google Vision AI can detect product attributes such as color, shape, packaging, and visible text to enrich catalog images. OpenText Core Experience Insights can measure whether shoppers, merchandisers, or internal users engage more with products that have AI-generated metadata. This helps commerce teams understand which attributes improve search, filtering, and conversion-related behaviors.
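The engagement comparison can be sketched as a simple lift calculation over an assumed catalog export, where each product record carries an `enriched` flag and a `views` count; both field names are placeholders.

```python
def engagement_lift(products):
    """Ratio of mean views for AI-enriched vs. non-enriched catalog items.

    `products` is an assumed list of dicts with "enriched" (bool) and
    "views" (int). Returns None when either group is empty, since no
    comparison is possible.
    """
    enriched = [p["views"] for p in products if p["enriched"]]
    baseline = [p["views"] for p in products if not p["enriched"]]
    if not enriched or not baseline:
        return None
    return (sum(enriched) / len(enriched)) / (sum(baseline) / len(baseline))
```

A lift above 1.0 suggests the AI-generated attributes correlate with more engagement; a proper analysis would also control for product category and placement.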
Flow: Google Vision AI → OpenText Core Experience Insights
Google Vision AI can generate descriptive labels and extract text from images to support accessibility needs. OpenText Core Experience Insights can track whether users rely on those enriched assets more frequently, complete tasks faster, or abandon content less often. Accessibility and UX teams can use this data to prove the value of image descriptions and prioritize the content types that benefit most from enhancement.
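Drafting a description from labels plus OCR output can be sketched as below; the input shapes are assumptions, and the result is a starting point for human review, not a finished accessibility description.

```python
def build_alt_text(labels, ocr_text="", max_labels=3):
    """Compose draft alt text from Vision-style labels and extracted text.

    `labels` is an assumed iterable of (description, score) pairs; the
    top-scoring labels are joined into a phrase, and any OCR text is
    appended so screen-reader users get the visible wording too.
    """
    top = [desc for desc, _ in sorted(labels, key=lambda l: -l[1])[:max_labels]]
    alt = "Image of " + ", ".join(top) if top else "Image"
    if ocr_text.strip():
        alt += f'; contains the text "{ocr_text.strip()}"'
    return alt
```

Tracking task completion and abandonment for assets carrying these drafts versus bare images is how the teams described above would prove the value of the enrichment.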
Flow: Bi-directional
Google Vision AI can classify and enrich images, while OpenText Core Experience Insights can reveal which content types are most frequently accessed, shared, or reused. Together, the platforms can help content owners identify which visual themes, brands, or document types generate the most value and which should be prioritized for curation, archiving, or repurposing.
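Joining the two data sets can be sketched as a per-label view aggregation; the asset record shape (`labels` list plus `views` count) is an assumption about how the label and analytics exports would be merged.

```python
from collections import Counter

def rank_themes(assets):
    """Aggregate per-asset view counts by Vision label to rank visual themes.

    `assets` is an assumed list of dicts: {"labels": [...], "views": int}.
    Returns theme names ordered by total views, most-engaged first, so
    the top entries are curation candidates and the tail entries are
    archiving candidates.
    """
    totals = Counter()
    for asset in assets:
        for label in asset["labels"]:
            totals[label] += asset["views"]
    return [theme for theme, _ in totals.most_common()]
```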
Flow: Google Vision AI → OpenText Core Experience Insights
Google Vision AI can detect objects, scenes, logos, and text to enrich search metadata. OpenText Core Experience Insights can then analyze search success rates, failed searches, and click-through behavior to determine whether the enriched metadata is helping users find content. Search and platform teams can use these insights to refine taxonomy, metadata rules, and content recommendations.
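Surfacing failed searches as taxonomy gaps can be sketched over an assumed search-log shape of `{"query": str, "results": int}` entries; the repeat-failure threshold is an illustrative tuning knob.

```python
def taxonomy_gaps(search_log, min_failures=2):
    """Flag query terms that repeatedly return zero results.

    `search_log` is an assumed list of {"query": str, "results": int}.
    Queries failing at least `min_failures` times are candidates for new
    metadata rules, synonyms, or additional Vision label mappings.
    """
    failures = {}
    for entry in search_log:
        if entry["results"] == 0:
            q = entry["query"].lower()
            failures[q] = failures.get(q, 0) + 1
    return sorted(q for q, n in failures.items() if n >= min_failures)
```

Feeding these terms back into the label-to-tag mapping closes the loop between Vision's enrichment and the search behavior Experience Insights observes.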
Flow: Google Vision AI → OpenText Core Experience Insights
Google Vision AI can detect logos and branded elements in uploaded or shared images. OpenText Core Experience Insights can measure how often brand-approved versus non-approved assets are used across teams, channels, or campaigns. Brand and governance teams can use this to identify misuse, improve asset adoption, and report on compliance trends over time.
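The approved-versus-non-approved split can be sketched as a classifier over the logo descriptions Vision detects in an asset; the approved-brand list here is a placeholder, and real logo annotations expose the brand name in a `description` field.

```python
def brand_compliance(detected_logos, approved=frozenset({"Acme", "AcmePro"})):
    """Classify an asset by the logos Vision detected in it.

    `detected_logos` is an assumed list of brand-name strings taken from
    Vision logo annotations; `approved` is a placeholder allowlist. An
    asset is compliant only if every detected logo is approved.
    """
    if not detected_logos:
        return "no-logo"
    return "approved" if set(detected_logos) <= approved else "non-approved"
```

Tagging each asset with this classification at upload time is what lets Experience Insights report approved-versus-non-approved usage across teams and channels.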