Openstream.ai has announced the expansion of its multimodality patent portfolio with the issuance of a patent titled “Temporal Behavioral Analysis of Multi-Modal Conversations in a Question and Answer System.”
This unique approach advances a multi-modal virtual agent’s ability to engage in human-like dialogue with users, improves the quality of the Customer Experience (CX), and provides enterprises with deep insights about their customers across multiple use cases.
Openstream.ai’s latest patent represents a significant advancement in the field of conversational analysis and AI-driven interactions.
The techniques presented uniquely identify important aspects of conversations and create hierarchical models that can lead to more effective and contextually aware question-and-answer systems, improving how people interact with AI and enhancing the overall user experience.
“This patent adds to our expansive portfolio of intellectual capital and provides the best Conversational AI platform and capabilities for our clients,” said Openstream.ai CEO Raj Tumuluri.
“By combining various sensory inputs such as speech, gesture, vision, and touch while leveraging the latest advances in artificial intelligence disciplines and finely tuned AI capabilities, Openstream.ai’s Eva (Enterprise Virtual Assistant)-enabled applications excel at delivering natural, human-like interaction.”
According to the 2023 Gartner Emerging Technologies: Tech Innovators in Advanced Virtual Assistants research report, multimodal interaction will be a standard virtual assistant (VA) feature by 2025, up from fewer than 2% of VAs supporting it in 2021.
Enterprises need to evolve from using basic chatbots to plan-based multimodal Conversational AI-enabled agents that provide superior customer and employee experiences.
This latest breakthrough is already embedded in Openstream.ai’s Eva platform and enables superior customer and employee dialogue with personalized, fluid human-like conversations at scale.
Creating a comprehensive Conversational AI (CAI) platform for Visionary Enterprises involves integrating various cutting-edge capabilities to provide a robust and versatile solution.
“This patent deftly combines conversational multimodal interaction and neural architectures by integrating multimodal fusion with neural attention,” said Dr. Phil Cohen, Chief Scientist, Openstream.ai.
“This versatility allows users to engage in conversations using their preferred communication methods.”
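To make the idea of “multimodal fusion with neural attention” concrete, here is a minimal illustrative sketch: per-modality feature vectors (speech, gesture, vision) are scored against a query vector, the scores are softmax-normalized into attention weights, and the fused representation is their weighted sum. All function and variable names here are assumptions for illustration only and do not reflect Openstream.ai’s patented implementation.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(features: dict, query: np.ndarray) -> np.ndarray:
    """Attention-weighted fusion of per-modality feature vectors.

    Each modality is scored against `query`; softmax turns scores into
    attention weights; the fused vector is the weighted sum of features.
    """
    names = list(features)
    feats = np.stack([features[n] for n in names])  # shape (M, D)
    scores = feats @ query                          # shape (M,)
    weights = softmax(scores)                       # sums to 1
    return weights @ feats                          # shape (D,)

rng = np.random.default_rng(0)
d = 8
inputs = {m: rng.normal(size=d) for m in ("speech", "gesture", "vision")}
fused = fuse_modalities(inputs, query=rng.normal(size=d))
print(fused.shape)  # (8,)
```

In a real system the query would typically come from dialogue context, letting the agent attend more to gesture in one turn and to speech in another.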
Openstream.ai’s visionary clients benefit from our feature-rich offering underscored by:
- A complete plan-based multimodal Conversational AI platform.
- Explainability, Reasoning, Trust, Data Governance, and Privacy Protection features that reduce risk as deployments grow.
- Extensive capabilities across knowledge acquisition, pre-trained models and ontologies, semantic processing, emotion, customer engagement, multi-agent interactions, sentiment analysis, business process adaptation, and multi-lingual support.
Eva’s multi-modal runtime can also be deployed at the edge in iOS and Android mobile apps. Even in situations with limited connectivity or resources, Eva continues to function effectively by gracefully degrading to smaller local models.
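The graceful-degradation pattern described above can be sketched as a simple tiered model selector: prefer a larger cloud-hosted model when the network is reachable, and fall back to a smaller on-device model otherwise. The tier names and connectivity check below are illustrative assumptions, not Eva’s actual API.

```python
import socket

# Hypothetical model tiers, ordered by preference.
# (name, needs_connectivity)
MODEL_TIERS = [
    ("cloud-large", True),   # larger model served remotely
    ("edge-small", False),   # smaller model runs fully on-device
]

def has_connectivity(host: str = "8.8.8.8", port: int = 53,
                     timeout: float = 1.0) -> bool:
    """Best-effort reachability probe (illustrative only)."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def select_model(online: bool) -> str:
    """Pick the first tier whose connectivity requirement is satisfied."""
    for name, needs_net in MODEL_TIERS:
        if not needs_net or online:
            return name
    return MODEL_TIERS[-1][0]  # last tier is the offline fallback

print(select_model(online=True))   # cloud-large
print(select_model(online=False))  # edge-small
```

The key design point is that the offline tier never depends on the network, so the assistant keeps responding, with reduced capacity, when connectivity drops.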
These features make Eva a powerful tool for creating rich, dynamic, and context-aware conversational experiences across various platforms and devices.
This patent is the latest in a series of milestones for the company. In 2023 alone, Openstream.ai was included in over 20 Gartner analyst research reports, including the Gartner Emerging Tech and Hype Cycle research reports and the Gartner Magic Quadrant for Enterprise Conversational AI, in which it was named as the sole Visionary for the second year in a row.
Author: Robyn Coppell
Published On: 5th Oct 2023