Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. An LLM fine-tuned on proprietary data amounts to an AI application, and this is what these innovative companies are creating. The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.
Forbes Launches New Generative AI Search Tool, Adelaide, Powered By Google Cloud
Forbes announced the beta launch of Adelaide, its purpose-built news search tool using Google Cloud. The tool offers visitors AI-driven personalized recommendations and insights from Forbes’ trusted journalism. The launch makes Forbes one of the first major news publishers to provide personalized, relevant news content recommendations for its global readers leveraging generative AI.
Select visitors to Forbes.com can access Adelaide through the website, where they can explore content spanning the previous twelve months, offering deeper insights into the topics they are searching for. Adelaide, named after the wife of Forbes founder B.C. Forbes, has been crafted in-house by Forbes’ dedicated tech and data team, ensuring seamless integration and optimal functionality.
What sets Adelaide apart is its search- and conversation-based approach, making content discovery easier and more intuitive for Forbes’ global audience. The tool generates individualized responses to user queries based exclusively on Forbes articles, and is built using Google Cloud Vertex AI Search and Conversation. With this integration, Adelaide will comb through Forbes’ trusted content archive from the past twelve months, continuously learning and adapting to individual reader preferences.
“Forbes was an early adopter of AI nearly five years ago – and now AI is a foundational technology for our first-party data platform, ForbesOne,” said Vadim Supitskiy, Chief Digital and Information Officer, Forbes. “As we look to the future, we are enabling our audiences to better understand how AI can be a tool for good and enhance their lives. Adelaide is poised to revolutionize how Forbes audiences engage with news and media content, offering a more personalized and insightful experience from start to finish.”
DataGPT™ Launches out of Stealth to Help Users Talk Directly to Their Data Using Everyday Language
DataGPT, the leading provider of conversational AI data analytics software, announced the launch of the DataGPT AI Analyst, uniting the creative, comprehension-rich side of a large language model (LLM) (the “right brain”) with the logic and reasoning of advanced analytics techniques (the “left brain”). This combination makes sophisticated analysis accessible to more people without compromising accuracy and impact.
Companies everywhere are struggling to conduct analysis as quickly as business questions evolve. Current business intelligence (BI) solutions fall short, lacking iterative querying needed to dig deeper into data. Similarly, consumer-facing generative AI tools are unable to integrate with large databases. Despite investing billions in complex tooling, 85% of business users forgo them, wasting time and money as data teams manually parse through rigid dashboards, further burdened by ad hoc and follow-up requests. Conversational AI data analysis offers the best of both worlds.
“Our vision at DataGPT is crystal clear: we are committed to empowering anyone, in any company, to chat directly to their data,” said Arina Curtis, CEO and co-founder, DataGPT. “Our DataGPT software, rooted in conversational AI data analysis, not only delivers instant, analyst-grade results, but provides a seamless, user-friendly experience that bridges the gap between rigid reports and informed decision making.”
Predibase Launches New Offering to Fine-tune and Serve 100x More LLMs at No Additional Cost – Try Now with Llama-2 for Free
Predibase, the developer platform for open-source AI, announced the availability of their software development kit (SDK) for efficient fine-tuning and serving. This new offering enables developers to train smaller, task-specific LLMs using even the cheapest and most readily available GPU hardware within their cloud. Fine-tuned models can then be served using Predibase’s lightweight, modular LLM serving architecture that dynamically loads and unloads models on demand in seconds. This allows multiple models to be served without additional costs. This approach is so efficient that Predibase can now offer unlimited fine-tuning and serving of LLaMa-2-13B for free in a 2-week trial.
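The core idea behind serving many fine-tuned models at no extra cost – loading a model only when it is requested and evicting an idle one to make room – can be illustrated with a simple least-recently-used (LRU) cache. This is a conceptual sketch, not Predibase’s actual implementation; the `ModelCache` class and the toy loader are hypothetical.

```python
from collections import OrderedDict

class ModelCache:
    """Illustrative LRU cache: keeps at most `capacity` models resident,
    loading on demand and evicting the least-recently-used one."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader           # callable: model_name -> model object
        self.resident = OrderedDict()  # model_name -> loaded model

    def get(self, name):
        if name in self.resident:
            self.resident.move_to_end(name)  # mark as most recently used
            return self.resident[name]
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)  # "unload" the LRU model
        self.resident[name] = self.loader(name)
        return self.resident[name]

# Toy usage: "loading" a model is just creating a tagged dict here.
cache = ModelCache(capacity=2, loader=lambda n: {"name": n})
cache.get("llama-2-13b-support")
cache.get("llama-2-13b-legal")
cache.get("llama-2-13b-support")    # cache hit, becomes most recent
cache.get("llama-2-13b-marketing")  # evicts "llama-2-13b-legal"
print(list(cache.resident))
```

In a real serving system the loader would pull model weights (or LoRA adapters) into GPU memory and eviction would free that memory, but the bookkeeping follows the same pattern.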
“More than 75% of organizations won’t use commercial LLMs in production due to concerns over ownership, privacy, cost, and security, but productionizing open-source LLMs comes with its own set of infrastructure challenges,” said Dev Rishi, co-founder and CEO of Predibase. “Even with access to high-performance GPUs in the cloud, training costs can reach thousands of dollars per job due to a lack of automated, reliable, cost-effective fine-tuning infrastructure. Debugging and setting up environments require countless engineering hours. As a result, businesses can spend a fortune even before getting to the cost of serving in production.”
DataStax Launches New Integration with LangChain, Enables Developers to Easily Build Production-ready Generative AI Applications
DataStax, the company that powers generative AI applications with real-time, scalable data, announced a new integration with LangChain, the most popular orchestration framework for developing applications with large language models (LLMs). The integration makes it easy to add Astra DB – the real-time database for developers building production Gen AI applications – or Apache Cassandra®, as a new vector source in the LangChain framework.
As many companies implement retrieval augmented generation (RAG) – the process of providing context from outside data sources to deliver more accurate LLM query responses – into their generative AI applications, they require a vector store that delivers real-time updates with minimal latency on critical production workloads.
Generative AI applications built with RAG stacks require a vector-enabled database and an orchestration framework like LangChain to provide memory or context to LLMs for accurate and relevant answers. Developers use LangChain as the leading AI-first toolkit to connect their application to different data sources.
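Stripped of any particular database or framework, the retrieval step at the heart of RAG can be sketched in a few lines: embed the query, rank stored passages by similarity, and prepend the best matches to the prompt. The sketch below uses a toy bag-of-words embedding purely for illustration; a production stack would use a learned embedding model and a vector database such as Astra DB.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use a learned model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Astra DB is a real-time vector database built on Apache Cassandra.",
    "LangChain is an orchestration framework for LLM applications.",
    "RAG supplies outside context to an LLM at query time.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=2):
    """Return the k documents most similar to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda d: cosine(qv, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

query = "What does RAG give an LLM?"
context = retrieve(query)
prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
print(prompt)
```

The assembled prompt then goes to the LLM, which answers using the retrieved passages rather than its training data alone.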
The new integration lets developers leverage the power of the Astra DB vector database for their LLM, AI assistant, and real-time generative AI projects through the LangChain plugin architecture for vector stores. Together, Astra DB and LangChain help developers take advantage of framework features like vector similarity search, semantic caching, term-based search, LLM-response caching, and data injection from Astra DB (or Cassandra) into prompt templates.
“In a RAG application, the model receives supplementary data or context from various sources — most often a database that can store vectors,” said Harrison Chase, CEO, LangChain. “Building a generative AI app requires a robust, powerful database, and we ensure our users have access to the best options on the market via our simple plugin architecture. With integrations like DataStax’s LangChain connector, incorporating Astra DB or Apache Cassandra as a vector store becomes a seamless and intuitive process.”
“Developers at startups and enterprises alike are using LangChain to build generative AI apps, so a deep native integration is a must-have,” said Ed Anuff, CPO, DataStax. “The ability for developers to easily use Astra DB as their vector database of choice, directly from LangChain, streamlines the process of building the personalized AI applications that companies need. In fact, we’re already seeing customers benefit from our joint technologies as healthcare AI company, Skypoint, is using Astra DB and LangChain to power its generative AI healthcare model.”
Seismic Fall 2023 Release leads with new generative AI capabilities to unlock growth and boost productivity
Seismic, a leader in enablement, announced its Fall 2023 Product Release, which brings several new generative AI-powered capabilities to the Seismic Enablement Cloud, including two major innovations in Aura Copilot and Seismic for Meetings. In addition, the company launched Seismic Exchange, a new centralized hub for all Seismic partner apps, integrations, and solutions for customers to get the most out of their tech stack.
“Seismic Aura continues to get smarter and better, leading to AI-powered products like Seismic for Meetings which will automate – and in some cases, eliminate – manual tasks for salespeople like meeting preparation, summarization, and follow-up,” said Krish Mantripragada, Chief Product Officer, Seismic. “Our new offering of Aura Copilot packages AI insights, use cases, workflows, and capabilities to enable our customers to grow and win more business. We’re excited to deliver these new innovations from our AI-powered enablement roadmap to our customers and show them what the future looks like this week at Seismic Shift.”
IBM Launches watsonx Code Assistant, Delivers Generative AI-powered Code Generation Capabilities Built for Enterprise Application Modernization
IBM (NYSE: IBM) launched watsonx Code Assistant, a generative AI-powered assistant that helps enterprise developers and IT operators code more quickly and more accurately using natural language prompts. The product currently delivers on two specific enterprise use cases. First, IT Automation with watsonx Code Assistant for Red Hat Ansible Lightspeed, for tasks such as network configuration and code deployment. Second, mainframe application modernization with watsonx Code Assistant for Z, for translation of COBOL to Java.
Designed to accelerate development while maintaining the principles of trust, security, and compliance, the product leverages generative AI based on IBM’s Granite foundation models for code running on IBM’s watsonx platform. Granite uses a decoder architecture – the architecture underpinning large language models – which predicts the next item in a sequence to support natural language processing tasks. IBM is exploring opportunities to tune watsonx Code Assistant with additional domain-specific generative AI capabilities to assist in code generation, code explanation, and the full end-to-end software development lifecycle to continue to drive enterprise application modernization.
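Next-token prediction, the decoder-style training objective described above, can be illustrated at toy scale with a bigram model that predicts the most frequent follower of each token. This is a didactic sketch only, far simpler than the transformer architecture Granite actually uses.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" of tokenized text.
corpus = "move cobol to java move cobol code move legacy cobol to java".split()

# Count how often each token follows each other token.
followers = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    followers[cur][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training."""
    return followers[token].most_common(1)[0][0]

print(predict_next("cobol"))  # 'to' follows 'cobol' twice, 'code' once
```

A real LLM replaces the frequency table with a neural network that conditions on the entire preceding context, but the objective – guess what comes next – is the same.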
“With this launch, watsonx Code Assistant joins watsonx Orchestrate and watsonx Assistant in IBM’s growing line of watsonx assistants that provide enterprises with tangible ways to implement generative AI,” said Kareem Yusuf, Ph.D, Senior Vice President, Product Management and Growth, IBM Software. “Watsonx Code Assistant puts AI-assisted code development and application modernization tools directly into the hands of developers – in a naturally integrated way that is designed to be non-disruptive – to help address skills gaps and increase productivity.”
Twelve Labs Breaks New Ground With First-of-its-kind Video-to-text Generative APIs
Twelve Labs, the video understanding company, announced the debut of game-changing technology along with the release of its public beta. Twelve Labs is the first in its industry to commercially release video-to-text generative APIs powered by its latest video-language foundation model, Pegasus-1. The model enables novel capabilities like Summaries, Chapters, Video Titles, and Captioning from videos – even those without audio or text – truly extending the boundaries of what is possible.
This groundbreaking release comes at a time when language models’ training objective previously had been to guess the most probable next word. This task alone enabled new possibilities to emerge, ranging from planning a set of actions to solve a complex problem, to effectively summarizing a 1,000-page text, to passing the bar exam. While mapping visual and audio content to language may be viewed similarly, solving video-language alignment, as Twelve Labs has with this release, is incredibly difficult – yet by doing so, Twelve Labs’ latest functionality solves a myriad of other problems no one else has been able to overcome.
The company has uniquely trained its multimodal AI model to solve complex video-language alignment problems. Twelve Labs’ proprietary model, evolved, tested, and refined for its public beta, leverages all of the components present in videos like action, object, and background sounds, and it learns to map human language to what’s happening inside a video. This is far beyond the capabilities in the existing market, and its APIs arrive as OpenAI rolls out voice and image capabilities for ChatGPT, signaling that a shift is underway from unimodal to multimodal AI.
“The Twelve Labs team has consistently pushed the envelope and broken new ground in video understanding since our founding in 2021. Our latest features represent this tireless work,” said Jae Lee, co-founder and CEO of Twelve Labs. “Based on the remarkable feedback we have received, and the breadth of test cases we’ve seen, we are incredibly excited to welcome a broader audience to our platform so that anyone can use best-in-class AI to understand video content without manually watching thousands of hours to find what they are looking for. We believe this is the best, most efficient way to make use of video.”
Privacera Announces the General Availability of Its Generative AI Governance Solution Providing a Unified Platform for Data and AI Security
Privacera, the AI and data security governance company founded by the creators of Apache Ranger™ and provider of the industry’s first comprehensive generative AI governance solution, announced the General Availability (GA) of Privacera AI Governance (PAIG). PAIG allows organizations to securely innovate with generative AI (GenAI) technologies by securing the entire AI application lifecycle: discovering and securing sensitive fine-tuning data, Retrieval Augmented Generation (RAG) data, and the user interactions feeding into AI-powered models; securing model outputs; and continuously monitoring AI governance through comprehensive audit trails. Securing sensitive data and managing other risks with AI applications is crucial to enable organizations to accelerate their GenAI product strategies.
The emergence of Large Language Models (LLMs) is providing a vast range of opportunities to innovate and refine new experiences and products. Whether it’s content creation, developing new experiences around virtual assistance, or improved productivity around code development, data-driven organizations large and small are going to invest in diverse LLM-powered applications. With these opportunities comes an increased need to secure and govern the use of LLMs within and outside of any enterprise. The risks include sensitive and unauthorized data exposure, IP leakage, abuse of models, and regulatory compliance failures.
“With PAIG, Privacera is becoming the unified AI and data security platform for today’s modern data applications and products,” said Balaji Ganesan, co-founder and CEO of Privacera. “Data-driven organizations need to think about how GenAI fits in their overall security and governance strategy. This will enable them to achieve enterprise-grade security in order to fully leverage GenAI to transform their businesses without exposing the business to unacceptable risk. Our new product capabilities allow enterprises to secure the end-to-end lifecycle for data and AI applications – from fine-tuning the LLMs, protecting VectorDB to validating and monitoring user prompts and replies to AI at scale.”
Domino Fall 2023 Release Expands Platform to Fast-Track and Future-Proof All AI, including GenAI
Domino Data Lab, provider of the Enterprise AI platform trusted by more than 20% of the Fortune 500, announced powerful new capabilities for building AI, including Generative AI (GenAI), rapidly and safely at scale. Its Fall 2023 platform update expands Domino’s AI Project Hub to incorporate contributions from partners on the cutting edge of AI, introduces templates and code generation tools to accelerate and guide responsible data science work with best practices, and strengthens data access and governance capabilities.
By enabling data scientists with the latest in open-source and commercial technologies, Domino’s AI Project Hub now accelerates the development of real-world AI applications with pre-packaged reference projects integrating the best of the AI ecosystem. Both Domino customers and now partners can contribute templated projects to the AI Hub. Customers can adapt contributed projects to their unique requirements, IP, and data—to build AI applications such as fine-tuning LLMs for text generation, enterprise Q&A chatbots, sentiment analysis of product reviews, predictive modeling of energy output, and more.
AI Hub Project Templates pre-package state-of-the-art models with environments, source code, data, infrastructure, and best practices – so enterprises can jumpstart AI productivity using a wide variety of use cases, including natural language processing, computer vision, generative AI, and more. With its initial release, Domino AI Project Hub includes templates to build machine learning models for classic regression and classification tasks, advanced applications such as fault detection using computer vision, and generative AI applications using the latest foundation models from Amazon Web Services, Hugging Face, OpenAI, Meta, and more.
Mission Cloud Launches New Generative AI Podcast Hosted by Generative AI
Mission Cloud, a US-based Amazon Web Services (AWS) Premier Tier Partner, launched Mission: Generate, a new, innovative podcast cutting through the noise and helping businesses and technology professionals discover what’s real and what’s simply hype when it comes to the generative AI landscape. The podcast will feature hosts Dr. RyAIn Ries and CAIsey Samulski – AI versions of Mission Cloud’s Dr. Ryan Ries, Practice Lead, Data, Analytics, AI & Machine Learning, and Casey Samulski, Sr. Product Marketing Manager – discussing real-world generative AI applications that drive business success and industry evolution.
That’s right! There are human voices on this podcast, but no actual humans talking. Generative AI was used to synthesize the conversations to create a generative AI podcast, by generative AI, about generative AI.
“Generative AI is a hot topic, but how traditional media typically talks about it is pretty loose in terms of accuracy and often exclusively about chatbots, such as ChatGPT,” said Samulski. “There is much more to gen AI than chat, however. And businesses can actually accomplish incredible things and boost revenue and productivity if they know how to leverage it to build real solutions. The creation of this podcast, for example, is just the tip of the iceberg in terms of how far we can go with this technology and we hope to provide a truly educational experience by showcasing real-life use cases we’ve built for actual customers.”
Druva Supercharges Autonomous Protection With Generative AI
Druva, the at-scale SaaS platform for data resiliency, unveiled Dru, the AI copilot for backup that revolutionizes how customers engage with their data protection solutions. Dru allows both IT and business users to get critical information through a conversational interface, helping customers reduce protection risks, gain insight into their protection environment, and quickly navigate their solution through a dynamic, customized interface. Dru’s generative AI capabilities empower seasoned backup admins to make smarter decisions and novice admins to perform like experts.
In an ever-evolving digital world, IT teams are overburdened with an increasing amount of responsibilities and complexity as businesses try to do more with less. While many organizations are promising the potential value of AI, Dru addresses real customer challenges today. With Dru, IT teams gain a new way to access information and drive actions through simple conversations to further increase productivity.
“We believe that the future is autonomous, and Druva is committed to pushing the forefront of innovation. We are listening closely to customer pain points and being thoughtful with how we build and incorporate Dru into our platform to ensure we’re solving real-world problems,” said Jaspreet Singh, CEO at Druva. “Our customers are already leveraging the efficiency benefits of our 100% SaaS, fully managed platform, and now with generative AI integrated directly into the solution, they can further expedite decision-making across their environment.”
Prevedere Introduces Prevedere Generate, A Generative AI Solution That Brings Powerful New Forecasting Capabilities to Business Users Worldwide
Prevedere, a leading provider of global intelligence and technology for advanced predictive planning, introduced a new generative AI product, Prevedere Generate, into its advanced predictive platform. Featuring a natural language chat interface, Prevedere Generate overlays the Prevedere solution suite, enabling business users to quickly and easily incorporate industry and company-specific economic and consumer data into their forecast models to gain new insights related to leading indicators and business trends.
Since its inception, Prevedere’s predictive AI engine, driven by external and internal data, has empowered enhanced decision-making and forecasting capabilities, enabling business leaders to reduce risk, identify market opportunities, and foresee impending economic and market shifts. Prevedere Generate builds on this capability by offering a natural language search that allows business leaders to uncover leading external drivers and add more context about historical and future events related to these indicators in record time.
“We’re committed to enhancing our generative AI capabilities to further accelerate customer adoption and drive market expansion worldwide,” said Rich Fitchen, Chief Executive Officer at Prevedere. “Our ongoing efforts combine generative AI with our patented technology and extensive econometric analysis expertise, allowing customers to quickly access actionable insights presented in easily digestible formats.”
Cognizant and Vianai Systems Announce Strategic Partnership to Advance Generative AI Capabilities
Cognizant (Nasdaq: CTSH) and Vianai Systems, Inc. announced the launch of a global strategic go-to-market partnership to accelerate human-centered generative AI offerings. This partnership leverages Vianai’s hila™ Enterprise platform alongside Cognizant’s Neuro® AI, creating a seamless, unified interface to unlock predictive, AI-driven decision making. For both companies, this partnership is expected to enable growth opportunities in their respective customer bases, and through Cognizant’s plans to resell Vianai solutions.
Vianai’s hila Enterprise provides clients a platform to safely and reliably deploy any large language model (LLM), optimized and fine-tuned to speak to their systems of record – both structured and unstructured data, enabling clients to better analyze, discover and explore their data leveraging the conversational power of generative AI.
In addition, hila Enterprise includes vianops, a next-generation LLM monitoring platform for AI-driven enterprises, which monitors and analyzes LLM performance to proactively uncover opportunities to continually improve the reliability and trustworthiness of LLMs for clients.
“In every business around the world, there is a hunger to harness the power of AI, but serious challenges around hallucinations, price-performance and lack of trust are holding enterprises back. That’s why we built hila Enterprise, a platform that delivers trusted, human-centered applications of AI,” said Dr. Vishal Sikka, Founder and Chief Executive Officer of Vianai Systems. “In Cognizant, we have found a strategic partner with a distinguished history of delivering innovative services. Together we will deliver transformative applications of AI that businesses can truly rely on, built on the trusted foundation of hila Enterprise and Cognizant’s Neuro AI platform.”
“Being able to monitor and improve LLM performance is critical to unlocking the true power of generative AI,” said Ravi Kumar S, Cognizant’s Chief Executive Officer. “With Vianai’s platform and our Neuro AI platform, we believe we will be able to offer our clients a high-quality solution to support seamless data analysis with predictive decision-making capabilities.”
Introducing Tynker Copilot – The First-Ever LLM-Powered Coding Companion for Young Coders
Tynker, the leading game-based coding platform that has engaged over 100 million kids, introduced “Tynker Copilot.” Leveraging the capabilities of Large Language Models (LLMs), Tynker Copilot empowers young innovators aged 6-12. It provides a seamless interface for these budding developers to transform their ideas into visual block code for apps and games. Additionally, when exploring existing projects, kids benefit from the tool’s ability to explain block code fragments, ensuring a deeper understanding. Tynker Copilot allows children to build confidence as they work with AI, laying a solid foundation for their future. With this launch, coding education takes a significant leap forward.
Tynker’s introduction of the Copilot feature marks a significant industry milestone. Until now, the capabilities of LLMs have not been fully utilized for the younger age group. Tynker Copilot empowers children as young as 6 to input text commands like “Design a space-themed knock-knock joke” or “Teach me how to build a Fruit-ninja style game” and receive block code outputs with step-by-step instructions. Moreover, when debugging existing projects, students can submit block-code snippets and benefit from LLM-generated natural language explanations.
“Our ethos at Tynker is grounded in innovation and breaking boundaries,” stated Srinivas Mandyam, CEO of Tynker. “We’re exhilarated to channel the latest advancements in AI technology to serve our youngest learners, fostering an environment of curiosity and growth. While the potential for such a tool is vast, our prime objective remains to employ it as an empowering educational aid, not a shortcut.”
care.ai Builds Advanced Solutions for Nurses with Google Cloud’s Generative AI and Data Analytics on Its Smart Care Facility Platform
care.ai announced it is building Google Cloud’s generative AI and data analytics tools into its Smart Care Facility Platform to improve healthcare facility management and patient care, and move toward its vision for predictive, smart care facilities.
By leveraging Google Cloud’s gen AI tools, including Vertex AI, and analytics and business intelligence products BigQuery and Looker, care.ai’s Smart Care Facility Platform aims to help reduce administrative burdens, mitigate staffing shortages, and free up clinicians to spend more time with patients in 1,500 acute and post-acute facilities where care.ai’s platform is already deployed.
“This collaboration brings us closer to our vision of predictive smart care facilities, as opposed to the current reactive care model, which relies on latent, manual, sometimes inaccurate data entry,” said Chakri Toleti, CEO of care.ai. “Our vision is to enable the era of the smart hospital by making gen AI and ambient intelligence powerful assistive technologies, empowering clinicians, making healthcare safer, smarter, and more efficient.”
Talkdesk Launches New Generative AI Features that Make AI More Responsible, Accurate, and Accessible in the Contact Center
Talkdesk®, Inc., a global AI-powered contact center leader for enterprises of all sizes, announced significant product updates that deepen the integration of generative AI (GenAI) within its flagship Talkdesk CX Cloud platform and Industry Experience Clouds. Now, companies across industries can easily deploy, monitor, and fine-tune GenAI in the contact center with no coding experience, eliminate inaccurate and irresponsible AI use and subsequent brand risk, and create a powerful personalized experience for customers.
Tiago Paiva, chief executive officer and founder of Talkdesk, said: “About a year after its debut, GenAI significantly builds upon the benefits of artificial intelligence and has already proven to be a powerful tool for businesses. But as more enterprises deploy GenAI within business functions, it’s clear that more work needs to be done to ensure accuracy, responsibility, and accessibility of the technology. At Talkdesk, we’re taking a stand in the CCaaS industry to ensure that GenAI within the contact center does no harm to the business or its customers, provides the right level of personalized experiences across the customer journey, and gives more businesses access to its benefits.”