Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs) that we thought it would be a timely service for readers to start a new channel along these lines. An LLM fine-tuned on proprietary data becomes an AI application, and that is what these innovative companies are creating. The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.
TruEra Launches “TruEra AI Observability,” First Full Lifecycle AI Observability Solution Covering Both Generative and Traditional AI
TruEra launched TruEra AI Observability, a full-lifecycle AI observability solution providing monitoring, debugging, and testing for ML models in a single SaaS offering. TruEra AI Observability now covers both generative and traditional (discriminative) ML models, meeting customer needs for observability across their full portfolio of AI applications, as interest in developing and monitoring LLM-based apps is accelerating.
Development of LLM-based applications has increased dramatically since the launch of ChatGPT. However, LLM-based applications carry well-known risks of hallucination, toxicity, and bias. TruEra AI Observability offers new capabilities for testing and tracking LLM apps in development and in live use, so that risks are minimized while accelerating LLM app development. The product capabilities were informed by the traction of TruLens – TruEra’s open source library for evaluating LLM applications.
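The announcement doesn’t show TruLens’s actual API, but the evaluation idea behind it – scoring an LLM app’s answers with simple “feedback functions” – can be sketched in a few lines of Python. The `relevance` function and `run_with_feedback` helper below are hypothetical stand-ins for illustration, not TruLens code (real feedback functions typically call a grader LLM rather than count word overlap):

```python
def relevance(question: str, answer: str) -> float:
    """Toy feedback function: lexical overlap between question and answer, 0..1.
    A real implementation would usually ask a grader LLM for a score."""
    q = set(question.lower().split())
    a = set(answer.lower().split())
    return len(q & a) / len(q) if q else 0.0

def run_with_feedback(app, question, feedbacks):
    """Run an LLM app on a question and attach a score from each feedback function."""
    answer = app(question)
    scores = {fn.__name__: fn(question, answer) for fn in feedbacks}
    return {"answer": answer, "scores": scores}

# Usage with a stubbed "app" standing in for a real LLM application:
result = run_with_feedback(
    lambda q: "Paris is the capital of France",
    "what is the capital of France",
    [relevance],
)
```

Logging these scores over time is what turns one-off evaluation into the kind of continuous observability the product describes.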
“TruEra’s initial success was driven by customers in banking, insurance, and other financial services, whose high security requirements were well met by existing TruEra on-prem solutions,” said TruEra Co-founder, President and Chief Scientist Anupam Datta. “Now, with TruEra AI Observability, we are bringing ML monitoring, debugging, and testing to a broader range of organizations, who prefer the rapid deployment, scalability, and flexibility of SaaS. We were excited to see hundreds of users sign up in the early beta period, while thousands have engaged with our hands-on educational offerings and community. The solution brings incredible monitoring and testing capabilities to everyone developing machine learning models and LLM applications.”
Vianai Introduces Powerful Open-Source Toolkit to Verify Accuracy of LLM-Generated Responses
Vianai Systems, a leader in human-centered AI (H+AI) for the enterprise, announced the release of veryLLM, an open-source toolkit that enables reliable, transparent and transformative AI systems for enterprises. The veryLLM toolkit empowers developers and data scientists to build a universally needed transparency layer into Large Language Models (LLMs), to evaluate the accuracy and authenticity of AI-generated responses — addressing a critical challenge that has prevented many enterprises from deploying LLMs due to the risks of false responses.
AI hallucinations, in which LLMs create false, offensive, or otherwise inaccurate or unethical responses, raise particularly challenging issues for enterprises, as the risks of financial, reputational, legal, and/or ethical consequences are extremely high. The hallucination problem, left unaddressed by LLM providers, has continued to plague the industry and hinder adoption, with many enterprises simply unwilling to bring the risks of hallucinations into their mission-critical enterprise systems. Vianai is releasing the veryLLM toolkit (under the Apache 2.0 open-source license) to make this capability available for anyone to use, to build trust and to drive adoption of AI systems.
The veryLLM toolkit introduces a foundational ability to understand the basis of every sentence generated by an LLM via several built-in functions. These functions are designed to classify statements into distinct categories using context pools that the LLMs are trained on (e.g., Wikipedia, Common Crawl, Books3 and others), with the introductory release of veryLLM based on a subset of Wikipedia articles. Given that most publicly disclosed LLM training datasets include Wikipedia, this approach provides a robust foundation for the veryLLM verification process. Developers can use veryLLM in any application that leverages LLMs, to provide transparency on AI generated responses. The veryLLM functions are designed to be modular, extensible, and work alongside any LLM, providing support for existing and future language models.
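The announcement doesn’t detail veryLLM’s function signatures, so the sketch below only illustrates the general idea it describes: classifying each sentence of a generated response against a reference context pool. The `classify` and `support_score` names, the word-overlap heuristic, and the threshold are all hypothetical:

```python
def support_score(sentence: str, corpus_words: set) -> float:
    """Fraction of the sentence's longer content words found in the reference corpus."""
    words = {w.strip(".,").lower() for w in sentence.split() if len(w) > 3}
    if not words:
        return 0.0
    return len(words & corpus_words) / len(words)

def classify(response: str, corpus: str, threshold: float = 0.5):
    """Label each sentence of an LLM response as 'supported' or 'unverified'
    based on overlap with a reference context pool (e.g., Wikipedia text)."""
    corpus_words = {w.strip(".,").lower() for w in corpus.split()}
    labels = []
    for sentence in response.split(". "):
        sentence = sentence.strip(". ")
        if sentence:
            verdict = ("supported"
                       if support_score(sentence, corpus_words) >= threshold
                       else "unverified")
            labels.append((sentence, verdict))
    return labels
```

A real verification layer would match against embeddings of the actual training corpora rather than raw word overlap, but the per-sentence classification shape is the point.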
“AI hallucinations pose serious risks for enterprises, holding back their adoption of AI. As a student of AI for many years, it is also just well-known that we cannot allow these powerful systems to be opaque about the basis of their outputs, and we need to urgently solve this. Our veryLLM library is a small first step to bring transparency and confidence to the outputs of any LLM – transparency that any developer, data scientist or LLM provider can use in their AI applications,” said Dr. Vishal Sikka, Founder and CEO of Vianai Systems and advisor to Stanford University’s Center for Human-Centered Artificial Intelligence. “We are excited to bring these capabilities, and many other anti-hallucination techniques, to enterprises worldwide, and I believe this is why we are seeing unprecedented adoption of our solutions.”
Pinecone working with AWS to solve Generative AI hallucination challenges
Pinecone, the vector database company providing long-term memory for artificial intelligence (AI), announced an integration with Amazon Bedrock, a fully managed service from Amazon Web Services (AWS) for building GenAI applications. The announcement means customers can now drastically reduce hallucinations and accelerate the go-to-market of Generative AI (GenAI) applications such as chatbots, assistants, and agents.
The Pinecone vector database is a key component of the AI tech stack, helping companies solve one of the biggest challenges in deploying GenAI solutions — hallucinations — by allowing them to store, search, and find the most relevant and up-to-date information from company data and send that context to Large Language Models (LLMs) with every query. This workflow is called Retrieval Augmented Generation (RAG), and with Pinecone it helps search and GenAI applications deliver relevant, accurate, and fast responses to end users.
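The RAG workflow described above can be sketched end to end with a toy in-memory index standing in for Pinecone. The documents and vectors below are fabricated for illustration; a real deployment would embed text with an embedding model and query Pinecone’s client rather than computing cosine similarity by hand:

```python
import math

# Toy document store: in practice these vectors come from an embedding model
# and live in a vector database such as Pinecone.
DOCS = {
    "Q3 revenue grew 12% year over year.": [0.9, 0.1, 0.0],
    "The onboarding guide covers SSO setup.": [0.1, 0.8, 0.2],
    "Support hours are 9am-5pm ET.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    """Ground the LLM prompt in retrieved context -- the core of RAG."""
    context = "\n".join(retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How did revenue do last quarter?", [0.95, 0.05, 0.0])
```

Because the model is told to answer only from the retrieved context, its output stays anchored to current company data instead of whatever it memorized at training time — which is how RAG reduces hallucinations.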
Amazon Bedrock is a serverless platform that lets users select and customize the right models for their needs, then effortlessly integrate and deploy them using popular AWS services such as Amazon SageMaker.
Pinecone’s integration with Amazon Bedrock allows developers to quickly and effortlessly build streamlined, factual GenAI applications that combine Pinecone’s ease of use, performance, cost-efficiency, and scalability with their LLM of choice. Pinecone’s enterprise-grade security and its availability on the AWS Marketplace allow developers in enterprises to bring these GenAI solutions to market significantly faster.
“We’ve already seen a large number of AWS customers adopting Pinecone,” said Edo Liberty, Founder & CEO of Pinecone. “This integration opens the doors to even more developers who need to ship reliable and scalable GenAI applications… yesterday.”
Kyndi Introduces New Capabilities to Optimize Knowledge Management and User Experiences with Latest Version of its Generative AI Answer Engine
Kyndi, the Answer Engine company, today announced the immediate availability of its latest Generative AI Answer Engine, Kyndi 6.0. The latest release introduces several new features that address the need for more effective knowledge management and AI transparency. Integrated into Kyndi’s award-winning Platform, the new features expand on the existing generative AI use cases for enterprises looking to optimize their knowledge management process and the information retrieval experiences for the end users.
The current content and knowledge management process is manual and time-consuming for enterprises. Worse, building a comprehensive, agile knowledge base that consistently provides accurate and timely information for business users is a huge endeavor, given the wealth of information organizations create and maintain today.
Using Kyndi 6.0, content leaders can leverage a new generative AI-enabled feature called “Topics,” which empowers them to curate, validate, and manage knowledge bases in under one hour while ensuring the correct and relevant answers are always available to end users. Another capability is the new generative AI-enabled user interface called “Citation,” which presents end users with citations of the content sources for enhanced AI explainability. Users can also copy the citations for future reference and continued research.
“While many organizations were initially swept up in the hype of GenAI, they soon witnessed deal-breaking issues from inaccurate answers due to hallucinations, difficulty in incorporating domain-specific information, lack of explainability, and security challenges,” said Ryan Welsh, CEO and Founder of Kyndi. “As an end-to-end enterprise solution, Kyndi 6.0 addresses these challenges by bringing new capabilities that enhance relevancy, AI transparency, and automation to organizations looking to modernize the way their users find critical business information.”
Automation Anywhere Unveils Expanded Generative AI-Powered Automation Platform to Empower People and Teams and Accelerate Enterprise Productivity Gains
Automation Anywhere, a leader in intelligent automation, today announced an expansion of its Automation Success Platform, enabling enterprises to accelerate their transformation journeys and put AI to work securely throughout their organizations. Automation Anywhere’s new tools and enhancements deliver AI-powered automation across every team, system, and process. At its Imagine 2023 conference, the company unveiled a new Responsible AI Layer and announced four key product updates, including the brand-new Autopilot, which enables the rapid development of end-to-end automations from Process Discovery using the power of generative AI. The company also announced new, expanded features in Automation Co-Pilot for Business Users, Automation Co-Pilot for Automators, and Document Automation.
“The combination of generative AI and intelligent automation represents the most transformational technology shift of our generation,” said Mihir Shukla, CEO and Co-Founder, Automation Anywhere. “Every company, every team, every individual will be able to re-imagine their system of work and automate the processes that hold them back. Great people, empowered with AI and intelligent automation will be absolutely transformative to their organizations as they increase their productivity, creativity and accelerate the business.”
AnswerRocket’s GenAI Assistant Revolutionizes Enterprise Data Analysis
AnswerRocket, an innovator in GenAI-powered analytics, today announced new features of its Max solution to help enterprises tackle a variety of data analysis use cases with purpose-built GenAI analysts.
Max offers a user-friendly chat experience for data analysis that integrates AnswerRocket’s augmented analytics with OpenAI’s GPT large language model, making sophisticated data analysis more accessible than ever. With Max, users can ask questions about their key business metrics, identify performance drivers, and investigate critical issues within seconds. The solution is compatible with all major cloud platforms, leveraging OpenAI and Azure OpenAI APIs to provide enterprise-grade security, scalability, and compliance.
“AI has been reshaping what we thought was possible in the market research industry for the past two decades. Combined with high-quality and responsibly sourced data, we can now break barriers to yield transformative insights for innovation and growth for our clients,” said Ted Prince, Group Chief Product Officer, Kantar. “Technologies like AnswerRocket’s Max combined with Kantar data testify to the power of the latest technology and unrivaled data to shape your brand future.”
Galileo Releases the First LLM Evaluation, Experimentation and Observability Platform for Building Trustworthy Production-Ready LLM Applications
Galileo, a leading machine learning (ML) company for unstructured data, today announced the general availability of Galileo LLM Studio, a platform for building trustworthy LLM applications and getting them into production faster. The platform has Prompt and Fine-Tune modules and now adds a third module, Monitor, that provides a continuous feedback loop for developers and data scientists. All three modules leverage Galileo’s Guardrail Metrics Store, where users can find unique evaluation metrics created by Galileo’s research team that enhance developer productivity and provide robust hallucination detection, or build their own custom metrics.
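The release doesn’t document the Guardrail Metrics Store API, but the “build your own custom metrics” idea it describes can be sketched as a small metric registry. Everything here – the `metric` decorator and the two sample metrics – is hypothetical illustration, not Galileo’s actual interface:

```python
METRICS = {}

def metric(name):
    """Decorator registering a custom evaluation metric under a name
    (illustrative of a pluggable metrics store; not Galileo's API)."""
    def register(fn):
        METRICS[name] = fn
        return fn
    return register

@metric("length_ok")
def length_ok(response: str) -> bool:
    # Flag responses that run long for the use case.
    return len(response.split()) <= 50

@metric("no_refusal")
def no_refusal(response: str) -> bool:
    # Flag responses where the model declined to answer.
    return "i cannot" not in response.lower()

def evaluate(response: str) -> dict:
    """Run every registered metric over a model response."""
    return {name: fn(response) for name, fn in METRICS.items()}
```

Running such a registry over both offline test prompts and live production traffic is what gives one set of metrics a role in evaluation, experimentation, and monitoring alike.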
“We’ve spent the last year speaking with enterprises working to bring LLM-based applications to production and three things became radically clear. First, companies of all sizes now have LLM powered applications in production. Second, LLM output evaluation is painfully manual with no guardrails against hallucinations. Third, teams are looking for sophisticated metric-driven monitoring for their applications in production. This need for LLM evaluation, experimentation and observability was core to our latest release,” said Vikram Chatterji, co-founder and CEO of Galileo.
Torii Ushers in a New Era of SaaS Management with Enhanced, Generative AI-powered Platform
Torii, the SaaS Management pioneer, announced the launch of its next generation SaaS Management Platform (SMP), featuring a series of product releases that set a new standard for innovation and extensibility in SMPs. The only SMP powered by generative AI, Torii’s enhanced platform equips bandwidth-strapped IT teams to automate time-consuming tasks, cut SaaS spend, and power quicker, actionable insights. The uniquely open SMP ultimately allows IT pros – and the organizations they work for – to stay on the forefront of SaaS management innovation, tailor-fit Torii to their needs, and create one place to manage all their software.
“At Torii, we’ve always had the vision of developing a singular software to manage all other software. Thanks to the introduction of generative AI, we’re closer to that goal than ever before. With this new release, we’re offering a true end-to-end platform that solves all the SaaS management problems IT teams face by having our software identify, understand, and communicate application data,” said Uri Haramati, CEO and Co-Founder of Torii. “The new and open nature of the platform further amplifies our ability to give IT pros exactly what they need to be successful in managing SaaS, and the possibilities of what we can do from here are endless.”
Teradata Launches ask.ai, Brings Generative AI Capabilities to VantageCloud Lake
Teradata (NYSE: TDC) announced ask.ai, a new generative AI capability for VantageCloud Lake. The natural language interface is designed to allow anyone with approved access to ask questions of their company’s data and receive instant responses from VantageCloud Lake, the most complete cloud analytics and data platform for AI. By reducing the need for complex coding and querying, ask.ai can dramatically increase productivity, speed, and efficiency for technical users and expand analytic use to non-technical roles, who can now also use Teradata’s powerful, cloud-native platform to sift through mountains of data and draw insights. The increased use of data and analytics via VantageCloud Lake can drive breakthrough innovations that positively impact business results.
The mainstreaming of natural language interfaces has set new expectations for the ability to gather, understand and use information, even within complex enterprise environments and outside of traditional technical roles. Implementing these generative AI capabilities and expanding access can save employee time and improve productivity.
Generative AI tools are only as useful as an enterprise’s underlying analytic capabilities and the quality of its data sets. The harmonized data approach and rich in-database analytics that Teradata provides in VantageCloud Lake are designed to ensure that ask.ai delivers accurate and comprehensive results, enabling trusted cloud analytics for more confident decision-making at every level.
“Teradata ask.ai for VantageCloud Lake enables enterprises to quickly get to the value of their data, wherever it is, and democratizes AI and ML,” said Hillary Ashton, Teradata’s Chief Product Officer. “Teradata was recently called out by Forrester for its strong vision and strategy that includes AI/ML at scale* and now Teradata ask.ai takes this even further with a dramatic improvement in productivity and ease of use. Enterprises choose Teradata’s open and connected platform which empowers AI and ML at massive scale, harmonizes data, and delivers a price-per-query advantage.”
Couchbase to Advance Developer Productivity by Adding Generative AI to Capella Database-as-a-Service
Couchbase, Inc. (NASDAQ: BASE), the cloud database platform company, announced it is introducing generative AI capabilities into its Database-as-a-Service Couchbase Capella™ to significantly enhance developer productivity and accelerate time to market for modern applications. The new capability called Capella iQ enables developers to write SQL++ and application-level code more quickly by delivering recommended sample code. Couchbase also announced additional Capella updates that further enhance the developer experience, increase efficiency and ease operations.
Managing the full lifecycle of an application puts pressure on developers and adds friction to their workflow, which can slow down productivity. At the same time, developer productivity has never been more important given the demand for and potential of AI applications. Powered by generative AI, Capella iQ uses foundation models to add intelligence to the Capella developer workbench integrated development environment (IDE). With Capella iQ, developers can use natural language to quickly and easily generate code, sample datasets, and unit tests. Capella iQ also advises on index creation, search syntax, and other programmatic access to Capella. Leveraging generative AI to build and test applications more quickly in Capella delivers higher productivity and quality, resulting in faster time to market.
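Couchbase hasn’t published how Capella iQ prompts its foundation models, so purely as an illustration of the natural-language-to-SQL++ pattern described above, here is a minimal prompt-template sketch. The function name, the schema string, and the prompt wording are all hypothetical:

```python
def code_assist_prompt(schema: str, request: str) -> str:
    """Assemble a prompt asking a foundation model for a SQL++ query.
    Illustrative template only; Capella iQ's internal prompting is not public."""
    return (
        "You are a database assistant for Couchbase Capella.\n"
        f"Collection schema:\n{schema}\n"
        f"Write a SQL++ query that {request}. Return only the query."
    )

# A developer's plain-English request becomes the model's instruction:
prompt = code_assist_prompt(
    "orders(id: string, total: number, created_at: string)",
    "returns the ten largest orders by total",
)
```

Pinning the schema into the prompt is what lets the model suggest queries against the developer’s actual collections rather than generic SQL; the returned text would then be surfaced as recommended sample code in the IDE.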
“Code that used to take hours for a developer to write will now be generated in a matter of minutes in sample sets from Capella iQ,” said Scott Anderson, SVP of product management and business operations at Couchbase. “This makes developers more efficient when building modern apps, ultimately accelerating innovation for customers. By incorporating generative AI into our fully managed DBaaS, we are making it easier for developers to get started with Capella and significantly boost their productivity.”