Welcome to the Generative AI Report round-up feature here on insideBIGDATA, with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many news items relating to applications and deployments centered on large language models (LLMs) that we thought it would be a timely service for readers to start a new channel along these lines. An LLM fine-tuned on proprietary data amounts to an AI application, and that is what these innovative companies are creating. The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.
Sama Introduces Sama GenAI for Faster, High-Performance Model Development
Sama, a leader in providing data annotation and model validation solutions, today announced Sama GenAI, its specialized solution that has already helped power some of the most well-known Generative AI (GenAI) and foundation models in the world. Unlike other solutions on the market, Sama GenAI leverages SamaIQ™, a combination of proprietary algorithms and an expert workforce, to provide ongoing insights into data and model predictions, enabling faster development of high-performance models.
Sama’s proprietary approach is designed to keep foundation models, including GenAI models, constantly learning, beginning in proof-of-concept and training to post-deployment. SamaIQ can also help identify biases in a model’s original dataset or better understand how they are being created by data. In addition, SamaIQ can prioritize and annotate data that will have the most impact on performance to speed up the development process. Finally, the insights surfaced by SamaIQ can then be used to create more training data to directly address identified issues, ultimately building a more accurate model. Each time data is processed by Sama, the model improves — creating a flywheel effect that significantly reduces the chances of failure, including catastrophic failure. To date, Sama GenAI has already surfaced potential biases with its customers, including major foundation models that have been globally adopted.
“GenAI is extremely confident in what it is creating — even when it shouldn’t be. Model hallucination, where it creates text or images that don’t make sense or are simply factually incorrect, is a real and documented problem across all GenAI models,” said Duncan Curtis, SVP of Product at Sama. “Sama’s approach, using both technological and human insights, allows us to identify when models are too confident, then provide additional training data to adjust that confidence level accordingly and ultimately help them perform better. We are proud to be supporting this quickly-growing, innovative field of AI now and in the future.”
AKKIO Launches Generative Reports to Instantly Turn Data into Decisions
Akkio, a pioneer in generative business intelligence, today unveiled Generative Reports, a powerful, first-of-its-kind feature designed to simplify data-driven decision-making for small and medium businesses (SMBs). When customers connect data sources to Akkio, Generative Reports lets users describe the problem or application they’re working on, and then automatically analyzes the data to create a real-time report that supports their requirements. With Generative Reports, SMBs get a self-service tool to surface the information they need to optimize marketing spend, forecast revenue, score leads, improve customer experiences or anything else they can imagine.
Data is an SMB’s competitive advantage, but without big company resources, most struggle to extract the value. Akkio levels the playing field. For the first time, Generative Reports gives SMBs the ability to go toe-to-toe with industry giants with AI, letting them quickly understand their data, get insights based on their specific use case, and share live reports with their team.
“Generative Reports epitomizes the power of generative business intelligence, marking a transformative moment for SMBs. It’s not merely about leveling the playing field; it’s about setting the pace,” said Jonathon Reilly, Co-CEO and Co-Founder of Akkio. “For any SMB, be it an emerging e-commerce venture or a seasoned player, Generative BI is the indispensable tool for modern success, and Generative Reports lets anyone get the instant insights they need to make better, more informed decisions.”
Boomi Introduces Boomi GPT, Further Accelerating Integration and Automation With Conversational AI
Boomi™, the intelligent connectivity and automation leader, today announced Boomi GPT, the first offering available in the Boomi AI suite, bringing a simple, conversational experience to the Boomi platform. With Boomi GPT, organizations can harness the power of generative AI to integrate and automate faster than ever before, further democratizing innovation and accelerating business outcomes.
Learning from Boomi’s approximately 20,000 global customers and 200+ million integrations, Boomi GPT translates words into action to quickly connect applications, data, processes, people, and devices. Customers can use Boomi GPT’s natural language prompt to ask Boomi AI to build integrations, APIs, or master data models. Acting as a knowledgeable assistant or “copilot,” Boomi GPT then designs an outline of the requested integration or other software, which users can accept or modify, greatly accelerating the work of building connections and automations to drive business results.
“Organizations are working around the clock to deliver innovative products and services that exceed customer expectations while applying extensive connectivity and automation to streamline operations and reduce costs,” said Ed Macosky, Chief Product and Technology Officer at Boomi. “With Boomi AI, organizations can dramatically accelerate and democratize this work, turning natural language requests into integrations and connections that are critical for application modernization and cloud migration. We are thrilled to launch Boomi GPT, the first feature in the Boomi AI suite that will help organizations move with the speed and acumen necessary for success in today’s hypercompetitive markets.”
Vectara Launches Boomerang: The Next-Gen Large Language Model Redefining GenAI Accuracy
Large Language Model (LLM) builder Vectara, the trusted Generative AI (GenAI) platform, unveiled Boomerang, a next-gen neural information retrieval model integrated into its end-to-end GenAI platform. In recently published benchmarks, Boomerang outperforms Cohere and is comparable to OpenAI on certain performance metrics, excelling in particular at multilingual benchmarks. Built by a security-first AI leader, Boomerang significantly reduces the probability of bias, copyright infringement, and “hallucinations”: fabricated information or inconsistencies in model outputs that have become an industry-wide problem and a critical challenge for business adoption.
Vectara’s early “Grounded Generation” paved the way for mitigating hallucinations, a practice many others are now adopting under the moniker Retrieval Augmented Generation. Boomerang takes this a step further. Vectara’s ML team designed Boomerang from the ground up to deliver the most accurate neural retrieval with low latency, and it extends cross-lingual support to hundreds of languages and dialects.
“At Vectara, we aim to solve the biggest problems facing Generative AI adoption today,” said Amin Ahmad, Cofounder and CTO of Vectara. “Our neural retrieval model achieves state of the art relevance across hundreds of languages and dialects, significantly reducing one of the biggest barriers to responsible AI adoption in the enterprise: hallucinations.”
WillowTree Transforms Vocable AAC Mobile App with Conversational AI Integration, Giving Voice to Millions
WillowTree, a leading digital experience consultancy, has launched its enhanced version of Vocable AAC with the revolutionary integration of OpenAI’s ChatGPT. With 17.9 million U.S. adults experiencing difficulty speaking in the past year, this free mobile app, originally designed as an augmented and alternative communication (AAC) device for non-verbal individuals, now offers the power of conversational AI for a more intuitive and impactful method of communication.
The new Smart Assist feature actually listens to caregivers and gives the user likely response options based on generative AI natural language models. The integration of OpenAI’s ChatGPT allows the app to retain conversational context, enhance predictive pattern detection, and bolster semantic understanding of a caregiver’s speech. In lay terms, this means the application now acts as a translator between caregivers and non-verbal individuals, who can communicate more naturally by pointing to AI-generated responses.
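The mechanism described above, carrying the full conversation history into each request so the model can propose likely replies, can be sketched roughly as follows. This is an illustrative outline, not WillowTree’s implementation; the function name, prompt wording, and role mapping are all assumptions.

```python
# Hypothetical sketch of a "smart assist" flow for an AAC app: the whole
# conversation history travels with each request so the model has context.

def build_prompt(history, caregiver_utterance, n_options=3):
    """Assemble a chat-style message list carrying the conversation so far."""
    system = (
        "You assist a non-verbal user. Given the conversation so far, "
        f"suggest {n_options} short replies the user might want to give."
    )
    messages = [{"role": "system", "content": system}]
    for speaker, text in history:
        # Map the caregiver to "user" and the non-verbal user's selected
        # replies to "assistant" (an arbitrary but workable convention).
        role = "user" if speaker == "caregiver" else "assistant"
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": caregiver_utterance})
    return messages

history = [("caregiver", "Are you hungry?"), ("user", "Yes")]
messages = build_prompt(history, "What would you like to eat?")
# `messages` would then be sent to a chat completion API such as ChatGPT's.
```

Because every turn is included, the model can resolve references like “it” or “that one” against earlier turns, which is the contextual retention the announcement describes.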
Andrew Carter, Staff Software Engineer at WillowTree, emphasizes the transformative quality of Vocable AAC’s new integration: “ChatGPT is a conversational interface, it’s literally made for this. It understands an entire conversation’s history, providing better AI-generated responses for the non-verbal user to select.”
VDO.AI Unveils Dexter: Revolutionizing Brand Engagement Through Innovative Generative AI Algorithms
VDO.AI, a leader in the AdTech industry, unveils Dexter, a game-changing creative feature poised to redefine brand engagement by harnessing the power of innovative Generative AI algorithms. Recognizing the ever-evolving landscape of consumer expectations, VDO.AI introduces Dexter, a new tool designed to change how brands connect with their audiences.
Dexter is a testament to the fusion of creativity and technology. It leverages the capabilities of Generative AI algorithms to empower brands with the ability to craft deeply personalized, contextually relevant, and visually stunning content. By understanding individual user preferences, Dexter ensures that brands can deliver messages that resonate deeply, fostering authentic connections. Moreover, this avant-garde technology streamlines content creation, enabling brands to deliver the right message at precisely the right moment.
Amitt Sharma, CEO and Founder of VDO.AI, shared his excitement about Dexter: “We find ourselves on the verge of a groundbreaking era in brand-consumer interaction. Dexter empowers us to establish dynamic connections with our audiences, converting passive observers into engaged participants. This achievement signifies a pivotal milestone in amplifying customer engagement and loyalty, as it allows us to harness the interactive potential that AI offers, a facet that brands have yet to fully explore.”
Baffle Unveils Solution for Data Security and Compliance with Generative AI
Baffle, Inc. unveiled the first and only solution for securing private data for use in generative AI (GenAI) projects that integrates seamlessly with existing data pipelines. With Baffle Data Protection for AI, companies can accelerate GenAI projects knowing their regulated data is cryptographically secure, remain compliant, and minimize risk while gaining the benefits of a breakthrough technology.
GenAI tools like ChatGPT are now readily available on the web and can deliver new insights from public internet data, which has led to a surge in their adoption. Companies want to derive competitive insights from their private data but are prevented from doing so by the risk of sharing that data with public LLMs. Consequently, they have barred employees from using public GenAI services out of security concerns. Fortunately, there are private GenAI services available, specifically retrieval-augmented generation (RAG) implementations, that allow embeddings to be computed locally on a subset of data. But even with RAG, data privacy and security implications, especially compliance regulations, have not been fully considered, and these factors can cause GenAI projects to stall. While the C-suite and investors push for AI adoption, security and compliance teams are forced to make difficult choices that could put the company at risk of financial or reputational loss. Baffle Data Protection for AI empowers companies to easily secure sensitive data for use in GenAI projects against those risks.
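As background on the RAG pattern mentioned above, the following minimal sketch shows retrieval happening locally over private documents, so only the few retrieved snippets, not the whole corpus, would ever leave local storage for the LLM. The bag-of-words “embedding” is a stand-in for a real embedding model, and all document text and names are illustrative.

```python
# Minimal local-retrieval sketch of RAG: rank private docs locally,
# then put only the best snippet into the prompt sent to the LLM.
from collections import Counter
import math

def embed(text):
    """Toy local 'embedding': a bag-of-words term count."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank private documents locally and return the top-k snippets."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Private corpus: never leaves local storage.
docs = [
    "Q3 revenue grew 12 percent in the EMEA region.",
    "The cafeteria menu changes every Monday.",
]
context = retrieve("How did EMEA revenue change?", docs)
# Only the retrieved snippet is placed into the prompt sent to the LLM.
prompt = f"Answer using only this context: {context[0]}"
```

Approaches like Baffle’s add a further layer by keeping the data cryptographically protected even within such a pipeline.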
“ChatGPT has been a corporate disruptor, forcing companies to either develop or accelerate their plans for leveraging GenAI in their organizations,” said Ameesh Divatia, founder and CEO, Baffle. “But data security and compliance concerns have been stifling innovation — until now. Baffle Data Protection for AI makes it easy to protect sensitive corporate data while using that data with private GenAI services so companies can meet their GenAI timelines safely and securely.”
Dataiku Unveils LLM Mesh
Dataiku, the platform for Everyday AI, unveiled the LLM Mesh, addressing the critical need for an effective, scalable, and secure platform for integrating Large Language Models (LLMs) in the enterprise. While Generative AI presents a myriad of opportunities and benefits for the enterprise, organizations face notable challenges. These include an absence of centralized administration, inadequate permission controls for data and models, minimal measures against toxic content, the use of personally identifiable information, and a lack of cost-monitoring mechanisms. Additionally, many organizations struggle to establish best practices for fully harnessing the potential of this emerging technology ecosystem.
The LLM Mesh provides the components companies need to efficiently build safe LLM applications at scale. With the LLM Mesh sitting between LLM service providers and end-user applications, companies can choose the most cost-effective models for their needs, both today and tomorrow, ensure the safety of their data and responses, and create reusable components for scalable application development.
Components of the LLM Mesh include universal AI service routing, secure access and auditing for AI services, safety provisions for private data screening and response moderation, and performance and cost tracking. The LLM Mesh also provides standard components for application development to ensure quality and consistency while delivering the control and the performance expected by the business.
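The routing and cost-tracking ideas described above can be illustrated with a toy routing layer that sits between applications and providers. This is a hypothetical sketch of the general pattern, not Dataiku’s API; the provider names, per-token prices, and crude token estimate are all invented.

```python
# Toy "LLM mesh" routing layer: pick a provider by cost and log usage.
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float
    call: callable  # a real provider SDK call in practice

@dataclass
class LLMMesh:
    providers: list
    usage_log: list = field(default_factory=list)

    def route(self, prompt, max_cost=None):
        # Pick the cheapest provider, subject to an optional cost ceiling.
        candidates = sorted(self.providers, key=lambda p: p.cost_per_1k_tokens)
        if max_cost is not None:
            candidates = [p for p in candidates if p.cost_per_1k_tokens <= max_cost]
        provider = candidates[0]
        tokens = len(prompt.split())  # crude token estimate for cost tracking
        self.usage_log.append((provider.name, tokens))
        return provider.call(prompt)

mesh = LLMMesh([
    Provider("big-model", 0.03, lambda p: "detailed answer"),
    Provider("small-model", 0.002, lambda p: "short answer"),
])
answer = mesh.route("Summarize this contract clause", max_cost=0.01)
```

A production mesh would add the auditing, permissioning, and content-screening hooks listed above at the same choke point, which is what makes the intermediary layer attractive.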
Clément Stenac, Chief Technology Officer and co-founder at Dataiku shared, “The LLM Mesh represents a pivotal step in AI. At Dataiku, we’re bridging the gap between the promise and reality of using Generative AI in the enterprise. We believe the LLM Mesh provides the structure and control many have sought, paving the way for safer, faster GenAI deployments that deliver real value.”
Pathway launches free LLM-App for building privacy-preserving LLM applications that learn in real-time
Pathway, the real-time intelligence technology specialist, announced LLM-App, a free framework in Python for creating real-time AI applications that continuously learn from proprietary data sources. LLM-App also critically overcomes the data governance challenges that have stalled enterprise adoption of LLMs by building responsive AI applications from data that remains secure and undisturbed in its original storage location.
Enterprise adoption of Large Language Models (LLMs) for real-time decision-making on proprietary data has struggled to take off despite the boom in Generative AI. The challenge has been two-fold. Firstly, there have been concerns over sharing intellectual property and sensitive information with open systems like ChatGPT and Bard. Secondly, the complexity of designing efficient systems that combine both batch and streaming workflows means AI applications are unable to perform incremental updates to revise preliminary results. This freezes an application’s knowledge at a moment in time, making it unsuitable for decisions that must be made on accurate, real-time data in industries like manufacturing, financial services, and logistics.
Pathway’s LLM-App overcomes these challenges by allowing organizations to build privacy-preserving responsive AI applications based on live data sources. It can leverage your own private LLM or public APIs to provide human-like responses to user queries based on proprietary data sources, with the data remaining secure and undisturbed in its original storage location. This means the application owner retains complete control over the input data and the application’s outputs, making it suitable even for use cases that draw on sensitive data and intellectual property.
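The incremental-update behavior described above, revising results as source data changes rather than re-indexing everything, can be sketched with a toy live index. This is an illustrative sketch of the general idea, not Pathway’s LLM-App API; the class name and document contents are invented.

```python
# Toy live index: when a source document changes, only that document's
# entries are revised, so lookups always reflect the latest data.
class LiveIndex:
    def __init__(self):
        self.docs = {}   # doc_id -> current text (source of truth)
        self.index = {}  # term -> set of doc_ids containing it

    def upsert(self, doc_id, text):
        # Incremental update: drop stale postings for this one document,
        # then re-add it; the rest of the corpus is untouched.
        old = self.docs.get(doc_id, "")
        for term in old.lower().split():
            self.index.get(term, set()).discard(doc_id)
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.index.setdefault(term, set()).add(doc_id)

    def lookup(self, term):
        return self.index.get(term.lower(), set())

idx = LiveIndex()
idx.upsert("inv-1", "shipment delayed at rotterdam")
idx.upsert("inv-1", "shipment arrived at rotterdam")  # live revision
# A query for "delayed" now finds nothing; "arrived" finds inv-1.
```

In a streaming framework the `upsert` calls would be driven by change events from the original storage location, which is how answers stay current without the data ever being copied out.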
Zuzanna Stamirowska, CEO & Co-Founder of Pathway, comments: “While many enterprises have been eager to adopt LLMs, there have been a number of risks involved which have stalled its adoption. From potentially exposing IP or sensitive data, to making decisions based on out-of-date knowledge, concerns around the accuracy and privacy of LLM applications have been difficult to overcome. We hope that with our free framework in Python to build AI apps, more organizations will be able to start building use cases with proprietary data and advance the use of LLMs in the enterprise.”
Nucleus.ai founders emerge from stealth and demonstrate that Big Tech companies aren’t the only ones building large language models
The four founders of startup NucleusAI have emerged from stealth, demonstrating that they can build large language models (LLMs) like the Big Tech companies. NucleusAI has launched a 22-billion-parameter LLM that outperforms all models of similar size, including Llama 2 13B.
Typically, foundational models are built by teams of 30 to 100 people with skill sets that address the different aspects of LLMs. That Nucleus has achieved the same or better with just four people and the open-source community is a testament to the team’s overall expertise.
Only a handful of companies have open-sourced and commercially licensed an LLM, among them Meta, Salesforce, Mosaic, and TII (the UAE government-funded organization behind Falcon). With this release, Nucleus joins that exclusive list, and it is unique among those companies as an early-stage startup.
Nucleus’s 22-billion-parameter LLM was pre-trained with a context length of 2,048 tokens on a trillion tokens of data drawn from large-scale deduplicated and cleaned web data, Wikipedia, Stack Exchange, arXiv, and code. This diverse training data ensures a well-rounded knowledge base, spanning general information, academic research, and coding insights.
The announcement will be followed by the release of two other models (3 billion and 11 billion parameters) that were pre-trained on a larger context length of 4,096 tokens, with slightly different dimensional choices than the 22-billion-parameter model. The 22-billion-parameter model will be released in several versions, trained on 350 billion, 700 billion, and one trillion tokens, respectively.
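As a back-of-the-envelope check on the figures above: one trillion training tokens for a 22-billion-parameter model works out to roughly 45 tokens per parameter, comfortably above the roughly 20 tokens per parameter rule of thumb associated with the Chinchilla scaling results.

```python
# Tokens-per-parameter ratio for the 22B model described above.
params = 22e9   # 22-billion-parameter model
tokens = 1e12   # trained on one trillion tokens
ratio = tokens / params
print(f"{ratio:.1f} tokens per parameter")  # roughly 45.5
```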
Balancing Efficiency with Employee Care: MakeShift Unveils ShiftMate AI to Modernize Workforce Scheduling
MakeShift, renowned for its award-winning cloud employee scheduling and time-tracking platform, announced the launch of ShiftMate AI, a generative AI platform that was designed to address the unique scheduling challenges faced by essential industries like healthcare and retail. With ShiftMate AI, for the first time ever, organizations won’t have to choose between operational efficiency or staff well-being. By harnessing the power of AI, ShiftMate AI promises to allow businesses to achieve both.
“The modern workforce presents a challenging dichotomy for healthcare and retail,” said Adam Greenberg, MakeShift CEO. “Critical industries are often caught in a tug-of-war between optimizing for cost reduction and operational efficiency on one hand, and ensuring employee well-being on the other. Traditional solutions force businesses into a precarious trade-off: push for maximum efficiency and risk high turnover, burnout, and compromised service quality, or prioritize staff wellness and face operational inefficiencies. With the launch of ShiftMate AI, this ends. ShiftMate AI is where empathy meets efficiency.”
AWS Announces Powerful New Offerings to Accelerate Generative AI Innovation
Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), announced five generative artificial intelligence (AI) innovations so organizations of all sizes can build new generative AI applications, enhance employee productivity, and transform their businesses. Today’s announcement includes the general availability of Amazon Bedrock, a fully managed service that makes foundation models (FMs) from leading AI companies available through a single application programming interface (API). To give customers an even greater choice of FMs, AWS also announced that the Amazon Titan Embeddings model is generally available and that Llama 2 will be available as a new model on Amazon Bedrock, making it the first fully managed service to offer Meta’s Llama 2 via an API.

For organizations that want to maximize the value their developers derive from generative AI, AWS is also announcing a new capability (available soon in preview) for Amazon CodeWhisperer, AWS’s AI-powered coding companion, that securely customizes CodeWhisperer’s code suggestions based on an organization’s own internal codebase. To increase the productivity of business analysts, AWS is releasing a preview of Generative Business Intelligence (BI) authoring capabilities for Amazon QuickSight, a unified BI service built for the cloud, so customers can create compelling visuals, format charts, perform calculations, and more, simply by describing what they want in natural language.

From Amazon Bedrock and Amazon Titan Embeddings to CodeWhisperer and QuickSight, these innovations add to the capabilities AWS provides customers at all layers of the generative AI stack, for organizations of all sizes, with enterprise-grade security and privacy, a selection of best-in-class models, and powerful model customization capabilities. To get started with generative AI on AWS, visit aws.amazon.com/generative-ai/.
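Bedrock’s single-API design means switching foundation models is largely a matter of changing a model ID and request body. Below is a hedged sketch assuming boto3’s bedrock-runtime client and the JSON body shape used by Amazon Titan text models; the body format differs per model provider, and the model ID shown is illustrative.

```python
# Sketch: build a Bedrock request body for an Amazon Titan text model.
import json

def build_titan_request(prompt, max_tokens=256):
    """Assemble the JSON body a Titan text model expects on Bedrock."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

body = build_titan_request("Summarize our Q3 results in one sentence.")

# With AWS credentials configured, the invocation itself would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="amazon.titan-text-express-v1",  # illustrative model ID
#       body=body,
#   )
```

Swapping in another provider’s model means changing only the `modelId` and the body schema, while the surrounding application code stays the same.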
“Over the last year, the proliferation of data, access to scalable compute, and advancements in machine learning (ML) have led to a surge of interest in generative AI, sparking new ideas that could transform entire industries and reimagine how work gets done,” said Swami Sivasubramanian, vice president of Data and AI at AWS. “With enterprise-grade security and privacy, a choice of leading FMs, a data-first approach, and our high-performance, cost-effective infrastructure, organizations trust AWS to power their businesses with generative AI solutions at every layer of the stack. Today’s announcement is a major milestone that puts generative AI at the fingertips of every business, from startups to enterprises, and every employee, from developers to data analysts. With powerful, new innovations, AWS is bringing greater security, choice, and performance to customers, while also helping them to tightly align their data strategy across their organization, so they can make the most of the transformative potential of generative AI.”