Google’s new Vertex AI features to unlock advanced LLM capabilities

By Anirban Ghoshal, Senior Writer, InfoWorld
Google Cloud has added and updated multiple features to its AI and machine learning platform, Vertex AI, to help enterprises unlock new capabilities of large language models (LLMs).
At its annual Cloud Next conference, the company announced Vertex AI Extensions, a set of managed tools for developers that connect models to APIs so that LLMs can draw on real-time data for more accurate decision-making.
“With Extensions, developers can use pre-built extensions to popular enterprise APIs or build their own extensions to private and public APIs using a schema compatible across Google,” the company said in a statement, adding that Extensions can be used to build digital assistants, search engines and automated workflows.
Google will provide pre-built extensions for proprietary cloud services such as BigQuery and AlloyDB along with partner databases such as DataStax, MongoDB, and Redis.
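The announcement doesn’t spell out the Extensions API itself, so the snippet below is only a hand-rolled sketch of the pattern Extensions are meant to automate: the model plans a structured call, and application code executes it against a live API. It uses the Vertex AI Python SDK for the model call; the project ID, inventory endpoint, and JSON schema are hypothetical.

```python
import json
import urllib.parse
import urllib.request

import vertexai
from vertexai.language_models import TextGenerationModel

# Assumed project and region; replace with your own.
vertexai.init(project="my-project", location="us-central1")
model = TextGenerationModel.from_pretrained("text-bison")

# Ask the model to plan a call against a hypothetical inventory API.
prompt = (
    'Respond only with JSON of the form {"path": "...", "params": {...}} '
    "naming the inventory endpoint needed to answer: "
    "How many units of SKU 1234 are in stock?"
)
plan = json.loads(model.predict(prompt, temperature=0.0).text)

# Execute the planned call against the (hypothetical) enterprise API for live data.
url = "https://inventory.example.com" + plan["path"]
url += "?" + urllib.parse.urlencode(plan["params"])
with urllib.request.urlopen(url) as resp:
    print(json.load(resp))
```

A managed extension is intended to replace the hand-written plumbing in the second half of that sketch with a schema the model can call against directly.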
Vertex AI Search and Vertex AI Conversation, unveiled earlier this year as Enterprise Search and Conversational AI within Generative AI App Builder, are now generally available, the company said.
Google Cloud added new features to both components to help developers create more “compelling” applications underpinned by generative AI, Google said.
Vertex AI Search, which allows enterprises to set up a Google Search-like experience via applications powered by foundation models, now comes with data connectors, which can help the models ingest data with read-only access from third-party applications such as Salesforce, JIRA, and Confluence.
This ability, according to Google, helps connect real-time data from other applications with generative AI-powered systems, bots, or applications.
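In practice, queries against one of these connected data stores go through the Discovery Engine API that backs Vertex AI Search. The sketch below assumes that client library; the project, location, and data store IDs are placeholders rather than details from the announcement.

```python
# Minimal sketch of querying a Vertex AI Search data store through the
# Discovery Engine client library; all IDs below are placeholders.
from google.cloud import discoveryengine_v1beta as discoveryengine

client = discoveryengine.SearchServiceClient()
serving_config = client.serving_config_path(
    project="my-project",
    location="global",
    data_store="my-connected-store",  # e.g. a store fed by a Salesforce connector
    serving_config="default_search",
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="open support tickets mentioning billing",
    page_size=5,
)
for result in client.search(request):
    print(result.document.id)
```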
Other updates to Search and Conversation include multiturn search, better summarization, and grounding capabilities.
While multiturn search allows enterprise users to ask follow-up questions without starting a new conversation, search summarization provides summaries of search results and chat conversations.
Google is also providing developer tools that will allow enterprises to pre-program prompts and responses for specific natural-language queries.
Developers will also get grounding tools that let enterprises decide whether search results should include specific citations.
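The summarization and grounding options map onto the search request itself. The sketch below shows the general shape, assuming the Discovery Engine SearchRequest message; the citation flag and data store ID are assumptions rather than confirmed API details.

```python
# Sketch of requesting a summarized, citation-grounded answer from a
# Vertex AI Search data store; IDs and field choices are assumptions.
from google.cloud import discoveryengine_v1beta as discoveryengine

client = discoveryengine.SearchServiceClient()
request = discoveryengine.SearchRequest(
    serving_config=client.serving_config_path(
        "my-project", "global", "my-docs-store", "default_search"
    ),
    query="What is our refund policy for annual plans?",
    content_search_spec=discoveryengine.SearchRequest.ContentSearchSpec(
        summary_spec=discoveryengine.SearchRequest.ContentSearchSpec.SummarySpec(
            summary_result_count=3,
            include_citations=True,  # ask for per-source citations in the summary
        )
    ),
)
response = client.search(request)
print(response.summary.summary_text)  # summary generated across the top results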
Vertex AI’s Model Garden, which hosts the foundation models Google Cloud offers in partnership with companies such as Anthropic and Hugging Face, will now feature Meta’s Llama 2, TII’s Falcon, and Anthropic’s Claude 2 models.
Google is also refreshing its models — PaLM 2, Codey, and Imagen — available as part of the Model Garden.
While PaLM 2 will support 38 languages, helping enterprises ground their data, Codey has been updated to deliver at least a 25% improvement in the quality of generated code.
“To support longer question-answer chats and summarize and analyze large documents such as research papers, books, and legal briefs, our new version of PaLM 2 for text and chat also supports 32,000-token context windows, enough to include an 85-page document in a prompt,” the company said, adding that its Imagen model has been updated with new capabilities such as image editing, captioning and visual question and answering.
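As an illustration of that larger window, here is a minimal sketch of summarizing a long document with the Vertex AI Python SDK; the text-bison-32k model ID and the file path are assumptions, not details from Google’s statement.

```python
# Sketch of passing a long document to the 32,000-token PaLM 2 text model;
# the model ID and file path are assumed for illustration.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")
model = TextGenerationModel.from_pretrained("text-bison-32k")

with open("legal_brief.txt", encoding="utf-8") as f:
    brief = f.read()  # roughly an 85-page document fits within the 32,000-token window

response = model.predict(
    "Summarize the key arguments in the following legal brief:\n\n" + brief,
    max_output_tokens=1024,
    temperature=0.2,
)
print(response.text)
```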
Google will also offer tools for reinforcement learning with human feedback to help enterprises customize foundation models with their data. These tools are currently in public preview.
Anirban Ghoshal is a senior writer, covering enterprise software for CIO and databases and cloud infrastructure for InfoWorld.
Copyright © 2023 IDG Communications, Inc.
