Google restricts election-related queries for its Gemini chatbot

Sundar Pichai, CEO of Google and Alphabet, speaks on artificial intelligence during a Bruegel think tank conference in Brussels, Belgium, on Jan. 20, 2020.

Yves Herman | Reuters

Google announced it will restrict the types of election-related queries that users can ask its Gemini chatbot, adding that it has already rolled out the changes in India, where voters will head to the polls this spring.

“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” Google wrote in a blog post on Tuesday. “We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections.”

A Google spokesperson told CNBC that the changes were in line with the company’s planned approach for elections, and that it’s introducing the Gemini restrictions “in preparation for the many elections happening around the world in 2024 and out of an abundance of caution.”

The announcement comes after Google pulled its artificial intelligence image generation tool last month following a string of controversies, including historical inaccuracies and contentious responses. The company had introduced the image generator earlier in February through Gemini — Google’s main suite of AI models — as part of a significant rebrand.

“We have taken the feature offline while we fix that,” Demis Hassabis, CEO of Google’s DeepMind, said last month during a panel at the Mobile World Congress conference in Barcelona. “We are hoping to have that back online very shortly in the next couple of weeks, few weeks.” He added that the product was not “working the way we intended.”

The news also comes as tech platforms prepare for a huge year of elections worldwide, affecting upward of 4 billion people in more than 40 countries. The rise of AI-generated content has fueled serious concerns about election misinformation, with the number of deepfakes created increasing 900% year over year, according to data from machine learning firm Clarity.

Election-related misinformation has been a major problem dating back to the 2016 presidential campaign, when Russian actors sought cheap and easy ways to spread inaccurate content across social platforms. Lawmakers are now even more concerned about the rapid rise of AI.

“There is reason for serious concern about how AI could be used to mislead voters in campaigns,” Josh Becker, a Democratic state senator in California, told CNBC last month in an interview.

The detection and watermarking technologies used to identify deepfakes haven’t advanced quickly enough to keep up. Even if platforms behind AI-generated images and videos agree to bake in invisible watermarks and certain types of metadata, there are ways around those protective measures. At times, screenshotting can even dupe a detector.
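
As a loose illustration of how fragile file-level provenance can be, here is a minimal sketch, assuming the Pillow imaging library and a hypothetical local file named generated.png (neither is part of the original reporting): simply re-saving the pixels, much as a screenshot does, discards embedded metadata by default.

```python
# Minimal sketch (illustrative only, not Google's or any vendor's actual pipeline):
# re-saving an image's pixels, as a screenshot effectively does, drops the file
# metadata that provenance schemes may embed.
# Assumes Pillow is installed and "generated.png" is a hypothetical local file.
from PIL import Image

original = Image.open("generated.png")
print("Embedded metadata:", original.info)           # e.g. text chunks added by a generator
print("EXIF tags:", dict(original.getexif()))        # e.g. software/source fields

# Write only the pixel data to a new file; Pillow does not carry over the
# original file's text chunks or EXIF unless they are passed explicitly to save().
original.save("copy.png")

copy = Image.open("copy.png")
print("Metadata after re-save:", copy.info)          # typically empty
print("EXIF after re-save:", dict(copy.getexif()))   # typically empty
```

Pixel-level invisible watermarks are designed to survive this kind of copying better than file metadata, but as the paragraph above notes, there are still ways around those protections.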

In recent months, Google has underlined its commitment to pursuing, and investing heavily in, AI assistants, or agents. The term covers everything from chatbots to coding assistants and other productivity tools.

Alphabet CEO Sundar Pichai highlighted AI agents as a priority during the company’s Jan. 30 earnings call. Pichai said he eventually wants to offer an AI agent that can complete more and more tasks for a user, including within Google Search, though he cautioned that there is “a lot of execution ahead.” Likewise, chief executives at tech giants from Microsoft to Amazon have doubled down on their commitments to building AI agents as productivity tools.

Google’s Gemini rebrand, app rollouts and feature expansions were a first step to “building a true AI assistant,” Sissie Hsiao, a vice president at Google and general manager for Google Assistant and Bard, told reporters on a call in February.
