Commercial Actors Launch Massive Attempt to Clone Google’s Gemini AI
2026-02-15 companies

Mountain View, Saturday, 14 February 2026.
Rivals bombarded Google’s Gemini with over 100,000 prompts to reverse-engineer its proprietary reasoning, marking a sophisticated ‘distillation attack’ that signals a new era of intellectual property theft in AI.

Unprecedented Model Extraction Attacks

Alphabet Inc. (GOOGL) has formally accused private sector entities of launching sophisticated “distillation attacks” against its Gemini artificial intelligence models. In a report released on Thursday, February 12, Google revealed that “commercially motivated” actors utilized over 100,000 prompts in a concerted effort to clone the proprietary logic and reasoning capabilities of its flagship AI [1][2]. These adversaries are not traditional hackers breaking into servers, but rather researchers and rival companies exploiting legitimate API access to extract the model’s inner workings without authorization [1][2].

The Mechanics of Intellectual Property Theft

The technique, known as “model extraction,” involves bombarding a Large Language Model (LLM) with carefully chosen queries designed to map its decision-making patterns [2][4]. By analyzing the outputs, attackers can reconstruct a functional copy of the target model, sidestepping much of the billions of dollars in capital expenditure typically required for research and training [2][4]. Google explicitly categorizes these attempts as intellectual property theft, noting that attackers are specifically targeting Gemini’s ability to “reason” and process information in non-English languages [1][2].
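To make the attack pattern concrete, the toy sketch below illustrates the general idea of model extraction on a deliberately trivial scale: a hidden “teacher” model is queried through its only public interface, the input/output pairs are recorded, and a “student” model is trained to mimic the responses. The teacher here is a made-up linear classifier and the student a simple perceptron; this is an illustrative assumption, not a description of how Gemini or any real attack works.

```python
import random

# Hypothetical "teacher": a proprietary model the attacker can only query.
# Its internal weights (2.0, 3.0, threshold 5.0) are invisible to the caller.
def teacher(x, y):
    return 1 if 2.0 * x + 3.0 * y > 5.0 else 0

# Step 1: extraction - flood the teacher with queries and record its outputs.
random.seed(0)
queries = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(2000)]
labels = [teacher(x, y) for x, y in queries]

# Step 2: distillation - train a "student" (a perceptron) on the harvested
# input/output pairs, without ever seeing the teacher's internals.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):  # training epochs
    for (x, y), target in zip(queries, labels):
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = target - pred
        w[0] += lr * err * x
        w[1] += lr * err * y
        b += lr * err

def student(x, y):
    return 1 if w[0] * x + w[1] * y + b > 0 else 0

# Step 3: measure fidelity - how often the clone matches the original.
held_out = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(1000)]
agreement = sum(student(x, y) == teacher(x, y) for x, y in held_out) / len(held_out)
print(f"student/teacher agreement: {agreement:.1%}")
```

The point of the sketch is the asymmetry: the attacker’s cost is only the queries and a modest training run, while the teacher’s owner paid for all the original development. Real attacks against LLMs operate at vastly larger scale, but the query-record-train loop is the same.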

The Economics of Distillation

This surge in cloning attempts highlights a shifting economic reality in the generative AI sector. Training a frontier model requires immense computational resources and data, creating a high barrier to entry [2][4]. However, “distillation” allows smaller entities to piggyback on the investments of industry giants. This trend gained visibility in early 2025, when the Chinese startup DeepSeek introduced a low-cost model, prompting OpenAI to accuse the firm of potentially violating terms of service by distilling its technology [1][2][4]. Google’s recent findings suggest that this practice has now evolved into a standardized strategy for competitors seeking a rapid, low-cost market entry [2].

A Warning for the Wider Industry

John Hultquist, chief analyst at Google’s Threat Intelligence Group, warns that the attack on Gemini is likely a precursor to a broader industry trend. “We’re going to be the canary in the coal mine for far more incidents,” Hultquist stated, indicating that as more enterprises deploy custom LLMs trained on sensitive proprietary data—such as trading algorithms—they will become prime targets for similar extraction efforts [1][2]. While Google’s systems successfully identified and mitigated this specific campaign in real time, the inherent vulnerability of public-facing LLMs remains a critical challenge for the sector [1][2]. As the AI arms race intensifies, companies must now treat their model weights and reasoning logic as trade secrets requiring vigilant, active protection against this new wave of corporate espionage.

Sources

