
Google opens Gemini 2.0, its most powerful artificial intelligence model suite, to everyone

Jack Silva | SOPA Images | LightRocket | Getty Images

Google on Wednesday released Gemini 2.0 – its most powerful artificial intelligence model suite – to everyone.

In December, the company gave access to developers and trusted testers, and wrapped some features into Google products, but this is the general release, according to Google.

The 2.0 model lineup includes 2.0 Flash, described as the workhorse model “optimal for high-volume, high-frequency tasks at scale”; 2.0 Pro Experimental, which is largely focused on coding performance; and 2.0 Flash-Lite, billed as the company’s most cost-efficient model yet.

Gemini Flash costs 10 cents per million tokens for text, image and video inputs, while Flash-Lite, its more cost-effective version, costs 0.75 of a cent for the same.
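As a rough illustration of what that per-token pricing means in practice, the short Python sketch below estimates input cost for a given token count. The prices are the figures quoted in this article, used here as assumptions rather than an official rate card.

```python
# Illustrative cost estimate using the per-million-token input prices quoted above.
# These figures are assumptions taken from the article, not an official price list.
FLASH_PRICE_PER_MILLION = 0.10        # USD per 1M input tokens (Gemini 2.0 Flash)
FLASH_LITE_PRICE_PER_MILLION = 0.0075 # USD per 1M input tokens (Flash-Lite, 0.75 of a cent)

def estimate_input_cost(tokens: int, price_per_million: float) -> float:
    """Return the estimated input cost in USD for a given number of tokens."""
    return tokens / 1_000_000 * price_per_million

# Example: a 250,000-token input
print(estimate_input_cost(250_000, FLASH_PRICE_PER_MILLION))       # 0.025   -> 2.5 cents
print(estimate_input_cost(250_000, FLASH_LITE_PRICE_PER_MILLION))  # 0.001875 -> about 0.19 cents
```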

The successive releases are part of Google’s broader strategy of investing heavily in “artificial intelligence agents” as the AI arms race heats up among tech giants and startups alike.

Meta, Amazon, Microsoft, OpenAI and Anthropic are also moving toward agentic AI, or models that can complete complex multistep tasks on a user’s behalf, rather than requiring the user to walk them through every individual step.


“Over the past year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” Google wrote in a December blog post, adding that Gemini 2.0 has “new advances in multimodality – like native image and audio output – and native tool use,” and that the family of models “will enable us to build new AI agents that bring us closer to our vision of a universal assistant.”

Anthropic, the Amazon-backed AI startup founded by former OpenAI research executives, is a key rival in the race to develop artificial intelligence agents. In October, the startup said its AI agents were able to use computers like humans do to complete complex tasks. Its computer-use capability allows the technology to interpret what is on a computer screen, select buttons, enter text, navigate websites and execute tasks through any software and real-time internet browsing, the startup said.

Anthropic said the tool can “use computers in the same way we do” and can handle tasks with “dozens or even hundreds of steps.”

OpenAI recently released a similar tool, introducing a feature called Operator that will automate tasks such as planning vacations, filling out forms, making restaurant reservations and ordering groceries. The Microsoft-backed startup described Operator as “an agent that can go to the web to perform tasks for you.”

Earlier this week, OpenAI announced another tool called Deep Research, which allows an AI agent to compile complex research reports and analyze questions and topics of the user’s choosing. In December, Google launched a similar tool of the same name – Deep Research – which acts as a “research assistant, exploring complex topics and compiling reports on your behalf.”

CNBC first reported in December that Google would roll out several artificial intelligence features in early 2025.

“In history, you don’t always need to be first, but you have to execute well and really be the best in class as a product,” CEO Sundar Pichai said at a strategy meeting at the time. “I think that’s what 2025 is all about.”
