This is Google Gemini, Google’s most advanced “multimodal” AI

Google Gemini is the most advanced artificial intelligence model Google has created to date. Presented in December 2023 and developed by the DeepMind and Google Research teams, it is a multimodal AI model, meaning it can generalize, understand and operate across different types of information at the same time.

Until now, AI models were trained to work mainly with a single modality of content, such as describing images, and were not capable of more complex reasoning that mixes different types of information.

From the beginning, Google Gemini has been trained on different modalities and refined with multimodal data so that it can understand and reason natively over all types of input. According to Google, the model has managed to surpass human experts on some tasks and also its main rival, ChatGPT, the AI created by OpenAI and backed by its partner Microsoft.

How to use Google Gemini

The big news about Google’s Gemini is that it lets you work efficiently with text, images, audio, video and programming code. Google backs this up with benchmark results such as MMLU (massive multitask language understanding), on which it reports Gemini Ultra outperforming human experts.

This means prompts can be given to Gemini in any form, visual or auditory, using text, audio, images and more. From there, the AI generates its own content, which can also take any of these formats. It also understands and works with programming languages such as Java, Python, C++ and Go.
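To give an idea of what this looks like in practice, here is a minimal sketch using Google’s google-generativeai Python SDK. The API key, the image file name and the model identifiers (“gemini-pro”, “gemini-pro-vision”) are assumptions based on the SDK published around Gemini’s launch and may differ from your setup.

```python
# Minimal sketch: a text-only prompt and a text+image prompt with the google-generativeai SDK.
# The API key and the image file "chart.png" are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")

# Text prompt: ask Gemini Pro to generate code.
text_model = genai.GenerativeModel("gemini-pro")
reply = text_model.generate_content("Write a Python function that reverses a string.")
print(reply.text)

# Multimodal prompt: mix text with an image using the vision-capable variant.
vision_model = genai.GenerativeModel("gemini-pro-vision")
chart = Image.open("chart.png")
reply = vision_model.generate_content(["Summarize what this chart shows.", chart])
print(reply.text)
```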

One of the peculiarities of Google Gemini is that it is fully scalable: it can be used by end consumers, developers and large companies alike.

Gemini 1.0 can run on anything from a smartphone to a data center, because it comes in three versions:

Gemini Ultra – the largest and most capable model, for highly complex tasks.

Gemini Pro – the best model for scaling across a wide range of tasks.

Gemini Nano – the most efficient model, for running tasks directly on a mobile device.

Where you can use Google Gemini

End consumers can use Google Gemini as of December 13 in the following Google tools and devices:

-On Google Bard. Google’s AI tool already runs a tuned version of Gemini Pro. It is available in English (more languages will follow) in more than 170 countries, with improved capabilities such as summarization, brainstorming, writing and planning. In 2024 Google will integrate Gemini Ultra into Bard Advanced.

-On Google Pixel 8 Pro phones. Features such as Summarize in the Recorder app and Smart Reply in Gboard can now be used on these devices.

Google has indicated that in the coming months Gemini will be available in more products and services such as Google Search, Google Ads, Google Chrome and Duet AI.

For now, however, Google users in the European Union will not be able to use Google Gemini, as the company wants to ensure it complies with the EU’s strict regulations in this area.

Google Gemini for developers

Developers and companies, for their part, can access Gemini Pro via the Gemini API through Google AI Studio, a free web-based tool that helps developers and enterprise customers quickly prototype and launch applications with an API key.
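As a rough illustration of that workflow, the snippet below sketches a quick prototype built on an AI Studio API key and the google-generativeai Python package; the key value is a placeholder, and the chat-style call assumes the SDK as it shipped alongside Gemini’s launch.

```python
# Sketch of quick prototyping with an AI Studio API key (placeholder value below).
import google.generativeai as genai

genai.configure(api_key="AI_STUDIO_API_KEY")  # key obtained from Google AI Studio

model = genai.GenerativeModel("gemini-pro")

# Multi-turn chat: the SDK keeps the conversation history for you.
chat = model.start_chat(history=[])
print(chat.send_message("Suggest three names for a note-taking app.").text)
print(chat.send_message("Now write a short tagline for the first one.").text)
```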

They can also do so through Vertex AI if they need a fully managed AI platform with full control over their data, one that benefits from additional Google Cloud features for enterprise security, privacy, data governance and compliance.
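For comparison, this is roughly what the same call looks like through the Vertex AI SDK (google-cloud-aiplatform); the project ID and region are placeholders, and the preview module path reflects the SDK at the time of the announcement, so it may have moved since.

```python
# Sketch of calling Gemini Pro through Vertex AI instead of an API key.
# Assumes a Google Cloud project with Vertex AI enabled and application-default credentials.
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project-id", location="us-central1")

model = GenerativeModel("gemini-pro")
response = model.generate_content("Draft a short privacy notice for a mobile app.")
print(response.text)
```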

Android developers will also be able to build with Gemini Nano, the most efficient model for on-device tasks, through AICore, a new system capability available in Android 14, starting with Pixel 8 Pro devices.

Gemini Ultra, for its part, will become available to companies and developers in early 2024, although selected developers, partners and companies will get access sooner through an early access program.
