Google's Project Ellmann To Use Gemini AI To Tell Your Life Story
A team at Google demonstrated Ellmann Chat with the description: "Imagine opening ChatGPT but it already knows everything about your life."
A team at Google has pitched a proposal internally that would use artificial intelligence (AI) to give users a "bird's-eye" view of their lives.
The proposed technology would draw on mobile phone data such as photos and Google search history. The program is codenamed "Project Ellmann" after biographer and literary critic Richard David Ellmann.
According to a copy of a presentation viewed by CNBC, the Google team recommends using large language models (LLMs) like Gemini to absorb search results, uncover patterns in a user's photos, create a chatbot and "answer previously impossible questions".
The presentation also describes Project Ellmann as "Your Life Story Teller". However, Google has not said whether it plans to add these capabilities to any of its products, including Google Photos.
A blog post by Dave Perra, a software engineer for Google Photos, and Tracy Ferrell, the company's Devices & Services SRE lead, puts Google Photos at more than 1 billion users and 4 trillion photos and videos.
Project Ellmann: Everything we know about Google's new project so far
Project Ellmann is one of numerous ways Google plans to adopt AI technology to improve its products. In line with this, the American tech giant recently unveiled Gemini, which it calls its most capable and advanced AI model yet.
Google says Gemini outperformed OpenAI's GPT-4 on 30 of 32 benchmark tests. The company reportedly plans to make Gemini available to a wide range of customers through Google Cloud for use in their own apps.
One of Gemini's most notable features is that it is multimodal. In other words, the AI model can process and understand information beyond text, including video, audio and images.
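For a sense of what multimodal input looks like in practice, here is a minimal sketch using Google's google-generativeai Python SDK; the API key, image file and prompt are placeholders for illustration, and the code is unrelated to Project Ellmann itself.

```python
# Minimal sketch of a multimodal Gemini call via the
# google-generativeai Python SDK. Key, image path and prompt
# are hypothetical placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# "gemini-pro-vision" accepts mixed text-and-image input.
model = genai.GenerativeModel("gemini-pro-vision")

image = Image.open("birthday_party.jpg")  # hypothetical photo
response = model.generate_content(
    [image, "Describe the event captured in this photo."]
)
print(response.text)
```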
Documents viewed by CNBC suggest that a product manager for Google Photos presented Project Ellmann alongside Gemini teams at a recent internal summit. The product manager proposed that the project take a bird's-eye view of a user's life story.
"We can't answer tough questions or tell good stories without a bird's-eye view of your life," the presentation mentioned.
"We trawl through your photos, looking at their tags and locations to identify a meaningful moment. "When we step back and understand your life in its entirety, your overarching story becomes clear," a presentation slide reads.
The presentation said large language models could infer a slew of memorable moments, such as the birth of a user's child. One slide claims: "This LLM can use knowledge from higher in the tree to infer that this is Jack's birth, and that he's James and Gemma's first and only child."
"One of the reasons that an LLM is so powerful for this bird's-eye approach, is that it's able to take unstructured context from all different elevations across this tree, and use it to improve how it understands other regions of the tree," a separate slide reads.
Google is also reportedly testing an AI assistant that can offer life advice.