Jeff Dean On Combining Google Search With LLM In-Context Learning

February 18, 2025 · 4 Mins Read

Dwarkesh Patel interviewed Jeff Dean and Noam Shazeer of Google, and one topic he asked about was what it would be like to merge or combine Google Search with in-context learning. It resulted in a fascinating answer from Jeff Dean.

Before you watch, here are a couple of definitions you might need:

In-context learning, also called few-shot learning or prompt engineering, is a technique where an LLM is given examples or instructions within the input prompt to guide its response. This method leverages the model's ability to understand and adapt to patterns provided in the immediate context of the query.
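
To make that concrete, below is a minimal Python sketch of a few-shot prompt. The sentiment-classification task and the call_llm placeholder are illustrative assumptions, not part of any particular API; the point is that the "training" signal lives entirely inside the prompt rather than in the model's weights.

```python
# Minimal sketch of in-context (few-shot) learning: labeled examples are packed
# into the prompt so the model can infer the pattern. `call_llm` is a placeholder
# for whatever LLM endpoint you use; the task below is purely illustrative.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt that teaches the task through examples, not weight updates."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for review, label in examples:
        lines += [f"Review: {review}", f"Sentiment: {label}", ""]
    # The unanswered query at the end is what the model is asked to complete.
    lines += [f"Review: {query}", "Sentiment:"]
    return "\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "Positive"),
    ("It stopped charging after a week.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "Setup took two minutes and it just works.")
# response = call_llm(prompt)  # hypothetical: send the prompt to any LLM API
print(prompt)
```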

The context window (or "context length") of a large language model (LLM) is the amount of text, in tokens, that the model can consider or "remember" at any one time. A larger context window lets an AI model process longer inputs and incorporate a greater amount of information into each output.
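
As a rough illustration of what those sizes mean, the sketch below converts a token budget into pages of text using common rule-of-thumb ratios (about 0.75 words per token and 500 words per dense page). Both constants are assumptions; real tokenizers and documents vary.

```python
# Back-of-envelope: how much text fits in a context window of a given size.
# Both constants are rough assumptions; real tokenizers and page densities vary.

WORDS_PER_TOKEN = 0.75   # common rule of thumb for English text
WORDS_PER_PAGE = 500     # dense, single-spaced page

def pages_that_fit(context_tokens: int) -> float:
    """Approximate number of pages of plain text a context window can hold."""
    return context_tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

for window in (8_000, 128_000, 1_000_000, 2_000_000):
    print(f"{window:>9,} tokens ~ {pages_that_fit(window):,.0f} pages of text")
```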

This question and answer starts at the 32-minute mark in this video:

Here is the transcript if you would rather read it:

Question:

I know one thing you're working on right now is longer context. If you think of Google Search, it's got the entire index of the web in its context, but it's a very shallow search. And then obviously language models have limited context right now, but they can really think. It's like dark magic, in-context learning. It can really think about what it's seeing. How do you think about what it would be like to merge something like Google Search and something like in-context learning?

Yeah, I'll take a first stab at it because I've thought about this for a bit. One of the things you see with these models is that they're quite good, but they do hallucinate and have factuality issues sometimes. Part of that is you've trained on, say, tens of trillions of tokens, and you've stirred all that together in your tens or hundreds of billions of parameters. But it's all a bit squishy because you've churned all these tokens together. The model has a reasonably clear view of that data, but it sometimes gets confused and will give the wrong date for something. Whereas information in the context window, in the input of the model, is really sharp and clear, because we have this very nice attention mechanism in transformers. The model can pay attention to things, and it knows the exact text or the exact frames of the video or audio or whatever it is processing. Right now, we have models that can deal with millions of tokens of context, which is quite a lot. It's hundreds of pages of PDF, or 50 research papers, or hours of video, or tens of hours of audio, or some combination of those things, which is pretty cool. But it would be very nice if the model could attend to trillions of tokens.

Could it attend to the entire internet and find the right stuff for you? Could it attend to all of your personal information for you? I would love a model that has access to all my emails, all my documents, and all my photos. When I ask it to do something, it can sort of make use of that, with my permission, to help solve whatever it is I'm asking it to do.

But that is going to be a big computational challenge, because the naive attention algorithm is quadratic. You can barely make it work on a fair bit of hardware for millions of tokens, but there is no hope of just naively making that go to trillions of tokens. So we need a whole bunch of interesting algorithmic approximations to what you would really want: a way for the model to attend conceptually to lots and lots more tokens, trillions of tokens. Maybe we can put all of the Google code base in context for every Google developer, all the world's source code in context for any open-source developer. That would be amazing. It would be incredible.
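
To see why the "naive attention algorithm is quadratic," here is a minimal NumPy sketch of standard scaled dot-product attention (an illustration, not Google's implementation): every token scores against every other token, so the score matrix and the work grow with the square of the sequence length.

```python
import numpy as np

def naive_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: for n tokens of width d, the score matrix
    is n x n, so memory and compute grow quadratically with sequence length."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n, n) -- the quadratic part
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (n, d)

n, d = 4_096, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = naive_attention(Q, K, V)
# The n x n score matrix alone has ~16.8M entries here, ~10^12 at a million
# tokens, and ~10^24 at a trillion -- hence the need for algorithmic approximations.
print(out.shape, "score-matrix entries:", n * n)
```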

Here is where I found this:

Related: pic.twitter.com/N8fECkK36M

— DEJAN (@dejanseo) February 15, 2025

I'm enamored of combining many approaches. Here are some that are interesting and public:

Various dense retrieval methods

TreeFormer (https://t.co/aplh2tS9DM)

High-Recall Approximate Top-K Estimation (https://t.co/rVcYm5vltU)

Various forms of KV cache quantization and…

— Jeff Dean (@JeffDean) February 15, 2025

Forum discussion at X.




