#ai

AI Frontiers: AI for health and the future of research with Peter Lee

https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5ibHVicnJ5LmNvbS9mZWVkcy9taWNyb3NvZnRyZXNlYXJjaC54bWw/episode/aHR0cHM6Ly9ibHVicnJ5LmNvbS9taWNyb3NvZnRyZXNlYXJjaC85NTE3NTgwMC9haS1mcm9udGllcnMtYWktZm9yLWhlYWx0aC1hbmQtdGhlLWZ1dHVyZS1vZi1yZXNlYXJjaC13aXRoLXBldGVyLWxlZS8?ep=14

----

A very cool discussion on the topic of large language models.

They mentioned early-stage testing of Davinci from OpenAI. The model was able to reason through AP Biology questions, and much of its reasoning surprised them. When Ashley asked the person from OpenAI why Davinci reasons like that, the reply was that they don't know.

Not everyone expected that kind of reasoning from an LLM. In hindsight, "Is it just a language model?" is a very good question. Nowadays with GPT models, it hardly seems like a question anymore because it is becoming a fact. What is in the training texts, and what is language? Karpathy even made a joke about this:
> The hottest new programming language is English
https://twitter.com/karpathy/status/1617979122625712128?lang=en