#ml

It’s a lengthy article, but a well-written one.

A few comments:

- The author wrote a paper on “The Next Decade in AI”: https://arxiv.org/abs/2002.06177
- Make things work in their own domain. If we’re going to come up with a “theory of everything” for computing or intelligence, we will hit a “mesoscopic” wall, where the bottom-up theories and the top-down approaches meet but we can’t really connect them. In the case of intelligence, the wall is set by complexity (MDL, maybe?). You can make symbols work at high complexities, but not always. Something similar happens with neural networks. (See the toy sketch after this list.)
- The neuro-symbolic approach sounds good, but it’s almost like patching up a bike to serve as the wheels of a train.
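
To make the MDL point a bit more concrete, here’s a minimal toy sketch (my own illustration, not from the article): a two-part code that describes a bit sequence as a simple symbolic rule (“alternate 0,1,0,1,…”) plus a list of exceptions, compared against just storing the raw bits. The alternating rule, the fixed 8-bit rule cost, and the 5% noise rate are arbitrary choices for illustration. On regular or mildly noisy data the symbolic rule compresses well; on random, high-complexity data it buys nothing, which is roughly the “wall” I mean above.

```python
# Toy MDL comparison (illustrative sketch, not from the article):
# two-part code (rule + exceptions) vs. literal encoding of the bits.
import math
import random


def raw_code_length(bits):
    # Literal encoding: one bit per symbol.
    return len(bits)


def rule_code_length(bits):
    # Two-part code: a small constant for stating the rule itself,
    # plus the cost of naming each position that violates the rule.
    rule_cost = 8  # assumed fixed cost of describing "alternate 0,1,0,1,..."
    exceptions = [i for i, b in enumerate(bits) if b != i % 2]
    per_exception = math.ceil(math.log2(len(bits)))  # bits to name a position
    return rule_cost + len(exceptions) * per_exception


if __name__ == "__main__":
    n = 1000
    regular = [i % 2 for i in range(n)]                      # low complexity
    noisy = [b ^ (random.random() < 0.05) for b in regular]  # mostly regular
    random_bits = [random.randint(0, 1) for _ in range(n)]   # high complexity

    for name, bits in [("regular", regular), ("noisy", noisy),
                       ("random", random_bits)]:
        print(f"{name:>8}: raw={raw_code_length(bits):5d}  "
              f"rule={rule_code_length(bits):5d}")
```

Running it, the rule-based code is far shorter than the raw code for the regular and noisy sequences, and far longer for the random one: the symbolic description stops paying for itself once the data’s complexity is high enough.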


https://nautil.us/deep-learning-is-hitting-a-wall-14467/ Deep Learning Is Hitting a Wall