Machine learning and other gibberish
See also: https://sharing.leima.is
Archives: https://datumorphism.leima.is/amneumarkt/
#neuroscience
Definitely weird. The authors used a DNN to capture the firing behavior of cortical neurons.
- A DNN with a single hidden layer (can you even call it a *deep* NN in this case?) can capture the neuronal activity with AMPA synapses alone, without NMDA.
- With NMDA, the neuron requires more than one hidden layer.
This paper *stops* here.
WTH is this?
Let's go back to the foundations of statistical learning. What the authors are looking for is a separation of the "stimulation" space. This "stimulation" space is basically a very simple time-series (Poisson) space. We just need to map inputs back to the same space with different feature values. Since the feature space is so small, we can absolutely fit everything by increasing the expressive power of the DNN.
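To make the expressive-power point concrete, here is a minimal sketch (a toy setup of my own, not the paper's data or architecture): fit MLPs of growing capacity to a made-up nonlinear target defined on Poisson spike counts. The target function, rates, and layer sizes are all arbitrary choices for illustration.

```python
# Toy illustration: on a small feature space, cranking up the
# expressive power of an MLP is enough to fit (almost) anything.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical "stimulation" space: Poisson spike counts on 20 input synapses.
X = rng.poisson(lam=2.0, size=(5000, 20)).astype(float)

# A made-up nonlinear target standing in for the neuron's response.
y = np.tanh(X[:, :10].sum(axis=1) - X[:, 10:].sum(axis=1)) \
    + 0.1 * rng.normal(size=5000)

# Widening/deepening the network drives the training fit toward perfect.
for hidden in [(8,), (64,), (64, 64)]:
    mlp = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=0)
    mlp.fit(X, y)
    print(hidden, f"train R^2 = {mlp.score(X, y):.3f}")
```

A near-perfect training R² here says nothing about the biology; it just reflects capacity, which is exactly the worry about the paper's multi-layer claim.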
The thing is, we already know that NMDA-based synapses require more expressive power, and we already have good, very interpretable mathematical models for this... This research provides neither better predictability nor better interpretability. Well done...
Maybe you have a different opinion; prove me wrong.
https://www.biorxiv.org/content/10.1101/613141v2
Source:
https://science.sciencemag.org/content/370/6523/1410.full
A gatekeeper for learning
> Upon learning a hippocampus-dependent associative task, perirhinal inputs might act as a gate to modulate the excitability of apical dendrites and the impact of the feedback stream on layer 5 pyramidal neurons of the primary somatosensory cortex.
😲 In some sense, perirhinal inputs are like config files for learning.