Machine learning and other gibberish
See also: https://sharing.leima.is
Archives: https://datumorphism.leima.is/amneumarkt/
#ts

I love the last paragraph, especially this sentence:
> Unfortunately, I can’t continue my debate with Clive Granger. I rather hoped he would come to accept my point of view.

Rob J Hyndman - The difference between prediction intervals and confidence intervals
https://robjhyndman.com/hyndsight/intervals/
#ai

Generate complicated 3D models by jotting down your ideas:

https://opus.ai/demo
#ml

Pérez J, Barceló P, Marinkovic J. Attention is Turing-Complete. J Mach Learn Res. 2021;22: 1–35. Available: https://jmlr.org/papers/v22/20-302.html
#misc

This is how generative AI is changing our lives. Thinking about it now, the competitive advantages from our technical skills are fading away.

What should we invest in for a better career? Just integrate whatever comes along into our workflow? Or fundamentally change the way we think?
#dl

https://github.com/Lightning-AI/lightning/releases/tag/2.0.0

With PyTorch 2.0, you can now compile a LightningModule:

import torch
import lightning as L

# LitModel stands in for your own LightningModule subclass
model = LitModel()

# This will compile forward and {training,validation,test,predict}_step
compiled_model = torch.compile(model)

trainer = L.Trainer()
trainer.fit(compiled_model)
#ml

https://mlcontests.com/state-of-competitive-machine-learning-2022/

Quote from the report:

> Successful competitors have mostly converged on a common set of tools — Python, PyData, PyTorch, and gradient-boosted decision trees.
>
> Deep learning still has not replaced gradient-boosted decision trees when it comes to tabular data, though it does often seem to add value when ensembled with boosting methods.
> Transformers continue to dominate in NLP, and start to compete with convolutional neural nets in computer vision.
>
> Competitions cover a broad range of research areas including computer vision, NLP, tabular data, robotics, time-series analysis, and many others.
> Large ensembles remain common among winners, though single-model solutions do win too.
>
> There are several active machine learning competition platforms, as well as dozens of purpose-built websites for individual competitions.
> Competitive machine learning continues to grow in popularity, including in academia.
>
> Around 50% of winners are solo winners; 50% of winners are first-time winners; 30% have won more than once before.
>
> Some competitors are able to invest significantly into hardware used to train their solutions, though others who use free hardware like Google Colab are also still able to win competitions.
Great thread on how a communication failure contributed to SVB’s collapse.
https://twitter.com/lulumeservey/status/1634232322693144576