Machine learning and other gibberish
See also: https://sharing.leima.is
Archives: https://datumorphism.leima.is/amneumarkt/
#ml

Fotios Petropoulos initiated the forecasting encyclopedia project, and the group recently published this paper.

Petropoulos, Fotios, Daniele Apiletti, Vassilios Assimakopoulos, Mohamed Zied Babai, Devon K. Barrow, Souhaib Ben Taieb, Christoph Bergmeir, et al. 2022. “Forecasting: Theory and Practice.” International Journal of Forecasting 38 (3): 705–871.

https://www.sciencedirect.com/science/article/pii/S0169207021001758

Also available here: https://forecasting-encyclopedia.com/

The paper covers many recent advances in forecasting, including deep learning models. Some important topics are still missing, but I’m sure they will be covered in future releases.
#ml


I was playing with dalle-mini ( https://github.com/borisdayma/dalle-mini ).

So... in the eyes of Dalle-mini,

1. science == chemistry (? I guess),
2. scientists are men.

Tried several times, same conclusions.

It is so hard to fight against bias in ML models.


---

Update: OpenAI is fixing this.

https://openai.com/blog/reducing-bias-and-improving-safety-in-dall-e-2/
#ml


Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, et al. Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM; 2019. doi:10.1145/3287560.3287596

https://arxiv.org/abs/1810.03993
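
The paper proposes a fixed set of sections that every released model should ship with. As a rough illustration only (the field values below are made-up placeholders, not examples from the paper), a model card can be kept as structured data next to the model artifact:

```python
# Rough sketch of the section structure proposed by Mitchell et al. (2019).
# All values are placeholders for illustration.
model_card = {
    "model_details": {"developers": "example team", "version": "0.1", "type": "gradient-boosted trees"},
    "intended_use": "rank support tickets by urgency; not for automated decisions about people",
    "factors": ["language", "ticket length"],
    "metrics": ["AUC", "recall at fixed precision"],
    "evaluation_data": "held-out tickets from 2021",
    "training_data": "tickets from 2018-2020",
    "quantitative_analyses": "metrics reported per language group",
    "ethical_considerations": "ticket text may contain personal data",
    "caveats_and_recommendations": "re-evaluate before deploying to a new market",
}
```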
#data

If you are building a simple dashboard with Python, Streamlit is a great tool to get started with. One of its problems in the past was creating multipage apps.

To solve this problem, I created a template for multipage apps a year ago.
https://github.com/emptymalei/streamlit-multipage-template

But today, Streamlit officially introduced multipage support, and it looks great. I haven’t built any dashboards for a while, but Streamlit is still my go-to solution for dashboards.
https://blog.streamlit.io/introducing-multipage-apps/
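
In case it helps, here is a minimal sketch of how the official mechanism works (file names are placeholders): every .py file placed in a pages/ directory next to the entry script becomes its own page in the sidebar.

```python
# Streamlit's official multipage layout: any .py file inside a `pages/`
# folder next to the entry script gets its own sidebar entry.
#
#   home.py          <- entry point: `streamlit run home.py`
#   pages/
#       1_Metrics.py <- numeric prefix controls the sidebar order

# --- home.py --------------------------------------------------------------
import streamlit as st

st.title("My dashboard")
st.write("Pick a page from the sidebar; navigation is generated automatically.")

# --- pages/1_Metrics.py ---------------------------------------------------
import pandas as pd
import streamlit as st

st.title("Metrics")
df = pd.DataFrame({"day": [1, 2, 3], "visits": [10, 30, 20]})
st.line_chart(df.set_index("day"))
```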
#ml

I had heard about DeepETA before, but I never thought it was a transformer.

According to this blog post by Uber, they are using an encoder-decoder architecture with linear attention.

The post also explains how they made the transformer fast.

DeepETA: How Uber Predicts Arrival Times Using Deep Learning
https://eng.uber.com/deepeta-how-uber-predicts-arrival-times/
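
Uber has not published the model code, so purely for intuition, here is a minimal NumPy sketch of generic linearized attention in the spirit of Katharopoulos et al. (2020): replacing the softmax with a positive feature map lets the key-value sum be computed once, so attention costs O(n·d²) instead of O(n²·d). This is an illustration of the idea, not Uber's implementation.

```python
import numpy as np

def elu_feature_map(x):
    # elu(x) + 1 keeps the features positive, as in Katharopoulos et al. (2020)
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0.0)))

def linear_attention(Q, K, V):
    """Non-causal linear attention: O(n * d^2) instead of O(n^2 * d).

    Q, K: (n, d) query/key matrices; V: (n, d_v) value matrix.
    """
    Qf = elu_feature_map(Q)        # (n, d)
    Kf = elu_feature_map(K)        # (n, d)
    KV = Kf.T @ V                  # (d, d_v), key-value sum over all positions
    Z = Qf @ Kf.sum(axis=0)        # (n,), normalization term
    return (Qf @ KV) / Z[:, None]  # (n, d_v)

# Tiny usage example with random data
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(8, 4)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (8, 4)
```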