Week 25 Digest - Pixel Art Edition
June 26, 2022

How attention works in transformers
(in case you never realized) pic.twitter.com/XvNtUwTOFP
— François Fleuret (@francoisfleuret) June 22, 2022
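The gist of the diagram, sketched in numpy as a minimal single-head attention (no masking, batching, or learned projections):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)  # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights                   # weighted mix of the values, plus the weights

# Toy shapes: 3 queries, 5 keys/values, head dim 4, value dim 2.
rng = np.random.default_rng(0)
out, w = attention(rng.normal(size=(3, 4)),
                   rng.normal(size=(5, 4)),
                   rng.normal(size=(5, 2)))
```

Each row of `w` is a probability distribution over the keys, so every output row is a convex combination of the value rows.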
pixray: Generate AI pixel art
Easy enough to get running locally if you’re bored of Dalle-mini already. I tried a few Cape Town-related prompts and it did okay.

Meta releases new large language model
[2205.01068] OPT: Open Pre-trained Transformer Language Models
GitHub - facebookresearch/metaseq: Repo for external large-scale work
Also has the training lab notebooks/logs, which are always cool.
The de-detailing of the world
The Danger of Minimalist Design
(& the death of detail)
A short thread... pic.twitter.com/byrgyZzl6O
— The Cultural Tutor (@culturaltutor) June 18, 2022
GPT writes Trump tweets about Greek mythological characters
God this is such a good prompt pic.twitter.com/lhVpNtU4BG
— 🅁🅈🄰🄽 (@BoyNamedShit) June 20, 2022
Relationships between distributions
this was my multiverse of madness pic.twitter.com/d7bDFWC0G5
— 🔥🔥Kareem Carr 🔥🔥 (@kareem_carr) June 20, 2022
Use ConvNeXt, not ImageNet classics, for fine-tuning
Jeremy Howard investigates some modern models for finetuning: The best vision models for fine-tuning | Kaggle
Paper: [2201.03545] A ConvNet for the 2020s
How air filters actually work
Contra Wirecutter on the IKEA air purifier. Turns out they block really small and really large particles equally well, with a not-so-great gap in between.
Getting to Gnome Mode - by Venkatesh Rao
Use Cumulative Logit models for Likert scale data
If you’ve ever wondered how to model survey responses (the classic 1–5 scale, for example), this is the tool.
Available as ordinal regression in statsmodels.