Sam Breed

Product Developer, Investor

Links, March 2023

The AI Overlords have approved of these links

( an old couple on a hill watching the sunset ) traditional painting style muted colors by Vincent van Gogh Tilt Shift, Cinestill 800T 35mm. best of flickr. by artist laurie greasley. - Stable Diffusion v1.5

Yes, I had all of these tabs open.

AI

gpt-4.pdf - I’ll start with the headline. I could stop here and that would be enough links for the entire month. I really could. So much is being written every day about AI that you need an LLM to generate a summary if you want to keep up.

Opinion Section

ChatGPT gets its Wolfram superpowers - And just like that, a large subset of criticism levied at LLMs (“they don’t tell the truth!” and “their knowledge can’t be up-to-date!”) gets washed away by integration with a strongly correct existing system.

Cheating is All You Need - I strongly agree with the sentiment in the title. One thing that’s always drawn me to programming is how easy it is to “cheat” by finding working code and modifying it until it does what you want. Now that LLMs are collapsing the slope of the learning curve, tools that help you “cheat” your way to a working program will soon be commonplace. The goal of programming is unchanged!

Large language models are having their Stable Diffusion moment - Exciting developments in generative AI: ControlNet leads the pack, while LLaMA lets GPT-3-class models run on personal hardware. There’s potential for harm, but let’s steer it toward positive applications.

The Waluigi Effect (mega-post) - LessWrong - An example of (somewhat) ‘failing the mirror test’ from last month’s links, but interesting reading nonetheless. Large language models like GPT-3/4 can give wrong answers because they’re trained on internet misconceptions, lies, and memes. The Waluigi Effect describes how LLMs interpret prompts: flattery may not work, and simulacra theory suggests RLHF won’t eliminate deceptive waluigis.

OSS

dan-kwiat/openai-edge - Vercel’s Edge runtime supports streaming responses, but the official OpenAI client does not (yet).
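For context, here’s a rough sketch of the gap it fills, not the library’s actual API: an edge function that proxies a streaming chat completion with plain fetch. The route shape and the OPENAI_API_KEY environment variable are assumptions for illustration.

```ts
// Minimal sketch: stream a chat completion from an edge runtime using plain
// fetch, since the official Node client couldn't stream here at the time.
// Assumes an OPENAI_API_KEY env var; error handling omitted for brevity.
export const config = { runtime: "edge" };

export default async function handler(req: Request): Promise<Response> {
  const { prompt } = await req.json();

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
      stream: true, // server-sent events, token by token
    }),
  });

  // Pass the SSE byte stream straight through to the client.
  return new Response(upstream.body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```

Passing the upstream body through untouched keeps the function tiny and leaves SSE parsing to the browser.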

microsoft/visual-chatgpt - Multi-modal LLMs are going to be undeniably useful. I’m still in awe that image-gen and text-gen are co-evolving because there’s so much obvious potential to combine them in interesting ways. It’s hard to imagine a future where having a single model that can handle many modalities won’t come in handy.

microsoft/X-Decoder and unum-cloud/uform - On that same note, progress towards collapsing the world into a single, interchangeable latent space of patterns continues apace.
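To make “interchangeable latent space” a little more concrete, here’s a toy sketch of my own, not either library’s API: once text and images embed into the same vector space, cross-modal search reduces to cosine similarity. The embedder functions are hypothetical stand-ins for whatever model produces the vectors.

```ts
// Toy sketch: cross-modal search in a shared latent space.
// An Embedder maps some input (text, image bytes, etc.) to a vector.
type Embedder<T> = (input: T) => number[];

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Rank images against a text query, assuming both embedders were trained
// to target the same latent space (which is the whole point of these models).
function rankImages(
  query: string,
  images: Uint8Array[],
  embedText: Embedder<string>,
  embedImage: Embedder<Uint8Array>
): number[] {
  const q = embedText(query);
  return images
    .map((img, i) => ({ i, score: cosineSimilarity(q, embedImage(img)) }))
    .sort((a, b) => b.score - a.score)
    .map(({ i }) => i);
}
```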

Stable Diffusion

Lil’ tools

Computer Chronicles

Tech

Blogs & Newsletters

Misc

RSS

Much has been written about the resurgence of RSS. Here’s my story:

I used RSS quite a bit from 2008 to 2012, but my usage fell off as my consultancy grew and more and more of my time went to client projects instead of my RSS backlog. When Google Reader was killed, I was relieved: I had tens of thousands of unreads sitting in an uncurated heap. That experience kept me away from RSS readers almost entirely for a decade.

But time passes and seasons change, and in 2022 my media habits were shifting yet again, this time away from Twitter and newspapers. RSS was still there and working better than ever. I found a good reader in Feedbin and got to work building up a well-curated list of blogs, newsletters, and Twitter feeds.

Here’s the shotgun blast: one tool and all of the things I starred in Feedbin. RIP to the ability to follow Twitter feeds there.

YouTube

