A quick experiment with generative AI

WordPress offered to generate an image for my last blog post. Here is the prompt it suggested:

“Generate a high-resolution, highly detailed image capturing the essence of “20 Years of PerfectTablePlan Software.” The main subjects should be two screenshots side-by-side: one showcasing PerfectTablePlan version 1, reflecting a vintage desktop interface with a Windows aesthetic from 2005, and the other displaying version 7 with a sleek, modern design. The lighting should be bright and inviting, emphasizing the contrast between the older and newer software. The style should blend nostalgia with innovation, showcasing the journey of the product over two decades. Ensure the image has sharp focus and intricate details to attract the reader’s attention.”

And here are the 5 images it came up with from that prompt:

They are simultaneously very impressive and hilariously awful. Quite apart from the weird text (“sex 20”?), none of the screenshots look even slightly like PerfectTablePlan. I think I’ll pass!

The AI bullshit singularity

I’m sure we are all familiar with the idea of a technological singularity. Humans create an AI that is smart enough to create an even smarter successor. That successor then creates an even smarter successor. The process accelerates through a positive feedback loop, until we reach a technological singularity, where puny human intelligence is quickly left far behind.

Some people seem to think that Large Language Models could be the start of this process. We train the LLMs on vast corpuses of human knowledge. The LLMs then help humans create new knowledge, which is then used to train the next generation of LLMs. Singularity, here we come!

But I don’t think so. Human nature being what it is, LLMs are inevitably going to be used to churn out vast amounts of low quality ‘content’ for SEO and other commercial purposes. LLM nature being what it is, a lot of this content is going to be hallucinated. In other words, bullshit. Given that LLMs can generate content vastly faster than humans can, we could quickly end up with an Internet that is mostly bullshit. Which will then be used to train the next generation of LLMs. We will eventually reach a bullshit singularity, where it is almost impossible to work out whether anything on the Internet is true. Enshittification at scale. Well done us.