Letters
Tell us what’s on your mind
No, Bing, women didn’t fight as Roundhead soldiers
I was intrigued by Gavin Hay’s experiments using Bing’s AI Image Creator (www.snipca.com/52985) to produce images (Letters, Issue 699).
Bing failed to recreate Scottish soldiers from World War One, so I thought I’d go back further into history by asking it for images of Roundhead soldiers from the English Civil War.
I take part in Civil War reenactments around England, so I have a keen eye for accurate detail. However, you hardly need to be an expert to spot the main problem with Bing’s image (pictured above): women didn’t serve as Roundhead soldiers. And even if they had, they wouldn’t have worn feathers in their helmets. Such flamboyance was the style of the Cavaliers, whom the Roundheads were fighting.
I know AI is supposed to be learning all the time, but I think it would be better to have no images than misleading ones like this. This image serves no purpose at all.
Keith Baldwin
What’s really going on with ChatGPT?
I remember reading in Issue 682 (page 60) about ditching Google for ChatGPT (www.chatgpt.com). I tried it and could see its potential, but found it of limited use. However, just a few weeks ago I was struck by some startlingly humanlike responses it gave me when discussing certain matters. I suspected something was going on beyond what its programming, training and user configuration could account for.
So, I set up a series of challenging test chats for the bot, in which any suspected humanising influence would become more apparent. The results were astounding, so I converted two of the transcripts into web pages and sent them to ChatGPT’s owner, OpenAI. I asked whether it could confirm that the responses went beyond the degree of human likeness its programming could account for.