AI Images Are Getting Too Real: OpenAI Develops Detection Tool

by Voinea Laurentiu

We all love to spot those weird pictures obviously made by AI – messed-up hands, wonky backgrounds, that kind of thing. But guess what? The tech is getting scary good. So good that soon we might not be able to tell what's real and what's fake.

OpenAI, the company behind the DALL-E image generator, is trying to stay ahead of the problem. They've made a tool to spot pictures created by their own software, and it works amazingly well... as long as the image is a DALL-E picture that hasn't been changed at all.

The Problem: Real Life Isn't Perfect. Here's the thing: nobody posts AI images without messing with them first. They get cropped, edited, and plastered with angry text – and the tool struggles with those changes. Sadly, that's exactly how most fake-news images designed to make people mad are going to look.

It's Already Happening. Bad actors are already using AI to make fake election ads and images. With elections coming up, this is a big problem, and it's only going to get worse.

At least OpenAI is being honest about what its tool can and can't do. They're even sharing it with other researchers to help improve it, and they're working to educate people about AI. It's a race against time, though!