September 27, 2022

Should OpenAI’s Decision Be Accepted At Face Value

To continue the habit of covering interesting security- and technology-related topics, let’s take a minute to talk about using AI to create images. Not so long ago, I shared my thoughts on OpenAI’s decision to allow users to upload and edit faces.

For those unaware, OpenAI is a research and deployment company with a mission to ‘ensure that artificial general intelligence benefits all of humanity’. One of their premier and most popular services is called DALL·E 2, and its purpose is to create realistic images and art from a description in natural language.
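If you have never tried it, the whole interaction boils down to describing a picture in plain language and getting an image back. Here is a rough sketch of what that looks like with OpenAI’s Python package; the exact method names and parameters have changed across library versions, so treat it as illustrative rather than official documentation:

```python
# Illustrative sketch of text-to-image generation with the openai Python
# package (older, pre-1.0 interface); the prompt, size, and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

response = openai.Image.create(
    prompt="an astronaut riding a horse in a photorealistic style",
    n=1,               # number of images to generate
    size="1024x1024",  # output resolution
)
print(response["data"][0]["url"])  # URL of the generated image
```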

It doesn’t take a lot of brainpower to figure out why a commercial company was previously unwilling to hand users such a clear opportunity for AI misuse, which mostly comes down to the creation of pornographic material and deepfakes.

According to the company, it has improved its safety system enough to make this a non-issue. However, given the company’s current approach to access, that is difficult to enforce. As of now, moderation amounts to a naive word filter, censorship based on classification and context, and a verbal prohibition on uploading other people’s faces without their consent.
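To make that concrete, here is a toy sketch of what a naive word filter looks like. This is purely my own illustration with a made-up blocklist, not OpenAI’s actual moderation code:

```python
# Hypothetical example of a naive word filter; the blocklist and logic are
# illustrative assumptions, not any real moderation policy.

BLOCKED_TERMS = {"nude", "deepfake"}  # placeholder terms

def is_prompt_allowed(prompt: str) -> bool:
    """Reject a prompt if it contains any blocklisted word."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(is_prompt_allowed("a nude portrait"))        # False: caught by the list
print(is_prompt_allowed("an unclothed portrait"))  # True: trivially slips through
```

A filter like this only catches the exact words on its list, which is exactly why it cannot carry the moderation load on its own.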

Between trying to give people more freedom and attempting to remain a smut-free service, OpenAI is walking a tightrope. Its hand is somewhat forced by cutthroat AI competition, where similar services do not impose such severe limitations. And yet it is still a controversial decision.

To soothe the online outrage, OpenAI presented the situation from a different perspective:

“Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he’d been using DALL-E to help his patients visualize results.”

You can decide for yourself whether the company is dressing up its tech to look more innocent, or whether it is genuinely trying to explain that AI is only as harmful as the person operating it. Censorship on platforms meant to be creative has always been problematic, so it will be interesting to see how this plays out for DALL-E in the future. It can go either way, but…

After years of doing what it was built for, it'll discover what it is meant for.
