“Is this going to be a dick pic?” is a question so many folks, especially women, ask with a sense of dread whenever they see a DM from a stranger. Well, there’s finally an app for that anxiety, at least for your Twitter DMs.

The inspiration for SafeDM, a content-blocking filter currently in development, came when Kelsey Bressler found herself dealing with unwanted harassment of the digital, sexual kind.

“[It was] a random person [who] wanted to get my attention,” Bressler says. “They asked why don’t you ever talk to me, and then sent me… that. I think I know why I don’t talk to that person.”

Bressler’s experience with revenge porn has also spurred her to fight back against this kind of harassment. With a team of three others, she is working on a content-filtering AI, teaching it to recognize dick pics, notify the recipient, and delete the images from their inbox.

Research shows that 53 percent of women ages 18 to 29, and roughly 78 percent of millennial women, have received unsolicited explicit images, many of them dick pics.

“If a woman hasn’t experienced cyber flashing, she knows someone who has. It’s something so common that we really haven’t had a solution other than telling women to just close their DMs, which isn’t the correct response in my opinion,” Bressler says. “This is something that most women are really excited about.”

Well, thanks to Bressler’s viral tweet in September, many solicited photos have been sent to the test account ShowYoDiq. (Some photos are also, unfortunately, sent to Bressler’s personal Twitter account, both mistakenly and not so mistakenly.) Dicks covered in glitter. Dicks with socks on them. Penises in cages. All of these photos have been run through testing to help the AI recognize what makes a dick pic.

If an incoming image is a penis, SafeDM will automatically delete it, reply to the sender that the content was inappropriate, and let them know their strike count.

“We don’t scold them in the message. We do say that was inappropriate content. We are working on all the verbiage, as well as our strike count. [After] a certain amount of strikes, the person is automatically blocked from contacting you again,” Bressler says.
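For readers curious what that flow looks like in practice, here is a minimal sketch of the delete-notify-strike logic described above. Everything in it (the classifier, the DM API calls, the strike limit of 3) is a hypothetical placeholder for illustration, not SafeDM’s actual code or Twitter’s real API.

```python
from collections import defaultdict

STRIKE_LIMIT = 3  # assumed threshold; the team is still settling on the real number
strikes = defaultdict(int)  # sender ID -> number of strikes so far


def handle_incoming_dm(dm, classifier, dm_api):
    """Filter one incoming DM, following the flow described in the article."""
    # Ordinary messages, or images the classifier deems safe, pass through untouched.
    if not getattr(dm, "image", None) or not classifier.is_explicit(dm.image):
        return

    dm_api.delete_message(dm.id)   # remove the image from the recipient's inbox
    strikes[dm.sender_id] += 1     # record the offense against the sender

    if strikes[dm.sender_id] >= STRIKE_LIMIT:
        # Repeat offenders are blocked automatically.
        dm_api.block(dm.sender_id)
    else:
        # A neutral, non-scolding notice that also reports the strike count.
        dm_api.send_reply(
            dm.sender_id,
            "That was inappropriate content. "
            f"Strike {strikes[dm.sender_id]} of {STRIKE_LIMIT}.",
        )
```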

Some potential users have stated that they would like to manually block offenders themselves rather than have SafeDM automatically do it for them. Bressler and her team are listening to the feedback from those interested in the filter.

“It’s what the users want. We are trying to make things as flexible as possible so people can change these settings,” Bressler says. When released, the filter and its settings will be fully functional, but the team is willing to add user-requested options after launch while keeping the filter’s core functionality intact.

Thanks to the amount of data provided by eager folks, there’s been plenty to test to help the content filter determine what is a penis and what, well, is just a phallic-looking piece of produce. The data has also helped the team develop the filter quickly, including working out whether false positives, images that look like penises but aren’t, should be caught by the filter anyway.

“We are over 99 percent accurate, although we do have false positives,” says Bressler. “We have to come to a point where we have to decide: if people want to see penis-like objects, is that harassment also? Even if [the image] is not a penis, should the filter be filtering it out anyway? It really goes back to what the user wants.”
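One common way to leave that decision to the user is to treat borderline detections as a setting. The sketch below assumes the classifier returns a confidence score; the threshold values and the “filter look-alikes” option are illustrative guesses, not SafeDM’s real parameters.

```python
def should_delete(confidence: float, filter_lookalikes: bool) -> bool:
    """Decide whether to remove an image given the classifier's confidence."""
    if confidence >= 0.99:          # near-certain penis: always filtered
        return True
    if confidence >= 0.60:          # borderline, phallic-looking objects
        return filter_lookalikes    # left to the user's own preference
    return False                    # clearly benign images pass through
```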

SafeDM’s Twitter testing has shown tremendous gains in accuracy since the project was announced in September, with less and less getting through its content filter AI. And folks are naturally hyped about the upcoming release. In a surprising twist, Bressler mentioned that some of her favorite messages about the filter come from dads, particularly dads with daughters.

“They’re happy I’m doing it but sad that it’s necessary to begin with. [I’ve had a couple of] dads [who] are glad that I’m working on it because they have daughters and they don’t want them to have to deal with this.”

“You don’t need a specific phone model,” Bressler explains. “Any Twitter account can use this. It works the same on the app and the web.”

The ultimate goal for the project is to give control back to the community so that people don’t have to live in fear of what resides in their Twitter inboxes.

“It’s funny how you can take something out of a sucky situation and get a little control over the situation and make the world a better place.”

Jennifer Stavros is a freelance writer splitting her time across Los Angeles, San Francisco, and London. Like her life, her work blurs the lines across a variety of topics, from the quirky to the corporate. Follow her on Twitter, where she promises to mostly behave… ish.