
Turns out Jason Miller’s GETTR channel was flooded with porn and spam


Child porn on Trump flack Jason Miller’s vaunted anti-censorship social media site? IMPOSSIBLE! By which we mean, it’s easily found using the most rudimentary detection software.

The Stanford Internet Observatory’s Cyber Policy Center is out with a new report on former Trumplander Jason Miller’s GETTR, the low-rent Twitter knockoff, and it’s not good. Aside from being comically careless with its user data, the site appears to be massively inflating its reach, both by counting imported posts from @Jack’s evil corporate platform and by dint of plain old low-tech lying.

Miller claims Gettr attracted 1 million users in the first 3 days after launch, with 1.4 million in the first week. Other stories promoted claims that Gettr surpassed 1.5 million users after 11 days; however, based on our analysis, it did not appear to reach this number until the first week of August. The @support account, which all users automatically follow upon account creation, shows 1.54 million followers as of August 9.

Don’t you hate it when that auto-follow feature you built in turns out to be a perfectly functional metric of how badly you’re padding your own stats?


And speaking of stats, most of those accounts appear to be bots and lurkers who signed up just to gawk at Jason Miller’s weeping goiter of a website: only 372,000 of them have ever posted anything.

But wait, it gets worse! In keeping with its declared “anti-censorship” stance, whatever content moderation practices the site employs (it seems to rely on users to flag most things) are letting a lot of adult content through.

Social media services generally use machine learning models to analyze the content of uploaded images and videos and determine how to act on them; uploads can be rejected entirely, placed behind a sensitive-content filter or click-through, or, in severe cases, reported to law enforcement. As mentioned above, Gettr does not appear to implement any kind of sensitive-content detection: an image survey using Google’s SafeSearch API indicates that 0.9% of posts with media and 1.8% of comments with media were classified as likely to contain violent or adult content, and as noted elsewhere, violent terrorist content has also appeared on Gettr.
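For the nerds: the decision step the researchers describe (reject, filter, or allow an upload based on classifier output) can be sketched in a few lines. The likelihood scale below mirrors Google SafeSearch’s rating levels, but the thresholds and the function itself are our own illustrative assumptions, not Gettr’s (evidently nonexistent) actual policy.

```python
# Sketch of a moderation decision step using SafeSearch-style
# likelihood ratings. Thresholds here are illustrative assumptions.
LIKELIHOODS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def moderation_action(ratings: dict) -> str:
    """Map per-category ratings to one of: 'allow', 'filter', 'reject'."""
    def level(category: str) -> int:
        # Missing categories default to the lowest likelihood.
        return LIKELIHOODS.index(ratings.get(category, "VERY_UNLIKELY"))

    worst = max(level(c) for c in ("adult", "violence"))
    if worst >= LIKELIHOODS.index("VERY_LIKELY"):
        return "reject"   # block the upload outright
    if worst >= LIKELIHOODS.index("LIKELY"):
        return "filter"   # hide behind a sensitive-content click-through
    return "allow"
```

Even this toy version is more screening than the report found on Gettr.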

You mean when you roll out the welcome mat for people who got kicked off other platforms for wildly antisocial behavior, they come in and post a video of an ISIS beheading before shitting on your rug? Who could have predicted it?

You know where this is going, right? Of course you do. Because if you can’t stop your site from being flooded with racist slurs, Nazi profiles, and poop pictures, you know full well you don’t have the tools to keep child pornography off the shelves.

Using PhotoDNA, a widely deployed tool used by responsible internet platforms to flag known images of child exploitation, the Cyber Policy Center detected 16 known exploitative images in a sample GETTR data set. (The images were reported to law enforcement.) The researchers were also able to successfully upload PhotoDNA test images, harmless images included in the database precisely so that one can check whether a site is screening uploads with PhotoDNA at all.
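The core idea of that kind of screening is simple: hash each upload and check it against a list of hashes of known-bad images. Here’s a minimal sketch; note that real PhotoDNA computes a robust perceptual hash that survives resizing and re-encoding, while the SHA-256 stand-in below only catches byte-identical files, and the hash list and function names are our own inventions for illustration.

```python
import hashlib

# Illustrative stand-in for a known-image hash list (e.g., as supplied
# to platforms by NCMEC). Real systems use perceptual hashes, not SHA-256.
KNOWN_BAD_HASHES: set[str] = set()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known-bad hash and must be blocked."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

The point of PhotoDNA’s harmless test images is exactly this check: if they upload without being blocked, the site isn’t running the filter.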

The Cyber Policy Center notes that relying on “community reporting mechanisms for finding sensitive content and illegal child-related imagery” will never be effective because “such posts and comments may not be seen by users inclined to report them.” Or to put it plainly, the guys in kink chat rooms looking to trade illicit images are the last people on earth who will call the cops when they come across illicit images. And, as the researchers hint, rather more politely than Wonkette would put it, half of these people are so deep in QAnon conspiracies that they might view child pornography, or even trade it themselves, while laboring under the delusion that they are gathering evidence against an evil pedophile cabal: “Users may also be unaware of the reporting mechanisms themselves, or even what content qualifies as ‘child-related crimes,’ particularly given the fabricated child-related crime conspiracies flourishing on Gettr and similar platforms.”

Asked by VICE about the report, Miller said the findings were “completely wrong” and insisted that his site has “a strong and proactive dual-layer moderation policy that uses artificial intelligence and human review, ensuring our platform remains safe for all users.”

Which is great and all, but it doesn’t address the fact that there were photos of exploited children on his site, and the Stanford researchers didn’t have to look very hard to find them.

Moderating a “normal” social networking site is hard enough for Facebook and Twitter, which at least feel the need to pretend to be responsible adults. But these Trumpland goobers announced that their business model was to open a bar with no bouncer and invite in every drunk who’s been rolling down the sidewalk in a puddle of his own vomit. Naturally, the place wound up looking like the Tatooine cantina.

In short, the dumpster fire continues to burn.

OPEN THREAD.

[Stanford Cyber Policy Center report / VICE]

Follow Liz Dye on Twitter!

Smash that donation button to keep your Wonkette ad-free and feisty. And if you’re ordering from Amazon, use this link, because reasons.

