***Official Political Discussion Thread***




On Thursday, lawmakers and the commander of U.S. Space Command expressed concern about SpaceX’s recent announcement that it would start limiting Ukraine’s use of the company’s Starlink constellation.

“I was personally disappointed to see discontinuation of full services at such a critical time for Ukraine self-defense,” Sen. Mark Kelly, D-Ariz., told Space Command’s Gen. James Dickinson. “Do you feel there's a connection between the availability of this capability to our partners in Ukraine in this conflict, and relationships we have with companies like SpaceX?”

Dickinson answered matter-of-factly: “Yes.”

The Pentagon’s relationship with SpaceX has only grown since DARPA’s 2005 Falcon program helped the company get off the ground. The military, already a large customer of Starlink services, is planning a vast expansion of battlefield communications that depends on cheap commercial satellite communications.

WCGW when you privatize everything?
 

Some observers, like LeCun, say that these isolated examples are neither surprising nor concerning: Give a machine bad input and you will receive bad output. But the Elon Musk car crash example makes clear these systems can create hallucinations that appear nowhere in the training data. Moreover, the potential scale of this problem is cause for worry. We can only begin to imagine what state-sponsored troll farms with large budgets and customized large language models of their own might accomplish. Bad actors could easily use these tools, or tools like them, to generate harmful misinformation, at unprecedented and enormous scale. In 2020, Renée DiResta, the research manager of the Stanford Internet Observatory, warned that the “supply of misinformation will soon be infinite.” That moment has arrived.

Each day is bringing us a little bit closer to a kind of information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots. GPT-3 produces more plausible outputs than GPT-2, and GPT-4 will be more powerful than GPT-3. And none of the automated systems designed to discriminate human-generated text from machine-generated text has proved particularly effective.

We already face a problem with echo chambers that polarize our minds. The mass-scale automated production of misinformation will assist in the weaponization of those echo chambers and likely drive us even further into extremes. The goal of the Russian “Firehose of Falsehood” model is to create an atmosphere of mistrust, allowing authoritarians to step in; it is along these lines that the political strategist Steve Bannon aimed, during the Trump administration, to “flood the zone with ****.” It’s urgent that we figure out how democracy can be preserved in a world in which misinformation can be created so rapidly, and at such scale.
 

Even if you're vehemently against children receiving any sort of gender-affirming care, there's a standard in here that is truly just baffling even for the Florida nutjobs.
With a standard as low as "believing they're at risk," this bill seems to immunize just straight-up kidnapping in Florida.

Personally, I tend to agree that underage hormone treatment is a bad idea. I'm not transgender, or a doctor, but I do have a hormone condition, and it's pretty wild what even relatively minor hormone abnormalities can do to a person. I'm damn near crippled if I don't take my testosterone injections.
I do volunteer work in higher-education student mental health, where I've met quite a few transgender students, and learned that even among them it's a somewhat divisive topic. Small sample size, of course, but I'd say it was about 70% in favor. At the end of the day, though, I don't think there's any real debate that it does a lot more good than harm.
 

I wonder what the racial diversity of the 38% is
ee8021b291456ad235e73664927a6c15.png


Turns out the biblethumping white Jesus cultists in this poll are the most racist, who could've seen that one coming
0c1c5012faec1ea0e8d9816c18fe7019.png


Remember, this is "the ONE issue [that] will be the most important in deciding which candidate you will support"
9e14b57aac6f952e53654b3fcd5cad41.png



bc7ad0f1cb2cb5f4bbc7bfc3a032e598.png
 
Max Blumenthal might be just as bad as his father


That's exactly what that Atlantic article was talking about.

Also:

Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned.

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

If anyone thought 2016 was bad for misinformation, they're not ready for what's coming, especially when the regulations are expected to come from legislators who can't explain how the internet works.
 