On Thursday, lawmakers and the commander of U.S. Space Command expressed concern about SpaceX’s recent announcement that it would start limiting Ukraine’s use of the company’s Starlink constellation.
“I was personally disappointed to see discontinuation of full services at such a critical time for Ukraine’s self-defense,” Sen. Mark Kelly, D-Ariz., told Space Command’s Gen. James Dickinson. “Do you feel there's a connection between the availability of this capability to our partners in Ukraine in this conflict, and relationships we have with companies like SpaceX?”
Dickinson answered matter-of-factly: “Yes.”
The Pentagon’s relationship with SpaceX has only grown since DARPA’s 2005 Falcon program helped the company get off the ground. The military, already a large customer of Starlink services, is planning a vast expansion of battlefield communications that depends on cheap commercial satellite communications.
Some observers, like LeCun, say that these isolated examples are neither surprising nor concerning: Give a machine bad input and you will receive bad output. But the Elon Musk car crash example makes clear these systems can create hallucinations that appear nowhere in the training data. Moreover, the potential scale of this problem is cause for worry. We can only begin to imagine what state-sponsored troll farms with large budgets and customized large language models of their own might accomplish. Bad actors could easily use these tools, or tools like them, to generate harmful misinformation, at unprecedented and enormous scale. In 2020, Renée DiResta, the research manager of the Stanford Internet Observatory, warned that the “supply of misinformation will soon be infinite.” That moment has arrived.
Each day is bringing us a little bit closer to a kind of information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots. GPT-3 produces more plausible outputs than GPT-2, and GPT-4 will be more powerful than GPT-3. And none of the automated systems designed to discriminate human-generated text from machine-generated text has proved particularly effective.
We already face a problem with echo chambers that polarize our minds. The mass-scale automated production of misinformation will assist in the weaponization of those echo chambers and likely drive us even further into extremes. The goal of the Russian “Firehose of Falsehood” model is to create an atmosphere of mistrust, allowing authoritarians to step in; it is along these lines that the political strategist Steve Bannon aimed, during the Trump administration, to “flood the zone with ****.” It’s urgent that we figure out how democracy can be preserved in a world in which misinformation can be created so rapidly, and at such scale.
Translation: 'Blame the black and LGBT people.'
Hm yes, very diverse. Probably "1 black" too many if you ask me. The veterans are ok I guess, as long as they're god-loving biblethumping cousin-****ing white Americans.
Max Blumenthal might be just as bad as his father
Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned.
The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.