Is Social Media Destroying Society? Former Facebook Exec Says 'Yes'

So basically, the last two videos I posted: the first is about a pedo ring on YouTube tagging videos at points where children are visible in various ways, which apparently leads to a whole other section of YouTube that they are apparently aware of and have separate servers for, to facilitate these kinds of videos.

The second is a meme or challenge, #momochallenge, apparently daring kids to commit suicide. They say it's been out over a year as a creepypasta, but no one has seen the video, and celebs like Kim K are tweeting about it.
 
I doubt it. Too many lonely, attention-seeking people out here.

The number of selfies people take says otherwise
Yeah, no way will it ever stop. That's like saying porn will stop.
With high-tech phones releasing with six cameras, narcissism is just gonna be a growing monster. That beach vid a page or so back is both funny and sad.
 
If anything they'll find some technology to make it even more invasive than it already is.
 
Wonder what the world would be like if suddenly Facebook, IG, Snapchat, Twitter and all the other similar social media platforms just ceased to exist.
 
What I see happening more often is tech being created based on 'can we do it' instead of 'should we do it'. Most of the time this notion is led by money instead of morals... God bless America.
 
YouTube Bans Comments On All Videos of Children
[Image copyright: Getty Images]

https://www.bbc.com/news/technology-47408969

YouTube says it will switch off comments on almost all videos featuring under-18s, in an attempt to "better protect children and families".

Several brands stopped advertising on YouTube after discovering that paedophiles were leaving predatory comments on videos of children.

YouTube had originally disabled comments on videos that were attracting predatory and obscene comments.

But it will now disable comments on almost all videos of minors by default.

It said the change would take effect over several months.

What was happening?

The BBC first reported that paedophiles were leaving explicit comments on YouTube videos back in 2017.

As well as leaving obscene or sexual comments, they were also using the comments section to signal content of interest to other paedophiles.

At the time, YouTube said it was "working urgently" to clean up the site.

However, in February this year advertisers including AT&T, Nestle and Hasbro suspended their ads after more predatory activity was found.

What has YouTube announced?

In a blog post, YouTube said its new policy meant videos of very young children would automatically have the comments section disabled.

The move is likely to include videos of toddlers uploaded by parents, as well as short films featuring children by established YouTube stars.

Videos of older children and teenagers will typically not have the comments disabled, unless a specific video is likely to attract predatory attention. That could include, for example, a video of a teenager doing gymnastics.

YouTube told the BBC it would use algorithms to detect which videos contained children.

Millions of hours of footage are uploaded to YouTube every day.

When will comments be permitted?

A small number of YouTube content creators will be allowed to enable comments on videos featuring children.

These channels will be trusted partners such as family video-bloggers or known YouTube stars.

However, they will be required to actively moderate their comments and demonstrate that their videos carry a low risk of attracting predatory comments.

YouTube said it had developed a system that was better at detecting predatory comments and removing them.

Previously, it had said it would stop video-makers earning ad revenue if paedophiles left explicit comments on their videos, but this will no longer be necessary.

What further action is being taken?

In addition to updating its comments policy, YouTube said it had terminated several channels that were "endangering" children.

The ban included several channels that were adding shocking content in the middle of children's cartoons.

It named FilthyFrankClips as one of the banned channels. The channel had released a video instructing children how to cut themselves.

"Nothing is more important to us than ensuring the safety of young people on the platform," said YouTube chief executive Susan Wojcicki on Twitter.

YouTube's app for children - YouTube Kids - has been criticised for using algorithms to curate content. Inappropriate videos have repeatedly been discovered on the service.

How have creators responded?

The comments left by fans on YouTube videos help the platform's algorithms decide which videos to serve up and recommend to viewers.

Creators have expressed concern that being forced to disable comments on their videos will affect the growth of their channels.

Despite the wide-ranging new policy, comments will remain part of the recommendation algorithm.

"We understand that comments are an important way creators build and connect with their audiences," YouTube said in a statement. "We also know that this is the right thing to do to protect the YouTube community."

Andy Burrows from the child protection charity NSPCC said the announcement was an "important step".

"We know that offenders are twisting YouTube videos for their own sexual gratification, using them to contact other predators and using the comments section as shop window to child abuse image sites," he said.

However, he called for an "independent statutory regulator" that could "force social networks to follow the rules or face tough consequences".
 
Gender-Specific Behaviors on Social Media and What They Mean for Online Communications



https://www.socialmediatoday.com/so...edia-and-what-they-mean-online-communications


Have you ever wondered why there are more women than men on Pinterest? Or why trolls are more commonly male?

In this post, we'll look at some of the more gender-specific behaviors on social media, the motivations behind such actions and what it means in our wider understanding of social behaviors.

News vs Friendships

Research shows that men are more likely to use social media to seek information, while women use social platforms to connect with people. Studies also show that when men do open social media accounts to network, they're more often looking to form new relationships, while women are more focused on sustaining existing ones.

An investigation conducted by Facebook found that female users of its platform tend to share more personal issues (e.g., family matters, relationships), whereas men discuss more abstract topics (e.g., politics). Facebook's research team analyzed 1.5m status updates published on the platform, categorizing them into topics. Each topic was then evaluated on the basis of both gender preferences and audience reactions. The results showed that men and women not only prefer certain topics, but that distinctly 'female' topics (e.g. birthdays, family fun) tend to receive more likes from other users, while clearly 'male' topics (e.g. sports, deep thoughts) elicit more comments.

We can't infer, however, that women simply aren't interested enough in abstract topics to share them. One of the reasons why female users may be more reticent online is negative feedback. Indeed, women receive more abusive comments when expressing their opinions. A telling example is a Twitter experiment conducted by British journalist Martin Belam - Belam created a spoof account in which he pretended to guest-tweet as different male and female celebrities. When he presented himself as a woman, the account received significantly more offensive comments, and even blatantly misogynist ones.

Research conducted by The Guardian found similar patterns - an analysis of 70 million readers' comments on its website showed that 8 of the 10 most abused journalists were women.

Totally HerSelfie

What men and women like to talk about on social media also shapes their platform of choice. Female users generally prefer visual platforms, while men gravitate to more text-oriented mediums. Indeed, Pinterest, Facebook and Instagram have a larger female user base, while online discussion forums such as Reddit or Digg count more male users.

So why are women more drawn to producing and sharing visual content? Tallinn University sociologist Katrin Tiidenberg believes the answer may lie in the traditional female role in the family - in all societies, mothers have historically been responsible for taking family photos. In this sense, Instagram is a modern continuation of a female practice that began with the popularization of photography.

Maybe this can also help explain why women post more selfies than men: the Selfiexploratory project, for example, analyzed 3,800 Instagram selfies from five cities across the world and found that the number of female selfies was significantly higher in every city. A recent study from Ohio State University even suggests that men who take a lot of selfies tend to have narcissistic or psychopathic personalities.

But it's not just a knack for photography that makes girls strike a pose.

Trimmed Up For Some Likes

All content we post - photos especially - is motivated by a desire to make a good impression on others.

Women and men, however, differ in their self-presentation on social media. For example, women post more portrait photos with direct eye contact, while men prefer full-body shots that include other people. Male users are also more likely to post outdoor photographs, which present them in a more adventurous light.

These differences are even more pronounced among younger users - several studies have shown that teenagers often use gender stereotypes to build their social media personas. For instance, teenage girls are more likely to post overtly seductive photos of themselves, while boys are more inclined to share pictures related to risky behaviors, alcohol or sex. Girls also tend to share more 'cute' pictures (think of those puppies).

A Northwestern University study also found that male users are generally more self-promotional on social media and are more likely to show their creative work, like writings, music or videos, online. Almost two-thirds of men reported posting their work online compared to only half of women.

She said: "OMG!!", he said: "Yeah"
Social media data also shows that men and women communicate very differently on social platforms.

Men are more likely to use authoritative language and more formal speech than women. Males respond more negatively in interactions, as well, whereas women tend to use 'warmer' and more positive words.

Women also use words more emotionally. A recent study examined 15.4 million status updates made by 68,000 Facebook users and found that words describing positive emotions (e.g., "excited", "happy", "love"), social relationships (e.g., "friends", "family") and intensive adverbs (e.g., "sooo", "sooooo", "ridiculously") were predominantly used by women. By comparison, male topics were fact-oriented and included words related to politics (e.g., "government", "tax") and sports and competition (e.g., "football", "season", "win", "battle").

It's even possible to identify the gender of social media users solely from their writing style. Academics from Johns Hopkins University analyzed the language of Twitter users and found that women use more emoticons and put increased emphasis on punctuation, including ellipses, repeated exclamations (!!!) and puzzled punctuation (?!). The expressions "OMG" and "lol" are also predominantly used by females, while the affirmation "yeah" is more strongly associated with men.

Congruent with this are the findings of a content analysis of 14,000 Twitter users. Researchers identified the 10,000 most-used lexical items (both individual words and word-like items such as emoticons and punctuation) and discovered that female authors write with more personal pronouns (e.g., "you", "me"), use non-standard spellings (e.g., "Nooo waaay") and more hesitant words ("hmm", "umm"). Offensive and taboo words, on the other hand, were strongly associated with male users.
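
As a rough illustration of how such lexical cues could be turned into a classifier, here is a toy Python sketch that scores a post against small marker lists drawn from the findings above. The lists and the counting rule are illustrative stand-ins only - the actual studies used a 10,000-item lexicon and trained statistical models, not a handful of hand-picked words:

```python
# Toy author-gender guesser based on the lexical markers reported above.
# Substring counting is crude (e.g. "lol" would match inside "lollipop");
# a real model would tokenize properly and learn weights from labeled data.

FEMALE_MARKERS = ["omg", "lol", "sooo", "!!!", "?!", "hmm", "umm"]
MALE_MARKERS = ["yeah", "government", "tax", "football", "win", "battle"]

def guess_gender(post: str) -> str:
    """Return a crude guess based on which marker list appears more often."""
    text = post.lower()
    female_score = sum(text.count(w) for w in FEMALE_MARKERS)
    male_score = sum(text.count(w) for w in MALE_MARKERS)
    if female_score == male_score:
        return "unknown"
    return "female" if female_score > male_score else "male"

print(guess_gender("OMG!!! sooo excited, lol"))                # female
print(guess_gender("Yeah, big win for the team this season"))  # male
```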

Men are also more likely to engage in trolling, or aggressive language, online. Psychology professor Mark Griffiths says that the prevalence of male trolling may be related to the fact that men use the Internet as a way to vent their aggression - something that, unlike women, they're not able to do in face-to-face communication.

Interestingly, male language also appears to be more possessive - another research team found that male Facebook users include the possessive pronoun 'my' when mentioning their 'wife' or 'girlfriend' more often than female users do when talking about their husbands or boyfriends.

To sum up
Men and women communicate differently in real life, and this is naturally reflected in how they use social media. They post about different things, prefer certain platforms and even use language differently. Some findings might appear obvious, others are unexpected: what strikes you as most intriguing?

 
People who have their lives negatively changed via social media are the same people that would have had their lives destroyed by some other means. Kinda cray to think about it, but social media may be the lesser of two evils.
 
What I see happening more often is tech being created based on 'can we do it' instead of 'should we do it'. Most of the time this notion is led by money instead of morals... God bless America.
Billions are spent every year to figure out how to get us to click on ad links. They're not giving that up. Semi-related: gas-powered cars could have ended 20 years ago... the money didn't make that happen.
 
Does this apply to Ball Is Life or other videos of minors' athletics?
Thought this only applied to kids' videos, but apparently if any offensive comments appear on any of a channel's vids, they'll shut off the comments entirely instead of just removing the offending comments. Which is silly - it takes away the creator's connection to their audience.
 
Facebook Bans White Nationalism and White Separatism



https://motherboard.vice.com/en_us/...XWIoJfRNEp18rkSAk6KyVmMHyS6hJEvKmOXy0D5YFafdA

In a major policy shift for the world’s biggest social media network, Facebook banned white nationalism and white separatism on its platform Tuesday. Facebook will also begin directing users who try to post content associated with those ideologies to a nonprofit that helps people leave hate groups, Motherboard has learned.

The new policy, which will be officially implemented next week, highlights the malleable nature of Facebook’s policies, which govern the speech of more than 2 billion users worldwide. And Facebook still has to effectively enforce the policies if it is really going to diminish hate speech on its platform.

Last year, a Motherboard investigation found that, though Facebook banned “white supremacy” on its platform, it explicitly allowed “white nationalism” and “white separatism.” After backlash from civil rights groups and historians who say there is no difference between the ideologies, Facebook has decided to ban all three, two members of Facebook’s content policy team said.

“We’ve had conversations with more than 20 members of civil society, academics, in some cases these were civil rights organizations, experts in race relations from around the world,” Brian Fishman, policy director of counterterrorism at Facebook, told us in a phone call. “We decided that the overlap between white nationalism, [white] separatism, and white supremacy is so extensive we really can’t make a meaningful distinction between them. And that’s because the language and the rhetoric that is used and the ideology that it represents overlaps to a degree that it is not a meaningful distinction.”

Specifically, Facebook will now ban content that includes explicit praise, support, or representation of white nationalism or separatism. Phrases such as “I am a proud white nationalist” and “Immigration is tearing this country apart; white separatism is the only answer” will now be banned, according to the company. Implicit and coded white nationalism and white separatism will not be banned immediately, in part because the company said it’s harder to detect and remove.

The decision was formally made at Facebook’s Content Standards Forum on Tuesday, a meeting that includes representatives from a range of different Facebook departments in which content moderation policies are discussed and ultimately adopted. Fishman told Motherboard that Facebook COO Sheryl Sandberg was involved in the formulation of the new policy, though roughly three dozen Facebook employees worked on it.

Fishman said that users who search for or try to post white nationalism, white separatism, or white supremacist content will begin getting a popup that will redirect to the website for Life After Hate, a nonprofit founded by ex-white supremacists that is dedicated to getting people to leave hate groups.

“If people are exploring this movement, we want to connect them with folks that will be able to provide support offline,” Fishman said. “This is the kind of work that we think is part of a comprehensive program to take this sort of movement on.”

Behind the scenes, Facebook will continue using some of the same tactics it uses to surface and remove content associated with ISIS, Al Qaeda, and other terrorist groups to remove white nationalist, separatist, and supremacist content. This includes content matching, which algorithmically detects and deletes images that have been previously identified to contain hate material, and will include machine learning and artificial intelligence, Fishman said, though he didn’t elaborate on how those techniques would work.
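
To make "content matching" concrete, here is a minimal Python sketch of the general idea: reduce each upload to a fingerprint and check it against fingerprints of previously identified material. It uses an exact cryptographic hash for simplicity; real systems rely on perceptual hashes (e.g. Microsoft's PhotoDNA or Facebook's open-sourced PDQ) that survive resizing and re-encoding, and nothing here reflects Facebook's actual implementation:

```python
import hashlib

# Digests of images previously identified as hate material, as supplied by
# human review (empty here; a real system would load millions of entries).
KNOWN_BAD_HASHES = set()

def fingerprint(image_bytes):
    """Reduce an upload to a fixed-length digest for fast set lookup."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_bad_image(image_bytes):
    """Add a reviewer-confirmed image to the blocklist."""
    KNOWN_BAD_HASHES.add(fingerprint(image_bytes))

def should_remove(image_bytes):
    """True if this exact file matches previously identified material."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES
```

Note that an exact hash only catches byte-identical re-uploads, which is exactly why production systems use perceptual hashing and, as the article says, supplement matching with machine learning.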

The new policy is a significant change from the company’s old policies on white separatism and white nationalism. In internal moderation training documents obtained and published by Motherboard last year, Facebook argued that white nationalism “doesn’t seem to be always associated with racism (at least not explicitly).”

That article elicited widespread criticism from civil rights, Black history, and extremism experts, who stressed that “white nationalism” and “white separatism” are often simply fronts for white supremacy.

“I do think it’s a step forward, and a direct result of pressure being placed on it [Facebook],” Rashad Robinson, president of campaign group Color Of Change, told Motherboard in a phone call.


Signs at Facebook's headquarters in Menlo Park, California. Image: Jason Koebler

Experts say that white nationalism and white separatism movements are different from other separatist movements such as the Basque separatist movement in France and Spain and Black separatist movements worldwide because of the long history of white supremacism that has been used to subjugate and dehumanize people of color in the United States and around the world.

“Anyone who distinguishes white nationalists from white supremacists does not have any understanding about the history of white supremacism and white nationalism, which is historically intertwined,” Ibram X. Kendi, who won a National Book Award in 2016 for Stamped from the Beginning: The Definitive History of Racist Ideas in America, told Motherboard last year.

Heidi Beirich, head of the Southern Poverty Law Center’s (SPLC) Intelligence Project, told Motherboard last year that “white nationalism is something that people like David Duke [former leader of the Ku Klux Klan] and others came up with to sound less bad.”

While there is unanimous agreement among civil rights experts Motherboard spoke to that white nationalism and separatism are indistinguishable from white supremacy, the decision is likely to be politically controversial both in the United States, where the right has accused Facebook of having an anti-conservative bias, and worldwide, especially in countries where openly white nationalist politicians have found large followings. Facebook said that not all of the groups it spoke to believed it should change its policy.

"We saw that was becoming more of a thing, where they would try to normalize what they were doing by saying ‘I’m not racist, I’m a nationalist’, and try to make that distinction"

“When you have a broad range of people you engage with, you’re going to get a range of ideas and beliefs,” Ulrick Casseus, a subject matter expert on hate groups on Facebook’s policy team, told us. “There were a few people who [...] did not agree that white nationalism and white separatism were inherently hateful.”

But Facebook said that the overwhelming majority of experts it spoke to believed that white nationalism and white separatism are tied closely to organized hate, and that all experts it spoke to believe that white nationalism expressed online has led to real-world harm. After speaking to these experts, Facebook decided that white nationalism and white separatism are “inherently hateful.”

“We saw that was becoming more of a thing, where they would try to normalize what they were doing by saying ‘I’m not racist, I’m a nationalist’, and try to make that distinction. They even go so far as to say ‘I’m not a white supremacist, I’m a white nationalist’. Time and time again they would say that but they would also have hateful speech and hateful behaviors tied to that,” Casseus said. “They’re trying to normalize it and based upon what we’ve seen and who we’ve talked to, we determined that this is hateful, and it’s tied to organized hate.”

The change comes less than two years after Facebook internally clarified its policies on white supremacy after the Charlottesville protests of August 2017, in which a white supremacist killed counter-protester Heather Heyer. That included drawing the distinction between supremacy and nationalism that extremist experts saw as problematic.

Facebook quietly made other tweaks internally around this time. One source with direct knowledge of Facebook’s deliberations said that following Motherboard’s reporting, Facebook changed its internal documents to say that racial supremacy isn’t allowed in general. Motherboard granted the source anonymity to speak candidly about internal Facebook discussions.

“Everything was rephrased so instead of saying white nationalism is allowed while white supremacy isn’t, it now says racial supremacy isn’t allowed,” the source said last year. At the time, white nationalism and Black nationalism did not violate Facebook’s policies, the source added. A Facebook spokesperson confirmed that it did make that change last year.

The new policy will not ban implicit white nationalism and white separatism, which Casseus said is difficult to detect and enforce. It also doesn’t change the company’s existing policies on separatist and nationalist movements more generally; content relating to Black separatist movements and the Basque separatist movement, for example, will still be allowed.

A social media policy is only as good as its implementation and enforcement. A recent report from the NGO Counter Extremism Project found that Facebook did not remove pages belonging to known neo-Nazi groups after this month's terrorist attacks in Christchurch, New Zealand. Facebook wants to be sure that enforcement of its policies is consistent around the world and from moderator to moderator, which is one of the reasons why its policy doesn't ban implicit or coded expressions of white nationalism or white separatism.

David Brody, an attorney with the Lawyers' Committee for Civil Rights Under Law, which lobbied Facebook over the policy change, told Motherboard in a phone call that "if there is a certain type of problematic content that really is not amenable to enforcement at scale, they would prefer to write their policies in a way where they can pretend it doesn't exist."

Keegan Hankes, a research analyst for the SPLC's Intelligence Project, added: "One thing that continually surprises me about Facebook is this unwillingness to recognize that even if content is not explicitly racist and violent outright, it [needs] to think about how their audience is receiving that message."

"It’s definitely a positive change, but you have to look at it in context, which is that this is something they should have been doing from the get-go"

Facebook banning white nationalism and separatism has been a long time coming, and the experts Motherboard spoke to believe that Facebook was too slow to move. Motherboard first published documents showing Facebook's problematic distinction between supremacy and nationalism in May last year; the Lawyers' Committee wrote a critical letter to Facebook in September. Between then and now, Facebook's old policy remained in place.

“It’s definitely a positive change, but you have to look at it in context, which is that this is something they should have been doing from the get-go,” Brody told Motherboard. “How much credit do you get for doing the thing you were supposed to do in the first place?”

“It’s ridiculous," Hankes added. "The fact that it’s taken this long after Charlottesville, for instance, and then this latest tragedy to come to the position that, of course, white nationalism, white separatism are euphemisms for white supremacy.” He said that multiple groups have been lobbying Facebook around this issue, and have been frustrated with the slow response.

“The only time you seem to be able to get a serious response out of these people is when there’s a tragedy,” he added.

Motherboard raised this criticism to Fishman: If Facebook now realizes that the common academic view is that there is no meaningful distinction between white supremacy and white nationalism, why wasn’t that its view all along?

“I would say that we think we’ve got it right now,” he said.
 
So they gonna add weight verification too? Because a lot of these chicks be frauding and way past their prime.
 