I’ve just been shown a video on YouTube shorts that looks like someone being killed by a propellor that has come loose.
I’m aware that it may be fake, but it didn’t look like it to me, and anyway that’s not particularly the point.
Unsurprisingly it’s not something that I’ve shown an interest in on YouTube previously. Most of my searches are around mountain biking and bikepacking videos.
Frankly, I’ve seen enough people dying in real life when I was a junior doctor on the trauma team in a large city hospital.
I’m just staggered that it can serve up something so graphic (and potentially traumatising) without any warning whatsoever.
So much for “don’t be evil”.
That’s pretty rough. Did you manage to flag / report the video?
I can’t fathom folk who post / want to consume that sort of content at all.
Bloody hell, that's a bit nightmarish. Might be worth flagging if you think it's real.
Russian driving vids excepted
I’ve reported it.
The only thing I can think of was that yesterday I watched a video of chimpanzees hunting baboons that was pretty gruesome in an interesting way. Not something that I would want a child or someone of a nervous disposition to see, but comes under “nature, red in tooth and claw” sort of category.
It’s just so irresponsible of the platforms not to do more to stop this.
It’s just so irresponsible of the platforms not to do more to stop this.
You’re not wrong, but [i]how[/i] is the problem. There is something like 500 hours of content uploaded to YouTube [b]every minute[/b]. It’s certainly not possible to watch it all. Machine learning will get to a point where it can cover most things, but after that they can only rely on people reporting material. And even then, the volumes reported are nigh-on impossible to process manually.
Multiply that by most of Gen Z and you can start to see where we are getting contributory factors to the current youth mental health crisis.
The big platforms should be banned. End of.
The big platforms should be banned. End of.
Because history has proven time and again just how effective prohibition is.
YouTube may not be ideally regulated, but regulated it is. It is by any measure 'safer' than anything you might trip over on the dark web.
The big platforms should be banned. End of.
Because multiple small platforms would be better? I think then you drive niche weird crap into apps ordinary people have never heard of, so they never see the possible content etc.
Showing the graphic death of someone, even in an accident or a fake video, will be against YouTube rules and typically see a video pulled, potentially the account frozen, maybe even the user banned.
Managing this is a problem governments seem to struggle with, and knee-jerk reactions like banning it are the simplistic view of politicians who don’t understand technology and feel the need to appeal to the public with “solutions”. In reality, if the person who posted it is in the U.K. they will likely have committed an offence by posting a grossly offensive message via a communications network.
It's not that they "choose not to", it's that it's simply not possible! See @bensales' staggering statistic above:
"500 hours of content uploaded to YouTube every minute".
I CBA with the maths but how many tens of thousands of employees would you need to pay to review all that?
I suppose the thing to remember is - YouTube doesn't create any of the content - someone decided to upload that content, it wasn't YouTube's idea. It's a platform for other people's content. So someone created and posted that video and there's no process by which YouTube scrutinises that content before it's uploaded or before anyone views it. As with all social media, that process of scrutiny is basically outsourced to the rest of us and only starts once the content is already public - there's no mechanism for the platform to act until the content has been seen by the public and reported back to them. So their content moderation is basically one of shutting the gate after the horse has bolted.
There seemingly has been a shift in YouTube's recommendation system recently though - up until a few months ago the way content was offered up seemed to tend towards fuelling YouTube 'stars' - the stuff offered up was done so on a mix of factors that considered things you've seen and searched for before and the most viewed videos on the platform. And you can see why there is a presumption that you would want to see something that everyone else is watching. You could argue that it steered viewers towards a handful of very successful channels and made it difficult for any new venture to get started though. And the sort of point of YouTube is that it's somewhere to see and show anything and everything, not just the voices and faces of a few.
That seems to have flipped - I curiously get videos offered to me now that have had dozens of views in the decade since they were uploaded, and I've seen items by YouTubers saying the metrics of their channel / content are now very odd - with their content clearly being promoted to wider demographics but getting very low engagement as a result. It's not really clear what the point of this shift is, but the result is that weird random crap finds itself in front of more people.
I’ve reported it.
What's quite grim is what happens next when you do that - there was an excellent Storyville documentary (not currently available on iPlayer unfortunately) about the outsourced teams that do the content moderation for the big social media companies - people whose job it is to view successive images and films of porn, abuse, violence and death when we click 'report' - a guy who's seen so many ISIS beheading videos that he can view a picture of a corpse and know how sharp the knife was.
It’s not that they “choose not to”, it’s that it’s simply not possible!
It’s not possible if they have the policy that anybody can upload / uploads are instant / whatever their current policy is. But if they changed their policy so that nothing could find its way online until it had been robustly checked, then such damaging content wouldn’t find its way onto YouTube.
But if they took this approach, they wouldn’t be able to continue making the same amount of money they currently do. So the consequences of damaging content getting online are basically seen as collateral damage, whilst they continue to make money.
It’s not possible if they have the policy that anybody can upload / uploads are instant / whatever their current policy is. But if they changed their policy so that nothing could find its way online until it had been robustly checked, then such damaging content wouldn’t find its way onto YouTube.
You typed that paragraph, clicked submit, and it appeared instantly on this moderated forum. Should the nature of this forum be that every sentence is viewed and vetted at every step of the conversation before being published?
If the owners of STW recognised that there was a problem with damaging content being able to be instantly uploaded to their platform, then they would need to make a decision if they should continue with their platform in its current state. I don’t believe that is an issue with STW, but clearly it is with YouTube, Facebook etc
*deleted by moderator*
At the dawn of video there were rumours of some snuff movies.
I don’t believe that is an issue with STW, but clearly it is with YouTube, Facebook etc
I had to raise an issue with Mark many years ago when one of STW's ad servers provided me with a lovely image of a guy who'd had the lower half of his face torn off in a motorcycle accident
You typed that paragraph, clicked submit, and it appeared instantly on this moderated forum. Should the nature of this forum be that every sentence is viewed and vetted at every step of the conversation before being published?
This is a false comparison, as nobody is being shown stuff on this forum by an algorithm, we're all choosing it.
There's an argument to be made that if your (YouTube, Facebook etc) business model relies on choosing what content to show people in order to make a profit, then you should be responsible for making sure that content isn't harmful.
IMV when social media started curating what to show people through algorithms, they stopped being just a platform and stepped over the line into being publishers, with all that that entails.
The US will never regulate it, after all they're mostly US companies, but that doesn't mean the rest of the world shouldn't.
I know my response was a simplistic one.
I know it's not going to happen - just like with nuclear weapons, the genie is out of the bottle.
I guess this is just a bit of a raw subject for me as I have a neurodiverse child addicted to doom scrolling, and I'm watching it reduce them in every way, and it's tearing me apart. To think that people are making huge sums of money out of this makes me angry.
Don't use Instagram Reels if you don't like seeing death, every 7-10 videos on there for me is someone getting killed and the content's not always flagged with the "sensitive content click see reel to watch anyway" marker
IMV when social media started curating what to show people through algorithms, they stopped being just a platform and stepped over the line into being publishers, with all that that entails.
They've always used algorithms to curate what they show you as they can't possibly show you everything...
500 hours per minute means 30,000 people needed round the clock, or around 140,000 full time employees, if you only watched each video once. Assuming a need for training, a need for some videos to be viewed multiple times etc, it's probably around 200,000 employees needed to moderate all the content. On top of that you'd need a management structure, maybe another 20,000 people, plus IT support, cleaners, office space, etc. It's just not feasible.
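A rough sanity check of those numbers, sketched in Python with the assumptions spelled out in the comments (the 500 hours/minute figure from earlier in the thread; the exact totals shift with the assumptions, but they land in the same six-figure ballpark):

```python
# Back-of-envelope check of the staffing estimate above.
# Assumed figures (not official): 500 hours uploaded per minute,
# a 40-hour working week, and a 1.4x overhead for training,
# repeat viewings, appeals and sickness cover.

UPLOAD_HOURS_PER_MINUTE = 500

# Hours of new content arriving every hour, i.e. seats needed
# round the clock if every video is watched exactly once.
seats_round_the_clock = UPLOAD_HOURS_PER_MINUTE * 60        # 30,000

# Convert 24/7 seats into full-time employees.
hours_in_a_week = 24 * 7                                     # 168
worked_hours_per_week = 40
fte_single_view = seats_round_the_clock * hours_in_a_week / worked_hours_per_week

overhead = 1.4
fte_with_overhead = fte_single_view * overhead

print(f"{fte_single_view:,.0f} FTEs to watch everything once")   # 126,000
print(f"{fte_with_overhead:,.0f} FTEs with overhead")            # 176,400
```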
Must say Instagram is bad if you click search and random stuff comes up.
Don’t use Instagram Reels if you don’t like seeing death, every 7-10 videos on there for me is someone getting killed
If this is true, and you can’t stop it despite the reports… why the F are you still on Instagram? Your support (by viewing non-death stuff) is ultimately allowing this. The mind boggles. Sack it off, you’ll be thankful for the time back if nothing else.
Against 2.7 billion active monthly users for YouTube globally, 200,000 is a pretty small number. It's not that there aren't moderators - there are tens of thousands of them, albeit at arm's length so they don't really show up on Google's / Meta's or whoever's roster. What would change the nature of that work for the people that have to do it is that, for the large part, knowing that the content is going to be checked would give posters pause for thought. Putting something horrible up so that it's there for as long as you can get away with is different to uploading something that you know won't get past moderation - moderation would require far less intervention.
500 hours per minute means 30,000 people needed round the clock, or around 140,000 full time employees, if you only watched each video once. Assuming a need for training, a need for some videos to be viewed multiple times etc, it’s probably around 200,000 employees needed to moderate all the content. On top of that you’d need a management structure, maybe another 20,000 people, plus IT support, cleaners, office space, etc. It’s just not feasible.
It *is* feasible. For certain it's a lot of people, but there's nothing about it that would make it unfeasible. 250k employees isn't even all that big in the grand scheme of things.
The question really is whether it's worth it, not whether it's practically possible.
This is a false comparison, as nobody is being shown stuff on this forum by an algorithm, we’re all choosing it.
It’s not a completely false comparison; I could start a thread right now with an innocuous and intriguing title and include in that any sort of evil I wanted until it was reported / removed. The more people engage with it to write “that’s ridiculous, reported” the more it stays at the top of the page until a mod gets to it. It’s not sophisticated but it’s the same idea. On some other forums, if you are reading a thread it recommends other threads that look similar - again, an algorithm.
Your point about “validation” of the algorithm is interesting though. If I was asked to design a “safe” algorithm I’d have weightings for the users who post, the number of times a video has been watched, reports received etc, and that would factor into how videos were propagated to new users. BUT if you got some totally random irrelevant content it has probably already been seen by lots of people who haven’t reported it - the judgement is not made by YouTube but by other users.
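A toy version of that sort of weighting, purely as a hypothetical sketch (the names, weights and threshold are all made up; it is not how YouTube actually scores anything):

```python
from dataclasses import dataclass

@dataclass
class Video:
    uploader_reputation: float  # 0.0 (brand new / previously flagged) .. 1.0 (long-standing, clean)
    views: int
    reports: int

def safety_score(v: Video) -> float:
    """Hypothetical weighting: uploader reputation helps, reports hurt,
    and lots of views with few reports earns a video some trust."""
    report_rate = v.reports / max(v.views, 1)
    return (0.5 * v.uploader_reputation
            + 0.3 * min(v.views / 10_000, 1.0)  # cap the benefit of popularity
            - 5.0 * report_rate)                # reports weigh heavily

def recommendable(v: Video, threshold: float = 0.4) -> bool:
    # Below the threshold, hold the video back from new users
    # and send it for human review instead.
    return safety_score(v) >= threshold

# The weakness described above: widely watched but rarely reported content
# passes easily, because the judgement rests on other users, not on the platform.
print(recommendable(Video(uploader_reputation=0.3, views=50_000, reports=3)))  # True
```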
You aren’t required to use YouTube (or to watch stuff it suggests - turn autoplay off).
You definitely aren’t required to watch Shorts.
I watch a lot of YouTube but very few Shorts. So I just did a test - of the first twenty videos it shows: 13 were from channels I either subscribe to or watch fairly often; 4 were from closely related channels; 2 seemed to be adverts; 1 was a bit “random” but was just a bit of bizarre weirdness - it was certainly in no way offensive and I’d probably have watched it all the way through if I wasn’t scrolling to the next one to write this summary. Obviously it’s not your fault if you are getting served content you don’t want - but like the people who tell me Facebook is full of people having political arguments - it’s not, Facebook has decided they want political arguments and Google has decided you want to see nasty shit (whereas my Facebook is full of family, club news etc and my YouTube is Taskmaster outtakes, Would I Lie To You clips, educational science content etc).
Don’t use Instagram Reels if you don’t like seeing death, every 7-10 videos on there for me is someone getting killed and the content’s not always flagged with the “sensitive content click see reel to watch anyway” marker
Whereas I get a few dashcam videos, a whole bunch of weight lifting stuff, some motorbike stunt riders, and a shitload of cat videos thanks to my daughter.
I don’t think I’ve ever seen a sensitive content warning on there. So if it’s recommending you such stuff, it’s doing it because you’ve watched them in the past.
The question really is whether it’s worth it,
No.
250k employees on minimum wage is just under £5bn in salary alone, never mind all the other costs. As big as YouTube is (25bn in revenue p/a, not sure what the profit on that is), I’m not sure even they could afford that.
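The salary figure roughly checks out under the obvious assumption of around £20k a year per moderator (an illustrative figure, not the actual minimum wage rate):

```python
# Rough check of the salary-bill claim, assuming roughly £20,000 a year
# per moderator at minimum wage (illustrative figure, not an official rate).
moderators = 250_000
salary_per_year = 20_000
print(f"£{moderators * salary_per_year / 1e9:.1f}bn per year")  # £5.0bn
```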
Don’t use Instagram Reels if you don’t like seeing death, every 7-10 videos on there for me is someone getting killed
Aren't those things only 30s long? So you are seeing the death of a person every 3.5 to 5 minutes?
I got rid of it because it was showing vids of girls showing their underwear which is surely less damaging than people being killed.
250k employees on minimum wage is just under £5bn in salary alone, never mind all the other costs. As big as YouTube is (25bn in revenue p/a, not sure what the profit on that is), I’m not sure even they could afford that.
Of course they could. My employer has somewhere between 25 and 30 billion euro revenue a year, about 2 billion profit, and has nearly 300k employees who get paid a hell of a lot more than minimum wage.
I got rid of it because it was showing vids of girls showing their underwear
You got rid of it because of that? 😉
Of course they could. My employer has somewhere between 25 and 30 billion euro revenue a year, about 2 billion profit, and has nearly 300k employees who get paid a hell of a lot more than minimum wage.
So could you afford an additional 250k employees, which is what’s being asked here?
You got rid of it because of that?
On my optician’s advice.
Nobody ever saw Rotten back in the day?
IMV when social media started curating what to show people through algorithms, they stopped being just a platform and stepped over the line into being publishers, with all that that entails.
Nope, unless it's P2P, the same rules apply. None of those things have ever been allowed on the platform, but it happens. People post stuff on here all the time that breaks the rules and it gets removed by the same process.
Don’t use Instagram Reels if you don’t like seeing death, every 7-10 videos on there for me is someone getting killed and the content’s not always flagged with the “sensitive content click see reel to watch anyway” marker
Literally never seen this.
The question really is whether it’s worth it,
No.
250k employees on minimum wage is just under £5bn in salary alone, never mind all the other costs. As big as YouTube is (25bn in revenue p/a, not sure what the profit on that is), I’m not sure even they could afford that.
Like a lot of the internet, YouTube runs on a free-to-use basis funded by advertising (which frankly is often alarmingly badly moderated too), with an option to pay for an ad-free experience. The consequence of poor moderation can be felt by all users (free or paid) but it's not caused by all users, only by the ones that post content. It's the content creators / uploaders / re-uploaders that create the burden. 500 hours per minute of uploaded content, and most of that content will only be a few minutes long.
What if an upload cost a quid? Would people post content they know will be taken down soon if it cost them a bit of money? Would people post illegal content if it involved a traceable transaction rather than a burner email account? The burden of moderation could be both reduced and self-funding.
So if it’s recommending you such stuff, it’s doing it because you’ve watched them in the past.
Not necessarily. If the algorithm detects that you're a passive consumer of content, it'll take you quite quickly into extreme stuff. It's a known feature of these algorithms.
As mentioned in my original post, I've seen more than enough death IRL that I'm not in any way curious about it. I don't think it should be shown as entertainment because it's disrespectful. Most of my YouTube Shorts (which cross over with TikTok and Instagram Reels, I believe) have been mountain biking, patisserie making and barbecuing.
As mentioned, the only thing I can think of is the video that I saw yesterday of the chimpanzees and the baboon, which I let repeat a few times because I was checking what I'd just seen.
There are platforms which don't use algorithms, instead you choose what you see. It doesn't solve the need for moderation but it does mean you only get stuff you search for or from providers you trust.
I don't know if governments will ever get to grips with it, but is there mileage in regulating what the algorithms are designed for?
so could you afford an additional 250k employees, which is what’s being asked here?
Google (the evil empire behind YouTube) employ about 160k people.
Their revenue is currently around 280 billion dollars.
Their profit is something like 60 billion dollars.
They can afford a few more staff if they want to. Larry, Sergey and Sundar might have to forego new yachts.
As mentioned, the only thing I can think of is the video that I saw yesterday of the chimpanzees and the baboon, which I let repeat a few times because I was checking what I’d just seen.
It could just as easily be a manipulation of tags and other identifying criteria by the uploader. If someone thought it was funny to shock the unsuspecting, they could upload videos of death and mutilation and tag them as patisserie and home baking.
"I don’t think I’ve ever seen a sensitive content warning on there. So if it’s recommending you such stuff, it’s doing it because you’ve watching them in the past. "
That is absolutely not true.
The hook systems used are wide, varied and amazingly good at radicalizing the viewer, for want of a better phrase, i.e. drawing them away from the shallows and into deep water bit by bit, vid by vid. The more you scroll the more they learn. Just hovering for a split second longer on something will be enough of a trigger.
I'm not sure everybody here, especially those without children, realises just how much time children and young adults can spend on these insidious sites. Obviously the platforms just want you to watch content - they don't care what it is. So if the algorithms decide a particular account gets more screen time with cat videos, it's unlikely that viewer will end up with violent content. But the moment the screen time drops they will be trying something else, and almost always the end result is more extreme. The human brain is designed to constantly recalibrate a baseline - it's the only way we can cope - but this means we are very good at desensitizing ourselves in the short term... at the huge expense of trauma in the long term.
They can afford a few more staff if they want to. Larry, Sergey and Sundar might have to forego new yachts.
Lots of people who use this forum would be adversely affected too - Alphabet will be a significant part of many people’s pension portfolios. Easy to point the finger at “big corporate”, but like it or not it’s not quite as simple as telling them to behave better.
This is just one of the many unpleasant consequences of our expectation that stuff on the internet should be free. Every previous content delivery system was either paid for, like books, magazines, movies and LPs, or funded by ads that were only very crudely targeted - commercial TV and radio, free newspapers. I've watched the evolution of the internet since the beginning and watched it become more and more corrupted by the greed of the big players. What started out as the democratisation of access to information and communication has become what we see today. It's tragic, but the genie isn't going to get back in the bottle and we will just have to figure out ways to live with it.
Not necessarily. If the algorithm detects that you’re a passive consumer of content, it’ll take you quite quickly into extreme stuff. It’s a known feature of these algorithms.
I’m not sure what a passive consumer of content is. By the sounds of it, yesterday you were slightly less passive by repeatedly rewatching a video that many would find a bit gruesome. My understanding is the algorithm now thinks* that you will probably like other videos that other people who also watched/liked/shared/rewatched that video also liked. If you’ve fallen into a pool with the young lad types who share gruesome content, that would explain it. Certainly with main YouTube you can tell it you don’t want to see a particular video or channel, and that has an impact on the algorithm in the other direction (e.g. if you say “don’t show me this” on a patisserie channel, you’ll start seeing less fine cooking content).
* the algorithm of course doesn’t think at all - we anthropomorphise it because it’s easier than accepting you’ve been manipulated to watch something by a set of mathematical calculations with absolutely no actual insight into who you are or indeed what the videos are about.
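A toy version of that "people who watched this also watched" idea, just to illustrate that co-viewing counts are doing all the work (entirely hypothetical; not YouTube's actual system):

```python
from collections import Counter
from itertools import combinations

# Toy watch histories, by video id. Nothing about the content is inspected.
histories = [
    ["croissants", "sourdough", "bbq_brisket"],
    ["croissants", "chimp_hunt", "dashcam_crash"],
    ["chimp_hunt", "dashcam_crash", "propeller_clip"],
]

# Count how often each pair of videos is watched by the same person.
co_watch = Counter()
for history in histories:
    for a, b in combinations(set(history), 2):
        co_watch[frozenset((a, b))] += 1

def recommend(seed: str, n: int = 1) -> list[str]:
    """Videos most often co-watched with `seed`, ranked by count alone."""
    scores = {
        next(iter(pair - {seed})): count
        for pair, count in co_watch.items() if seed in pair
    }
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Rewatch one gruesome clip and the "people who watched this also watched"
# pool pulls in more of the same, with no insight into what the videos are.
print(recommend("chimp_hunt"))  # ['dashcam_crash']
```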
Can I just point out: you don't want to see those videos, so now consider the impact on an employee expected to spend 7-8 hours a day watching this stuff?
There really isn't an easy solution. Just have to wonder about the mentality of the person uploading.
Though one feature I would be grateful for, across all platforms, is the ability to pick the subject. I don't want to see many of the video topics served, but there seems to be little in the way of control to simply block a topic.
Accidentally saw someone being decapitated in an RTA on Reddit a few months ago, not something I go looking for. I assume these things are done deliberately by the poster(s) - while I can't remember what sub-forum I was looking at, I don't use it for anything controversial so deffo would not have been somewhere you'd expect to see that.
It seemed real, but hard to say as I closed it down as soon as my brain processed what I was looking at, and I'm not really minded to go back and scrutinise it for its authenticity.
I'm 43 and it was unpleasant to watch, so it's a worry if these things are appearing regularly for literally anyone at any age to see.
We've just ditched Spotify Family as it turns out the podcasts are basically just Tiktok. Cue daughter not sleeping because she'd gone from watching funny videos to creepy ones. Hoping Tidal is better.
Facebook can’t check all uploads. They should check all recommendations before they serve them up.
it turns out the podcasts are basically just Tiktok
Sorry, what?
Cue daughter not sleeping because she’d gone from watching funny videos to creepy ones. Hoping Tidal is better.
What makes you think she won't continue to watch stuff on Spotify with ads? You're not really fixing anything here.
Any of the big platforms could, if forced, validate content BEFORE making it available to users. There is nothing which compels content to be available immediately.
It should be relatively easy* to stick every piece of content uploaded to YT etc in an auto-moderation queue and classify it before making it available to the public/users, and those users should be forced to "opt in" to receiving such content.
Who cares if there's some small delay between content being uploaded and actually being available to view 🤷
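A minimal sketch of what that hold-until-classified flow could look like (a hypothetical design with made-up names and ratings, not anything YouTube actually exposes): uploads sit in a queue, an automatic classifier assigns a rating before anything goes live, and users only see content at or below the level they opted in to.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Callable

class Rating(IntEnum):
    GENERAL = 0
    SENSITIVE = 1
    GRAPHIC = 2

@dataclass
class Upload:
    video_id: str
    rating: Rating | None = None   # None = still in the moderation queue

@dataclass
class Platform:
    classifier: Callable[[str], Rating]
    queue: list[Upload] = field(default_factory=list)
    published: dict[str, Rating] = field(default_factory=dict)

    def upload(self, video_id: str) -> None:
        # Nothing goes live at upload time - it just joins the queue.
        self.queue.append(Upload(video_id))

    def run_moderation(self) -> None:
        # The (auto) classifier runs before anything is visible to users.
        while self.queue:
            item = self.queue.pop(0)
            self.published[item.video_id] = self.classifier(item.video_id)

    def feed_for(self, opted_in_up_to: Rating) -> list[str]:
        # Users only ever see content at or below the level they opted in to.
        return [vid for vid, r in self.published.items() if r <= opted_in_up_to]

# Stand-in classifier for the sketch; in reality this is the hard part.
platform = Platform(classifier=lambda vid: Rating.GRAPHIC if "crash" in vid else Rating.GENERAL)
platform.upload("patisserie_basics")
platform.upload("propeller_crash")
print(platform.feed_for(Rating.GENERAL))   # [] - nothing is live until moderation runs
platform.run_moderation()
print(platform.feed_for(Rating.GENERAL))   # ['patisserie_basics']
print(platform.feed_for(Rating.GRAPHIC))   # ['patisserie_basics', 'propeller_crash']
```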
Anyway, I have regular purges of content on IG and report anything I don't want to see eg I've reported golf videos as "offensive" content as I simply have no interest in that subject matter - it seems to work. And I'm also careful with who I follow (usually just bike specific or trail builders)
What makes you think she won’t continue to watch stuff on Spotify with ads? You’re not really fixing anything here.
It's a known problem, apparently. We have parental controls in place but there's no way to turn off videos.
Right, but it still doesn't answer my question, what's stopping her accessing it without a family plan in place?
We went through something similar with our daughter and had to get the message across that she shouldn't be watching inappropriate content. She also has no screens at bed time.
Still none the wiser on the TikTok thing...
You're right, it's mostly about education and lessons to be learned. No screens in bedrooms; talk to us about anything that bothers you; all that.
That's fine. The internet is full of rubbish but this feels like the narrow end of a wedge. I'm not entirely clear on the link between Spotify canvas and Tiktok/YouTube but it's not one I was expecting. Binning Spotify is probably an overreaction but Tidal feels better anyway. They pay performers more at least.
Binning Spotify is probably an overreaction but Tidal feels better anyway. They pay performers more at least.
Spotify is rapidly becoming a general streaming service and moving away from specifically music, plus as you say they pay artists bugger-all. Tidal or Apple Music are just music, although I understand there is a podcast function available in Apple Music, but I’ve no idea if it has links to TikTok - I somehow doubt it. I subscribe to Apple One, mainly for the Music and Cloud storage, I have zero interest in podcasts, but the Apple Family subscription I might imagine to have tighter controls over access to things like podcasts and video content, mainly because of Apple’s tight control over content from 3rd-parties; although 3rd-parties are doing their utmost to start shoving their content into places where people don’t necessarily want it.
If I wanted endless amounts of crap from Google, then I’d access it via Google’s various portals - until Google gets bored and shuts them down, like it’s about to do with driving aids in its navigation apps… 🤷🏼
With Spotify you need to make sure you use the Spotify Kids app until they’re old enough to cope with anything. Just having a Family plan doesn’t give any sort of parental control, it just makes it a bit cheaper for multiple users.
Where Spotify does lack is catering for 12-16 age group. Not really old enough for unfettered access to everything on Spotify, but too old for the curated content on the Kids app.
Any of the big platforms could, if forced, validate content BEFORE making it available to users. There is nothing which compels content to be available immediately.
Well, part of their offering is live video which by its nature requires real time! Even excluding this, why should there be an arbitrary delay for me uploading some rather dull technical video? Too much content to practically review - and presumably 99.9% of content is fine anyway. Algorithms (AI or simple rules) should be able to spot 95% of the dodgy content quickly, but users who want to post bad shit are clever - say your first N videos get checked, they will soon learn this and then post innocuous stuff for them.
It should be relatively easy* to stick every piece of content uploaded to YT etc in an auto-moderation queue and classify it before making it available to the public/users, and those users should be forced to “opt in” to receiving such content.
are you prepared to pay for YT? Do you think YT competitors would emerge specifically to target the uncensored market?
are you prepared to pay for YT? Do you think YT competitors would emerge specifically to target the uncensored market?
Well, you could say that already exists, but good luck with commercial legs for it.
Any of the big platforms could, if forced, validate content BEFORE making it available to users. There is nothing which compels content to be available immediately.
I'm with you on this.
There shouldn't be an absolute compulsion to deliver every bit of content imaginable without a technical barrier of some sort.
I don't for one minute believe they can't scan for restricted material before it's compressed for delivery. Where is good AI when you need it?
You soon get sussed for commercial music!
Most content on YouTube is barely watched. Most YouTube creators are barely making any money at all. Of the ones who are making money, very few of them are making enough to cover their costs, fewer still are making a living.
It strikes me that a bit of quality control and barriers to entry wouldn’t be a bad thing.
I’m not entirely clear on the link between Spotify canvas and Tiktok/YouTube but it’s not one I was expecting.
I'm still waiting for you to explain what you think the link is. It's your statement I'm having a hard time understanding.
it turns out the podcasts are basically just Tiktok.
An exaggeration, but as bensales says, it's not suitable for 12-16 year olds (mine are 11 and 13) as it's too easy to find content which is inappropriate. I could see her getting sucked in by the funny stuff, and while the creepy stuff probably isn't that bad, I don't know what else is on there. I know there are plenty of teenagers who do use TikTok, and I worry about them too tbh.
Oh, and I saw a video of a beheading on reddit maybe 15 years ago that I'm never going to forget 🙁
are you prepared to pay for YT?
As it happens, I already do
Do you think YT competitors would emerge specifically to target the uncensored market?
Where did I say the content is censored? What I said was that users must opt in to be able to view certain types of content. Frankly, who gives a f++k about the financial impact on Alphabet's or Meta's bottom line? They have enough cash and resources to take the hit and fix the problems they've caused.
are you prepared to pay for YT?
As it happens, I already do
Interesting - do you still find them serving you content you don’t want and find grossly offensive?
Do you think YT competitors would emerge specifically to target the uncensored market?
Well, your sentence was missing some letters/words so I had to guess what you meant! But you seemed to be saying YT should moderate all content, but people could choose to view unmoderated content. That seems to give YT a significant degree of editorial control, i.e. censorship.
Where did I say the content is censored? What I said was that users must opt in to be able to view certain types of content.
you already largely can, and if you don’t like the service they offer you aren’t compelled to keep going there, never mind paying them cash for the privilege.
Frankly, who gives a f++k about the financial impact on Alphabet's or Meta's bottom line? They have enough cash and resources to take the hit and fix the problems they’ve caused.
as I said a few pages ago, probably all of us - unless you don’t have any pensions, etc.
But you seemed to be saying YT should moderate all content, but people could choose to view unmoderated content. That seems to give YT a significant degree of editorial control, ie censorship.
I'm saying all content could be / should be "auto-classified" - I'd be surprised if the capability doesn't already exist (or something similar isn't already used to serve up existing content based on their fabled "algorithms"). And that ANY content has to pass through this auto-classification system before being available to end users. (This would, possibly, incentivize providers to provide near real-time auto-classification.)
And specific "opt-in" controls provided at login.
I'd be OK with certain exceptions for "live broadcast" scenarios for some "licensed/authorized" users and/or reducing "live" to near real-time (like TV typically has a few seconds' delay; again, incentivize the providers to fix the problem).
as I said a few pages ago, probably all of us – unless you don’t have any pensions, etc.
Meh. Of course I have a pension (and I'll be drawing it pretty soon!) but it's a tracker so I doubt a significant percentage of the value is in Alphabet or Meta, and even if it is, I doubt such changes would make any material difference. Of course, if you choose your own stocks and choose to invest in this type of provider, you're dancing with the devil anyway and you could always dump or short them 🤷
Of course they could. My employer has somewhere between 25 and 30 billion euro revenue a year, about 2 billion profit, and has nearly 300k employees who get paid a hell of a lot more than minimum wage.
And I bet, assuming your company is publicly traded, that number of employees will be about the bare minimum required in order to maintain/grow profit etc. Even if the CEO of YouTube cared enough that they wanted to employ another 250k content moderators they are beholden to shareholders (whether it's Alphabet's or whoever's) and their board. Any CEO who suddenly causes a $5b /year drop in profit (without it being due to a regulatory requirement) won't be around long.
Any CEO who suddenly causes a $5b /year drop in profit (without it being due to a regulatory requirement) won’t be around long.
I'd vote for regulatory requirement but I can't see any US or UK government having the balls to try it, though the French or EU may be more likely 😄
So my Facebook short video things have hitherto been cats on robot vacuum cleaners, kitesurfing and wingsuit flying, with the odd bike-related vid.
Today, when quickly scrolling to find if anyone on the local news site knew why my daughter's bus time had changed, I saw a woman hit and killed by a speeding car.
Jeez. WTF. Please report it.
I’d vote for regulatory requirement but I can’t see any US or UK government having the balls to try it, though the French or EU may be more likely 😄
If any one country introduced that requirement, a service would simply pull out of it as there's not enough profit in advertising in one market to pay for the $5bn in costs. I doubt there's even enough profit in all markets to cover that - revenue may be $28bn but I bet their margins are relatively small.
I’m saying all content could be/should be “auto-classified” – I’d be surprised if the capability doesn’t already exist (or something similar isn’t already used to serve up existing content based on their fabled “algorithms”).
There is probably some degree of analysis already! But contrary to the media impression, AI isn’t actually that smart - there will be false positives and false negatives. You can tune the algorithm to be very safe - then you piss off legit content providers who are blocked for no good reason, and run up your operating costs dealing with their appeals - OR you can tune it to let some stuff through because users can report offensive stuff, and you’ll need processes and staff for those reports anyway because some users will report stuff you believe is acceptable. And replacing AI with humans is not the solution, as humans watching hours of footage will not be 100% robust at applying a somewhat subjective threshold either.
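A toy illustration of that trade-off (made-up scores, nothing to do with any real classifier): the same set of uploads, two different thresholds, and the errors just move from one side to the other.

```python
# Toy illustration of the tuning trade-off: made-up classifier scores
# (how dodgy an upload looks) paired with whether it actually breaks the rules.
uploads = [
    (0.05, False), (0.20, False), (0.45, False),  # legitimate content
    (0.30, True),  (0.55, True),  (0.75, True),   # rule-breaking content
]

def outcomes(threshold: float) -> tuple[int, int]:
    """(legit uploads wrongly blocked, bad uploads wrongly allowed) at a given threshold."""
    false_positives = sum(1 for score, bad in uploads if score >= threshold and not bad)
    false_negatives = sum(1 for score, bad in uploads if score < threshold and bad)
    return false_positives, false_negatives

print(outcomes(0.25))  # (1, 0): strict - nothing slips through, one legit video blocked
print(outcomes(0.50))  # (0, 1): lenient - no legit videos blocked, one bad one gets through
```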
And specific “opt-in” controls provided at login.
You opt in by going to YouTube - it’s not compulsory. Using auto play definitely isn’t.
I’d be ok for certain exceptions for “live broadcast” scenarios for some “licensed/authorized” users and/or reducing “live” to near real-time (like TV typically has a few seconds delay; again, incentivize the providers to fix the problem)
AI comes at huge expense and requires masses of energy - running real-time AI on all live YouTube would be crazy! “Licensed” users would be much easier, but someone is then making an arbitrary decision on who is good and who is bad and therefore likely to post good/bad content in the future. If you don’t like their current approach, stop paying them money.
as I said a few pages ago, probably all of us – unless you don’t have any pensions, etc.
Meh. Of course I have a pension (and I’ll be drawing it pretty soon!) but it’s a tracker so I doubt a significant percentage of the value is in Alphabet or Meta, and even if it is, I doubt such changes would make any material difference. Of course, if you choose your own stocks and choose to invest in this type of provider, you’re dancing with the devil anyway and you could always dump or short them
I think you may be underestimating the impact across the whole tech sector, and then the ripple effect across other markets, if governments were to suddenly start introducing regulations that slashed their profits. It’s an uncomfortable truth that most of us ignore as we berate “corporate greed” that the biggest shareholders in those firms are often pension funds.
Yeah, AI is hugely data and processing intensive with inaccurate results for long tail categorisation.
AI is already scanning your uploads, if you put copyright music in there it'll block the upload, I bet if you try to just upload porn it'll get caught too.
Scanning for someone being shot vs someone's 6th form drama project of someone being shot is a lot harder
"Jeez. WTF. Please report it. "
I've no idea how you do that and quite frankly I don't care. Why should it be on me? To report it would mean trying to find a vid I immediately and instinctively clicked away from, which would presumably mean I have to see it again as well as nudging the algorithm once more.
As I only use Facebook for local news, which I can get elsewhere, and selling the odd thing, I've just deleted the app.
I’ve no idea how you do that and quite frankly I don’t care. Why should it be on me. To report it would mean trying to find a vid I immediately and instinctively clicked away from which would presumably mean I have to see it again as well as nudging the algorithm once more.
Well, don't complain then - it's only there because other people have all been "not my problem" too.
In the Facebook App - on a "reel" there are three dots in the bottom right - it brings up a menu allowing you to:
- Find support or report reel
- Ask why you are seeing this
- Hide the reel and see less like it (not report it just "retrain" the algorithm that you want less of this).
Alternatively you can click the three dots above the carousel of shorts to change settings there.
But if you saw a video you didn't even mean to play then you probably want to go:
Menu (bottom right) > Settings and Privacy > Settings > Media > Autoplay > "Never autoplay videos".
I would really prefer it if you could turn off the "shorts" functionality.
I use FB for keeping in touch with family and friends, but end up getting dragged down a rabbit hole of 15-second dopamine hits too easily.
Genuine question I've wondered:
There are lots of "content creators" who post content that is basically held up by them flashing their bums and more to camera. One I can think of specifically rabbits on about her chavved-up VW Polo. "Follow me while I change the brake pads on my car", but then proceeds to give the viewer an eyeful in the process. Every. Single. Video. The car is not the "star" of the show, shall we say.
Question: where is the line between "content creator" and "sex worker"? Not a criticism of a chosen career, per se. Does a stripper describe themselves as a "dancer"?