The Pollyannish Assumption
Apple is the easy one, and I started with them on purpose: using a term like “return on investment” gets a whole lot more problematic when dealing with abuse and human exploitation. That doesn’t mean it isn’t a real calculation made by relevant executives though: in the case of Apple, I think most people would agree that whatever investment in forum moderation would be effective enough to catch this post before it was surfaced on Twitter a couple of weeks later would be far better spent buttressing the internal quality control teams that missed the bug in the first place.
I’m not sure I agree with that. Apple’s developer forums (contra the general support forums) do not actually have overwhelming volume. Is it really not worth paying someone to read them? I don’t mean trying to reproduce every issue that people mention, but rather looking for outliers (in either frequency or severity) like the root access one. I got to thinking about this while listening to Brian Covey talk about how The Omni Group does support. Not everyone is going to file Radars about every problem they encounter, but there is a lot of valuable information already out there if Apple is willing to sift through it. If no one is doing this already, I would think that adding one person who can leverage the experiences of thousands would make more of a difference than that marginal increase in resources for the existing QA team. Along the same lines, why isn’t there someone whose job it is to go through the top Stack Overflow questions and advocate for fixing the bugs and filling in the documentation that is missing or unclear?
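To make that concrete, here is a minimal sketch (in Python, with made-up posts, watch terms, baselines, and thresholds) of the kind of first pass a single triager could automate over forum or Stack Overflow data: count mentions of severity-weighted terms and flag anything that spikes above its historical average.

```python
from collections import Counter
from datetime import date

# Hypothetical post data; in practice this would be scraped from the
# developer forums or pulled via the Stack Exchange API.
posts = [
    {"date": date(2017, 11, 13), "text": "Anyone can log in as root with an empty password"},
    {"date": date(2017, 11, 14), "text": "Root login works without a password on High Sierra"},
    {"date": date(2017, 11, 20), "text": "Xcode autocomplete is slow again"},
]

# Terms a triager might watch, chosen for severity rather than volume,
# with (made-up) historical weekly averages to compare against.
WATCH_TERMS = ["root", "password", "data loss", "kernel panic"]
BASELINE = {"root": 0.2, "password": 0.5, "data loss": 0.3, "kernel panic": 0.4}
SPIKE_FACTOR = 3.0

def term_counts(posts, terms):
    """Count how many posts mention each watched term."""
    counts = Counter()
    for post in posts:
        text = post["text"].lower()
        for term in terms:
            if term in text:
                counts[term] += 1
    return counts

def flag_outliers(posts):
    """Return terms whose recent frequency is well above the baseline."""
    counts = term_counts(posts, WATCH_TERMS)
    return {t: c for t, c in counts.items() if c > SPIKE_FACTOR * BASELINE[t]}

if __name__ == "__main__":
    for term, count in flag_outliers(posts).items():
        print(f"outlier: '{term}' in {count} recent posts (baseline {BASELINE[term]}/week)")
```

With the hypothetical data above, the root-login reports stand out immediately; the point is not the code but that one person plus a dumb filter can cover the experiences of thousands.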
A major factor driving this growth is YouTube’s machine-learning algorithm for watching more videos[…] This should expose the obvious flaw in YouTube’s current reporting-based policing strategy: the nature of search and recommendation algorithms is such that most YouTube viewers, who would be rightly concerned and outraged about videos of child exploitation, never even see the videos that need to be reported. In other words, YouTube’s design makes its attempt to leverage the Internet broadly as moderator doomed to fail.
[…]
This is why it is critical that YouTube lose its pollyannish assumptions: were the company’s moderation approach to start with the assumption of bad actors, then child exploitation would be perhaps the most obvious place to look for problematic videos. Moreover, we know it works: that is exactly what Uziel and BuzzFeed did. If you know what you are looking for, you will, thanks to Google/YouTube’s search capabilities and recommendation algorithms, find it.
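Operationally, “starting with the assumption of bad actors” could look something like the sketch below: seed the process with known-bad search terms, then follow the platform’s own search and recommendations outward to build a human review queue. The function names and the stubbed search/related-video hooks are assumptions for illustration, not YouTube’s API; the point is that the same machinery that surfaces these videos to the wrong audience can just as easily surface them to moderators.

```python
from collections import deque
from typing import Callable, Iterable, List

def build_review_queue(
    seed_queries: Iterable[str],
    search: Callable[[str], Iterable[str]],   # query -> video IDs (stubbed below)
    related: Callable[[str], Iterable[str]],  # video ID -> recommended video IDs (stubbed below)
    max_depth: int = 2,
) -> List[str]:
    """Breadth-first walk from known-bad queries through recommendations,
    collecting everything found for human review."""
    queue = deque((vid, 0) for q in seed_queries for vid in search(q))
    seen, to_review = set(), []
    while queue:
        vid, depth = queue.popleft()
        if vid in seen:
            continue
        seen.add(vid)
        to_review.append(vid)
        if depth < max_depth:
            queue.extend((nxt, depth + 1) for nxt in related(vid))
    return to_review

# Toy stand-ins so the sketch runs; a real pipeline would call the
# platform's internal search and recommendation services.
fake_search = lambda q: [f"{q}-result-{i}" for i in range(2)]
fake_related = lambda vid: [f"{vid}/rec-{i}" for i in range(2)]

print(build_review_queue(["known bad phrase"], fake_search, fake_related, max_depth=1))
```

In miniature, this is the same search-then-follow-the-recommendations process the BuzzFeed reporting describes doing by hand.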
But as always, there’s a difference between what the law says and how the law is implemented. You could make a video that meets the criteria for fair use, but YouTube could still take it down because of their internal system (Copyright ID) which analyzes and detects copyrighted material.
So I learned to edit my way around that system.
Nearly every stylistic decision you see about the channel — the length of the clips, the number of examples, which studios’ films we chose, the way narration and clip audio weave together, the reordering and flipping of shots, the remixing of 5.1 audio, the rhythm and pacing of the overall video — all of that was reverse-engineered from YouTube’s Copyright ID.
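The postmortem doesn’t explain how the matching works internally, but a toy fingerprinting sketch (illustrative only, not YouTube’s actual system) suggests why those particular editing choices would matter: if matching hashes small windows of a signal against a reference index, a short excerpt contributes only a few matching windows, and any remixing changes the hashes outright.

```python
import hashlib

WINDOW = 5  # samples per fingerprint window (tiny, for illustration)

def fingerprints(signal, window=WINDOW):
    """Hash fixed-size windows of a signal into a set of fingerprints."""
    return {
        hashlib.sha1(bytes(signal[i:i + window])).hexdigest()
        for i in range(0, len(signal) - window + 1, window)
    }

def match_score(reference, upload):
    """Fraction of the reference's fingerprints present in the upload."""
    ref, up = fingerprints(reference), fingerprints(upload)
    return len(ref & up) / len(ref) if ref else 0.0

# Toy "audio": a full copyrighted track vs. a short excerpt and a remixed copy.
reference = list(range(100))
short_clip = reference[40:55]                    # brief excerpt -> few matching windows
remixed = [min(255, x + 1) for x in reference]   # slight remix -> different hashes

print(match_score(reference, short_clip))  # low score
print(match_score(reference, remixed))     # ~0.0
```

A real matching system presumably uses perceptual fingerprints that tolerate small changes, but the underlying trade-off (shorter, more heavily reworked clips produce weaker matches) is the same one those editing decisions were working against.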
Via Nick Heer:
If YouTube’s automatic flagging system didn’t exist, it’s likely that “Every Frame a Painting” would feel completely different. Whether it would have been better, I’m not sure, but I think the limitations of YouTube helped birth something truly unique and very, very good.
1 Comment
Paying someone to read Apple's developer forums?!?! Heck, they don't even read my radars.