My Little Blog

May 21, 2010

Oh my… the hypocrisy around WebM / VP8

Filed under: Uncategorized — kmi @ 10:30

For years, Mozilla and the like told us that they wouldn’t ship MPEG codecs because those are patented. At the same time they refused to support the patent-free but high-quality Dirac codec (developed by the BBC using techniques whose patents have expired – Xiph used the same method when designing Vorbis).

Then one day Google shows up and releases the sources to a codec that’s merely a derivative of the patented MPEG-4 AVC Baseline codec. Suddenly all hell breaks loose, and Mozilla immediately supports the new (possibly patented) codec.

This raises various questions:

  1. Why has Mozilla refused to use Dirac for years but adopted WebM / VP8 immediately?
  2. Why did Mozilla refuse to adopt the Matroska container until Google blessed it?
  3. Why is Google’s word on VP8’s patent situation just being taken as truth without an independent patent review?
    (After all, Android is covered by patents from Microsoft and possibly Apple’s as well.)

I suspect that since Mozilla (and Opera as well) gets many millions of dollars per year from Google for being the default search provider, questions about WebM are not asked. I think Mozilla mainly wants to please its pimp main sponsor to get the money.

Luckily KDE is not part of the discussion (I merely state my opinion as an individual). Konqueror simply uses Phonon to play back HTML5 videos, hence the video can be in whatever format as long as a compatible Phonon back-end is used – WebM, Dirac, AVC,…

57 Comments

  1. Uh. When has Firefox refused to adopt Dirac? And “for years”? Just a bit of googling gives me these answers from some Mozilla people:

    “As for Dirac, it’s just not ready yet. Maybe when it is ready, it’ll be included in Firefox. The great thing about Firefox’s video implementation is that it shouldn’t be difficult to add additional codecs as they mature and become useful.”

    “Dirac is great. At some point we’ll probably add Dirac support. However, at typical Web bit rates, Dirac doesn’t currently perform as well as Theora. The patent situation with Dirac is also currently less clear than with Theora. We’ll keep an eye on it.”

    “Dirac is great at high bitrates — for “archival video”, for which it was designed — but at typical Web bitrates, which are much lower, *it is not as good as Theora*, currently.”

    “Dirac is less mature and its legal situation would require more analysis — Theora seems like a safer choice at the moment. However, adding support for Dirac is definitely something we’ll consider in the future.”

    “Actually, if someone did the work to get Dirac integrated into liboggplay, we could probably take the code into our tree immediately. But we wouldn’t be able to enable it and ship it until we’ve had time to do more analysis.”

    “Dirac has enormous processing requirements. Most users wouldn’t be able to watch a youtube-sized video without problems.”

    A BBC guy: “I am fully committed to the development and success of Dirac, but for now those efforts are focused on high-end broadcast applications.”

    So it doesn’t seem that Dirac was a good option…

    And who tells you they haven’t done a patent analysis on VP8? Anyway, personally, I trust On2/Google lawyers more than an x264 developer, just because Google has a _lot_ to lose if they can’t defend the patent-free status in court.

    Comment by Fulano — May 21, 2010 @ 12:13

    • “As for Dirac, it’s just not ready yet.”
      Pure claim. No actual fact to back it up.

      “we wouldn’t be able to enable it and ship it until we’ve had time to do more analysis.”
      So Dirac needs a long analysis, but not WebM….

      “Dirac is less mature and its legal situation would require more analysis — Theora seems like a safer choice at the moment.”
      Less mature than Theora, maybe. Certainly not less than VP8 which is younger than Dirac.

      “Dirac has enormous processing requirements.”
      That comment is based on an old version. Newer Schrödinger releases (esp. 1.0.9) became much faster.

      “So it doesn’t seem that Dirac was a good option…”
      I never wrote that Dirac should be adopted instead of Theora, but in addition to Theora. The higher the resolution, the better Dirac performs. In HD resolutions Dirac outperforms Theora bitrate-wise.

      “who tells you they haven’t done a patent analysis on VP8?”
      In which timeframe should that have been done?

      “Google has a _lot_ to lose if they can’t defend the patent-free status in court”
      Huh? Google already licensed all MPEG-4-related patents for YouTube.
      Google has nothing to fear. Neither does Adobe.
      But for all those adopters who don’t own MPEG-4 patent licenses, the situation is quite different.

      Comment by Markus — May 21, 2010 @ 17:31

      • > Less mature than Theora, maybe. Certainly not less than VP8 which is younger than Dirac.

        That’s bullshit. If you had read your own link to Jason’s post, he cites evidence from the sources that the VP8 codebase goes back at least to 2004.

        > In which timeframe should that have been done?

        Google was in negotiations to buy On2 since at least summer 2009. The agreement to purchase was made in August 2009; since then there were a number of delays due to On2 shareholders trying to prevent the sale (against the intentions of On2’s CEO Matt Frost). There has been plenty of time.

        Comment by Eike Hein — May 21, 2010 @ 17:41

      • > So Dirac needs a long analysis, but not WebM….

        As has been pointed out again and again, they had plenty of time. Mozilla _implemented_ this in their nightly builds so they probably knew of this a day or a hundred in advance.
        FWIW, even the author of x264 knew about it before the official release.
        Pretty much everyone else assumed it, but _knowing_ is something different, still.

        > I never wrote that Dirac should be adopted instead of Theora, but in addition to Theora. The higher the resolution, the better Dirac performs. In HD resolutions Dirac outperforms Theora bitrate-wise.

        On the one hand, I can agree with that. Choice is good.
        On the other hand, one common, strong, viable standard to battle H.264 is needed desperately.

        > In which timeframe should that have been done?

        The months everyone who had an interest in this knew about it?

        > Google has nothing to fear.

        Except the damages for incitement to infringe.

        Comment by RichiH — May 21, 2010 @ 21:54

  2. ?? That makes no sense.
    The _x264_ developer is clearly biased: he wants to start a company around x264 and of course he wants to discredit VP8. And VP8 is just at the beginning. Just look at Theora 5 years ago and look at it today. x264 is just much more mature.

    1. Why has Mozilla refused to use Dirac for years but adopted WebM / VP8 immediately?
    Because Dirac and all the tools it needs are not really ready? And it needs way too much horsepower. And it could also have patents attached to it. Like every piece of code out there.
    2. Why did Mozilla refuse to adopt the Matroska container until Google blessed it?
    Because Matroska is way too complex (extensible, think XML). WebM is just a subset.
    3. Why is Google’s word on VP8’s patent situation just being taken as truth without an independent patent review?
    (After all, Android is covered by patents from Microsoft and possibly Apple’s as well.)
    Every software has possible patent threats, but it is highly likely that Google did a thorough analysis of VP8.

    Comment by Tom — May 21, 2010 @ 12:23

    • “he wants to start a company around x264 and of course he wants to discredit VP8”

      So money is his reason for discrediting VP8, but Google’s money has no influence on Mozilla?

      “VP8 is just at the beginning (…) Dirac and all the tools it needs are not really ready”

      So both are immature, but WebM / VP8 gets the green light anyway? That WebM is immature is also confirmed by Opera: http://my.opera.com/haavard/blog/2010/05/20/webm-analysis
      And of course Dirac is ready. It has been ever since Schrödinger 1.0 was released.

      “it needs way too much horsepower”
      No: http://diracvideo.org/2010/03/schroedinger-1-0-9-released/

      “Matroska is way too complex (extensible, think XML)”
      It’s not too complex for DivX.

      “highly likely that Google did a thorough analysis of VP8”
      Still not independently confirmed.

      Comment by Markus — May 21, 2010 @ 13:24

      • Your hate for Google is getting the best of you, my friend.
        Step back and try to reflect.
        Sure VP8 is immature, nobody said anything else. But it is suitable for low bitrates and phones. Dirac isn’t.
        And look at your Dirac link and you will find this: “This entry was posted on Thursday, March 4th, 2010 at 2:19 am”

        Your post is basically an ill-informed rant fueled by hate. Not planet material, sorry.

        Comment by Tom — May 21, 2010 @ 15:56

      • “Step back and try to reflect.”

        I did and apparently I’m one of the few who actually do that. Surely Mozilla did not. Mozilla adopted VP8 without any reflection at all.
        Where is a patent review that ensures that Google’s claims are actually true? Why wouldn’t VP8 be a similar trap as Android?

        “Sure VP8 is immature, nobody said anything else.”

        Great. Then let it mature first before just adopting it into Firefox!

        Comment by Markus — May 21, 2010 @ 17:16

      • No arguments will work with you, I know. But like Theora, VP8 can be adopted and mature at the same time.

        The thing commenters probably don’t get is why you don’t like a royalty-free and FOSS codec? What are the bad outcomes? Why not try it? Where do you get all the negativity from? Choice is good. Before VP8, H.264 was a done deal for many people. Nobody knows what the future will bring. Any piece of software could be a victim of patent attacks, but that can’t be a reason not to go for a free and open solution instead of a clearly non-free one. Dirac is not for today’s web video, ask the developer. Maybe in 5 years, who knows.

        You don’t get that win-win situations exist. GSoC is one and so is VP8. (Google spent hundreds of millions and is now just giving it away FFS)

        EOD for me (I think you are weird and obsessed tbh.)

        Comment by Tom — May 21, 2010 @ 18:29

      • “The thing commenters probably don’t get is why you don’t like a royalty-free and FOSS codec?”
        If you were capable of actually reading, then you’d already know that it is not even certain whether VP8 is actually royalty-free. If VP8 infringes other corporations’ patents, they can demand royalties, which in turn makes it as free as any MPEG codec.

        “Dirac is not for today’s web video”
        Dirac is fine for today’s HD web videos.

        “Google spent hundreds of millions and is now just giving it away FFS”
        So VP8 was the only reason Google bought On2? Not On2’s work on embedded hardware that could be beneficial to Android smartphones?

        “I think you are weird and obsessed”
        Nobody forced you to read my post. Mark it in Akregator and hit Del if that’s within your capabilities.

        Comment by Markus — May 21, 2010 @ 18:48

      • Yadda, yadda. You are just some stupid FUD-slinger, like the MPEG-LA boss. No piece of software can be sure to be clear of patents, and I have already stated that numerous times. You need to learn to understand the words people write. Unless a court says otherwise, VP8 is royalty-free.
        And Dirac is just no good for most of today’s web videos, because most videos people watch are not HD.

        You need to get your head checked, there are major logic errors surfacing.

        http://www.osnews.com/story/23335/Patent_Troll_Larry_Horn_of_MPEG-LA_Assembling_VP8_Patent_Pool

        Comment by Tom — May 22, 2010 @ 09:13

  3. You are not taking into account that the person who did the analysis has a vested interest in H.264. That does not mean he is lying, but it does make him an interested, biased party.

    Also, where did you get your information that Mozilla and Opera did not do extensive research beforehand? I am not saying they have, but you cannot simply claim they haven’t, either.

    Then, as awesome as the Xiph Foundation is, Google has more oomph in the money and legal department.

    Finally, mkv allows a myriad of combinations of A/V codecs. WebM allows Ogg Vorbis & VP8 in Matroska. Nothing else. This makes ensuring compatibility a _lot_ easier.
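
    The subsetting described above can be sketched in a few lines. This is an illustrative sketch, not real demuxer code: the Matroska codec ID strings (`V_VP8`, `A_VORBIS`) are the real identifiers WebM permits, but `is_webm_compatible` and the plain list-of-IDs input are hypothetical simplifications – an actual validator would have to parse the EBML structure.

```python
# Hypothetical sketch: WebM constrains Matroska to a fixed codec whitelist.
WEBM_ALLOWED = {"V_VP8", "A_VORBIS"}

def is_webm_compatible(track_codec_ids):
    """Return True if every track uses a codec the WebM subset permits."""
    return all(cid in WEBM_ALLOWED for cid in track_codec_ids)

# A VP8 + Vorbis file passes; a generic Matroska file with H.264/AAC does not.
print(is_webm_compatible(["V_VP8", "A_VORBIS"]))         # True
print(is_webm_compatible(["V_MPEG4/ISO/AVC", "A_AAC"]))  # False
```

    A fixed whitelist like this is what makes hardware decoder support tractable: vendors target two codecs instead of every combination full Matroska allows.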

    Comment by RichiH — May 21, 2010 @ 12:25

    • “You are not taking into account that the person who did the analysis has a vested interest in H.264.”

      Sure I do. That’s why I’m not quoting him on quality and such. However, why should he lie about technical similarities between VP8 and AVC?

      “Then, as awesome as the Xiph Foundation is, Google has more oomph in the money and legal department.”
      You’re talking about a legal department that failed to ensure that Android does not infringe Microsoft patents.

      Comment by Markus — May 21, 2010 @ 17:12

      • I am not saying he is lying, but as he plans to earn money with his encoder, he will naturally see facts in a light that favors his interests.
        As an aside, people have been known to lie on the internet. Shocking, I know😉
        Again, I am not saying this is the case; I am not qualified to judge that. Yet, I don’t rely on his words as you do.

        No idea about patents on Android, but I don’t simply assume that because they made a mistake before, they will continue to fail. Also, but this is _pure speculation_, they might plan on invalidating a few patents via prior art.

        Comment by RichiH — May 21, 2010 @ 21:35

      • “rely on his words as you do.”

        I don’t rely on his words, goddamnit. Out of 5 links in my post only a single one is to his post.

        He didn’t even comment on why Dirac has been rejected from Firefox for years, but VP8 was adopted after just one day.
        He didn’t comment on why Matroska was rejected from Firefox for years.
        Etc.

        Comment by Markus — May 21, 2010 @ 21:58

      • I can’t reply to the comment https://kamikazow.wordpress.com/2010/05/21/oh-my-the-hypocrisy-around-webm-vp8/#comment-224 directly, so I am doing it here.

        I really don’t know where the anger is coming from. If you feel misunderstood, maybe it would be worth refining & specifying your arguments as a lot of people seem to not get it/disagree.

        Yet, sorry to say so, the way you referred to what he said, specifically “Then one day Google shows up and releases the sources to a codec that’s merely a derivate of the patented MPEG-4 AVC Baseline codec.” does not read as if you doubted what he said in any way.

        Comment by RichiH — May 21, 2010 @ 22:10

      • “does not read as if you doubted what he said in any way”
        And in the next sentence I wrote possibly patented. You make it sound as if my entire post is just centered around the analysis of an x264 developer. It’s just a single sentence, but obviously most commentators here suffer from selective perception and only center their entire reply around discrediting him (as in: “I can’t disprove his analysis, but he wants to start X264 Corp.”)

        Here are some facts:
        Fact 1: We have absolutely no proof of an independent patent analysis on VP8.

        Fact 2: Matroska (and not some crippled subset) is good enough for DivX and not “too complex”.

        Fact 3: When Mozilla adopted the Ogg formats, the Ogg container didn’t even contain the play length information of a file. Firefox needed to resort to hacks such as preloading the beginning and the end of a file to calculate the play length. That wouldn’t have been necessary with Matroska, but Mozilla eschewed Matroska anyway.
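
        For illustration, the tail-reading hack described above can be sketched as follows. This is a simplified sketch, not Firefox’s actual code: `last_granulepos` is a hypothetical helper that scans a byte buffer for the final Ogg page header and reads its granule position (for a Vorbis stream, the total sample count, so duration = granulepos / sample_rate). Real files need more care: multiplexed streams, continued pages, and non-zero initial granule offsets.

```python
import struct

def last_granulepos(data: bytes) -> int:
    """Return the granule position of the last Ogg page header in `data`.

    An Ogg page starts with the capture pattern b"OggS"; the 64-bit
    little-endian granule position sits at byte offset 6 of the header.
    """
    pos = data.rfind(b"OggS")
    if pos < 0 or pos + 14 > len(data):
        raise ValueError("no complete Ogg page header found")
    return struct.unpack_from("<q", data, pos + 6)[0]

# Fabricated tail: "OggS", version 0, header-type 0x04 (end of stream),
# granulepos = 441000 samples (10 s at 44.1 kHz), remaining fields zeroed.
fake_tail = b"OggS" + bytes([0, 0x04]) + struct.pack("<q", 441000) + bytes(13)
print(last_granulepos(fake_tail) / 44100)  # 10.0
```

        The point of the comparison stands either way: Matroska can carry an explicit duration in its segment header, so no tail fetch is needed there.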

        Fact 4: We have no evidence that VP8 and the WebM tools are more mature than Dirac with the Schrödinger library.

        Even the WebM proponents contradict each other: Two years ago On2 claimed that VP8 beats the whole H.264 standard. Commentators here claim that WebM tools are more mature than Dirac’s tools. And an Opera spokesperson is now writing that WebM/VP8 is actually immature and that we should give it some time and that maybe VP8 will improve in the future: http://my.opera.com/haavard/blog/2010/05/20/webm-analysis

        Comment by Markus — May 21, 2010 @ 23:37

      • > And in the next sentence I wrote possibly patented.

        More on that below.

        > You make it sound as if my entire post is just centered around the analysis of an x264 developer. It’s just a single sentence, but obviously most commentators here suffer from selective perception and only center their entire reply around discrediting him (as in: “I can’t disprove his analysis, but he wants to start X264 Corp.”)

        a) I agree that more choice is better _in the end user’s system_ and thus that Dirac _in addition to_ VP8/WebM makes sense.
        b) You base most of your rant on that single sentence, in the collective opinions of everyone who is arguing as strongly as they are.

        > Fact 1: We have absolutely no proof of an independent patent analysis on VP8.

        We also have absolutely no proof of no independent patent analysis on VP8.
        I find it hard to believe that all the companies that back WebM simply trusted Google, though. I work with some large-ish companies as customers, and even the most basic default contracts need to go through legal for _ages_ at all the larger companies.

        > Fact 2: Matroska (and not some crippled subset) is good enough for DivX and not “too complex”.

        I prefer my video to be served in a clearly-defined spec, thank you very much. Sure, I want to be able to play everything, but what is sent out needs to be limited.
        Why? Mainly hardware decoders, but also “be liberal in what you accept and stringent in what you send out”.

        > Fact 3: When Mozilla adopted the Ogg formats, the Ogg container didn’t even contain the play length information of a file. Firefox needed to resort to hacks such as preloading the beginning and the end of a file to calculate the play length. That wouldn’t have been necessary with Matroska, but Mozilla eschewed Matroska anyway.

        So there is one advantage of a container over another and that makes all other considerations irrelevant?

        > Fact 4: We have no evidence that VP8 and the WebM tools are more mature than Dirac with the Schrödinger library.

        Correct, we do not. Time will tell, but I am betting on VP8 while you (seem to) bet on Dirac.

        > Even the WebM proponents contradict each other: Two years ago On2 claimed that VP8 beats the whole H.264 standard. Commentators here claim that WebM tools are more mature than Dirac’s tools. And an Opera spokesperson is now writing that WebM/VP8 is actually immature and that we should give it some time and that maybe VP8 will improve in the future: http://my.opera.com/haavard/blog/2010/05/20/webm-analysis

        On2 would obviously claim that. That’s what companies do.
        How VP8 vs Dirac relates to VP8 still improving is beyond me, though.
        I like how you say “maybe VP8 will improve” and tell me/us I/we suffer from selective perception. To quote from your link: “There’s no reason why it shouldn’t.” And really, there is not. Even Jason says that🙂

        Comment by RichiH — May 22, 2010 @ 10:38

  4. It’s ironic that you blindly trust Jason’s (highly biased, quite willing to omit crucial details) overly bleak analysis of VP8 but at the same time cheer on Dirac, which he is rather *more* negative about than about VP8, but actually with good reason: Wavelet-based codecs have a fair amount of hard or even impossible to overcome problems vs. DCT-based ones when it comes to real world metrics like perceived visual quality and encoding performance. It is doubtful Dirac will ever become a useful or competitive codec across the breadth of applications you will/do find VP8 and H.264 in. Here was his post about that: http://x264dev.multimedia.cx/?p=317

    As for VP8, the current encoder stacks up well against the H.264 baseline profile material produced by x264, which is what most video on the web uses today. It’s not competitive vs. x264 main or high profile right now, but there are also still many improvements to make to the encoder going forward: adding adaptive quantization (the segmentation map feature that allows for this appears to be a lot better than Jason gives it credit for, going by subsequent discussion in the usual places), activity masking and temporal RDA should end up producing very nice results. Interestingly, know-how gained during the work on the Theora 1.1 and 1.2 encoders, which are very well optimized now, is likely to be transferable here, which is why it’s cool that the Xiph Theora people are now also working on the VP8 encoder going by Monty’s blog post.

    It’s important to remember that if you throw enough bitrate at a modern codec (which VP8 definitely is), the differences basically disappear anyway: Even Jason admits that a future improved VP8 encoder would be capable of achieving same-quality video as x264 with only ~15-25% larger files (seems correct to me as well, the lack of B-frames in VP8 is a problem you can only tip-toe around even with the best encoder). A 15% size difference is close enough that the codec doesn’t really matter at that point.
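
    As a back-of-the-envelope illustration of that bitrate argument (the 15–25% figure is the estimate relayed above; the 500 kbps baseline is a hypothetical web-video bitrate, not a sourced number):

```python
# If x264 needs `x264_kbps` for some target quality and an ideal VP8
# encoder needs ~15-25% more bits for the same quality, the absolute
# difference at typical web bitrates is modest.
x264_kbps = 500
for overhead in (0.15, 0.25):
    vp8_kbps = x264_kbps * (1 + overhead)
    print(f"+{overhead:.0%}: {vp8_kbps:.0f} kbps")
```

    At these rates the gap is tens of kilobits per second, which is the sense in which “the codec doesn’t really matter at that point”.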

    Calling VP8 a derivative of H.264 is hyperbole, anyway: There are definitely similarities in some aspects, even close ones, but VP8 is derived from On2’s previous codecs just as much, or more. And with patents, the details are really important. See e.g. the Linux kernel patches Tridge cooked up to get around Microsoft’s FAT patents by carefully sidestepping the crucial details. Google claims that they performed a thorough analysis before their purchase of On2 and after.

    The bottom line is that the soon-to-be-released Theora 1.2 pretty much exhausts what you can get out of the Theora spec encoder-wise, Dirac is unlikely to ever matter, and H.264 is known to be patent-encumbered and not royalty-free. The VP8 spec OTOH signifcantly increases the headroom vs. the VP8 spec, the encoder is already better without pulling all of the stops and tricks the latest Theora encoders pull, there is no known patent problem, it’s royalty-free, and Google has managed to gather impressive industry support. Beyond the video codec, Vorbis and Matroska are great choices as well. To me, WebM is a good thing.

    Comment by Eike Hein — May 21, 2010 @ 12:31

    • “you blindly trust Jason’s (highly biased, quite willing to omit crucial details) overly bleak analysis of VP8”

      Only a single link of mine links to his article.

      “cheer on Dirac”
      I didn’t. I merely stated that its development method is better than sticking one’s head in the sand and hoping that VP8 does not infringe patents.

      “Here was his post about that: http://x264dev.multimedia.cx/?p=317”
      I thought he was not credible….

      Comment by Markus — May 21, 2010 @ 13:35

      • > I thought he was not credible….

        The link wasn’t intended to serve as a citation to back up my concerns about Wavelet-based codecs — those have come about independently from Jason’s analysis by way of trying Dirac and Snow and JPEG2000 etc. and researching explanations for what I was seeing. Since you do apparently trust him I thought it would be useful to include, though, since you were wondering why people are more interested in Dirac than VP8 and point to his VP8 piece in the context of that.

        His Wavelet post does show a similar bias as his VP8 piece in some sense, but less so, probably since the stakes are lower (Dirac is just a really unlikely competitor to x264, unlike VP8). But both pieces do contain truths: I wrote myself up there that the VP8 encoder currently is not implemented as well as the x264 encoder (and even the Theora encoder) and has some catching up to do. I also agree that lack of B-frames is always going to be a disadvantage of VP8 vs. H.264. Unlike Jason, though, I think that it’s possible to improve the VP8 encoder enough to be competitive with x264 in practice (see the stuff about some of the improvements that can be made, and about bitrates), and it’s preferable to x264 vis-à-vis the patent and royalty situation, present and future.

        As a general statement: When discussing “video quality” it is always important to realize that there are two big ingredients that go into it: What the codec specification allows an encoder to do, and how much of it the encoder actually does (and how well it does it). Further, it’s also important to realize that when comparing two encoders, you’re talking about relative image quality at a certain bitrate/file size.

        The problem with Theora is that the specification is not competitive with the VP8 or H.264 specification because it describes an older, much more primitive codec: Even an exhaustive Theora encoder that is very well implemented and pulls all the tricks is not going to be able to compete with a good H.264 encoder. Further, the Theora 1.2 encoder probably is close to being an exhaustive Theora encoder: Theora has nowhere to go.

        However, most H.264 encoders on the market are actually *really shitty*, and Theora 1.1/1.2 produced better-quality video than many of them. x264 though is a very good H.264 encoder. And thus Theora cannot beat it.

        VP8, spec-wise, is a much more modern codec than Theora, and much more competitive with the H.264 spec. Due to the more modern nature of the codec, the current VP8 encoder already produces better results than Theora 1.1/1.2. However, the VP8 encoder is not yet exhaustive: It can be improved to produce much better video quality by adding some of the improvements I mentioned earlier.

        The H.264 spec is somewhat better than the VP8 spec. Thus even an exhaustive VP8 encoder is likely not going to beat an excellent H.264 encoder. But it looks like it will be close enough that you can compensate for the difference by throwing only a bit more bitrate at VP8. That’s good enough to make VP8 a viable choice all around.

        Of course, image quality is also not the only metric. Encoder and decoder performance also matters. Performance does not only mean total time needed, but also things like latency overhead for realtime applications. How amenable a codec is to writing optimized assembly for various DSPs also matters. In many of these respects the current VP8 codebase is looking pretty good. In many respects x264 is also currently better, however. But again, this is an area where VP8 can be improved further, and likely will be.

        The bottom line is that VP8 looks to be viable, competitive technology, and that the stance of its propagators on the topics of licensing, royalties and patents is more closely aligned with free software’s interests than the stance of the MPEG LA. Thus WebM is a good thing for us.

        Comment by Eike Hein — May 21, 2010 @ 14:05

      • Whatever the details of wavelet vs. more traditional encoding methods are, they aren’t really relevant to the point that Dirac’s development approach is better at actually ensuring a patent-free codec: use techniques whose patents have expired.

        Vorbis was developed that way and it worked out.

        Comment by Markus — May 21, 2010 @ 17:38

  5. Whoops, quoting myself:

    > The VP8 spec OTOH signifcantly increases the headroom vs. the VP8 spec […]

    Should have been “vs. the Theora spec”, of course.

    Comment by Eike Hein — May 21, 2010 @ 12:36

  6. Putting aside all the opinions on the patent side, I’d say that Mozilla’s choice is perfectly understandable. If a giant like Google goes as far as spending more than $100 million to buy and free a codec, Mozilla can be sure that it will be a successful project.
    Google has the power and the money to force a de-facto standard! Mozilla doesn’t!

    Comment by kinto — May 21, 2010 @ 13:46

    • Who said that Google bought On2 solely for VP8?
      On2 also produced H.264 codecs and hardware for embedded devices.

      Comment by Markus — May 21, 2010 @ 17:05

  7. You really are a master in fallacy! Keep rocking!

    Comment by Gustavo Noronha — May 21, 2010 @ 14:03

    • Thanks for your well researched statement.

      Comment by Markus — May 21, 2010 @ 17:04

      • You ignored and dismissed all “well researched” replies. Your reply to Gustavo is hypocritical!

        Comment by Jack — May 21, 2010 @ 23:20

  8. It’s too bad planet doesn’t remove articles like this for obvious reasons…

    Comment by supert0nes — May 21, 2010 @ 15:22

    • Feel free to not read it.

      Comment by Markus — May 21, 2010 @ 16:56

  9. What interests me is why we should not support VP8.

    As far as I know, Theora uses the VP3 codec, while Google bought the company behind the newer VP8 codec and open-sourced it. Why would there actually be more patent problems, when Google must have made sure such things are clear? Otherwise it would be paying lots of damages to other parties if VP8 comes to YouTube, which gets over 2 billion (BILLION!) views every day!

    What if we started supporting VP8 as well and maintained Theora etc. too? Check which one is better and then start focusing on that one.

    For me, VP8 sounds much better than H.264 ever did. And if with VP8 and HTML5 (+ JavaScript etc.) we can push Flash and Silverlight away from the internet (at least from media players, site animations, etc.), I am just very happy.

    Comment by Fri13 — May 21, 2010 @ 15:25

    • There was no independent patent review of VP8 yet and considering that Google somehow “missed” that Android is covered by patents, their statement that VP8 is supposedly royalty-free is not credible.

      Comment by Markus — May 21, 2010 @ 17:02

      • The patents Apple sued HTC over involve things like moving an object across the screen with non-constant speed, i.e. a non-linear curve on movement speed. Let me preface this with “IANAL”, but KDE infringes that same patent in many, many places, obviously. Do you suggest we remove the code in question?

        The reality is that it’s hard today to write software that doesn’t infringe on patents. The reality is also that the US patent system in particular is doubly broken in that it punishes knowing infringement of patents with triple-damages and thus discourages people from looking at the patent database in the first place, breaking what patents were invented for.

        In the real world “patent-unencumbered” doesn’t mean “does not infringe patents” but “is unlikely of getting sued, or well-protected if litigation does happen”. To that end WebM’s wider circle of supporters with significant patent portfolios and lots of lawyers is a benefit it has over Theora as well as Dirac. In principle no codec is safe against as-yet-undiscovered submarine patents owned by patent trolls, and that includes H.264.

        To that end I also understand the x264 developers’ unhappiness with Theora, VP8 and Dirac. They wrote a pretty awesome piece of software, which is widely used in defiance of the well-known patents it infringes, and they are of the opinion that patents should be ignored in the matter entirely. I sympathize with that stance, since I dislike seeing free software development – which x264 is – hampered by patents, or seeing patents influence technical decisions.

        At the same time I also sympathize with those in the community who are not in a position to ignore patents out of legal necessity, such as Mozilla or Linux distributors. And I think that credible and viable alternatives to the MPEG LA formats like Ogg Theora and WebM are important to have to strengthen independence from the MPEG LA and lessen their control on things. To that end I’m very happy that WebM has turned out to be capable, with open avenues for further improvement (see my other comments).

        Comment by Eike Hein — May 21, 2010 @ 17:17

      • “KDE infringes that same patent in many, many places, obviously. Do you suggest we remove the code in question?”

        Depends. What’s KDE’s approach to software patents? Mozilla’s official stance is to ship no patented techniques under any circumstances.
        With such a stance you have to actually ensure that patents are not infringed. Worldwide.
        There was just not enough time for Mozilla to ensure that. Mozilla simply trusts Google in this regard.

        If OTOH KDE’s position is that it does not need to care, because the allies over at the Open Invention Network take care of defensive patents, then the whole situation is different for KDE.

        Comment by Markus — May 21, 2010 @ 17:58

  10. So…how does this relate to KDE and why does this deserve to be on planet again? Oh wait, it doesn’t.

    Comment by William Chambers — May 21, 2010 @ 18:23

    • Fortunately it’s been removed from the Planet, last I checked.

      Comment by Eike Hein — May 21, 2010 @ 18:26

    • Not every blog post on PlanetKDE needs to be about KDE.

      Comment by Markus — May 21, 2010 @ 18:39

    • Why do you want this particular post removed? There are a lot of non-KDE-related posts on Planet KDE every week. Why is this one so bad?

      I don’t agree with him but he has a right to express his opinion.

      Comment by Jack — May 21, 2010 @ 23:23

      • Because it’s an unsupported, argumentative and ‘conclusive’ rant. No facts, just complaining about how evil Mozilla is while also claiming there ‘must’ be a hidden evil agenda. That’s just plain offensive and the Planet shouldn’t contain such nonsense.

        It’s not because I disagree, but it’s because of the way in which his ‘opinion’ is stated.

        Comment by William Chambers — May 22, 2010 @ 14:40

  11. A lot of whiny bitches feel the need to say how little they think of the topic, yet seem unable to just close the damn tab. Idiots.

    Interesting read, and food for thought. Thank you for the pointers, will research further. Weekend is finally here, I finally have some free time.🙂

    Comment by David — May 21, 2010 @ 19:13

  12. I wonder why this post was actually removed from the planet. Did this happen with your agreement? If not, that would really damage my image of the KDE community. While this might be a controversial issue, this post only asked some interesting questions in a non-offensive way. I really didn’t expect it to spark such an “explosion” of offensive and impertinent comments when I read it first. That really made me doubt some people’s sanity.

    Comment by Tamar — May 21, 2010 @ 22:21

    • The post no longer shows up in the RSS feed that the Planet KDE site is fetching (https://kamikazow.wordpress.com/feed/?mrss=off), so I think Markus removed it himself.

      Personally I think that a discussion about WebM definitely has a place in the scope of Planet KDE – we develop multiple web browsers, video players and a multimedia framework, after all – even if I didn’t think this post was particularly good and have disagreed with Markus on many of his claims and statements. I actually agree with Markus however that Mozilla should be supporting Dirac in addition to VP8 and Theora, even if I’ve been very critical about the codec’s viability up there. But obviously what they want to spend their manpower on is up to them.

      Comment by Eike Hein — May 21, 2010 @ 22:43

      • The post is indeed gone from the feed. But no, I did not do that. Considering that it still shows up in the Planet-incompatible feed (https://kamikazow.wordpress.com/feed/ ), I don’t know what’s actually going on.

        Comment by Markus — May 21, 2010 @ 22:58

    • Nothing happened with my agreement. However, even after Eike claimed that my post was removed, I could still see it (in my browser, not talking about Akregator).

      Right now I can’t access the Planet KDE website at all. Hence I wouldn’t jump to conclusions.
      At least my blog feed was not removed from the Planet config file (I checked just to be sure).
      Maybe it’s just technical difficulties.

      If someone actually tried to remove this post, I can just imagine that Google fanboys are worse than I thought.

      Comment by Markus — May 21, 2010 @ 22:50

      • When I accessed https://kamikazow.wordpress.com/feed/?mrss=off prior to writing my post up there, the top feed item was “The Haiku operating system reaches Alpha2” with “Oh my… the hypocrisy around WebM / VP8” missing. When I access it now, “Oh my… the hypocrisy around WebM / VP8” is back again and your post is also on http://www.planetkde.org/ again now.

        I think it’s maybe just tech difficulties on wordpress.com or so.

        Comment by Eike Hein — May 21, 2010 @ 23:00

    • One more thought, though: I wouldn’t want to read things like “I think Mozilla mainly wants to please its pimp to get the money.” about KDE on Planet Mozilla. Would you? Stuff like that is really willfully inflammatory and unprofessional, and reflects poorly upon the KDE community’s conduct. That might explain some of the stronger reactions to the post you’re seeing.

      Comment by Eike Hein — May 21, 2010 @ 22:54

      • Compared to how much FUD Asa Dotzler spreads on a regular basis on Planet Mozilla, my tone is as aggressive as a new-born puppy’s. But whatever… I changed the “offending” part of the sentence (in my opinion I used merely colloquial language, not harsh insults) to “main sponsor”.

        Comment by Markus — May 21, 2010 @ 23:08

    • If they removed it, then it reflects poorly on the KDE community…

      Comment by Jack — May 21, 2010 @ 23:25

      • KDE did not remove anything. It’s the WordPress site that acts weird, but only in one of the two feeds, so I guess it wasn’t done on purpose.

        Comment by Markus — May 21, 2010 @ 23:43

  13. This is in reply to https://kamikazow.wordpress.com/2010/05/21/oh-my-the-hypocrisy-around-webm-vp8/#comment-238 as that post has no “Reply” button.

    The great thing about Matroska is also in some sense the bad thing about Matroska: The spec is massive, and much of it is considered optional. It is a very flexible container, which implies that Matroska files can show a lot of variance between one another. From an authoring perspective, due to how much of the spec is optional, it is difficult to tell what you can rely on implementations actually supporting.

    As such I understand WebM’s motivation in clearly defining a particular subset of Matroska (that looks to be a bit different from the subset the original spec requires compliant implementations to support) and rules that WebM files need to follow. To me that’s reasonable, and certainly makes it easier for implementers like Mozilla to hit the target they need to hit.

    At the same time, while the WebM container spec is currently mostly a link to the Matroska spec, a short list of rules and a short table of supported Matroska features, the changing of MIME types, identifiers etc. opens the door to future divergence. This is somewhat concerning, and I hope it doesn’t happen but rather that the WebM community will work closely with the Matroska authors.

    As for Mozilla initially deciding not to support Matroska, it makes sense to me. The WebM container spec IMHO is useful work on top of Matroska that makes it easier to support it in the browser, for the reasons outlined. And since they had decided to only support Theora at the time, deciding only to support the container in which Theora is most commonly encountered and which had the most solid tool support for that combination – Ogg – was understandable as well. The Ogg container has a considerable head start on the Matroska container in terms of tools support especially when it comes to streaming; Matroska does support streaming, but it is not something it’s frequently used for in practice (unlike Ogg) and support for streaming in Matroska implementations (as well as Matroska implementations in streaming tools) is as-yet lacking.

    (It should be noted that there’s also an official mapping for VP8 into the Ogg container now, that’s pretty nifty for streaming.)

    It should also be said that Ogg is actually an OK container, too. It’s a pretty simple one, and quite flexible in terms of what you can store in its tracks. The problem with the Ogg container however is that Matroska has a head start in writing specifications for how to realize various advanced features within it. In Ogg, those advanced features have to be embedded in new data tracks, and the format of these tracks has to be specified as well. There has been some work on that (see the Skeleton Metadata track spec at http://xiph.org/ogg/doc/skeleton.html), but for Matroska it is already done. Still, doing it with the Ogg container would not be impossible per se.

    It’s important to note however that the web community (which Mozilla is more a part of than of the video container community, obviously) often has its own ideas for how to realize many advanced container features. For example, while for both Ogg and Matroska there are established practices for how to embed subtitles in video files, the web community is actually pursuing a way for a web server to serve up subtitles outside the file for an HTML5 video element. So they arguably have somewhat different priorities than other users of video container formats.

    In general a lot of this is about making decisions about what you can do in what time frame with the manpower you have and what you believe you can bring to release quality and support in the future. In context, I can still understand Mozilla’s decision to focus on Ogg and Theora at the time. And I understand why they’re now widening that focus to include WebM, too. I don’t think it’s about wanting to please Google to get money from them (that really makes no sense to me), but about Mozilla sharing the goals of the WebM project and believing that it has a shot at success and improving the free video situation over what can be achieved with Ogg and Theora.

    And about that Opera blog link you keep (rather gleefully, I might add …) referring to: It’s correct to say that he’s mischaracterizing VP8 as “raw and new” in his post. The On2 codebase is obviously neither raw nor new; this is an old codebase that has seen plenty of production use (not so much as VP8 but in its older stages of life as VP6 etc). But I think what Haavard meant to say is that there are still opportunities for improvement in the VP8 encoder, some of which x264 (and actually some of them even Theora 1.2) already employs, such as adaptive quantization and temporal RDO. I talked about that more in other comments.

    Comment by Eike Hein — May 22, 2010 @ 00:40
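[Editor’s note: the “changing of MIME types, identifiers etc.” mentioned above is visible right in a file’s EBML header: WebM files carry the DocType “webm” where plain Matroska files carry “matroska”. A minimal sketch of reading that field, assuming only the EBML basics; the function names are my own, not from any WebM or Matroska tool:]

```python
def _read_vint(buf, pos, strip_marker):
    # EBML variable-length integer: the number of leading zero bits in the
    # first byte (plus one) gives the total length in bytes.
    first = buf[pos]
    probe, length = 0x80, 1
    while probe and not (first & probe):
        probe >>= 1
        length += 1
    # Element IDs keep the length-marker bit; element sizes strip it.
    value = first & (probe - 1) if strip_marker else first
    for b in buf[pos + 1 : pos + length]:
        value = (value << 8) | b
    return value, pos + length

def ebml_doctype(buf):
    """Return the DocType of an EBML file, e.g. 'webm' or 'matroska'."""
    elem_id, pos = _read_vint(buf, 0, strip_marker=False)
    if elem_id != 0x1A45DFA3:                 # EBML header magic
        raise ValueError("not an EBML file")
    size, pos = _read_vint(buf, pos, strip_marker=True)
    end = pos + size
    while pos < end:                          # walk the header's children
        child_id, pos = _read_vint(buf, pos, strip_marker=False)
        child_size, pos = _read_vint(buf, pos, strip_marker=True)
        if child_id == 0x4282:                # DocType element
            return buf[pos : pos + child_size].decode("ascii")
        pos += child_size
    return None
```

[Real demuxers validate far more than this, of course; the point is that at the container level the divergence today amounts to little more than this label plus the feature subset discussed above.]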

    • It’s very late and I’m tired, but since I likely don’t have time to answer before Sunday, I give a short answer to only one aspect:
      As you surely know, there are two kinds of streaming: radio-like live streaming, and a fixed file that’s simply hosted on a server and played back on demand (YouTube-like).
      Ogg was designed for the first case, but that one is of next to no practical interest in the scope of HTML5.
      Before Xiph extended the Ogg specs, Ogg was pretty much unusable for the second case (as I mentioned: hacks required). Matroska would’ve been the better choice at that point.
      You wrote that Ogg has a head start. While that is true, Matroska is actually much more widespread than Ogg. HD content in P2P networks is usually in a Matroska container. DivX HD uses it, too.
      Outside of Wikimedia Commons, pretty much only geeks like us who use recordMyDesktop use Ogg.

      Comment by Markus — May 22, 2010 @ 01:20

      • Yeah, I’m aware of the difference between streaming and progressive downloading, of course. And you know, I actually agree that Matroska is a better container than Ogg on the whole. It’s just that I understand why Mozilla ended up going with Ogg (considering their expertise is in writing web browsers, not container formats, that they partnered with Xiph and Wikimedia (who had already been using Ogg) due to Theora, and the other reasons I mentioned in the earlier post) and that the Ogg container isn’t totally unworkable. But I’m happy that WebM went with Matroska.

        About streaming again: It’s going to become more important in the future, as web video will expand to cover scenarios like video telephony and broadcasting once mobile devices have sufficient bandwidth available. But Matroska implementations can catch up to Ogg there, especially with WebM pushing implementations in the right direction.

        Anyhow, it’s true, it’s late, and I’ll be turning in now as well.

        Comment by Eike Hein — May 22, 2010 @ 01:40
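[Editor’s note: the “fixed file played back on demand” case discussed above doesn’t require a streaming-aware container on the transport side at all; it rests on plain HTTP byte-range requests, which let a player seek by asking the server for arbitrary slices of the file. A minimal sketch of the Range-header parsing such a server performs; this is an illustration of the mechanism, not code from any particular server:]

```python
def parse_byte_range(header, file_size):
    # Parses a single-range HTTP Range header, e.g. "bytes=0-499",
    # "bytes=500-" (offset to end) or "bytes=-500" (the final 500 bytes).
    # Returns inclusive (start, end) byte offsets, as HTTP uses.
    unit, _, spec = header.partition("=")
    if unit.strip() != "bytes":
        raise ValueError("unsupported range unit")
    start_s, _, end_s = spec.strip().partition("-")
    if start_s == "":                          # suffix form: last N bytes
        start = max(file_size - int(end_s), 0)
        end = file_size - 1
    else:
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1
    if start > end or start >= file_size:
        raise ValueError("unsatisfiable range")
    return start, min(end, file_size - 1)
```

[A player seeking to the middle of a video simply sends e.g. `Range: bytes=500000-` and the server answers with a `206 Partial Content` response carrying that slice; no container-level streaming support is involved.]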

  14. Well, I don’t like Mozilla’s behaviour and decision making either, but what does it help us (Konqueror, KDE users/developers) if we complain about them making deals with big business? Okay, there are a lot of fanboys out there who like to cry with the wolves and consider everything cool just because it was blessed by Google or Apple, while ignoring comparable alternatives that have existed and worked for years.

    But anyway, the only thing that will remain after such a complaint is that others call you “bad losers”, no matter how right you are, and that’s certainly something one should avoid as much as possible. So a better approach would be asking oneself: “HOW can WE make a difference?”

    And I think there are a multitude of possibilities how Konqueror could (again) make the difference. Just two points:
    * Search for other communities that share a similar vision and especially those who struggle with the attitude of current browser producers.
    * Make your own security policy decisions and be bold about them.

    These two points mean:
    * Rethink your X.509 policy and its user interaction – especially its implications for real-world user security. This means, e.g., integrating the CAcert root certificate in Konqueror NOW, and not giving a damn about Mozilla’s opinion on that. This is an easy step you can take RIGHT NOW and one that would be welcomed by many people. And of course this step would increase real-world web security for a lot of people. CAcert-signed SSL certificates are secure enough to raise the overall security level, and many free software idealists use CAcert and are VERY picky about security.
    * Rethink your web page rendering:
    ** A web site should have NO influence on the browser’s chrome (some years ago George Staikos even pointed this out). Every bit of HTML, JavaScript, CSS… that can change anything outside the document frame should be stripped from the source (no, a user option is completely wrong here). NO hiding of URL bars, NO fullscreen, NO colored scroll bars from inside the document. Any standard that requires that should simply be boldly broken.
    ** Any control elements such as buttons, scroll bars and pop-ups inside a web page should have a special “browser element color” that can NEVER be changed by the web document (again, see above).
    ** A web page is a web page and not a web application, regardless of what Google wants us to believe. Therefore any web page should have a true border space (like the document view in any word processor) that cannot be painted on by the web document. This has a huge security advantage, as any user can thus directly tell the difference between “the web” and “my application/computer”. All those advertisement-covered web sites that mimic system messages (such as notification bars, dialogs…) would then have no chance of being considered part of the computer.

    These steps would make Konqueror quite attractive again for quite a few users, and of course they aren’t that complicated to implement (AFAIK). Okay, what does this have to do with multimedia and HTML5? Not much. But a lot with browser experience, something everyone wants to improve with HTML5…

    Comment by arnomane — May 22, 2010 @ 14:47

  15. Unfortunately, this comparison does not include Dirac. Keep in mind that this is not a highly-tuned VP8 encoder:

    http://www.streamingmedia.com/Articles/Editorial/Featured-Articles/First-Look-H.264-and-VP8-Compared-67266.aspx

    Comment by RichiH — May 23, 2010 @ 09:25

  16. VP8 has also been reviewed by Red Hat Legal.
    https://bugzilla.redhat.com/show_bug.cgi?id=593879#c7
    “It has been through legal, and there are no blockers at this time.”

    And don’t just take that x264 developer’s word for it: he’s a developer, not a lawyer, and he’s obviously biased.

    Comment by Kevin Kofler — May 23, 2010 @ 14:09

  17. “I suspect that since Mozilla (and Opera as well) gets many million dollars per year from Google for being the default search provider, questions about WebM are not asked. I think Mozilla mainly wants to please its pimp main sponsor to get the money.”

    This is clearly nonsense.

    Mozilla and Opera decided to adopt WebM because it’s a free and open quality codec.

    Google didn’t just release WebM all of a sudden either. They had clearly been working not just with Opera and Mozilla, but also with all the other initial WebM partners for quite some time.

    Are you saying that all the other companies and organizations that were partners in the WebM project when it was announced were paid off by Google too?

    Even the organization behind Theora welcomed WebM! Were they paid off by Google too?

    No, your conspiracy theory is simply crazy. If you are going to make strong statements like that, you should at least check the facts first.

    Comment by MaybeFailbe — May 23, 2010 @ 14:36

  18. If I have to believe in the honesty of the Mozilla guys, I’d say they refused to implement anything but Theora because they were trying to get Theora included in the HTML5 standard.

    What Mozilla wanted was to have one single free codec mandated for the HTML5 video element.
    They were trying to force that through, while everybody else complained that Theora was not good enough compared to h264.

    When Google released WebM they decided that WebM could successfully oppose h264 and they decided to support it.

    I see it as simple as that, but I could obviously be wrong.

    Comment by Giulio Guzzinati — June 16, 2010 @ 23:04

