
Christian Steiger

some selected webbuzz


January 2010

Mozilla Labs Releases Weave 1.0


E-book price dispute escalates – futurezone.ORF.at

[Image: a computer mouse resting on a book on top of a keyboard]


Category: USA
30.01.2010 | Created at 12:23

US publishers and Amazon, the world's largest online retailer, have been quarrelling over e-book pricing for some time. Now the dispute threatens to escalate: Amazon fired a warning shot at the publishers and summarily pulled the titles of the major US publisher Macmillan from its catalog.

Since Friday, Amazon's US customers have been searching in vain for titles from the US publisher Macmillan, whose authors include Hilary Mantel and Jeffrey Eugenides. While there has been no official statement on the books' disappearance from Amazon's range, the likely reason is a conflict over e-book prices that has been simmering for about a year between the online retailer and US publishers, the „New York Times" („NYT") reported on Saturday.

Because Macmillan wanted to push through a 50 percent increase in the retail prices of its e-books on the Kindle platform, Amazon temporarily delisted the publisher's titles, the paper wrote, citing a person familiar with the situation. As a glance at the website shows, Macmillan books can still be bought on Amazon.com, but only from third-party sellers.

Publishers want a say in pricing

In the US, Amazon typically offers electronic editions of bestsellers for download at around ten dollars (7.1 euros). The pricing dictated by the world's largest online retailer has long been a thorn in the publishers' side. They want a say in pricing and are aiming for around 15 dollars (10.7 euros) for e-book bestsellers.

Yet the publishers do not earn badly from selling their e-books on Amazon, at least with bestsellers. Although the e-book retailer sells hits for ten dollars, publishers receive half the retail price of the hardcover edition, typically around 15 dollars, for every bestseller sold. For backlist titles, e-book revenue is split, according to market observers.

Losses on bestsellers

Amazon accepts the losses on bestsellers in order to cement its dominant position in the e-book market and to boost sales of its Kindle reading device, the „Wall Street Journal" („WSJ") recently calculated. Market observers estimate that Amazon has sold around three million Kindle e-book readers to date.

Amazon's market power, in turn, worries the publishers. They fear losing control over the distribution of their content and are pushing hard for a say in pricing.

Apple courts publishers

The US consumer electronics maker Apple, which unveiled its iPad tablet on Wednesday and plans to sell electronic books as well, wants to capitalize on that. Apple is luring the publishers with concessions: they are to set prices themselves. Rumored Apple retail prices for e-books range between 13 and 15 dollars, of which 70 percent is to go to the publishers.

With Apple, publishers would earn less overall on bestsellers than with Amazon, but they would retain control over pricing. Macmillan was among the first US publishers to sign a contract with Apple.

Publishers hit the brakes

Meanwhile, other major US publishers are slowing down their e-book releases. Simon & Schuster recently announced that it would offer selected electronic versions of new releases only four months after the hardcover publication. According to the „Wall Street Journal", the Hachette group has similar plans. The publishers want to limit losses in sales of hardcover editions, which are considerably more expensive than their electronic counterparts.


(futurezone)

Posted via web from urban-listening’s webbuzz

Facebook for Silverlight Puts a Dark, Stylish Facebook on Your Desktop

Windows/Mac: Back in November, Microsoft featured an attractive Facebook client to demo Silverlight 4, and many people wondered where it went. Well, Microsoft has finally released the fancy Facebook client for download, and it has almost everything you could want from Facebook.

Above all, this app is beautiful, certainly more beautiful than Fishbowl, the client released after Silverlight's introduction that confused many with its lack of resemblance to the demoed client. Not only does it have cool photo grids integrated into your news feed, but there are some really cool (but subtle) sweeping animations when viewing photos in an album. It even features a photo-uploading tool to replace Facebook's mediocre uploader.

Viewing photos is certainly where Silverlight's strengths come into play the most, but the app has pretty much every other part of Facebook integrated—you can view your news feed, friend list feeds, profiles, and even your inbox. You can view a list of upcoming events, but the event pages themselves are not integrated—clicking on one will take you to the event's Facebook page in your browser. The app doesn't currently support chat, which isn't really a drawback given Facebook chat's suckage—but if you are that addicted to Facebook chat, you can already get it on your desktop easily through a client like Pidgin or Adium.

The app has a few minor bugs, which is to be expected, since it's still a developer preview. It's supposed to have TweetDeck-style notifications, which I'm not seeing, and sometimes new posts just don't load. A new post on my wall showed up in my news feed but took a while before it showed up on my wall, so it seems there's a bit of a delay in updates. In addition, it seems you can't use your scroll wheel in the application—you actually have to click the arrows or the scroll bar to scroll up and down. These things are hardly a deterrent, though—if you, like many, have been looking for a desktop version of Facebook since 2005, this looks like the app to use.

The Microsoft Silverlight 4 Beta Client for Facebook is a free download; it requires Windows or Mac OS X with Microsoft Silverlight.

Send an email to Whitson Gordon, the author of this post, at whitson@lifehacker.com.

Posted via web from urban-listening’s webbuzz

Freebie: Massive Web UI & Button Set

iPad Funny Pictures

After the introduction of the iPad, the most anticipated gadget of the year, a lot of people were disappointed. Some of them created funny and clever pictures illustrating their thoughts about the iPad. So here they are: all I could find 🙂

Posted via web from urban-listening’s webbuzz

HTML5 video and H.264 – what history tells us and why we’re standing with the web

For background on the free software angle on this story, please check out Robert O'Callahan's post on this topic. Mike Shaver's shorter background post is also worth a read. This post differs from theirs in that I want to talk about network effects, why codecs should be considered a fundamental web technology and what the long-term effects of the choices at this inflection point might look like.

Recently Youtube announced that you could test out an HTML5-enabled version of their site. They said that they were doing this partially based on people's "number one request" that Youtube do more with HTML5. (They left out the other half of that #1 request – that the implementation be based on open codecs – but more on that later.) Not to be outdone, Vimeo rushed to announce a beta version of its own player that claims HTML5 support as well.

To be clear, this is great news. This is just the latest in a long string of changes for video on the web. We started with a raw "player" delivered by Real Media. Then on to media embedded directly in pages via Windows Media and Quicktime. More recently, video on the web has been a platform play by Flash. And finally we arrive at a place where media becomes a first-class citizen on the web without a single-source provider. These moves by Google and Vimeo (and before either of them, DailyMotion) show that things are changing for the better, and faster than I think anyone could have imagined.

The players from Google and Vimeo do present a pretty serious problem, though. Each of them requires a proprietary H.264 codec for viewing. These codecs aren't compatible with the royalty-free web standards that the rest of the web is built on. The fact that they are being so unabashedly hyped along with the new darling of the web – HTML5 – means that most people don't understand that something very dangerous is taking place behind the scenes.
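To make the codec question concrete in code: an HTML5 page can offer the same clip in more than one encoding and let each browser play the first one it can decode. Here's a minimal TypeScript sketch of that pattern; the file names and codec strings are illustrative, not taken from Youtube or Vimeo.

```typescript
// A minimal sketch: offer the same clip in two encodings and let the
// browser play the first one it can decode. File names here are
// hypothetical; the codec strings are the standard identifiers.
const video = document.createElement("video");
video.controls = true;

const h264 = document.createElement("source");
h264.src = "clip.mp4"; // H.264/AAC in an MP4 container (patent-encumbered)
h264.type = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';

const theora = document.createElement("source");
theora.src = "clip.ogv"; // Theora/Vorbis in an Ogg container (royalty-free)
theora.type = 'video/ogg; codecs="theora, vorbis"';

// Source order expresses preference; a browser that cannot decode the
// first type silently falls through to the next.
video.appendChild(h264);
video.appendChild(theora);
document.body.appendChild(video);
```

Note that nothing in the markup forces a single codec – the lock-in comes entirely from which encodings the big sites actually bother to serve.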

If you think this isn't an issue worth worrying about, you need to read the rest of this post. In particular, the history of GIF shows us what happens when patented technologies are used on the web and network effects over-run the natural drive toward royalty-free technologies at scale. MP3 pricing gives us a glimpse into the strategy around H.264 licensing and what the landscape might look like five years from now, assuming H.264 were baked into the web platform as a requirement. I'll also talk about other options that might be coming in the near future that most people don't know about.

The Web Exploded on Royalty-Free

The web has always been based on the assumption of Royalty Free. In fact, participation in a working group at the W3C requires that all parties disclose any essential patent claims on the technology covered by that working group and commit to licensing them royalty-free.

But that’s just a technicality. The truth is in the tests: you can still build a web browser, spider, client, web server, image editor, a JS library, a CSS library, an HTML editor, a web publishing system, commerce system – anything that is based on fundamental web technologies – without asking anyone for permission. This is a fundamental reason why the web has spread everywhere. Because everyone had the chance to add to the mix.

It’s worth saying twice. Anyone can create technology or services on the web and they don’t have to ask anyone for permission to do it. This is why we’ve had billions of dollars of investment and a fundamental shift in the way that western society acts and communicates – all in the course of a very short period of time. The web grew up on Royalty-Free.

Learning from GIF

The web in 1999 was a lot smaller than it is today, so a lot of people don't remember what happened back when Unisys decided to start enforcing their GIF-related patents. GIF was already widely used on the web as a fundamental web technology. Much like the codecs we're talking about today, it wasn't in any particular spec, but thanks to network effects it was in use basically everywhere.

Unisys was asking some web site owners $5,000-$7,500 to be able to use GIFs on their sites. Note that these patents expired about five years ago, so this isn't an issue today, but it's still instructive. It's scary to think of a world where you would have to fork over $5,000 just to be able to use images on a web site. Think about all of the opportunity, the weblogs, the search engines (even Google!) and all the other simple ideas that became major services that would never have been started because of a huge tax on a fundamental web technology. It makes the web as a democratic technology distinctly un-democratic.

We’re looking at the same situation with H.264, except at a far larger scale.

So let’s talk about what makes a fundamental web technology, and how they should be licensed. First, the licensing. I think that Apple said it best:

After careful consideration of the draft patent policy, Apple believes that it is essential to continued interoperability and development of the Web that fundamental W3C standards be available on a royalty-free basis. In line with the W3C’s mission to “lead the Web to its full potential,” Apple supports a W3C patent policy with an immutable commitment to royalty-free licensing for fundamental Web standards. Apple offers this statement in support of its position.

(The post then goes on to talk about an opt-out mechanism for participating members.)

This leads to the obvious question: is the codec a fundamental web technology? The HTML5 working group argued about it and ultimately punted. Given that the standard is mum on the issue, it falls to the actors in the market to determine what's required to support HTML5. Given the state of online video today, that basically boils down to one actor: Google.

Google has a near-monopoly in online video thanks to the ubiquity of Youtube. This means that they are the effective arbiter of codec choices for HTML5 video. If you want Youtube to work, you have to support whatever they are using. (Right now that’s Flash or a native Youtube app for mobile devices, but it’s clearly changing.) Let’s set up a strawman and say that it’s going to be H.264. (I’ll discuss later why I don’t think that this will be a requirement, but let’s say that it is.)
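As a rough illustration of how that arbiter effect reaches every site: before choosing a source, a page has to ask each visitor's browser what it can decode, and whatever Youtube serves becomes the type every browser is pressured to answer "yes" to. A hedged TypeScript sketch of that detection (the URLs are hypothetical):

```typescript
// Probe which codec this browser claims it can play before picking a
// source. canPlayType returns "probably", "maybe", or "" per the spec.
function pickSource(video: HTMLVideoElement): string | null {
  const candidates = [
    { url: "clip.mp4", type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' },
    { url: "clip.ogv", type: 'video/ogg; codecs="theora, vorbis"' },
  ];
  for (const c of candidates) {
    if (video.canPlayType(c.type) !== "") {
      return c.url; // first codec this browser claims it might decode
    }
  }
  return null; // no native support; today a site would fall back to Flash
}

console.log(pickSource(document.createElement("video")));
```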

Google's choice of H.264 had an immediate effect. It's a signal to the market that it's OK to start using H.264 as the main codec for HTML5 video. This is borne out by Vimeo's HTML5 beta player launch: Vimeo is a secondary player, but it was perfectly happy doing what Google did. The effects of that move have spread quickly, and you can see them in people's reactions: John Gruber getting angry at Mozilla as a result of Google's actions, people on Twitter claiming that we don't support HTML5 at all because of Google's use of a proprietary codec (not true! Firefox actually leads in HTML5 support in a huge number of areas), and Gizmodo's choice comment: "Luckily, YouTube accounts for a hefty chunk of said architecture, their catalog is rendered in HTML5-friendly h.264 format." This is what network effects look like in real time.

So if you think that Google has settled on H.264 as the only codec they will support (unlikely), it would appear that they have set us up for another GIF-like situation. Note that I don't think this will actually be the case, as I discuss later, but it's worth thinking about as a framework. So instead let's talk about what that situation would look like, with MP3 as the model for the H.264 licensing strategy.

Rising Costs and Unpredictable Licensing

So what can we learn about H.264's licensing strategy as it pertains to pricing? Much like GIF, we already have a model to look at that is near the end of its cycle: MP3. Network effects and government-sponsored monopolies make a very powerful combination. But getting the most out of them requires a very specific strategy, one we saw with MP3 and are seeing again with H.264.

History is instructive. We know that MP3 was licensed quite liberally early in its lifespan. Before 2002, "no license fee is expected for desktop software mp3 decoders/players that are distributed free-of-charge via the Internet for personal use of end-users". Not only were decoders free for free software, but bulk flat-rate licenses were available to large distributors. That's how widely distributed software picked up the ability to play back MP3-encoded files. They changed all that after the network effects had already taken hold.

But as the cycle continued and MP3 became a requirement for playback, the pricing changed to where it is today. So let's talk about what it looks like eight years later.

If you look at the publicly posted rates from a couple of the MP3 licensors (and there are more than just two), someone who wanted to use the format would be looking at a royalty rate of about $1 per downloaded unit. So if you were doing, say, two million downloads a day, you would be looking at about $2,000,000 per day just to have permission from those companies to include an MP3 decoder. Could you negotiate a lower rate? Probably. But that gives you a sense of the scale if you're a small provider in a world where getting started on the web is hard and you don't have much negotiating power.
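For scale, here is the back-of-the-envelope arithmetic from that paragraph, using the post's own illustrative figures (real negotiated rates would differ):

```typescript
// Rough royalty math: ~$1 per downloaded decoder unit at two million
// downloads a day, per the illustrative figures above.
const royaltyPerUnitUsd = 1;
const downloadsPerDay = 2_000_000;
const perDay = royaltyPerUnitUsd * downloadsPerDay;

console.log(`~$${perDay.toLocaleString("en-US")} per day`);          // ~$2,000,000 per day
console.log(`~$${(perDay * 365).toLocaleString("en-US")} per year`); // ~$730,000,000 per year
```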

People casually say that we should support licensed codecs like MP3, but they haven’t done the research. We have.

Much like MP3, H.264 is currently liberally licensed and also has a license that changes from year to year, depending on market conditions. This means that something that’s free today might not be free tomorrow. Like sending an H.264 file over the Internet.

In fact there are already royalties charged to some people using H.264 for streaming, according to Jan Ozer. Some quotes from that article:

Whenever I speak at industry groups about H.264, and detail the upcoming royalty obligation, some attendees are invariably surprised that using H.264 will generate royalties. Here's what you need to know about H.264 and royalties, in an excerpt from an article that I wrote for StreamingMedia.com [ed: full article here.]

When I spoke with Harkness, he stated that the patent group hadn’t yet decided the license provisions for internet broadcast, or even if there would be a license, though he conceded that it would make little sense for the patent group to forego this revenue. The only thing certain is that the royalty provisions must be announced by January 2010 for royalties that would be payable the following year.

OK. This paragraph hits all of the big points:

  • Right now there aren’t any fees for “internet broadcast.”
  • But there might be in the future.
  • The license changes from year to year.

Remember, this is still very early in H.264's history, so the licensing is very friendly, just like it used to be for MP3. The companies that own the IP in these large patent pools aren't in this for the fun of it – this is what they do. They patent, they enforce, and then they enjoy the royalties. If they are in a position to charge more, they will. We can expect that if we allow H.264 to become a fundamental web technology, we'll see license requirements get more onerous and more expensive over time, with little recourse.

Selective Enforcement

One reason why a lot of this isn't widely known is that patents can be selectively enforced. And because it's still early in H.264's lifespan, it's extremely advantageous to enforce the patents in the pool only lightly. MP3 and GIF both prove that if you allow liberal licensing early in a technology's lifespan, network effects create much more value down the road, when you can change the licenses to capture the value created by delivering images and data in those formats. Basically: wait for everyone to start using it, then make everyone pay. (Three words: unpredictable business costs.)

The other problem is that the Internet, because of its global nature, hides many of these costs. Everyone – and I mean everyone – uses tools from parts of the world where there are no software patents to transcode and edit videos. (One of the world's largest free software downloads after Firefox? VLC, with 111 million downloads last time I checked.) This grey area for tools means that these heavily patented formats gain much of the same advantage as free formats – lots of free tools and tons of ad-hoc support from free software people – but with the ability to still enforce and monetize in parts of the world where patents are enforced. It's actually a brilliant strategy, even though the outcome is that the true costs of patents are hidden from the view of most people.

So What Now?

Remember that my setup for this was that Google’s choice was going to be H.264-only and that their decision would have network effects on the web, setting up another GIF-like situation for the web.

But I, like many others, have reason to believe that H.264 will not be Google's final choice, and there's a good reason for that belief: they are purchasing On2, whose technologies are supposed to be better than H.264. If Google owns the rights to those technologies, they are very likely to use them on their own properties to promote them, and also likely to license them in a web-friendly (i.e. royalty-free) fashion. Google actually has a decent history of doing this. In particular, you can get a sense of it from their post on The meaning of open:

If there are existing standards for handling user data, then we should adhere to them. If a standard doesn’t exist, we should work to create an open one that benefits the entire web, even if a closed standard appears to be better for us (remember — it’s not!). In the meantime we need to do whatever we can to make leaving Google as easy as possible. Google is not the Hotel California — you can check out any time you like and you CAN, in fact, leave!

In this case they were talking about user data, not video formats. But it’s the same set of principles at work. It’s also very hard to imagine Google licensing proprietary codecs as a revenue stream. It just doesn’t align with how they have worked in the past.

So it’s very likely that Google will be using a codec that’s superior to H.264 in terms of bandwidth usage and will also have web-friendly licensing attached to it. I know that at Mozilla we would support that and would very likely incorporate that technology into our browser, much like we did with Theora and Vorbis.

In Summary

So that’s the case for supporting free formats and also describes why we should be avoiding H.264 as a fundamental web standard. We don’t want to set ourselves up with another GIF situation and set up licensing like MP3 where we’ll be dealing with increased costs and restrictions over time. Google is likely to support something other than H.264 on Youtube and we’re likely to end up with something that’s better on a royalty-free basis as a result. And as I mention below, Theora and Vorbis are still excellent alternatives even if they for some reason don’t do as we expect.

Mozilla and Firefox continue to stand with the web on this topic. We don’t think that fundamental web technologies should be encumbered with patents and our actions and messages reflect that. We hope that you will stand with us on this.

A Note About Theora and Vorbis

Many of you might notice that I haven't talked much about Theora or Vorbis. In fact, some of you might read this post as me throwing them under the bus. That couldn't be further from the truth. What I've really been talking about is one part of a larger ecosystem. What the web is really asking for is a codec that is implemented everywhere, that competes well on quality and that doesn't come with GIF-like surprises. Theora and Vorbis fit every part of that bill. You can actually use them in all of the desktop browsers today, either via native support or via a Java plugin that works pretty well.
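To illustrate that "native support or Java plugin" path: a page can probe for native Ogg playback and fall back to an applet only when it's missing. A rough TypeScript sketch; the applet class and parameter names follow my understanding of the Xiph Cortado player and are assumptions, not something this post specifies.

```typescript
// Graceful degradation: native Ogg Theora/Vorbis playback if the browser
// supports it, otherwise a Java applet player that decodes in the JVM.
function playOgg(container: HTMLElement, url: string): void {
  const video = document.createElement("video");
  if (video.canPlayType('video/ogg; codecs="theora, vorbis"') !== "") {
    video.src = url;
    video.controls = true;
    container.appendChild(video);
    return;
  }
  // Fallback to a Java applet player (Cortado-style; names are assumptions).
  const applet = document.createElement("applet");
  applet.setAttribute("code", "com.fluendo.player.Cortado.class"); // assumption
  applet.setAttribute("archive", "cortado.jar");                   // assumption
  applet.setAttribute("width", "640");
  applet.setAttribute("height", "360");
  const param = document.createElement("param");
  param.setAttribute("name", "url");
  param.setAttribute("value", url);
  applet.appendChild(param);
  container.appendChild(applet);
}

playOgg(document.body, "clip.ogv"); // hypothetical URL
```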

On the quality side what we’ve been able to do at Mozilla, with the help of the rest of the Xiph community, is to show that even though Theora is based on older, royalty-free technology, most people can’t really tell the difference between a video encoded with a decent Theora encoder and a video encoded with H.264.

But given the situation with submarine patents, it would actually be a good idea for us to have more than one royalty-free codec available to browser vendors, site owners and content publishers. That way, if one of them turns out to have issues, you just turn it off and continue to use the other one. That's why I think that if Google did offer a new codec, it would make a wonderful addition to the list of codecs we could use on the web. And if they want to use it on Youtube and other Google sites, that's great. But it's good to have other options in the wings.

So this means that Theora and Vorbis aren’t going anywhere. There are other reasons to continue to support (and promote!) Theora and Vorbis as well:

  • There’s a growing corpus of Theora content on sites like the Internet Archive, WordPress and Dailymotion, not to mention all the private sites that are out there starting to use it.
  • Vorbis is far better quality than MP3 for the same bandwidth and I would expect that Google would use it as the audio codec of choice to match a free video codec.
  • Vorbis is actually supported in a large number of hardware devices, often quietly. My phone supports it, for example.
  • Theora with Ogg as a container actually is a fantastic live streaming format for HTTP. This is often overlooked. While Apple has had to add a bunch of code and description files trying to get live streaming to work with their proprietary H.264 codec and MPEG containers, we’ve been doing live streaming over HTTP out of the box ever since Theora and Ogg were part of the browser without any changes to standards. This is largely a function of history. Vorbis and Ogg were originally built as a radio streaming format. It’s possible to jump into the middle of a stream and start decoding. (As a side note it will be interesting to see if Google ends up trying to build their own container format. Ogg is simple, and it works.)

So I wouldn’t expect these formats to go anywhere. Instead I would expect to see them implemented everywhere either as backups or to support existing content and streaming.

Posted via web from urban-listening’s webbuzz

The iPad provides the ultimate browsing experience?

Panoramic Views from Haiti – Interactive Feature

iPad nano is out!

