More PS3 troubles for Sony

Yet more disappointment for fans of the Sony PS3. Just think about all the bad news we’ve seen so far: critics had a field day with the early controller, pricing for the console is rumored to be really high, and then there’s the whole Blu-ray issue. And now? Delays:

Sony will delay the European launch of its PlayStation 3 game console by about four months to March and cut its target for worldwide shipments this year by half, the company said Wednesday.

Flagging potential problems with the PS3 launch, Mitsubishi UFJ Securities last month cut by half its shipment forecast to 3 million of the new PlayStations in the current business year to March, citing Sony’s difficulties in procuring its cutting-edge parts.

Is it just me, or does an Xbox 360/Nintendo Wii combination seem more appealing than ever?

Read: CNET News.com

Podcasting University Lectures

BlogMatrix has a post up today about podcasting university lectures – particularly appropriate since I start classes again for the Fall semester bright and early tomorrow morning. While I fully intend to go to at least the first week of classes, all bets are off after that. And no, it’s not because I am lazy, or going shopping, or anything like that – I simply have a business to run. Sometimes business and school conflict, and you need to make a decision: which is more important, this meeting or a lecture? Most times, for better or for worse, I choose the meeting.

I wouldn’t miss anything though if the lecture was being recorded and made available as a podcast.

While the BlogMatrix post is more a point-form plan for how to implement such a thing, and how it would work, it touches on a few important points that deserve to be highlighted.
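Mechanically, there isn’t much magic to it: a recorded lecture becomes a podcast the same way any audio does – the MP3 gets published as an enclosure in an RSS feed. A minimal sketch of one feed item (the course, URLs, and file size here are all made up for illustration):

```xml
<item>
  <title>ECON 101 - Lecture 3: Supply and Demand</title>
  <pubDate>Wed, 06 Sep 2006 09:00:00 -0600</pubDate>
  <enclosure url="http://podcasts.example.edu/econ101/lecture03.mp3"
             length="28311552" type="audio/mpeg"/>
  <guid>http://podcasts.example.edu/econ101/lecture03.mp3</guid>
</item>
```

Students subscribe once in iTunes or any aggregator, and every new lecture shows up automatically.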

Podcasting a lecture is for the students in attendance too!
Of course there will be people like me who skip the lecture to do something else and simply want to listen to the podcast later. More importantly though, podcasting a lecture is useful for the students in attendance, as BlogMatrix points out: “students, instead of taking notes (or only notes), would record the time of a particular interesting or salient comment”. That would be incredibly useful. This point needs to be made very clear to the decision makers in a University, as they will most certainly protest the idea initially, citing fears that no one will go to class. I think such fears are baseless – there is value in attending the lecture, such as being able to participate in the conversation.

(As an aside, if the lecture contains no interaction and is just the professor standing at the front talking, then I’d be GLAD if podcasting it made attendance drop to zero. It’s ridiculous that students pay $500 for something like that, because you know most of the fees go to paying the professor anyway. It’s examples like this that show just how antiquated and bureaucratic the university system can be.)

The Wisdom of Crowds
Or in this case, the wisdom of students in the class. Let’s assume students can bookmark parts of the lecture – perhaps the most important or interesting parts. As noted in the BlogMatrix post, this is powerful stuff: “Collecting all these bookmarks across all students (and potentially across time) will provide collective intelligence/data mining/insight into what is really important in the lecture”. The ability to tag lectures and specific segments would further this collective wisdom.
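To make the collective-intelligence idea concrete, here’s a rough sketch of how bookmark timestamps from many students might be aggregated to surface the hottest moments of a lecture. This is a toy example of my own, not anything BlogMatrix or Podcast Spot actually describes – the bucket size and sample data are made up:

```python
from collections import Counter

def hot_segments(bookmarks, bucket_seconds=60, top=3):
    """Group students' bookmark timestamps (in seconds from the start
    of the lecture) into one-minute buckets and return the buckets
    with the most bookmarks."""
    counts = Counter(t // bucket_seconds for t in bookmarks)
    return counts.most_common(top)

# Timestamps collected from everyone in the class:
bookmarks = [125, 130, 118, 610, 615, 2405, 612, 133, 608]
for bucket, n in hot_segments(bookmarks):
    start = bucket * 60
    print(f"{start // 60}:{start % 60:02d} - {n} bookmarks")
```

With real class sizes, the clusters would tell you exactly which two minutes of a 50-minute lecture everyone found worth marking.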

Is security really an issue?
I don’t think so. The University doesn’t want people getting the lectures for free – I understand that. But how is making an MP3 file available any different from having some random person walk in off the street, sit in the class for an hour with a recorder, and put it online later? Especially in a lecture with 400+ students, I’m actually surprised this doesn’t happen more often. As long as sensitive or personal information is not included in the podcast, I don’t see security being much of an issue. I do agree with BlogMatrix though: “I don’t believe it’s the place of the vendor (i.e. me) to dictate requirements to a client”. If a university really wanted to integrate security, it shouldn’t be that difficult, as all universities have pretty extensive systems in place already.

Now, let’s look at this from the perspective of Podcast Spot (if you want a test account, email me). Could our technology support such a thing? With a few tweaks here and there, I believe so. We’ve got all the basics covered (like tags and comments), as well as a few of the more interesting requirements (such as random access). And there are a bunch more features on the way too (such as improved methods of working with segments). It’s not going to happen (because I better graduate in April) but it sure would be cool to see Podcast Spot being used in my school. Maybe I’ll see it as an alumnus 😉

I think podcasting will catch on in schools and other similar institutions, but it will take time. People inside the education world need to grok the benefits of podcasting, and still more have to lose their fear of the technology. When that happens, I think everyone will benefit.

Read: BlogMatrix

I still like magazines!

Don Dodge asks whether newspapers and magazines are dying. I’ve been in this discussion before, at least for newspapers:

I hate almost everything about newspapers. I don’t like the size of the paper. I don’t like the way it makes everything black. I don’t like that every page has to be jammed full of stuff. I don’t like that the pages are not full color. I don’t like that once I find something interesting, I can’t do anything with it (like send it to a friend, or blog about it with a link, etc).

Needless to say, I think newspapers are a dying breed. Or if not dying, at least drastically changing (I still read newspaper websites online, for instance). The physical newspaper as we know it won’t be around much longer.

Magazines, on the other hand, will be around for a while I think. I’ll give you two pieces of evidence to support this. One is Chris Anderson’s mainstream media meltdown, which shows that while newspapers, television, music, and others are losing eyeballs and subscribers like crazy, books and magazines are somewhat mixed. This suggests to me that people find magazines more valuable than, say, a newspaper. Not the content itself (I am not suggesting that people don’t find a TV show valuable) but the medium – I think people like physical magazines and books.

Which brings me to my second piece of evidence – the magazine itself! Despite still not being able to do anything with the content in a magazine, the size is usually comfortable, and the pages are cleanly laid out and colorful (and don’t make my hands black). I often refer back to magazine articles (and the articles themselves are usually longer and more in-depth than your typical newspaper story). Don thinks the outlook for magazines might be worse than for newspapers because newspapers are locally focused. Perhaps he’s right, but I think it takes longer for a magazine article to go out of date than a newspaper story. There’s hope for magazines yet.

Don also asks: “What are your reading habits? How do they compare to your parents’ reading habits?” Probably not fair for me to answer that question, as my parents are fairly young and very tech savvy. My Dad subscribes to the Edmonton Journal online, and I doubt they read any physical papers except the local “Inuvik Drum” (which I think is probably the norm in towns of only 3000 people).

Bottom line – newspapers will disappear and I won’t be sad to see them go. Magazines may disappear too, but it will take longer, and until we have digital books or magazines*, I’ll be sad to see them go.

Note: I’ve never actually subscribed to a magazine. I’m very much a “buy on the spot when I see one that looks interesting” kind of magazine shopper.

* – by this I mean a physical book or magazine that looks like one today, except that it wirelessly connects to the Internet to update the content to be whatever I want to read. So pages don’t have “print” on them per se. This gives you the full benefits of, say, a laptop, but with a form factor that is more natural and easy to read. And believe me, it’s coming.

Read: Don Dodge

The 2 Biggest Problems With Web 2.0

Okay, I admit, there are far more problems with Web 2.0 than simply two, but there are two in particular that bug me. The first is the general idea that it’s okay to not have a business model from the get-go. The second is the idea that Web 2.0 will be funded almost entirely by advertising. I think both of these things are very wrong.

1. No Business Model? No Problem!
This one drives me nuts every time I see it. Dead 2.0 nailed it today when he ripped apart an interview with venture capitalist Paul Graham. In fact, I think it might be Dead 2.0’s best post yet. Anyway, I don’t understand why so many people think it’s okay to figure out a business model later. There can only be one Google, can’t there?

I’ve been to countless seminars, courses, speeches, and other events with incredibly smart business people, and I’ve never heard any of them say it’s okay to figure out how you’re going to make money later. If we had taken that approach with Paramagnus, there’s no way we’d have made the finals of VenturePrize or won the Wes Nicol. At no point in the training sessions did we hear “make something people want first, then try to make money off it later.” Like Dead 2.0 says:

Great. I would like a flying car, and a lasergun. Also, a Web site with all the news, music, porn, and copyrighted videos I want, and it should all be free. I want that. Please build it.

I think this is the single biggest problem with Web 2.0. I look at it this way: the original bubble burst because you had lots of companies with no products (seriously, there were lots of companies who did something, but you weren’t sure what) and no revenue streams. With Web 2.0 thus far we have lots of great products, but we’re lacking in the revenue stream department.

I think both are required to be successful.

2. We’ll just sell advertising!
I don’t know when it happened, but somehow the world thinks that advertising will be the key to monetizing all of the new Web 2.0 products. Scoble said this as if it were plain fact today: “Web 2.0 is largely funded by advertising.” Maybe that’s true right now, but will it be true in the future? For some reason, I just have a gut feeling that advertising is not the key. Scoble’s right, advertising is an audience business. So what happens if your livelihood depends on advertising and your audience is dwindling? You’ll probably do some stupid, desperate things to keep the audience. That can’t be good for consumers.

I’m reading James Surowiecki’s The Wisdom of Crowds right now, and advertising in Web 2.0 reminds me of the section where he talks about plank roads. He uses it as an example of an information cascade. Basically, in the 19th century, a couple of entrepreneurs came up with the idea of the plank road, which seemed to solve all of the major transportation problems of the day. This led others to copy them, and soon there were hundreds if not thousands of these plank roads. Everyone thought plank roads would change the world! They were a panacea! The problem is that they did not last nearly as long as the original creators expected them to. Plank roads weren’t really a panacea. They simply covered up the real problems for a few years.

Is advertising the same? I mean, Google ads are great, because they are usually relevant to what I am looking for. But you can’t put them everywhere can you? Once you leave the web page, you’re screwed (unless Google comes up with some amazing new technology, which they might).

Still, I can’t help but think that advertising is the plank road of Web 2.0 – covering up the real problem (no business model) for a few years. If someone isn’t willing to pay for your service or product, is it really worth offering?

Conclusion
So there you have it, the two biggest problems with Web 2.0 according to me. I’d love to be proven wrong, but I don’t think it’s going to happen. To say that you’ve created something of value, you need a way to determine whether or not it has any value. Having someone pay for what you’ve created has worked for hundreds of years – why should it change now?

Windows Vista RC1 Released

Well, so much for beta 3 – Microsoft announced today the release of Windows Vista Release Candidate 1, a “near-final” test version of the oft-delayed operating system. I am not planning to download or install it, so I’ll be watching the usual suspects to see what they think of the release. Apparently it contains lots of improvements:

You’ll notice a lot of improvements since Beta 2. We’ve made some UI adjustments, added more device drivers, and enhanced performance. We’re not done yet, however; quality will continue to improve. We’ll keep plugging away on application compatibility, as well as fit and finish, until RTM. If you are an ISV, RC1 is the build you should use for certifying your application.

I hope they have fixed all or at least most of the big problems that testers were citing during the beta 2 phase. I have always said to delay if required, but still, I really want Vista!

Read: Windows Vista

Amazon EC2

I’ve been meaning to post about this for some time now, but haven’t had a chance. I was really excited last Thursday when I read about Amazon’s new web service called “Elastic Compute Cloud”, or EC2 for short. After seeing what they did with S3, I was particularly interested in how EC2 would fit in. And boy does it ever fit in:

Create an Amazon Machine Image (AMI) containing your applications, libraries, data and associated configuration settings. Or use our pre-configured, templated images to get up and running immediately. Upload the AMI into Amazon S3. Amazon EC2 provides tools that make storing the AMI simple. Amazon S3 provides a safe, reliable and fast repository to store your images.

Nicely integrated with S3. The other great feature? Bandwidth between EC2 and S3 is FREE. I can’t even imagine how much that could save. With EC2, you pay only for the instance hours you use. Each machine instance is equivalent to “a 1.7Ghz Xeon CPU, 1.75GB of RAM, 160GB of local disk, and 250Mb/s of network bandwidth”. Pretty darn sweet.
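As a back-of-the-envelope sketch of what “pay only for instance hours” works out to – the $0.10 per instance-hour rate is the launch pricing as I understand it, so treat it as an assumption:

```python
def ec2_compute_cost(instances, hours, rate_per_instance_hour=0.10):
    """Estimated EC2 bill for compute only. EC2-to-S3 bandwidth is
    free, so shuttling AMIs and data between the two adds nothing."""
    return instances * hours * rate_per_instance_hour

# e.g. 10 instances running around the clock for a 30-day month
# (7,200 instance-hours total):
print(f"${ec2_compute_cost(10, 24 * 30):,.2f}")
```

No servers to buy, no colocation contract – and if you only need the 10 machines for one day of load testing, you pay for one day.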

I’m already thinking of ways we could integrate this into Podcast Spot (we’re already using and loving S3). I’ve only taken a cursory glance at the forums, API and other documentation, but it seems to me there are two missing features that are extremely desirable: persistent storage and support for Windows (currently it only supports Linux). The AWS guys seem to be pretty on top of things though, so if enough people request them, I’m sure the features will get implemented.

I can’t wait to see what Amazon releases next!

Read: TechCrunch

Does the Bush Veto matter?

As you have probably heard by now, US President Bush made the first veto of his presidency yesterday, rejecting legislation that would have expanded federal support for embryonic stem cell research. While I applaud his ability to make a decision and stick to it (something he has done throughout the last six years, for better or for worse) I think that his veto was a little short-sighted. The issue is a touchy one, no doubt, but there is lots of support for such research.

And if I understand things correctly, ignoring the political drama the veto has created and will continue to create, it doesn’t really matter anyway. The result of Bush’s decision is that federal funding for such research will not happen any time soon, but that doesn’t prevent private research from taking place. Do some reading on the subject, and you’ll find that medical research is starting to undergo something of a revolution – from taking place only in huge labs and universities to taking place almost everywhere, thanks to recent technology advances, falling costs, and “open source” type methodologies. I think we’ll start to see more and more research happen in the unlikeliest of places, without any need for federal funding.

That’s why I think the Bush veto doesn’t matter in the long run.

Read: NYTimes.com

All the good domain names are gone!

I came across a really fascinating article yesterday about Dennis Forbes, who has been studying a huge list of domain names in his spare time, making him something of a domainologist. Some of the things Mr. Forbes has found by looking at the list (which he got from someone at VeriSign) are truly amazing:

All of the 1,000 most common English words have been snatched up. The word “a” appears more than any other, though most of the time, of course, it’s just a letter in a longer word. The least-used common word is “consonant,” Mr. Forbes says, which is in just 42 domains, including “consonantpain.com,” which isn’t a misspelling but a word game.

Mr. Forbes checked the U.S. Census Bureau’s 1,219 most-common male names, the 2,841 most-common female names and the 10,000 most-common surnames; all were booked. Not only that, but when you link the top 300 first names with the top 300 last names, 89 percent of the resulting combinations are taken for male names and 84 percent for female ones.

And more generally?

For example, for every possible two-character and three-character combination, including both letters and numbers – all possible domains are taken. Virtually all English words with four letters are claimed; those that aren’t are usually contractions, and Web rules don’t allow apostrophes.

Half of all domains are between nine and 15 characters long; the average length is 13. A domain can have, at most, 63 characters, and there are 550 such domains. In fact, some people have made a haiku-like art out of 63-character domain names.

Told you it was interesting! I’ve been known to buy domains on a whim, but there are people who have turned it into a multi-million dollar business. Digital real estate is valuable too, it seems!

Read: NFD News

Odeo giving up on podcasting?

Maybe it’s time everyone stopped calling Odeo a podcasting company. I’ve been critical of Google’s apparent lack of focus and direction many times in the past, but they’ve got nothing on Odeo. I mean here’s a company with some very smart people working for them, some substantial venture capital behind them, and yet very little to show for it. Perhaps the last notable thing Odeo did with regards to the core offering was redesign the website – and that was in December 2005. I have to agree with what Alex Williams said – “These dudes must have some pretty mellow investors.”

That’s not to say they are standing still. Odeo recently launched two new products, Hellodeo and Twttr. The former is somewhat related to podcasting, while the latter appears to have absolutely no connection whatsoever. Hellodeo lets you record a video message from your webcam to embed on other websites, and Twttr allows you to stay up to date with your friends using text messaging. Notice a trend? Yep, moving further and further away from podcasting.

I think it’s fair to say that LibSyn has done far more in terms of getting people into podcasting than Odeo has, and somehow I doubt that Evan Williams and crew have any tricks up their sleeve. Odeo, quite simply, seems lost. It’s a shame too, because they had the opportunity to do something great with podcasting. Maybe they should just purchase LibSyn?

You might recall that in May of last year, Fortune magazine named Odeo one of their 25 Breakout Companies for 2005. I wonder what they would say about the company today? I’m pretty sure they wouldn’t make the list again.

Maybe Odeo will come out with something amazing and I’ll be forced to eat my words, but I don’t think it’s going to happen. I do, however, think Odeo would be wise to read Dead 2.0 Skeptic’s 11 Suggestions for Not Being a Dot-Bomb 2.0.

Intel Keifer: 32 Cores

Back in January I sort of predicted that by 2007, a common question won’t be how fast your processor is, but how many cores it has. I think my prediction is starting to look more and more like a reality. I don’t think I wrote about it, but we purchased new machines for the office a while ago, and they each have dual core processors. This last week saw the official launch of Intel’s new Core 2 Duo chips, and as the name suggests, they have more than one core.

But if you think two cores is good, wait another three to four years:

I have to say I can’t remember performance gains anywhere near 16x in only four years. Comparing a 2002 Pentium 4 3.06 GHz with a Core 2 Extreme 2.93 GHz will give you a two to five fold increase – at most. 16x more performance by 32 cores in 2010 versus today’s two cores, should it come true, equals linear scaling, which means that performance would double with the core count. Many of you will say this is utterly impossible, because even sustaining the clock speed levels at doubled core count might be difficult – and I agree, unless you start to think out of the box.

Yep, it seems Intel is working on having 32 cores on a chip by 2010, a project code-named “Keifer”. According to some sources, each core would run at 2 GHz, which is slower than today’s fastest chips, but it adds up when there are 32 of them. No word on how much power this beast might devour.
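The skepticism about linear scaling in that quote is basically Amdahl’s law: if any fraction of a program runs serially, 32 cores can’t give you 32x. A quick sketch (the parallel fractions here are just illustrative numbers, not measurements of any real workload):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup when only part of the work parallelizes:
    1 / (serial_fraction + parallel_fraction / cores)."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99):
    print(f"{p:.0%} parallel on 32 cores: {amdahl_speedup(p, 32):.1f}x")
```

Even a program that is 99% parallel tops out around 24x on 32 cores, which is why a sustained 16x aggregate gain would demand workloads that are almost perfectly parallel.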

Now 2010 is still a ways off, and Intel has been known to change course in the past, but if they get this project completed according to plan, the future for computing performance looks very bright indeed. That and AMD is going to have some catching up to do.

Read: Tom’s Hardware