TechDays 2010: Edmonton

This year Microsoft finally brought TechDays, its Canadian technical training conference, to Edmonton. Some of us had been asking Microsoft to add our city to the cross-Canada tour for a while, and when enough people spoke up, they listened. And it paid off too. Initially Microsoft was expecting 250-300 people to register for the Edmonton event, but we blew that out of the water! Nearly 500 people registered! And judging by the large crowds, I’d say that most of those people attended too (it’s probably quite uncommon to pay the registration fee and then not attend).

There is always criticism of the sessions offered at TechDays, but I think they had a decent mix this time around. Lots of introductory stuff I suppose, but that seemed to match the makeup of the audience. The addition of the Local Flavours track was a good start toward including some more diverse content as well. I was the track host for the “Optimizing the Development Process” track, and I did two presentations of my own.

TechDays 2010

My first presentation was Top 10 Mistakes in Unit Testing, adapted from a similar talk given at TechEd. The goal of the session was really to get people thinking about the little things that can help them be more successful with unit testing. I included three demos: a simple MS Test demo, a more involved demo using Ninject and Moq, and finally a demo showing JavaScript unit testing. Here are some resources for the session:

For my second presentation, I teamed up with Devin Serink to present A More Programmable World with OData. We talked about open data in general, about the work the City of Edmonton is doing, and then showed how easy it is to create and consume OData services. We spiced things up by using some PHP and Google Charts in the demos! Here are some resources for the session:

I thought both talks went well, and I hope people found them useful!


Given the success of the inaugural TechDays in Edmonton, I’m sure they’ll be back again next year. You can follow along as TechDays continues to travel across the country using #techdays_ca on Twitter.

Joey wrote about Day 1 here, and you can see the rest of my photos here.

OpenID Connect

I’ve been doing some work with OpenID and OAuth lately, making use of the excellent DotNetOpenAuth library. I am pretty much a beginner when it comes to these technologies, but I have been able to get up-to-speed fairly quickly. I was a big fan of Facebook Connect, and I quite like the new Graph API too (which uses OAuth 2.0). Though it was easy to develop against, I think the biggest benefit of Facebook Connect was the excellent end user experience. It was consistent and simple.

In contrast, OpenID is a little more cumbersome, and a lot less consistent. The discussion on how to make it easier and sexier has been going on for a while now. It seems like some significant progress will be made this week when OpenID Connect is discussed at the Internet Identity Workshop. What is OpenID Connect?

We’ve heard loud and clear that sites looking to adopt OpenID want more than just a unique URL; social sites need basic things like your name, photo, and email address.

We have also heard that people want OpenID to be simple. I’ve heard story after story from developers implementing OpenID 2.0 who don’t understand why it is so complex and inevitably forget to do something. Because it’s built on top of OAuth 2.0, the whole spec is fairly short and the technology easy to understand. Building on OAuth provides amazing side benefits such as potentially being the first version of OpenID to work natively with desktop applications and even on mobile phones.

Chris Messina has some additional thoughts on the proposal here:

After OpenID 2.0, OpenID Connect is the next significant reconceptualization of the technology that aims to meet the needs of a changing environment — one that is defined by the flow of data rather than by its suppression. It is in this context that I believe OpenID Connect can help usher forth the next evolution in digital identity technologies, building on the simplicity of OAuth 2.0 and the decentralized architecture of OpenID.

It sounds very exciting – I hope OpenID Connect becomes a reality!

Help bring Tech Days Canada to Edmonton!

Microsoft is planning the 2010 edition of Tech Days Canada, and they’re considering a stop here in Edmonton. In previous years, local developers have had to make the trip down to Calgary. If you’ve never heard of Tech Days, here’s what it’s all about:

With forty 200+ level sessions, Tech Days is the learning conference on both current technologies and new products like Windows 7, Exchange 2010 and much more.

The idea is to bring technical training content from TechEd, Mix, PDC, and other Microsoft conferences to Canadian developers and IT pros. There are sessions on Silverlight, test driven development, virtualization, IIS7, SharePoint, refactoring, Visual Studio, and more. I have led three sessions at Tech Days Calgary in past years, on ADO.NET Data Services, Internet Explorer 8, and REST Services with WCF.

When Microsoft was planning Tech Days 2009, they considered stopping here, but we lost out to Halifax. I don’t know about you, but I don’t want to see that happen again.

I think there are definitely enough local developers and IT professionals to host Tech Days here, so let’s make the decision for Microsoft an easy one! If you want to see Tech Days come to Edmonton this year, email or tweet your interest!

Amazon S3 keeps getting better, now supports versioning

A good thing really can get better! Amazon S3, perhaps the most well-known cloud computing infrastructure service, just got another upgrade. The simple storage service now supports versioning:

Versioning provides an additional layer of protection for your S3 objects. You can easily recover from unintended user errors or application failures. You can also use Versioning for data retention and archiving.

This new feature will give the thousands of websites and services using S3 a quick and easy way to support undo or file revision histories, among other things. It kind of moves S3 “up the stack” a little, in that it can now do something that developers could have built themselves, but in a simple and easy-to-use way.
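Conceptually, versioning means a PUT no longer overwrites: each write gets a version ID, and every older version stays retrievable. Here’s a rough in-memory model of those semantics — not the S3 API itself, just an illustration of why “undo” becomes trivial once the storage layer keeps history for you:

```javascript
// Toy model of versioned-bucket semantics (not real S3 code).
function VersionedBucket() {
  this.objects = {}; // key -> array of { versionId, body }
  this.nextId = 1;
}

// Every put appends a new version instead of overwriting.
VersionedBucket.prototype.put = function (key, body) {
  var version = { versionId: String(this.nextId++), body: body };
  (this.objects[key] = this.objects[key] || []).push(version);
  return version.versionId;
};

// A plain get returns the latest version; passing a versionId
// retrieves an older one -- which is all "undo" really needs.
VersionedBucket.prototype.get = function (key, versionId) {
  var versions = this.objects[key] || [];
  if (versionId === undefined) {
    return versions.length ? versions[versions.length - 1].body : null;
  }
  for (var i = 0; i < versions.length; i++) {
    if (versions[i].versionId === versionId) return versions[i].body;
  }
  return null;
};

var bucket = new VersionedBucket();
var v1 = bucket.put("notes.txt", "first draft");
bucket.put("notes.txt", "oops, bad edit");
console.log(bucket.get("notes.txt"));      // latest: "oops, bad edit"
console.log(bucket.get("notes.txt", v1));  // recovered: "first draft"
```

Before this feature, an application wanting revision history had to build exactly this kind of bookkeeping itself on top of S3 keys.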

Combine this powerful new functionality with Import/Export that launched last year and a couple of recent price drops, and it’s easy to see why Amazon continues to lead the way. Developers continue to make extensive use of the service too. At the end of Q3 2009, there were over 82 billion objects stored in Amazon S3. Just incredible.

I remember when S3 launched back in March 2006, when I was building Podcast Spot, a hosting service for podcasters. It completely changed our business. Global, scalable storage with Amazon worrying about all the details? And for such a small cost? It seemed too good to be true. I’m thrilled to see that S3 just keeps getting better, with relatively frequent price reductions too.

Open Data comes to Edmonton

Today I’m excited to share the news that Open Data has arrived in Edmonton! In a presentation to City Council this afternoon, Edmonton CIO Chris Moore will describe what the City has accomplished thus far and will outline some of the things we can look forward to over the next six months (I’ll update here after the presentation with any new information). This morning, he announced the initial release of the City of Edmonton’s open data catalogue. Starting immediately, developers can access 12 different data sets, including the locations of City parks, the locations of historical buildings, and a list of planned road closures.

You can download the report to Executive Committee here in PDF.

The report was created in an open fashion – the information inside was provided by 39 contributors who had access to a shared document on Google Docs.

Data Catalogue

The data catalogue is currently in the “community preview” phase, which basically means that the City of Edmonton may make breaking changes. Critically, the data available in the catalogue is licensed under very friendly terms:

“The City of Edmonton (the City) now grants you a worldwide, royalty-free, non-exclusive licence to use, modify, and distribute the datasets in all current and future media and formats for any lawful purpose.”

Developers access the data in the catalogue using the APIs. This might seem a little cumbersome at first, but it actually means you can programmatically traverse and download the entire catalogue! Developers can also run simple queries and view preview data on each data set page.
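Traversing the whole catalogue programmatically comes down to standard OData query options: for each data set, page through the entries with `$skip` and `$top`, asking for JSON with `$format`. A rough sketch of the URL construction — the service root here is a placeholder, not the catalogue’s actual endpoint:

```javascript
// Placeholder service root -- substitute the real OGDI endpoint.
var SERVICE_ROOT = "http://example.com/v1/edmonton";

// Build the URL for one page of a data set using standard
// OData paging options ($skip/$top) and JSON output ($format).
function pageUrl(dataSet, pageIndex, pageSize) {
  return SERVICE_ROOT + "/" + dataSet +
    "?$format=json" +
    "&$skip=" + (pageIndex * pageSize) +
    "&$top=" + pageSize;
}

// To download an entire data set, request pages 0, 1, 2, ...
// until a page comes back with fewer than pageSize entries.
console.log(pageUrl("Parks", 0, 100));
console.log(pageUrl("Parks", 1, 100));
```

Because every data set speaks the same query conventions, one small loop like this can walk the entire catalogue.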

The catalogue features a prominent “feedback” link on every page, so check it out and let the City know how to make it better.


The City of Edmonton’s data catalogue is built on Microsoft’s Open Government Data Initiative (OGDI) platform. OGDI is an open source project that makes it easy for governments to publish data on the web. The City of Edmonton, which is the first major government agency in North America to use OGDI, will be contributing enhancements back to the project. OGDI is built atop the Windows Azure platform, and exposes a REST interface for developers. By default it supports the OData, JSON, and KML formats. Developers can access OGDI using their technology of choice, and C#, Java, and PHP developers can make use of the toolkits provided by Microsoft.

History of Open Data in Edmonton

We have been talking about open data for roughly a year now (and probably even longer). On February 18, 2009, Edmonton Transit officially launched Google Transit trip planning, which made use of a GTFS feed provided by ETS. At TransitCamp Edmonton on May 30, 2009, that data was made available to local developers. I led a discussion about open data a couple of weeks later at BarCampEdmonton2, on June 13, 2009. Councillor Don Iveson submitted a formal inquiry on open data to City administration on October 14, 2009. A few days later, the community talked again about open data at ChangeCamp Edmonton on October 17, 2009, focusing on Councillor Iveson’s inquiry. That event led to the creation of the #yegdata hashtag, a UserVoice site to identify potential data sets, and a number of smaller follow-up events. It also prompted Chris Moore to open up access to the creation of his report. On November 23, 2009 the City of Edmonton hosted an Open Data Workshop at City Hall that was attended by about 45 people.

What’s next?

First and foremost, developers need to start using the data! There will also be opportunities to provide feedback on the catalogue, to help prioritize new data sets, and to get involved with crafting the City strategy. Here’s the Program Plan for the City’s Open Data Initiative:

  • January 13, 2010: Initial release of City of Edmonton data catalogue
  • January 2010: Sessions with utility & organizational partners to obtain more data
  • February 2010: Public Involvement Plan
  • February – April 2010: Official data catalogue release, application competition!
  • March – April 2010: Development & approval of open data strategy for the City of Edmonton
  • May 2010: Open Data Administrative Directive, approved by City Manager
  • May – June 2010: Open Data Road Show, to communicate the strategy

In Vancouver, the policy came first and the data catalogue came second. In Edmonton we’re doing the reverse. We end up with the same result though: by the spring we’ll have a data catalogue in use by developers, and an official policy and strategy for open data in the future. This is fantastic news for all Edmontonians!

Congratulations & Thanks

Congrats and thanks to: Chris Moore for providing the leadership necessary at the City of Edmonton for all of this to become a reality; James Rugge-Price and Devin Serink, for organizing the workshop in November, for doing most of the behind-the-scenes work, and for always keeping the discussion alive and interesting; Jacob Modayil, Stephen Gordon, Jason Darrah, and Gordon Martin for supporting this initiative from the beginning, and for bringing valuable experience and leadership to the table; Don Iveson, for recognizing the positive role that open data will play in building a better Edmonton; all of the members of the community who have contributed ideas and helped to spread the word about open data; all of the other City of Edmonton employees who have supported open data in Edmonton. And finally, thanks to Vancouver, Toronto, and everyone else who came before us for leading the charge.

Enough reading – go build something amazing!

TweetSharp for Twitter developers using .NET

Since January I’ve been using a library called TweetSharp in my various Twitter-related programming projects (including my monthly stats posts). Not only has it saved me from all of the effort that would have gone into writing my own Twitter library for .NET, but it has also taught me a few things about fluent interfaces, OAuth, and other topics. Here’s the description from the relatively new official website:

TweetSharp is a complete .NET library for microblogging platforms that allows you to write short and sweet expressions that convert automatically to web queries and fly to Twitter on your behalf.
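The “short and sweet expressions” come from a fluent interface: each method returns the builder itself, so a request reads like a sentence and only turns into a web query at the terminal call. Here’s a generic sketch of that pattern — invented names in JavaScript, not TweetSharp’s actual C# API:

```javascript
// Generic fluent request builder (illustrative, not TweetSharp).
function RequestBuilder() {
  this.parts = [];
  this.params = {};
}

// Each method returns `this`, so calls chain into one expression.
RequestBuilder.prototype.statuses = function () {
  this.parts.push("statuses");
  return this;
};
RequestBuilder.prototype.onUserTimeline = function () {
  this.parts.push("user_timeline");
  return this;
};
RequestBuilder.prototype.take = function (n) {
  this.params.count = n;
  return this;
};

// Nothing happens until the terminal call assembles the query.
RequestBuilder.prototype.asUrl = function () {
  var query = Object.keys(this.params).map(function (k) {
    return k + "=" + this.params[k];
  }, this).join("&");
  return "/" + this.parts.join("/") + ".json" + (query ? "?" + query : "");
};

var url = new RequestBuilder().statuses().onUserTimeline().take(20).asUrl();
console.log(url); // "/statuses/user_timeline.json?count=20"
```

The design choice worth noticing is the deferred execution: the chain just accumulates state, so the library can validate and sign the whole request in one place before it ever “flies to Twitter”.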

Maybe this is a generalization, but I often feel that .NET developers get the short end of the stick when the “cool kids” release sample code for their APIs. Or more accurately, C# developers get the short end of the stick (because you can run Python, Ruby, and other languages on .NET if you really want to). Thus I’m grateful that Dimebrain (Daniel Crenna) has developed such a useful library.

TweetSharp is open source and under active development (hosted on Google Code), with a growing base of users reporting and fixing issues (I helped with the Twitter Search functionality initially). If you’re writing any kind of software for Twitter using .NET, you should be using TweetSharp.

Questionmark Open House in Edmonton!

It might be hard to tell, but Twittering isn’t actually my day job! As some of you know, I’m a software developer for a company called Questionmark. Though the company is based in London, UK, we have a growing team here in Edmonton. We recently moved into a new office downtown, and we’d like to invite you to come check it out and get to know us a little better:

Date: Friday, May 15, 2009
Time: 4:00pm
Location: #806, 10080 Jasper Avenue (map)
Cost: Free

Feel free to stop by anytime after 4pm! We’ll have food, wine, beer, etc. If you’re planning to come, please RSVP by emailing me at

Also taking place that evening is the #twilightYEG Guest Bartender Friday, in support of the Royal LePage Shelter Foundation. It’s happening at Lux, which is right across the street from us, so join us for a drink at the office and then head over to Lux to support a worthy cause!

If you’re a local software developer, definitely stop by and say hello – we’re hiring!

Job Description for Software Developer & Open House Invite

Hope to see you on the 15th!

Questionmark still hiring .NET developers in Edmonton!

The software development company I work for here in Edmonton, Questionmark, is once again looking for developers to join our team. The job descriptions I posted back in September are still relevant, but here are the requirements again:

A minimum of 3 years of commercial development experience. Highly skilled in software development using our core technologies: C#, ASP.NET, XML, Ajax, Javascript, T-SQL. Experience with SCRUM a plus. Excellent written and oral communication are essential.

You’d be working on the latest and greatest, both technology-wise (.NET 3.5, etc) and product-lineup-wise (the company’s newest products). It’s a great opportunity!

We’re currently in the process of moving to our brand new office in the Empire Building downtown (10080 Jasper Avenue). As some of you may know, I’ve had offices in the building twice before, and I think it’s a fantastic place to work. It’s great to be right in the heart of downtown, with easy access via public transit and lots of amenities within walking distance (parking isn’t so great, of course).

Job Description for Software Developer

If you’re interested in applying or would like more information, either send me an email or email Kaitlyn Lardin. Thanks!

My Tech Days Sessions: ADO.NET Data Services and Internet Explorer 8

I’m in Calgary right now at Microsoft’s new paid conference, Tech Days. Despite being a little critical of the event when I first heard about it, I was asked to speak in Calgary. I figured it would be a great opportunity to get a first-hand look at the event so that I can offer more constructive feedback for future editions of Tech Days, and besides, I love sharing what I know with others!

I did the first two presentations in the Web Developer track – a session on ADO.NET Data Services followed by an introduction to Internet Explorer 8 for developers. I think my presentations went well for the most part, despite a few glitches with the demos. Initial feedback from people in the audience was positive anyway! Here are a few resources.

Goin’ Up to the Data in the Sky: ADO.NET Data Services for Web Developers

Internet Explorer 8 for Developers: What You Need to Know

Thanks to everyone who came to the sessions – feel free to contact me if you have additional questions.

Also, thanks to John Bristowe and the team at Microsoft for the opportunity to be involved with Tech Days. I’m looking forward to the rest of the sessions!

Google Native Client: ActiveX for the other browsers

Today, Google announced Native Client, “a technology that aims to give web developers access to the full power of the client’s CPU while maintaining the browser neutrality, OS portability and safety that people expect from web applications.” Basically it’s a browser plugin that hosts a sandbox for native x86 code. So instead of writing a web page, you’d write a normal application and execute it in the browser.

I admit that I’ve only scanned the documentation and research paper so perhaps I’m missing the details, but Native Client seems entirely unnecessary for a bunch of reasons:

  • There are lots of ways to accomplish this already – Java, ActiveX, Flash/Flex, Silverlight 2, Alchemy, etc. Why do we need another one? Will it be very different or better? Heck even ClickOnce seems better than this.
  • What’s the point of running native code inside a sandbox inside a browser? Unless the sandbox is super efficient and our browsers improve by an order of magnitude, it would seem to me that the benefits of native code would be erased.
  • Similarly, with the performance of Javascript/HTML/CSS in browsers consistently improving, why write native code at all? Web apps are becoming very fast.
  • I don’t really want to install yet another plugin. The classic “chicken and egg” plugin problem will be in effect here (users won’t install the plugin without great apps and developers won’t create great apps if no one has the plugin).

This project feels a lot like Google is reinventing the wheel. Or at the very least, throwing something else out there to see if it sticks. I hope developers think about this before jumping in. A bunch of the comments on Google’s post suggest that will happen, such as this one:

Um, isn’t this called desktop software?

That kinda says it all, I think!

When you get right down to it, Native Client is just ActiveX for browsers other than Internet Explorer. Sorry Google, but that doesn’t sound very appealing to me.