Where do you want free wireless Internet access in Edmonton?

Slowly but surely, the Free WiFi Project here in Edmonton is growing. We’ve added a number of new locations over the last week, including Boston Pizza on Jasper Avenue and 106th Street, Boston Pizza in Mayfield Common, and Transcend Coffee on the south side which just went live today. And despite being down for much of the week, our nodes at the Fringe are back up and should remain that way. We’ve learned quite a lot with that particular setup!

We need to expand the network to make it more useful and to fulfil our vision of free, accessible wireless Internet access for all Edmontonians. We’re actively working on that, checking out potential locations and getting some marketing information together. Still, I figure a little informal market research can’t hurt! So I’m asking you – where would you like to have free wireless Internet access in Edmonton?

I’m curious to know where you want to use it. We think cafes, restaurants, and similar locations are the most obvious, but maybe we’re missing something? We have received a few suggestions already, such as the Legislature grounds and more mobile setups like the Fringe.

Leave a comment here with your suggested locations, or use our contact form! Thanks!

Offline access is more important than ever

Even though we still don’t have “wireless everywhere” (as I like to say), access to the Internet is indeed becoming more pervasive. Until the world is blanketed in wireless, however, there will always be a place for offline applications. Sometimes you need to get some work done, with or without an Internet connection. Unreliable access or no access at all might have been the driving force behind offline applications in the past, but now there’s a new reason: cloud computing.

The term “cloud computing” is a bit like Web 2.0 in that it is used as a blanket term, but essentially it means accessing applications and services via the Internet (“in the cloud”) without worrying about the infrastructure that supports them. One of the best examples is Gmail, Google’s email service that lets you manage your messages in any browser. It’s also a good example of why offline, synchronizing applications are so important – Gmail went down completely yesterday:

Gmail is having a systemwide outage affecting multiple countries, and a whole bunch of its 100 million users are screaming about it on Twitter. Around 20 million people visit Gmail each day, according to Comscore, and they’re all seeing the same message. The first outages were reported at about 2 pm PST, 44 minutes ago.

One of the things that makes cloud computing different from services in the past is that more and more businesses rely on things like Gmail to operate. When it goes down, so does a significant part of their business.

There’s a transition underway. Businesses are realizing that it doesn’t make sense to operate their own data centers and services when Google, Microsoft, and others can do it far more efficiently. But don’t let those names fool you, as GigaOm points out:

If an outage of this magnitude can strike Google, the company with a fearsome infrastructure, I wonder who — if any — can plan for the worst.

It’s extremely difficult to maintain 24×7 operations, even for a company like Google. The only reasonable thing to do is to assume that the service will go down at some point and plan accordingly.

For that reason, I think offline access and synchronization are two things that developers will need to focus on in the future. As with the other big challenge facing developers, multi-core computing, improved technologies and toolsets will be needed. Vendors are working on it – Google with Gears and Microsoft with the Sync Framework – but there’s still a lot of work to be done.
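To make the idea concrete, here’s a minimal sketch of the offline-first pattern those toolkits aim to support: write every action to a local queue first, then push the queue to the server whenever a connection happens to be available. This is just an illustration in Python – the file name, endpoint URL, and action format are all made up for the example, not part of Gears or the Sync Framework.

```python
import json
import os
import urllib.error
import urllib.request

QUEUE_FILE = "pending_actions.json"            # local store for actions made while offline
SYNC_URL = "https://example.com/api/actions"   # hypothetical sync endpoint


def load_queue():
    """Read any actions that were queued while offline."""
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            return json.load(f)
    return []


def save_queue(queue):
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f)


def record_action(action):
    """Always write locally first; the network is treated as optional."""
    queue = load_queue()
    queue.append(action)
    save_queue(queue)


def sync():
    """Try to push queued actions to the server; keep whatever fails for later."""
    remaining = []
    for action in load_queue():
        try:
            req = urllib.request.Request(
                SYNC_URL,
                data=json.dumps(action).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=5)
        except (urllib.error.URLError, OSError):
            remaining.append(action)  # still offline or the server is down
    save_queue(remaining)


if __name__ == "__main__":
    record_action({"type": "send_message", "body": "works offline too"})
    sync()  # harmless if the connection is down; actions simply stay queued
```

The point isn’t the particular code, it’s the ordering: the application keeps working against local storage, and synchronization is a background concern rather than a prerequisite.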

Cloud computing is great, and I’m excited about the opportunities that it provides. We have to realize that it’s only part of the equation, however. Offline access and synchronization are more important than ever.

Pros and cons of telecommuting

The company I work for, Questionmark, is a big believer in telecommuting. As a result, I usually work from home two days a week. We were talking about it in the office this week, and this article in the New York Times got me thinking about it again:

Gasoline has become the new workplace perk, as employers scramble to help workers cut its use and cost. A dollar a gallon ago, things like telecommuting, shortened workweeks and Internet subsidies were ways of saving time and providing workers with a little more balance in their lives. Now they have become ways to save money and to keep workers from, well, walking.

Saving money on gas is definitely a good thing about telecommuting. Not everything about it is positive though. Here are some pros and cons for me.

Pros:

  • I save money on gas, likely extend the life of my vehicle, and get to avoid traffic headaches.
  • Rolling out of bed and turning on the computer is great. No need to rush around and get ready! This also helps with really early morning meetings.
  • If I need to run a quick errand, it’s easy to do so.
  • Often there are fewer distractions, and I can really focus on something.

Cons:

  • It’s really easy to eat too much. With the kitchen a few steps away, I find myself snacking more than I would in the office.
  • No air conditioning in my apartment…when it’s 30 degrees outside, the A/C in the office is definitely nice.
  • Sometimes to solve a problem you simply need to talk to someone else in person.
  • Technology isn’t perfect, and sometimes the VoIP phones fail or for whatever reason I can’t connect to something I need.

You can read more about telecommuting at Wikipedia.

Another popular trend is the shortened work week, where you work four ten-hour days instead of five eight-hour days. That would definitely save money on the commute too, but again would have pros and cons.

Seems to me that the standard 9-to-5, five-day work week is becoming a bit antiquated. At the very least, more and more organizations are comfortable experimenting with alternate schedules and ways of working.

Just use OpenDNS

Unless you frequent tech publications on the web, you’re probably not aware that a critical flaw in many DNS system implementations was found recently (DNS is what translates http://www.google.com into an IP address – learn more at Wikipedia). On July 7th, news of the design flaw that researcher Dan Kaminsky discovered started to spread. The next day, many vendors (including Microsoft, which hosted the press conference) participated in a coordinated release of patches. A few days ago the first exploit code started to appear, making it even more critical that DNS systems are patched soon.

As of today, many major ISPs are not patched and remain vulnerable. You can see if your ISP is vulnerable by visiting Kaminsky’s site and clicking the “Check My DNS” button on the right side.

Or, you can just switch your DNS servers to OpenDNS and be done with it. I came across OpenDNS on the day it launched two years ago, and have used them on some machines ever since. Turns out that OpenDNS is one of the few providers that were unaffected by this flaw:

I’m very proud to announce that we are one of the only DNS vendor / service providers that was not vulnerable when this issue was first discovered by Dan. During Dan’s testing he confirmed (and we later confirmed) that our DNS implementation is not susceptible to the attack that was discovered. In other words, if you used OpenDNS then you were already protected long before this attack was even discovered.

Switching your DNS settings to OpenDNS is really simple and takes about two minutes. To get started, just visit http://www.opendns.com/start and follow the instructions. Or if you know what you’re doing, then the nameservers you want are 208.67.222.222 and 208.67.220.220.
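If you want to double-check which resolver is actually answering after you make the switch, you can query the OpenDNS servers directly. Here’s a small sketch using the third-party dnspython package (resolve() is the dnspython 2.x call; older versions used query()). The hostname queried is just an example.

```python
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

# Point the resolver directly at OpenDNS instead of whatever the OS is configured to use.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]

# Any public hostname works here; www.opendns.com is just an example.
answer = resolver.resolve("www.opendns.com", "A")
for record in answer:
    print(record.address)
```

If that prints an IP address, the OpenDNS nameservers are reachable from your connection and you’re good to go.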

As always, make sure you have installed all of the latest patches for your computer (that would be Automatic Updates for Windows users).

I want some of whatever Union Square Ventures is smoking!

I cannot recall when exactly I happened upon Meetup.com, but it seems like a long time ago. I thought it was a neat idea and signed up. I never got much use out of it though, as there weren’t many other users in the Edmonton area. I forgot about it for the most part. Eventually I checked the site out again when they announced that it was no longer free to organize a meetup. It’s been on my radar since then, but I still don’t pay it much attention.

Today they announced that they have accepted funding from Union Square Ventures:

So why take an investment? Because the world needs more Meetups — and more powerful Meetups.  We’re at-risk of living in front of screens, endlessly Twittering and not forming powerful local community groups. There’s endless possibilities to make Meetup  better able to help people self-organize powerful local groups! With a shaky economy, it’s best to secure and strengthen Meetup for the future with an investor.

I feel obligated to point out that “endlessly Twittering” can in fact lead to worthwhile and enjoyable face-to-face meetings with others, both individually and with a large group such as the EdmontonTweetup.

The justification for the deal makes sense from Meetup’s point of view. Frankly, I’m surprised they didn’t take funding sooner. The justification from USV made me do a double take though:

Organizing people online to make a difference offline has been the central mission of Meetup since the beginning. The team there has always understood that there was a difference between collective intelligence and collective action.

So we are thrilled to be an investor in a company that has been organized since its inception around the key insight that we believe will drive the next several years of innovation on the web – the need to solve real problems in the real world for real people.

I’m confused. A company that charges $19 a month in exchange for a glorified mailing list is going to “drive the next several years of innovation on the web”? I don’t think so.

I agree with the argument that Tim O’Reilly, John Battelle, and indeed USV themselves are making about harnessing collective intelligence on the web and using it to make a difference in the real world. I get that.

What I don’t get is how Meetup is supposed to help us accomplish that, or how they’re supposed to drive innovation on the web while doing it. Last time I checked, we didn’t need Meetup to organize the EdmontonTweetup, or DemoCamp, or BarCamp, or Northern Voice, or smaller meetings for drinks, or coffee, or lunch. I don’t think any of the major fundraising initiatives (such as the CIBC Run for the Cure) use Meetup, though all of them certainly use the web.

Granted, there are certain niches that Meetup is very successful in. As Brad points out, the company “organizes over 2300 moms Meetup groups in 1100 cities in 11 countries.”

Still, I’m confused. Meetup is taking the money basically to stay afloat during a shaky period in the economy, and hopefully to grow. USV is giving them money to make a difference in the real world and drive innovation on the web. Something doesn’t add up.

Either Brad and Fred know something the rest of us don’t, or they’re smoking something really good.

Wireless Internet at the Edmonton Public Library

I’m not entirely sure what a “library of the future” might look like, but I’m certain it would have readily available wireless Internet access. Actually that idea isn’t very futuristic at all – many libraries now offer free Wi-Fi service to patrons, such as the Edmonton Public Library (EPL).

Launched in early February, the EPL’s wireless Internet service is available at almost every library branch in the city (Lessard and the temporary Idylwylde location being the only two exceptions). In its first five months of operation, the service has seen nearly 7500 sessions with an average of 450 sessions per week in June. Via email I was able to find out some additional details about the service from Lachlan Bickley, Acting Director of EPL’s eServices.

Like the Next Gen wireless service, the EPL’s wireless runs atop existing infrastructure. Wireless network traffic runs over an IPSec/GRE tunnel and eventually makes its way onto the Alberta SuperNet. The service is currently limited to 250 users per branch, and each user is restricted to 500 KB/s of throughput. Web content itself is not filtered, but only the HTTP, HTTPS, and FTP protocols are allowed. The EPL chose Aruba Networks to provide the equipment for the service. They are capable of supporting 256 access points in total, or 128 redundant access points. The EPL is currently using 52 and expects to add an additional 30 over the next few weeks. They constantly monitor the network and will make adjustments wherever necessary to ensure reliable access.
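For a sense of scale, the 500 KB/s figure can’t be a guaranteed rate: if every user at a full branch hit the cap at once, the aggregate demand would be far more than a typical branch uplink could carry, so it’s really per-user traffic shaping. A quick back-of-envelope calculation (the uplink figure below is my own assumption for comparison, not something EPL provided):

```python
# Back-of-envelope: what if every user at a full branch hit the cap at once?
users_per_branch = 250          # stated connection limit per branch
per_user_cap_kbps = 500 * 8     # 500 KB/s expressed in kilobits per second

aggregate_mbps = users_per_branch * per_user_cap_kbps / 1000
print(f"Worst-case demand per branch: {aggregate_mbps:.0f} Mbps")   # 1000 Mbps

# Assumed branch uplink for comparison -- a guess, not an EPL figure.
assumed_uplink_mbps = 100
print(f"Oversubscription vs. a {assumed_uplink_mbps} Mbps uplink: "
      f"{aggregate_mbps / assumed_uplink_mbps:.0f}x")
```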

Initial costs included the purchase of hardware and software, as well as installation. Ongoing costs are minimal aside from annual support agreements with Aruba because the network needs to be up and running to support internal administration anyway. Again, this is very similar to the cost structure of Next Gen’s Wireless Edmonton.

Lachlan told me that the EPL wanted to enable customers to access library services using their own wireless devices for convenience, and to reduce demand for wired public workstations. I suspect another reason for launching the wireless service was to keep up-to-date with other libraries around the world.

If you have a library card, you can sign on for an unlimited connection time. Otherwise you need to request “guest access” by speaking with staff at a service desk, who will set you up with 3 hours of connection time. I’ve tried the service a few times at the Stanley A. Milner library downtown using a library card, and I found it fast and reliable. The connection worked quite well in the Second Cup on the corner too.

Kudos to the Edmonton Public Library for offering this service. I look forward to seeing how it evolves.

City-provided Wi-Fi project to continue in Edmonton

This morning I attended an Edmonton City Council meeting along with Eric. I had never been to a council meeting before, so the whole process was rather interesting and at times even entertaining. That said, I wonder how they get anything done! Item E1 was titled “City-Wide Wireless Internet and Wi-Fi Service – Pilot Project Internal Evaluation” and was marked on the agenda as “time specific, first item at 9:30 AM”. They finally got around to it at 10:30 AM.

Two members of Next Gen Edmonton joined a representative from the city’s IT branch to provide council with an overview of the report on Wireless Edmonton that was published on May 15, 2008. I haven’t actually seen the report, but it outlines the following information:

  • The first eZones were established at City Hall, Churchill Square, Kinsmen Sports Centre, and Commonwealth Sports and Fitness Centre
  • Usage is increasing and currently averages 250 users per day with an average connection time of 30 minutes
  • Public feedback has been generally positive, and indicates a demand for expansion of the service
  • Marketing efforts have been largely word-of-mouth, supported by media coverage, signage, and brochures
  • Ongoing annual operating costs are estimated at $1000 per eZone
  • Setup costs for each new eZone are estimated at $20,000

The current service is built atop the City of Edmonton’s existing Internet infrastructure, which is how they can keep costs fairly low (Eric and I still think it’s too expensive though). That means that future eZones could quite easily be set up at any City-owned location that already has Internet and wireless for administration purposes. Other potential expansion sites include transit corridors (LRT and/or high priority bus routes) and mobile units that would travel to smaller festivals and events.

The council passed the following recommendation/motion:

  1. That the City continue to provide and promote publicly accessible Wi-Fi (Wireless Edmonton) service at Main Floor City Hall, Sir Winston Churchill Square, Kinsmen Sports Centre and Commonwealth Sports and Fitness Centre.
  2. That the City continue to explore opportunities to expand the Wireless Edmonton service where existing City network infrastructure is available and where there is a public interest, as outlined in the May 15, 2008, Corporate Services Department report 2008COT002.

There wasn’t too much discussion, but a few interesting questions were raised:

  • Councillor Ben Henderson asked about the quality of the service, noting that the current practice of filtering means that common services such as email do not work for many users.
  • Councillor Karen Leibovici questioned the business case, and wondered why the city should provide such a service when Telus, Rogers, and others already provide similar services for a fee.

I think Councillor Henderson’s question is extremely pertinent. What’s the point of offering the service if you’re just going to cripple it? I’m definitely in favor of getting rid of the filtering.

Councillor Leibovici’s question is responsible, but largely misses the point in my opinion. The city isn’t operating the wireless service to turn a profit, but rather to facilitate indirect returns. The productivity gains and everything else that comes along with having free wireless is what really matters.

The IT representative (didn’t catch his name…might have been Stephen Gordon, who is Manager of Operations) made a really great point. He said that offering the wireless service is important for Edmonton’s credibility. There’s an expectation that world class facilities have Wi-Fi available, and Edmonton needs to live up to that expectation if it wants to compete on the world stage.

The presentation today made it clear that the City of Edmonton doesn’t want to compete with commercial providers of wireless Internet access. Instead the city can serve a particular niche, offering service in public locations that commercial providers would probably ignore (such as the library). I think that makes sense.

I think more needs to be done to improve the state of wireless in Edmonton, but it doesn’t have to fall on the city. There’s definitely opportunity for the private sector to get involved. I’m glad the city is doing something though, and I look forward to the expansion of their eZones.

The power cable is holding us back

I spent some time over the weekend chatting with my friend Eric Warnke, who owns and operates the Third on Whyte Internet cafe here in Edmonton. We talked about a bunch of things, but mostly about wireless mesh networks. I’ve been writing about “wireless everywhere” for over five years now (since Imagine Cup 2003 to be exact), and Eric is one of those guys who is actually making it happen.

Eric has been experimenting with both the Meraki and Open Mesh technologies recently. There are others available as well, and we briefly brainstormed about creating our own little devices. The technology for extending 802.11g wireless is actually surprisingly simple and mature. And on the horizon, of course, are WiMAX and a host of other emerging technologies.

The problem with all of them is power.

Even if the hardware becomes extremely energy efficient, each part still requires at least a little bit of power. The obvious solution for a mesh network with nodes located outdoors is to use solar panels, except that Edmonton’s climate is very unfriendly to such an idea (and don’t forget that solar panels are still relatively inefficient). That leaves us with either batteries or a power cable.

The main problem with batteries at the moment is that they need to be quite large if you want them to last for any reasonable amount of time. Think of a laptop battery or the battery for an electric drill – each is about four times the size of the wireless components, and probably ten times the weight. Then there’s the problem of replacing the batteries when they die, or changing them when they need recharging.
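To put rough numbers on that, here’s a quick back-of-envelope calculation. The power draw and battery capacity are my own assumptions (a small mesh node plus radio might draw somewhere in the neighbourhood of 5–10 W), not figures from Meraki or Open Mesh:

```python
# Rough sizing for a battery-powered outdoor mesh node.
node_draw_watts = 8          # assumed average draw for radio + board (my guess)
hours_per_day = 24
target_days = 7              # how long we'd like it to run between swaps

energy_needed_wh = node_draw_watts * hours_per_day * target_days
print(f"Energy needed for {target_days} days: {energy_needed_wh} Wh")   # 1344 Wh

# A large laptop battery holds on the order of 60-90 Wh.
laptop_battery_wh = 80
print(f"Laptop-sized batteries required: {energy_needed_wh / laptop_battery_wh:.0f}")  # ~17
```

Even with generous assumptions, you’d be strapping well over a dozen laptop-sized batteries to every lamppost just to get through a week.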

So we’re stuck with the power cable. Despite all the technological progress we’ve made over the last 100 years, we’re still tethered by the power cable.

The first two chapters of Nicholas Carr’s book The Big Switch provide an extremely engaging history of Henry Burden, Thomas Edison, Samuel Insull, and the other individuals who were instrumental in making electricity the utility it is today. I like this part in particular:

Unlike lesser inventors, Edison didn’t just create individual products; he created entire systems. He first imagined the whole, then he built the necessary pieces, making sure they all fit together seamlessly.

Of course, Edison’s DC system eventually lost out to the superior AC. Still, I can’t help but think that we desperately need a modern day Edison. Just as Edison re-imagined urban gaslight systems, we need someone to re-imagine the modern electrical system.

Is wireless energy transfer the answer? I’m not sure. Maybe it’s better to start with a question – how can we eliminate the need for contact? Or at least make that contact less restrictive? For instance, instead of connecting a wireless node to a cable inside a lamppost, why can’t I just stick the node on the lamppost itself? That would be a good first step.

We need “power everywhere” before we’ll ever get to “wireless everywhere”. Unfortunately, batteries, solar panels, and other technologies aren’t getting us any closer to that reality at the moment. Surely there must be something else then?

What if Twitter had been built by Amazon.com's Web Services team?

I’ve been using Twitter for a long time now, and I can’t remember a period of downtime quite as bad as the current one. Features have been disabled, and there’s no ETA for when everything will be back to normal. Who knows, maybe it won’t ever be. That got me wondering why Twitter’s reliability is so terrible. Is it the nature of the application, or is it something to do with the people behind Twitter?

What if Twitter had been built by a different team, a team with a pretty good track record for high-availability services? What if Twitter had been built by the Web Services team at Amazon.com?

I think it’s safe to say that things would be quite different:

  1. Reliable, redundant infrastructure
    Twitter would be run inside Amazon’s high-availability data centers. We would never know (or care) that Twitter’s main database was named db006, nor would we ever wonder whether it has a good backup. We’d just know that if it’s good enough for Amazon, it’s good enough for us.
  2. No wondering, “is Twitter working?”
    Instead of wondering if Twitter is working correctly or waiting for Twitter messages or blog posts that explain what the problem is, Twitter would be part of the AWS Service Health Dashboard. We’d be able to see, at a glance, how Twitter is working now, and how well it has worked for the last month. This is what transparency is all about.
  3. Twitter wouldn’t be free, but we’d be cool with that
    Twitter would have had a business model from day one, and we’d all be cheering about how affordable it is. A pay-as-you-go model like all the other web services from Amazon would work quite well for Twitter. You get what you pay for, right?
  4. Premium Support and SLAs
    Speaking of getting what you pay for, Amazon would likely have realized that there are lots of different types of users, and they’d react accordingly. We’d probably have Premium Support for Twitter, to service support requests more efficiently. We’d also have Service Level Agreements.
  5. We wouldn’t call it Twitter…
    Of course, the service wouldn’t be called Twitter. In keeping with Amazon’s other services it would probably have a name like “Amazon Simple Messaging Service”, or SMS for short. Though I suppose that acronym is already taken!

I am a huge Twitter fan, and I really do hope that Ev, Biz, Jack, and the rest of the team get things working and fixed. With every passing hour of downtime though, I lose a little bit of faith. I wonder if Twitter would be better off in someone else’s hands.

Of course, if Twitter really had been built by AWS, there would be far more differences than just the items in my list above. The service may not be recognizable as Twitter!

That doesn’t mean that they couldn’t adopt some of these items as improvements, however. I’d love to see an official Twitter health dashboard, for instance. One can hope.

Twitter doesn't know what's wrong

Even occasional Twitter users will no doubt be familiar with the service’s frequent downtime. It’s a rare day when I don’t run into at least one or two “something’s technically wrong” messages on the site. That has prompted a lot of discussion about how to improve Twitter, and also some discussion about how things could be so bad.

I’ve been willing to cut them some slack. They’ve grown exponentially, and continue to do so. Then on Wednesday, Twitter founder Jack posted this on the official blog:

We’ve gone through our various databases, caches, web servers, daemons, and despite some increased traffic activity across the board, all systems are running nominally. The truth is we’re not sure what’s happening. It seems to be occurring in-between these parts.

Transparency is great, but surely they must have some idea about what’s wrong? I don’t know much about their architecture or systems, but it seems odd to me that they’d be totally stumped. It suggests to me that their architecture was never designed, and was instead thrown together over time. Now they’re in too deep to start over.

Twitter developer Alex suggests that the main problem is that the system was originally put together as a content management system, when in reality it’s a messaging system. If that’s the case, fine, but messaging systems are not new. They must be able to examine and learn from some existing systems, right?

Posts like the one Jack made don’t inspire much confidence that they’ll be able to turn things around, but I sure hope they do. I really love Twitter. Maybe the $15 million in additional funding that they recently secured will help.