Tuesday, December 8, 2009

The great OpenStreetMap license debate

If you are involved with OpenStreetMap, you may or may not be aware that a lot of work has been going on to develop a new license. If you are signed up for any of the OpenStreetMap mailing lists, then you certainly know about this work, as there has been a torrent of emails containing very heated debate on the topic over the past week or so (although work in this area has been going on for a couple of years).

From the long and passionate emails of a small number of people, you might get the impression that the new license is some sort of subversive scheme to somehow take over or undermine OpenStreetMap.

Duty Calls

Everyone is entitled to their opinion of course, but a few key points I would make are as follows:
  • I support the license change and encourage you to do so too.
  • A lot of very bright people who know way more about data licensing than I do have spent a huge amount of time working on the new license and I respect their knowledge and opinions very highly.
  • The License Working Group followed a very open and consultative process that allowed for lots of input from the community (you can see the minutes of over 50 meetings they have held this year alone).
  • The current license from Creative Commons has some significant shortcomings for databases - it is intended to apply to "creative works" and even Creative Commons says that it should not be used for databases.
  • The new license embodies very much the same spirit as the old license, but is much more enforceable, and better protects both the data and users of the data. So anyone who was comfortable contributing data to OpenStreetMap under the old license should be comfortable with the principles of the new license.
  • Switching from a "ShareAlike" license (which basically is more aggressive about ensuring that OpenStreetMap data remains open) to a "Public Domain" license (which has no restrictions) is something that has been discussed, but it would be a much bigger change than what is being proposed. It is worthy of further debate in the future, but even if you support the Public Domain approach (which I lean towards but am undecided on), I believe that the right short term approach is to get the project onto a more solid legal footing than it is on currently, without a major change to the original license philosophy.
  • There have been concerns expressed about the possibility of losing some data if existing contributors refuse to make their data available under the new license terms. There is a little risk here but I think it is overstated. The two licenses are so similar in their philosophy that it is hard for me to see a sensible reason why someone who was happy to contribute data under the old one would not do so under the new one. With data imports from larger organizations it may take a little time to work through some bureaucracy but I really think that should be doable, and various members of the community have volunteered to help out with this as needed.
This "human readable summary" does a good job of simply conveying the key principles of the new license.

If you would like to hear a longer discussion of these topics, you can listen to this 45 minute podcast, a discussion between myself and various more knowledgeable people, including several members of the License Working Group.

As I said in an email to one of the OpenStreetMap mailing lists, I think that the License Working Group has put in a huge amount of work on this effort, and I would like to sincerely thank them for that. As I said above, I think that this proposal would move OpenStreetMap onto a much stronger legal footing.

Finally, as I said at the end of the podcast and in an email to one of the lists, while the license is important, it is not the main aim of OpenStreetMap, which is to produce a great free and open map of the world. A lot of work has gone into the new license and big improvements have been made, but now is the time to vote and move on, and to let the community focus its energy on more important items like getting more people mapping, further improving the quality of the map data, and so on.

Wednesday, November 25, 2009

iPhone app review: flook is fun!

The past few days I have been playing around with flook, a new "location browser" application for the iPhone, and I like it. You can access some aspects of it in a regular browser, but it's really focused around the iPhone application. The basic idea is that users create "cards" with a photo, title and brief caption, together with a location, and you can browse for cards that are close to you. The user interface is very simple - you just swipe to go from one card to the next. The following is an example of one of my cards, featuring my local brewpub (which I live above):

Wynkoop on flook

The application is well designed, and has a fun and quirky feel to it. The following screenshot is an example - the oddball robots feature throughout the app.

Flook application for iPhone

As flook starts up it gives you strange messages like "flook is snurging its capitulators" and "flook is polishing its locbots" (though I haven't seen that one much recently, so I guess my locbots must be pretty shiny!). Those still make me smile, though maybe the novelty will wear off at some point :).

So anyway, there's nothing very complex about the functionality, but that's part of the appeal too. It's simple but fun to use.

I think the flook folks face two main challenges. The first is getting enough content in there to make it more interesting (it has just been released and I am the first user in Denver, so there is limited appeal to me in just browsing through my own cards, cool though they are ;)!). I have put in a couple of feature requests in regard to content creation, including the ability to more easily take my existing pictures from flickr and create cards from those (especially from photos which are already geotagged). The second big challenge, once they have a good volume of cards in the system, will be how to filter them effectively, to show you the ones that are interesting to you.

There are a few things missing at the moment which I imagine will be addressed soon. One is the ability to find friends who are using flook, via the usual assortment of mechanisms like searching your contacts or Facebook friends. You can tweet from flook but I'd like a bit more control over how that works - currently there's just a global toggle which will cause a tweet to be sent whenever you create a new card (or not). I'd like a third option which would prompt you whether or not to tweet for each card, and give you the ability to edit the tweet message (currently it shows the title and caption from the card, with a link, which is a good starting point, but depending on the situation I might like to tweet something a little different). The tweets are geo-located too, using the new geo feature of the Twitter API, which is cool. Flook has a scoring system, but at the moment there's no way to see how you rank, and nothing yet that promises the somewhat-silly-but-somehow-addictive quality of foursquare's mayor system.

The team behind flook has a strong pedigree from Psion / Symbian and they are well funded, so I think that while flook is clearly still in very early days, it will be an application to keep an eye on. If you have an iPhone, I recommend that you give it a try, it's fun!

Wednesday, November 18, 2009

OpenStreetMap helps free Ordnance Survey data with suicide bombing mission

So as I talked about in my previous post, Ordnance Survey is going to make its small scale data freely available. I think that in many ways, OpenStreetMap has been a major influence in making this happen. The growth of OpenStreetMap has increased the awareness of the benefits of free geospatial data, and it was becoming apparent that there would no longer be a significant market for the Ordnance Survey to sell small scale map data, certainly not at the sort of high prices it has traditionally charged.

However, the fact that this is happening raises some major questions about the future of OpenStreetMap in the UK, and could even lead to its demise there. At the very least, it dramatically changes the nature of OpenStreetMap in the UK. People have different motivations for contributing to OpenStreetMap. Some do it just because they think it's fun, and they like mapping their local area. For many people, the motivation is a belief that it's important to have freely available and open map data. At a stroke, that second motivation is seriously diminished (in the UK), as the aim will have been achieved if the Ordnance Survey makes a high quality and very complete dataset freely available. Now we don't know for sure yet what Ordnance Survey will release - it is possible that it could just make raster map data available (like Google does). But it seems likely to me that they will make the small scale vector data available too - there is certainly lots of demand for this.

We also don't know the licensing terms yet, but it seems likely that the Ordnance Survey data will be in the public domain. So ironically it will be more open than OpenStreetMap, whose current and new licenses are fairly "viral" - roughly speaking they say that if you enhance the data, you have to make those enhancements available on the same terms as the original data (i.e. the enhanced data has to be freely available). This more or less precludes having a commercial ecosystem of "value added" data providers on top of OpenStreetMap. And many commercial companies, like Google, have expressed concern about using OpenStreetMap because of licensing (even with the new license that should be rolled out soon). But potentially Google, Microsoft et al will be free to use the Ordnance Survey data with no constraints.

So where does this leave OpenStreetMap in the UK? It is interesting to compare the situation in the UK with the US. OpenStreetMap took off very quickly in the UK, driven in many ways by frustration with the Ordnance Survey and the lack of free map data. In the US it has taken off more slowly, and this is widely thought to be because there are more sources of free map data (albeit often poor quality ones, as I've discussed previously). There has also been a lot of spirited discussion recently on the OpenStreetMap mailing lists about the pros and cons of importing TIGER data as a starting point in the US. There is a strong contingent that argues that cleaning up existing data is less interesting and motivating than mapping something from scratch, and that this is why there is less interest in OpenStreetMap in the US than the UK. The counter-argument, which I support in general, is that we are much further along in the US with TIGER data than we would have been without it. But anyway, suddenly the UK finds itself in a similar situation to the US, but with a much higher quality free data source (assuming there are no licensing issues, which there won't be if the data is public domain, which is what I expect).

This raises a lot of practical issues in terms of data imports, which we have already faced (but not solved) with OpenStreetMap in the US. OpenStreetMap in the UK has a rich database already - according to Muki Haklay, it is about 65% complete in terms of geometry, and 25% complete if you consider attributes. Now you have a 100% complete high quality dataset that you could import, but how do you reconcile this with existing data? This is a complex problem to solve. And how about subsequent updates? Do you just do a one time import of OS data, and let the community do updates after that? Will people be motivated to do this, if the OS is updating its own dataset for free in parallel? Is there some way of using the OS data for items that they maintain, and having OpenStreetMap focus on more detailed items (benches, trash cans / bins, etc)?

The ideal world might be to have some sort of integration between OpenStreetMap and the Ordnance Survey. I have spoken often about the disruptive impact of crowdsourcing and how government agencies and commercial companies need to leverage the power of this approach to bring down the cost of creating and maintaining data. Now that Ordnance Survey will have reduced revenues and require increased subsidies from taxpayers, they will be under increasing pressure to cut costs. If there were a way to leverage the power of the thriving OpenStreetMap community in the UK, that could reduce costs quite significantly. There are challenges with doing this and it may just be wishful thinking ... but we can hope :).

So anyway, this move raises lots of questions about what OpenStreetMap will look like in the UK in future. If you regarded the mission of OpenStreetMap in the UK as being to create a free, open and high quality map of the UK, you can argue that the mission is completed (or will be in April), perhaps in a slightly unexpected and sudden fashion, like the falling of the Berlin Wall. Steve Coast quotes Gandhi on the OpenGeoData blog: "First they ignore you, then they laugh at you, then they fight you, then you win." The question is should we add "... and then you die"? (Or less drastically perhaps, retire, or have no reason to exist any more?)

There are some other aspects to OpenStreetMap of course, as I alluded to before - making more detailed maps of aspects of your neighborhood than the Ordnance Survey does, for example. But working out how those other aspects can coexist alongside the new reality of free OS data is complex. And how many OpenStreetMappers will lose the incentive to participate in this new world, if there is an alternative source of good quality, free and open data? We live in interesting times in the geo world today - this is the second hugely disruptive announcement (following the Google earthquake) in a month or so!

I should just reiterate that of course all these specific questions apply to OpenStreetMap in the UK; they don't affect its aims and benefits in the rest of the world - except that a lot of energy for the global movement has come from the UK, so if that energy diminishes it could have some knock-on effect elsewhere. But I hope not!

This move by Ordnance Survey will also increase pressure on National Mapping Agencies in other countries to make more data freely available (where it isn't already).

Ordnance Survey free data: right decision, various wrong justifications cited

So yesterday the UK government announced that some data sets (not all) from the Ordnance Survey (the UK national mapping agency) will be made available for free - 1:10,000 scale data and above is included (so this covers popular OS maps like the 1:25,000 and 1:50,000, in digital form). The more detailed maps (1:1250 and 1:2500) are not included - but I believe that issues related to derived data will also be resolved, which will be useful in regard to those datasets too. Overall I think this is the right decision, in what is a much more complex issue than most people realize, as I discussed briefly in my recent georant on the topic. This approach makes available most of the data that is useful to the broader world, while minimizing the impact on Ordnance Survey's revenue, most of which comes from the large scale 1:2500 and 1:1250 map data (known as MasterMap). I was recently asked for my opinions by some consultants working for the UK government on the Ordnance Survey strategy, and this is the option I favored.

As I pointed out in my georant, making all Ordnance Survey data free would cost the UK taxpayer an extra 50 million pounds a year (a total of 100 million pounds a year). This approach should cost the taxpayer substantially less - though there will still be a cost, on the order of a few tens of millions a year. Nobody has said where spending will be cut to pay for this - but I personally think it will be money well spent in this scenario. (Update: I have heard other estimates that the lost sales revenue may be more in the £5-10m range, but nobody seems to have a firm estimate yet - indeed nobody knows exactly which products are involved, which makes it harder. My number was just an order of magnitude guess.)

The Guardian, though my favorite newspaper, continues to make several incorrect statements in support of this move. They say "the move will bring the UK into line with the free publication of maps that exists in the US". What maps are they talking about? Again, as I talked about in my georant, there are two main sources of central government maps in the US, the USGS "national map" and the US Census Bureau TIGER data. Both of these have very limited funding (as they are paid for by the taxpayer and deemed low priority compared to other things), and their map products are not good enough quality to be used as base maps by local government or infrastructure companies like gas, electric and telecoms. As a result, all of these companies and agencies do their own base mapping, leading to huge inefficiencies, with all cities being mapped multiple times in inconsistent fashion - so data from the electric utility often doesn't line up at all with data from the gas utility, for example. More detailed map data created by local government agencies (like parcel maps) has a huge array of different policies attached - some give it away free, some free for non-commercial use only, and some charge significant amounts for it. So please don't hold up the US as an example of what you want in a national mapping infrastructure, it's a real mess here I'm afraid!

I really hope that the UK government will step up to the increased taxpayer funding that Ordnance Survey now needs to continue its work as one of the premier National Mapping Agencies in the world, and that funding for mapping won't be cut drastically as it has been in the US (where, for example, USGS has gone from roughly 2,500 people working on the national map in the 1980s to a little over two hundred today).

The other thing that annoys me is that the Guardian cites the so-called "Cambridge Report" (PDF), which in my opinion is a very questionable document anyway, in a totally incorrect way. They say that the report says "making all OS data free would cost the government £12m and bring a net gain of £156m". Firstly, that quote alone is very misleading, which is just one of the problems I have with the Cambridge Report, but I won't get into that here (I may return to it in a future post). More importantly, the scenario studied in the Cambridge Report was not "making all OS data free": it studied the option of giving away all the large scale data for free and not giving away the small scale data - in other words, the EXACT OPPOSITE of what is being proposed. So the specifics of the Cambridge Report in regard to the Ordnance Survey have ZERO RELEVANCE to the decision which has been announced (except to reinforce that there is some benefit to society in making free map data available, which is stating the obvious anyway). I am in favor of the decision, as I said, but I like to think of myself as a scientist and a scholar :), and it really annoys me when people blatantly misrepresent evidence to make their case.

So anyway, don't get me wrong - I think that this is a very good thing for the UK geospatial industry, and for the general principle of open data, which I am a strong supporter of, despite the fact that I will also point out the challenges with it when appropriate! I think that the right broad option has been chosen out of a complex array of possible choices. But there are risks with the decision too, including the potential for reduced funding and deterioration in the quality of Ordnance Survey maps. And there are likely to be some big losers too - including NAVTEQ and Tele Atlas (again), and in many ways OpenStreetMap, which is the topic of my next post.

Tuesday, November 17, 2009

PublicEarth goes live

Boulder-based startup PublicEarth, a free wiki for places, has gone live today. I've been on their Advisory Board since they started last year, so am excited to see them launch - congratulations to founder Duncan McCall (pictured below talking at WhereCamp5280), Chairman David Hose, and the rest of the team.

Duncan McCall of PublicEarth at WhereCamp5280

Duncan talks about his original vision for PublicEarth here:
So the idea for a wiki of ‘interesting’ or ‘long tail’ places was born, with ability for users to create, edit and share information in a structured, categorized and intelligent format – with the simple ability to get this information onto mobile and GPS devices.
I see PublicEarth being complementary to OpenStreetMap, which is of course another well known crowdsourced effort around geospatial data. OpenStreetMap is primarily focused on creating a free and open basemap for the world, with some basic points of interest. PublicEarth is not about the basemap, but about creating richer information about interesting places. For example, there's a lot of good information about Public Art in there - I just found out that the big Blue Bear sculpture in Denver was created by a fellow Brit, which I hadn't known before! That data came from Public Art Review - one of the things that PublicEarth has done, and is looking to do more of, is to help a variety of organizations that have interesting datasets (like this one about contemporary public art) make them more accessible, in a way in which they can be easily queried by location. There are all kinds of other interesting datasets in there, with more to come. And of course the whole idea of a wiki or crowdsourced approach is that anyone can create or update information, so I encourage you to give it a try and create some interesting places of your own!

Monday, November 16, 2009

Nice flickr search tool for Presentation Zen aficionados

I recently came across a nice flickr search tool called ViewFinder (for Mac), which makes it very easy to search flickr for photos with appropriate Creative Commons licenses, and then download a suitably sized version and insert it into a Keynote presentation. There's a way to copy attribution information too, though that's not done automatically with the "insert Keynote slide" function - hopefully they will provide an option to do that in a future update. But even as it is, it will save me a lot of time putting together presentations with lots of photos.

So if you're into Presentation Zen style and use a Mac, I recommend you check it out. And if you're not into Presentation Zen, you should be - read this :) ...

Monday, November 9, 2009

Transit routing on iPhone maps is cool!

I have been meaning to blog for some time about how useful I find the transit information in Google Maps on the iPhone. It's been around for a while, but I have been using it quite a lot recently. For those who haven't used it, when you calculate directions you can pick one of three options: driving, public transit, or walking (you also have the same options on Google Maps online). When you choose public transit, it shows you the next available trip, as follows:

iPhone Maps Transit information

You can click on the clock icon to see later trips and alternative options. This is one example where the application knowing your current location really adds to the convenience of getting the information - you just choose where you want to go, from a search or your contacts, and then the default is to show you how to get there from your current location. Even if you know the route, being able to easily find the time of the next bus is a great convenience. Another nice aspect is that the GPS tracks your location as the bus travels, and shows the scheduled arrival time, making it easy to figure out where to get off - another potential source of stress when you're traveling on a route you don't know.

iPhone Maps Transit

I think that one of the main inhibitors to people using public transit when they're not used to doing it is just the effort of figuring out their options. In a lot of US cities like Denver the culture really isn't to use transit - the default option is just to take the car. We've had some new light rail lines opening over the past few years, with more being built, and a few more people use those, but very few people I know really think about using the bus here. But we actually have a very good bus system, even though many people don't realize it. I am fortunate to live downtown so most of the time I just walk when we go out in the evening, but increasingly if we go further afield I try to take the bus or the light rail, and a strong contributing factor to that is the convenience of figuring out the transit routes (which are often ones I haven't taken before, or at least don't take regularly) on the iPhone.

This is also available in Google Maps Mobile on other platforms apart from the iPhone. But it's only available for some cities - it depends whether the transit agency has made their data available to Google. So it works for example in Denver, and even in Cropston, the small village in Leicestershire in the UK where I grew up, but not (at the moment) in London or Washington DC.
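
As a technical aside, the way transit agencies "make their data available to Google" is by publishing their schedules in the Google Transit Feed Specification (GTFS) format - essentially a set of CSV files describing stops, routes, trips and stop times. Just to illustrate the kind of data involved, here is a minimal sketch in Python that lists the next scheduled departures from a stop; the file layout is standard GTFS, but the stop ID and time used are purely hypothetical:

    import csv

    def next_departures(stop_times_path, stop_id, after, limit=3):
        # Scan a GTFS stop_times.txt file and return the next few scheduled
        # departures (departure_time, trip_id) from the given stop after a
        # given time of day. GTFS times are zero-padded HH:MM:SS strings,
        # so plain string comparison is good enough for a sketch.
        departures = []
        with open(stop_times_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["stop_id"] == stop_id and row["departure_time"] >= after:
                    departures.append((row["departure_time"], row["trip_id"]))
        return sorted(departures)[:limit]

    # Hypothetical usage against a downloaded feed:
    # print(next_departures("stop_times.txt", "10623", after="17:30:00"))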

Anyway, I think that mobile multi-modal transit routing applications like iPhone maps and others have great potential to encourage people to use public transit more. I encourage you to try it, and leave the car at home! Other iPhone applications that can supplement this include things like Taxi Magic, which lets you call a taxi to your current location, and car sharing schemes like zipcar, which now has a cool iPhone app (of which one of the niftiest features is that you can use your iPhone to unlock the car, or even honk its horn to help you find it!).

By the way, I only found out fairly recently how to capture iPhone screen shots - in case you don't know, you hold down the "Home" button and press and release the power button at the same time, and an image of the current screen will be saved to your Camera Roll photos. I found this out from the 9 year old son of Dale Lutz!

Thursday, November 5, 2009

Was the Google Maps data change a big mistake?

So the discussions about the great Google map data change in the US rage on, and we are seeing more and more reports of significant data quality issues. I wrote about how Central City Parkway was completely missing, and I reported this to Google to see how the change process would work. I posted later about how it had been partially fixed, with a new geometry visible but not routable, and with the wrong road name and classification. The latest state (on November 5, after reporting the error on October 9) is that it is now routable, but it still has the wrong road classification, being shown as a minor road rather than a major highway. This means that if you calculate the best route from Denver to Central City, Google gets it wrong and doesn't go on Central City Parkway, choosing some much smaller mountain roads instead, which take a lot longer. Microsoft Bing, Yahoo, MapQuest and CloudMade (using OpenStreetMap) all calculate the correct route using Central City Parkway. Another substantial error I found recently is that if you search for one of the companies I work for, Enspiria Solutions (by name or address), the location returned was about a mile off. This has now been partially but not entirely fixed (after I reported it).

Steve Citron-Pousty recently wrote about whether Google made the data change too soon. He talked about how his wife has always used Google Maps, but it has got her lost four times in the past week and she has now switched to using MapQuest or Bing. And Google got Steve lost in the Bay Area last week too. Maitri says she is "splitting up with Google Maps" over issues in Ohio, as "there is no excuse for such shoddy mapping when MapQuest and Yahoo do exceptional work in this area the first time around". She links to an article in a Canton, Ohio, newspaper about how the town was mis-named after the recent data change (we call it Canton, Google calls it Colesville). James Fee pointed out an error with Google showing a lake that hadn't been there for 25 years or so. Matt Ball did a round-up discussion on the importance of trusted data. The well known tech journalist Walt Mossberg reviews the new Motorola Droid phone (which uses the new Google data for navigation), and in passing says of the navigation application: "but it also gave me a couple of bad directions, such as sending me the wrong way at a fork in the road". And then, in news which is presumably unrelated technically (being in the UK), there was a lot of coverage of a story about how Google Maps contained a completely fictitious town called Argleton - which, even though a separate issue, does not help the public perception of the reliability of Google Maps data.

Update: see quite a few more stories about data issues in the comments below.

So anyway, this is a long and maybe somewhat boring list, but I think that it is worth getting a feel for the number of stories that are appearing about map data errors. As anyone in the geo world knows, all maps have errors, and it's hard to do a really rigorous analysis on Google's current dataset versus others. But I think there is strong evidence that the new Google dataset in the US is a significant step down in quality from what they had before, and from what Microsoft, Yahoo and MapQuest have (via Tele Atlas or NAVTEQ).

Google clearly hopes to clean up the data fairly quickly by having users notify them of errors. But looking at the situation, I think that they may have a few challenges with this. One is just that the number of errors seems to be pretty large. But more importantly, I think the question for Google is whether consumers will be motivated to help them fix up the data, when there are plenty of good free alternatives available. If Google gives you the wrong answer once maybe you let it slide, and perhaps you notice the link to inform them of the problem and maybe fill it out. But if it happens a couple of times, is the average consumer likely to keep informing Google of errors, or just say "*&#% this, I'm switching to MapQuest/Bing/Yahoo"?

Google has made some reasonable progress with Google MapMaker (its crowdsourced system for letting people create their own map data) in certain parts of the world, but these are generally places where there are not good alternative maps already, or people may be unaware of alternatives like OpenStreetMap. So in those cases, people have a clearer motivation to contribute their time to making updates. People who contribute time to OpenStreetMap have a range of motivations, but in general for most of them it is important that the data is open and freely available, which is not the case with Google (at least not so much, I won't get into the details of that discussion here). Most if not all the people I know who contribute effort to OpenStreetMap (myself included) would not be inclined to contribute significant updates to Google (except for some experiments to see how good or bad the update process is).

Consumer confidence is a fickle thing, and you probably don't need too many stories in the newspapers of mishaps due to bad data, or more than a couple of direct experiences of getting lost yourself due to bad data, to switch to a different provider (especially when you are choosing between different free systems - you have a bit more incentive to stick with a navigation system and try to make it work if you've spent a few hundred dollars on it).

The risks are even higher with real time turn by turn directions - no matter how many caveats you put on it, you are likely to get some drivers who follow the directions from the system and don't notice relevant road signs. You only need a couple of accidents caused by people driving the wrong way up one way streets because of bad data to damage consumer confidence even further.

So I think it will be very interesting over the next few months to see whether the data quality issues are bad enough to result in significant numbers of users moving away from Google Maps in the US or not - and whether Google will get significant uptake in terms of number of people contributing error reports in the US (beyond the initial wave of curiosity-driven updates just to test if the process works). Obviously the answer to the second question is likely to have a big influence on the first. Stay tuned ...

Wednesday, November 4, 2009

I've joined the Advisory Board of FortiusOne

Today it was announced that I've joined the new Advisory Board of FortiusOne, together with Jeff Harris, who has a very distinguished background in the Intelligence world, and Michael Frankel and Wolf Ruzicka, who bring great expertise in Business Intelligence and enterprise software. We actually had the first Advisory Board meeting just recently and it's a great group.

I've followed the development of FortiusOne with interest for a few years now, and I did a bit of consulting for them back in the fairly early days of the company. Their CEO Sean Gorman and CTO Andrew Turner are two of the leading thinkers in the geospatial industry. I am a big proponent of their philosophy of de-mystifying geospatial analysis and making it accessible to a much broader audience of non-specialists. You can check out their free GeoCommons site which lets you easily search for, upload and download geospatial data, and produce great looking maps and spatial analysis like this:



(Click on the map to go to the interactive version)

One cool feature of GeoCommons is the ability to upload spreadsheets containing addresses or place names, which will be automatically geocoded using the free and open source GeoCommons geocoder. There are lots of nice examples of using GeoCommons on the FortiusOne blog, for example these posts on health care, the Afghan Elections, and home foreclosures. FortiusOne sells enterprise versions of their application (as a service or an appliance), which have additional analytic capabilities beyond those on the free public site, but with the same focus on simplicity and ease of use. I look forward to working with the team at FortiusOne, and watch for more cool new things coming soon!
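
As an aside, if you're curious what that "upload a spreadsheet of addresses and get points back" workflow amounts to conceptually, it's roughly the sketch below: read addresses from a CSV, look up coordinates for each one, and write out latitude/longitude columns. This is just an illustration - the geocode() function is a hypothetical placeholder, not GeoCommons' actual API:

    import csv

    def geocode(address):
        # Hypothetical placeholder: return (latitude, longitude) for an
        # address using whatever geocoding service you have available.
        raise NotImplementedError("plug in a real geocoding service here")

    def geocode_spreadsheet(in_path, out_path, address_column="address"):
        # Read a CSV of addresses, geocode each row, and write the rows
        # back out with lat/lon columns added.
        with open(in_path, newline="") as f_in, open(out_path, "w", newline="") as f_out:
            reader = csv.DictReader(f_in)
            writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames + ["lat", "lon"])
            writer.writeheader()
            for row in reader:
                row["lat"], row["lon"] = geocode(row[address_column])
                writer.writerow(row)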

Sunday, November 1, 2009

Tips on videoing presentations using ScreenFlow

I talk at quite a few conferences, as regular readers will know, and have been experimenting with videoing some of my presentations for a little while now. I thought I would share some of my experience on techniques I've tried and how they've worked.

Basic Approaches
Of course a basic option is just to take a single digital video that includes the speaker and/or the slides. Options include setting this up yourself by putting a camera on a small tripod, or even a table, or getting someone else in the audience to do the filming. You can get reasonable results doing this - certainly better than not seeing the presentation at all! A recent example of this approach was my "cowboy georant" at AGI GeoCommunity.


That was just taken on a cheap Flip video camera and it gives a good idea of the presentation - but it misses out quite a bit too. You don't get to see most of the carefully crafted slides :) (though maybe the cowboy hat was more interesting in this case!). One way to enhance a basic video like this is to upload the files to SlideShare, and then embed the video and slides next to each other, like this.

One other scenario where it makes sense to use a simple approach like this is when doing live streaming. Dave Bouwman and Brian Noyle have done a good job of live streaming presentations from various events for geogeektv (some older recordings here). This is very easy to do - all you need is a webcam (an external one is probably most flexible in this scenario), and a free membership with one of the various online streaming services like ustream. Most of these services have the ability to both stream live and record for later viewing.

Using ScreenFlow
But with just a little effort you can combine videos of the presenter and the slides so you get a good view of both. The software I use to do this is called ScreenFlow, which runs on Mac only. It can record everything happening on your screen (from basic slides to video and software demos), and also gives you the option to concurrently record video from an internal or external webcam on your computer. ScreenFlow gives you nice capabilities to position the two (or more) video streams within your final output. I have used this on quite a few occasions with my built in webcam and a "picture in picture" style. The following is an example of this:

GeoWeb 2009 presentation by Peter Batty from Peter Batty on Vimeo

This approach works pretty well. If you watch for a few minutes you will see that the small window showing me talking can be easily moved around the screen, to avoid overlapping with significant items on a slide. This adds a bit of work, as you need to run through the presentation file in ScreenFlow and move the video window as needed depending on the current slide, though it's straightforward to do. If you designed your slides appropriately with this approach in mind, you could potentially avoid (or at least minimize) the need to move the video window ... though I have to confess I haven't been organized enough to do that yet! But this definitely works well and doesn't need any equipment other than a laptop with a webcam.

A couple of minor drawbacks with this approach are that the built in webcam gives a slightly odd angle looking up at the presenter (especially if you are tall, as I am), and that the video window showing the presenter is rather small. I recently bought a new HD video camera (a Lumix GH1), and decided to try a different approach for my recent presentation in Duluth. I set up the video camera on a separate tripod focused on the presentation podium. I could have recorded the slides while I presented (which would have been easiest), but actually I recorded them separately in ScreenFlow after the fact (following along with the video to get the timing right). This is a useful option to have if you don't record the slides at the time for some reason, but you have a video of the presenter.

Having the video of me presenting in a wide screen HD format (1280x720 pixels) gave me the idea of putting the slides and presenter video side by side as in the following (click through to see it in a larger format, including the HD version):

The Geospatial Revolution (Minnesota) from Peter Batty on Vimeo

ScreenFlow has this nice ability to rotate the videos in a 3D space, which makes them slightly narrower without losing any content, as well as giving an interesting perspective effect. Putting the slide video on top crops out one side of the presenter video (which is just static background) and the other side is cropped by the edge of the overall frame. Overall I really like this layout - it gives you a clearer view of the presenter, from a better angle than using the webcam, and you can also see the whole of all the slides. One other nice little graphical touch I added is to include a 50% reflection below both videos - another cool feature of ScreenFlow. I can provide more specific details on putting this together if anyone needs that.

Specific tips on using ScreenFlow
Make sure you have plenty of spare disk space - ScreenFlow stores video in high quality for the full screen resolution and the webcam, so a raw file can easily take up a few gigabytes (once you output the final presentation video it will be compressed and a lot smaller). On one occasion I didn't have much spare disk space on my laptop and got into a long discussion after the presentation without turning recording off; the disk filled up and I lost the whole recording (enhancement request for the ScreenFlow folks ... it would be great if it could recognize an impending full disk and stop recording cleanly before crashing!).

These days when presenting I usually use "presenter mode" where I can see speaker notes and other information such as elapsed time and the next slide on my laptop screen, while the projector displays just the current slide - both Apple Keynote and Microsoft PowerPoint have this feature. It's especially useful if you're using more of a "presentation zen" style where you have simple pictures on most slides and do away with bullet point lists (which I highly recommend!). Anyway, if you do that you need to make sure that ScreenFlow is recording what is happening on the external display rather than the built in display. It defaults to using the internal display so this is easy to miss, especially in the heat of the moment when you're just about to start presenting. An added complication is that if you don't get to plug in your laptop until immediately before the presentation, you can't select the external display until the last minute (once you are connected to the projector) - which increases the chance of messing up this step. I have done at least a couple of presentations where I accidentally recorded the presenter screen instead of the audience screen. That's not the end of the world as you can re-record the slides after the fact, but that takes extra time of course. So another enhancement request for the ScreenFlow folks is to either default to recording the external screen (or have a preference to specify this), or at least give a warning when starting recording if there are two screens connected.

I have mainly used the internal microphone on my Mac for recording, and that has generally worked pretty well, but on some occasions the sound has been a little quieter than I would like. So it is worth considering an external microphone - one that I have used is the Snowflake, which worked well for my presentation in Perth but unfortunately I left it on the floor afterwards and someone stepped on it :(!

Publishing your video
YouTube has an annoying 10 minute limit which makes it inconvenient for traditional conference presentation formats. I like to use Vimeo, which doesn't have this limit and also handles high definition videos nicely (though YouTube has upgraded its support in that area recently). Vimeo also gives you the option of letting users download the original video file, if you want to. I now have a geospatial presentation channel on Vimeo.

How does this impact conferences?
Kirk Kuykendall commented on my recent video post and wondered how this type of video recording may impact conferences - is there a risk of reducing conference attendance? There might be a slight risk of that, but overall I think it is probably more likely to have the opposite effect and encourage people to attend. A lot of the value of attending conferences is in meeting people, and in the informal (or formal) discussions that happen there. Conferences like Where 2.0 already video their presentations and put them online. TED is another example of a conference that puts all its presentations online, but still charges $4500 for conference attendance and sells out. But hopefully it will help conference organizers focus on providing a good all round experience for attendees.

It also presents a similar dilemma for speakers like myself who talk at a number of conferences. Obviously you sometimes re-use some material, so publishing online increases the risk that some attendees have seen parts of your presentation before. But again I feel the benefits outweigh the risks - you get an additional audience for your presentation, it may encourage some additional people to come and see you speak live, and it also gives you an incentive to keep refreshing your material!

Monday, October 26, 2009

Talk on "The Geospatial Revolution" in Minnesota

Here is a video of my recent keynote talk at the Minnesota GIS/LIS conference in Duluth, which was an excellent event. There were about 500 people there, which is great in the current economic climate. It was mainly a "traditional GIS" audience, and I got a lot of good feedback on the talk which was nice.

I talk about current trends in the industry in three main areas: moving to the mainstream (at last!); a real time, multimedia view of the world; and crowdsourcing. There's a lot of the same material that I presented in my talk with the same title at AGI GeoCommunity (which doesn't have an online video), but this one also has additional content (~50 minutes versus 30 minutes).

Click through to Vimeo for a larger video, and if you click on "HD" you will get the full high definition version!! I used a different approach to produce this video compared to previous presentation videos, using a separate camera and a different layout for combining the slides and video. I like the way this came out - I'll do a separate blog post soon with some tips on how to video presentations, I think.

The Geospatial Revolution (Minnesota) from Peter Batty on Vimeo.

You can also view the slides here:

Sunday, October 25, 2009

Google Maps data correction - a strange semi-update

I reported previously that Google Maps was missing Central City Parkway after their change in street data provider (they are now providing their own street data rather than using Tele Atlas). I reported the error to Google and said I would report back here when it was fixed - Google is aiming to fix errors within 30 days. This evening Tom Churchill commented on my previous post to say that Central City Parkway was now present on the map - I thought it was odd that it had been fixed but I hadn't received an email to let me know, as promised. I reported the problem 17 days ago, and 13 days ago received confirmation that it was an error and that they would work on it.

Anyway, when I went to check out the updated map, this is what I saw (Google is on the right, OpenStreetMap on the left):

Partial update to Central City Parkway in Google Maps

This compares to the previous comparison screenshot I did, which looks like this:

Central City Parkway missing from Google Maps

So there is now a road on the map that follows the path of Central City Parkway, when there wasn't before. But it's drawn as a minor road when it should be a major highway (a "divided highway" or "dual carriageway" depending on where you come from!), and it has no name shown on the map. And when I try to get Google Maps to route along it, it stubbornly refuses to do so, even if I try to drag the route to force it along there (when dragging, it doesn't allow me to drop on this street):

Central City Parkway not routing yet in Google Maps

So anyway, we seem to have a curious semi-update - there's a new geometry there that wasn't there previously, along the route of Central City Parkway, but with no name, the wrong road classification, and you can't route along it. Seems odd that a partial update like this should find its way into the live database ... I guess the process is still slightly "beta" :O !! Will keep an eye on it and report back on further progress!

Monday, October 19, 2009

Location Based Services in 2014 - Part 1

The AGI in the UK is currently carrying out a "Foresight Study" to look at where the geospatial industry will be in five years' time. They have asked several dozen people to contribute short reports on different topics, and I was asked to write about Location Based Services. I thought I would publish the current working draft here, and I encourage you to contribute comments and suggestions that I can work into the final version. This is Part 1 of 2.

As an aside, I think that predicting 5 years out for Location Based Services is not unreasonable right now - we have had several significant breakthroughs in the past year or two, and so can make some fairly safe predictions based on widespread adoption of those. Timing makes a lot of difference - 5 years ago we had no Google Earth and no Google Maps (both launched in 2005), so any 5 year geospatial industry predictions back then would have been way off! Of course there will still be plenty of innovations that we don't anticipate too!


Scope
For the purposes of this discussion, a location based service is defined as a software-based service where a key element of that service is the current location of the user, derived using location sensing technology. We also consider applications that are based on data derived from users of location based services (such as real time traffic flow information); applications based on sensing the location not of users but of other objects such as vehicles or equipment; and some applications where the location of the user is not derived from sensors (for example applications based on anticipated future location).

Current Position
Location Based Services have been widely touted as “the next big thing” since the late 1990s, just before the dot com bubble burst. We have (finally) seen significant progress in the past couple of years, with the iPhone in particular bringing location based services to a mainstream audience for the first time. A key limitation with the iPhone currently is that applications cannot run in the background, which rules out an important subset of location based services: those that carry out notifications or other actions proactively based on your current location. Battery life is also still a challenge for continuously logging the location of a phone, as GPS is relatively power hungry.

Most new phones now have location capabilities. We are also starting to see new features that are very relevant to location based services, including compasses built in to smart phones, and augmented reality applications that combine location awareness, compass and built in camera. These provide new and compelling user interface possibilities for location based applications.

Anticipated Changes
Location tracking will be pervasive in 2014 – all mobile phones will have location tracking. Location based applications will be able to run in the background, overcoming a key current limitation (this will require improvements in battery technology). Built in compass, camera and augmented reality capabilities will also be on most if not all phones (as a baseline, consider that the proportion of phones with cameras exceeded 70% globally in 2008, with 75% anticipated in 2009).

The great majority of phones will be “smart phones” with capabilities exceeding today’s iPhones, including high quality graphics, touch screens, and the ability to run sophisticated applications (a basic iPhone costs $99 today, and in 5 years we will have had just over 3 iterations of Moore’s Law, so price performance will have improved by a factor of roughly 10).
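
(For the curious, that factor of ten is just doubling arithmetic, treating a Moore's Law iteration as roughly 18 months:

\[ \frac{5\ \text{years}}{1.5\ \text{years per doubling}} \approx 3.3\ \text{doublings}, \qquad 2^{3.3} \approx 10 \]

so the estimate assumes price/performance keeps doubling at that historical rate.)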

Crowdsourcing and the widespread use of sensors mean that a wide variety of good quality and extremely current geospatial data will be available for free, including:
  • Road data, with relevant information for routing, including real time traffic information
  • Footpaths and cycle paths, with relevant information for routing
  • Points of interest such as restaurants, shops, petrol stations, ATMs, etc, plus relevant real time information such as gas prices
Additional types of location sensing technology will be common, including the following:
  • Proximity sensors, for example RFID or Bluetooth, which can detect when two devices are within a short distance of each other. One current example is a smart key, which will cause a car door to open when it is within a short distance. Another might be a Bluetooth device in a museum that displays relevant information, or plays relevant audio, on a smart phone when it is close by.
  • Continuous local sensors, such as WiFi or ultrawideband (UWB). UWB has a much higher degree of accuracy (~30cm) compared to WiFi (meters or tens of meters depending on number of sensors and environment). High precision sensors enable a variety of indoor location based applications that are not possible with GPS.
  • Very inexpensive passive RFID sensors (will be a few cents in this timeframe), which will be used to track huge numbers of inexpensive assets (at discrete points where they can be scanned, for example the entrance to warehouses or stores).
Usage of social networking applications will be highly pervasive. Unlike today, users will be able to maintain one common set of information about their networks of friends and business associates, and share this among multiple applications. This will be important for location related applications, for specifying what aspects of a person’s location information may be shared with whom.

High speed wireless communications networks will be pervasive in most parts of the world. It also seems reasonable to assume that the majority of newer vehicles will include a location tracking device and wireless communications by 2014.

Update: Part 2 discusses the impact of these changes on the geospatial industry, looks at various application scenarios, and summarizes five key points.

Thursday, October 15, 2009

"Georant" on free geodata, Ordnance Survey and USGS

I'm a little behind on this - you may have already caught this video of me doing a "georant" at AGI GeoCommunity via Jonathan (who kindly said it was "rather funny") or GeoCommunityLive (who said that the audience were in hysterics, and "at the same time Peter had some serious points to make about the potential pitfalls of the 'free' model").

I basically discuss the fact that many people massively oversimplify the arguments for and against free government geodata. The georant format was a fun one, the idea being to have short entertaining talks accompanied (or fueled?!) by beer, with an Ignite style format (15 slides which automatically advance every 20 seconds, for a total of 5 minutes). I recommend it to other conference organizers!



And here are the slides - you get a glimpse of some in the video and not others, so feel free to follow along by advancing these. For the benefit of those not from the UK, slide 2 features Vanessa Lawrence, Director General and Chief Executive of the Ordnance Survey, and the appearance of that slide was what prompted the burst of laughter that you hear about 20 seconds in!
And finally perhaps I should also explain that the conference party had a "dress code" of "black and white", which was the reason for the cowboy outfit ... though given the amount of coverage my cowboy hat generated, I may have to use it again for future presentations :) !!

Wednesday, October 14, 2009

A black hole isn't "evil", but ...

I loved this quote from Paul Ramsey, commenting on Paul Bisset's blog post about the "Google data earthquake":
Right, a black hole isn’t “evil”, but that doesn’t change the fact that it massively distorts the shape of space-time everywhere it goes, which can be a bummer for any object in its immediate neighbourhood.
That summarizes rather nicely concerns I've expressed in recent posts.

Tuesday, October 13, 2009

Speaking in Minnesota next week

Just a quick post to say that I will be doing the opening keynote talk next week (Thursday October 22nd) at the 19th annual Minnesota GIS/LIS conference, in Duluth. It should be a fun and interesting event, so if you're up in that part of the world I encourage you to stop by!

More on the "Google data earthquake"

Following on from my previous post about Google shaking up the geospatial data industry, Steve Coast invited me and James Fee to join him for a discussion on the topic. James' blog post on the topic has 138 comments at the time of writing, which is a good indication of the interest in this change! You can listen to the podcast on the "Google data earthquake" here.

One topic I talk about in the call which I didn't cover in my previous post is where the Google street data comes from (they haven't said anything about this). To me it looks like a mixture of data they have captured from their StreetView cars, which seems to be good quality, and then probably TIGER data, which is much lower quality, where their cars haven't driven. Quite a few people have reported finding errors in street data that weren't there previously since the change. I found that Central City Parkway was missing from the map entirely, which is a pretty major highway that was completed in 2004. You can see this below, with OpenStreetMap on the left, and Google Maps on the right (screen shot using GeoFabrik's nice map compare tool):

Central City Parkway missing from Google Maps

I've reported the error to Google, so it will be interesting to see how quickly it gets fixed - and in general, how quickly they are able to fix up the apparently lower quality data in areas they haven't driven yet (though this assessment is not based on anything scientific).

Wednesday, October 7, 2009

Google shakes up the geospatial data industry

Well, the big news of the day is that Google has dumped Tele Atlas as the main data provider for Google Maps in the US, and is providing its own map data from a variety of sources (presumably also including its own Streetview teams). They've also added the ability to point out errors in the map, another addition to the crowdsourcing techniques they've been using. The announcement has caused a flurry of discussion of course. James raises questions about various aspects of the data (especially parcels). Steve speculates that the same thing will happen in Europe and that the beneficiary there will probably be AND.

The new Google data certainly adds details in some places, from a quick random sampling - for example check out Commons Park in downtown Denver using the nice GeoFabrik Map Compare tool. None of those paths were there previously in Google. Still not quite as good as OpenStreetMap in this case though :).

This does dramatically reshape the geospatial data industry though. Previously there were two commercial providers with a detailed routable database of roads in the US, NAVTEQ (owned by Nokia) and Tele Atlas (owned by TomTom); now, at a stroke, there is a third in Google. OpenStreetMap is a fourth provider of course, not quite up with the other three in terms of coverage and routing quality in the US yet, but getting there very quickly.

This raises lots of interesting questions:
  • Will Google sell its data to providers of third party navigation systems and compete with Tele Atlas and NAVTEQ? Or indeed will they sell/license it to others who could use it (users of GIS, etc)?
  • Will Google Maps on the iPhone (and other mobile devices) get real time turn by turn directions? This was previously prohibited by licensing terms from Tele Atlas and NAVTEQ. Existing real time navigation systems using data from these two providers generally cost in the region of $100. Will Google add this to the free maps offering? Or sell a version that does real time turn by turn directions?
  • Will Google contribute any of this data to open data initiatives like OpenStreetMap? Or make it available to USGS for the US National Map? In the past they have cited licensing constraints from their data providers as a reason for not being more open with their geospatial data; that reason largely goes away now (though we don't know all the new data providers and their terms). I'm not holding my breath on this one, but we can hope!
  • Will this negatively or positively impact OpenStreetMap? Previously in areas with active communities, OpenStreetMap had significantly more detail, and more current data, than Google - this appears to move Google forward in that regard. But will Google taking another step towards total world domination encourage more people to want an open alternative?
So anyway, definitely a very interesting development for the geospatial data industry (albeit one that has been on the cards for a little while). It will take a little while to understand the full implications. I'm sure Tele Atlas is glad they are no longer an independent company - I wouldn't like to have seen how their stock price would have dropped today otherwise :O !!

Update: some more discussion and a link to a podcast on this topic featuring Steve Coast, James Fee and me in this follow up post.

Jack Dangermond on GeoDesign

In my previous post on GeoDesign (and Shakespeare), I was curious about why (it seemed to me) ESRI was positioning design in GIS as something new. Their position was clarified by no less an authority than Jack Dangermond in the comments - I thought that Jack's thoughts deserved a new post of their own rather than being hidden away, so here they are ...
Thanks for noticing our efforts in GeoDesign and yes, you are certainly correct; the consideration of geographic factors as part of design is not new. Your personal efforts to develop engineering design tools while you were at Smallworld were not only good work but built on a long history of human thought and innovation in this area.

I suppose everyone sees design and design methodology differently as it relates to their background and experience.

In my case, I was first introduced to the efforts of Phillip Lewis and Ian McHarg in the 1960s. They both developed manual techniques for landscape planning involving plastic overlay maps. They used these overlays to describe constraints and opportunities presented by geography. These maps were typically used as the basis for "designing" open space and other land use plans.

Later, Carl Steinitz, a professor at Harvard, laid out a computer-based methodology using early GIS tools. His methods pioneered both computer suitability analysis and environmental impact modeling.

Today my colleagues and I are advancing these techniques by integrating a series of new tools and methods into our GIS software. We are also promoting the idea that we need to more directly integrate geographic information into many forms of spatial design and decision making. While these techniques are certainly relevant for land use and natural resource planning, they also have enormous value for any type of geographic site selection, corridor planning, or area-wide planning efforts. In fact, they can be a benefit in most human activities which change geography.

The fundamental technology we have developed is not profound. It involves sketching on top of smart analytic maps and getting fast feedback. This technology, however, has to be complemented with a methodology involving the integration of geographic science and other layers with an interactive design process which allows users to sketch and get rapid feedback on the consequences of their alternative designs.

Our technology is being deployed in both desktop and server platforms. We believe that the Web Server technology is particularly interesting because when deployed on the Web, it promises to lead to a whole new way of collaborative and community based planning. Ultimately this platform promises to also provide very broad based access to users of all types. Who knows, a whole new world of "geodesigners" may emerge like neogeographers – people who participate in volunteer efforts to design a more sustainable world.

These ideas are not new. We are hoping our technical efforts and promotion of these ideas will lead to more integration of geographic thinking into all that we do; specifically that people who are responsible for changing our geography are fully aware of the consequences and implications of the changes as they are made.

This January we are having our first GeoDesign symposium in California. It is being co-sponsored by the University of California, the University of Redlands, and ESRI. There will be many academic and design firms participating. Presentations will be made on theory, technology and methodology associated with GeoDesign. We welcome participation by those interested in these ideas (see geodesignsummit.com)
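
For anyone who hasn't come across the overlay-style suitability analysis that Jack describes above, here is a very rough sketch of the basic idea: factor layers are scored, weighted and combined, and constraint layers mask out areas that are off limits. This is just an illustration using plain numpy arrays, not an example of any particular GIS product:

    # Illustrative weighted-overlay suitability analysis on simple grids.
    import numpy as np

    def suitability(factors, weights, constraints):
        """factors: 2D arrays scored 0-1 (higher = more suitable)
        weights: one float per factor
        constraints: 2D boolean arrays (True = area allowed)"""
        weights = np.array(weights, dtype=float)
        weights = weights / weights.sum()              # normalize the weights
        score = sum(w * f for w, f in zip(weights, factors))
        mask = np.logical_and.reduce(constraints)      # all constraints must hold
        return np.where(mask, score, 0.0)

    # Example: slope and distance-to-roads factors, with a floodplain constraint
    slope_factor = np.random.rand(100, 100)
    roads_factor = np.random.rand(100, 100)
    floodplain_ok = np.random.rand(100, 100) > 0.1
    result = suitability([slope_factor, roads_factor], [0.6, 0.4], [floodplain_ok])

The "sketch and get fast feedback" part is essentially re-running something like this each time the user edits a layer or a weight, and redrawing the resulting map.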

Thursday, September 17, 2009

Shakespeare on GeoDesign

I've been trying to figure out why ESRI is suddenly trying to make such a big deal of "GeoDesign" - this announcement talks about introducing "the first generation of geodesign concepts, technologies, and tools". But people have been doing design in GIS for decades - the first release of Smallworld in 1991 was really all about design, with features like version management to enable you to analyze multiple alternative designs, etc etc. In the 1980s IBM GFIS was heavily used for design applications. I've come to the conclusion that maybe they've been trying to think of something they can say that they do which those pesky neogeographers don't do (yet!), and trying to present design as something new and cool is their attempt to do this. Anyone else have any theories? It all just seems a bit odd to me.

Anyway, I was browsing a little Shakespeare in preparation for my talks in Stratford upon Avon next week, as you do, and discovered that actually GeoDesign is even older than this. It goes back to the 16th century at least - here is a surprisingly detailed description of the design and construction process from Lord Bardolph in King Henry IV, Part 2 (maybe there's a reason why this wasn't one of his greatest hits!):
"... When we mean to build,
We first survey the plot, then draw the model;
And when we see the figure of the house,
Then we must rate the cost of the erection;
Which if we find outweighs ability,
What do we then but draw anew the model
In fewer offices, or at least desist
To build at all? Much more, in this great work,
Which is almost to pluck a kingdom down
And set another up, should we survey
The plot of situation and the model,
Consent upon a sure foundation,
Question surveyors, know our own estate,
How able such a work to undergo,
To weigh against his opposite; or else
We fortify in paper and in figures,
Using the names of men instead of men;
Like one that draws the model of a house
Beyond his power to build it; who, half through,
Gives o'er and leaves his part-created cost
A naked subject to the weeping clouds
And waste for churlish winter's tyranny."
(First posted in the comments on a previous post, but I decided it deserved its own entry!)

Wednesday, September 16, 2009

Google PowerMeter accidentally wipes out small industry on the way to changing the world??

I spent the last few days at the Autovation conference in Denver, which is focused on Smart Metering and the Smart Grid, an area that I am becoming increasingly interested in and one where Enspiria is doing a lot of work (where I work part time as Chief Technology Advisor). It was a very interesting conference - there is certainly lots of activity and energy in the space, especially since the stimulus bill committed $4.3bn to Smart Grid projects - and utilities need to match this funding, so close to $9bn will be spent over the next couple of years. That's a large enough sum to even interest the likes of Google and Microsoft in electric utility applications, something they haven't been into previously.

So it was interesting that the closing speaker at the conference was Ed Lu from Google (who, incidentally, is a former astronaut and has spent six months on the space station!). He is responsible for their PowerMeter initiative. This was announced earlier this year and I had previously skimmed articles on it, but I have to admit that I hadn't grasped the full significance of it until Ed's talk yesterday. You can see a short description of what it's all about in this one minute video:


In summary, it lets you see detailed information about your home's power consumption, enabling you to change your behavior to reduce consumption. Ed draws an analogy with the fuel consumption readout in a Toyota Prius, which encourages you to modify your driving style to maximize your fuel economy (I can vouch for this). He says that in their trials so far, people using PowerMeter have typically got anywhere from 5-15% savings on their electricity bill. As I talked about in my previous Smart Grid video, one reason for having smart meters is to enable customers to have access to this type of information, in order to encourage them to change their behavior and reduce electricity consumption. This has various benefits including reduction in carbon emissions. Usage data for Google PowerMeter can be obtained either via your local utility, if they have installed smart meters and choose to offer the Google service (which requires them to interface their Meter Data Management System, MDMS, to Google), or alternatively you will be able to buy devices to install in your home that read consumption directly. Ed said that if they could get the same level of usage reductions as they got in their pilot, from 6 million users, this would be equivalent to the reduction in carbon emissions due to all the hybrid cars currently on the road.
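
Just to make that 5-15% figure concrete, the savings number is presumably nothing more than total consumption compared between a baseline period and a later period. Here's a trivial sketch with made-up interval readings (all the numbers are purely illustrative, not from Google's pilot):

    # Illustrative comparison of interval meter readings (kWh) before and after feedback.
    def percent_reduction(baseline_kwh, current_kwh):
        """Percentage reduction in total consumption between two periods."""
        base, cur = sum(baseline_kwh), sum(current_kwh)
        return 100.0 * (base - cur) / base

    baseline = [1.2, 0.9, 1.1, 1.4, 1.3]   # e.g. hourly readings before feedback
    after    = [1.0, 0.8, 1.0, 1.3, 1.1]   # readings after behavior changes
    print("%.1f%% reduction" % percent_reduction(baseline, after))

With these made-up numbers you get about a 12% reduction, comfortably inside the range Ed quoted.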

This initiative is being run by google.org, the philanthropic arm of Google, and the system is offered free to both consumers and utilities (and will continue to be free, Ed said). He also said that he'd been told personally by the CEO of Google, Eric Schmidt, that the aim of the project was "to change the world".

I'm excited in many ways to see Google getting involved - certainly they understand how to build applications to engage consumers, and this is not something that electric utilities generally have expertise in. And having Google involved certainly could significantly accelerate the development of this aspect of the Smart Grid. However, it's potentially very disruptive for a number of existing companies in this space, who are trying to do much the same thing. Companies like Greenbox and Tendril seem to have strong overlap with what Google is doing here. In this story at earth2tech shortly after the initial announcement of PowerMeter, both try to put a somewhat positive spin on Google getting involved in the space, but it will be hard for them or others to compete with the core Google offering, especially since it is going to be free. Perhaps they can find niches that are complementary to what Google is doing, but they and other companies in this space seem in a somewhat precarious position to me.

The one other player in this space that I haven't mentioned, and probably the one in the strongest position to compete with Google, is Microsoft, who have a relatively similar offering called Microsoft Hohm. Earth2tech compares the two, and says that Microsoft intends to charge utilities for their offering eventually, and also that Microsoft intends to move into controlling devices too. Someone at Autovation asked Ed if Google was planning to move into control of devices too, in addition to the data display they are doing now, and he indicated that this was very likely - though he said they wanted to focus on getting the display part right first. While he wasn't specific, if they did provide the ability to control devices for the consumer it is a logical step to provide that to the utility too, which gets them into the whole market area called Demand Response, potentially overlapping even more with existing companies.

After seeing all the disruption that Google and Microsoft have brought to the geospatial industry, it is interesting to see them moving aggressively into the consumer-related aspects of the Smart Grid. Google providing completely free enterprise solutions through its philanthropic arm is in some ways commendable and in other ways concerning (in terms of the ability of others to provide competition, and the risk that they could just wipe out multiple companies). One of the most common questions web entrepreneurs get asked when presenting to investors is "what if Google decides to do what you're doing?", and it seems as though people will need to be asking that question of companies in ever more diverse fields! It will be very interesting to see how all this develops over the next year or two, and how the existing companies in the space respond.

Sunday, September 6, 2009

Looking forward to the AGI GeoCommunity conference

I'm very much looking forward to the AGI GeoCommunity conference which is coming up in Stratford upon Avon in a couple of weeks. Apart from Stratford being the birthplace of Shakespeare and a great place to visit, it's also very close to Leamington Spa where I used to live when I worked at IBM UK many years ago, so it will be a chance for me to catch up with old friends. It will also be the first time I've presented at a major UK conference since moving to the US 16 years ago, so I'm looking forward to talking in my homeland!

More importantly, from a general rather than a personal perspective it looks as though it will be a really interesting and fun event. I think Steven Feldman has done a great job introducing some new ideas this year, including the addition of a strong neogeography/geoweb element which really hasn't been there in previous years (see Ed Parsons' thoughts on this year and last). There will be a really strong geoweb track organized by Chris Osborne, founder of #geomob. And Andrew Turner and I are giving the opening keynote talks - I know that mine will include a strong "neo" element, and since Andrew's is titled "How neogeography killed GIS", I'm guessing that his just might too :). I think that some of the most interesting conferences I've been to recently have been those that combine both traditional GIS and "neogeo" elements - like GeoWeb in Vancouver and, on a smaller scale, the WhereCamp5280 event we held recently in Denver. So I'm hoping that we'll get a lot of interesting discussion about how to combine the old and new geo worlds. Chris is also offering discounted day passes on his blog, in case you need any further incentive to attend!

There is going to be a "soapbox" event with lightning "Ignite" style talks on contentious topics, with a beer-fueled crowd, which sounds like a lot of fun. To get in the spirit of things I have submitted a talk called "The grass is always greener ... in defence of the Ordnance Survey" (for those not familiar with the UK national mapping agency, they tend to be a favorite punchbag for everyone in the UK geo industry ...)

Ordnance Survey

So I'm slightly concerned that my talk may look like the famous "Rawhide" scene from the Blues Brothers - I hope they have the wire fence in place to protect the speakers :) !!


The conference party has a "black and white" theme and I think I have a prize-winning outfit lined up for that (actually I'm not sure if there are prizes or not, but I may try to wangle a bottle of Lagavulin out of Mr Feldman).

Ordnance Survey

So all in all I'm looking forward to what should be a really fun and interesting event ... I guess I'd better get working on wrapping those presentations up!

Monday, August 24, 2009

Traditional GIS vendor market share for 2008-2009

Daratech has published its annual analysis of the GIS industry. I thought there were a few interesting things worth commenting on there. First I should say though that I am generally somewhat skeptical of this type of report - there is a lot of subjectivity in what gets included and what doesn't. One illustration of this was when Bentley suddenly jumped to the number 2 position ahead of Intergraph a couple of years ago - and apparently kept that for a second year. While I was clearly not in an impartial position at that time (being CTO of Intergraph), this went against my intuition and that of others I talked to in the industry - the general feeling was that somehow Bentley had persuaded Daratech to count a lot of "CAD" revenue as "GIS". And to be fair, there is a lot of debate about how much of Intergraph's revenue is "GIS" and should be counted here. For example, Intergraph does a lot of business providing 911 call-taking systems in the Public Safety space, which have a strong geospatial element to them - do you or don't you count those in this number? There is a good argument in favor, but it might not be what everyone calls "GIS". In this year's version, Intergraph's revenue is twice that of Bentley (which is hard to reconcile with Bentley being second last year).

Anyway, all those caveats notwithstanding, the top three vendors for 2009 are projected to be ESRI with 30%, Intergraph with 16%, and GE Energy (Smallworld) with somewhere around 8% (exact figure not stated in this summary, approximate figure deduced from chart). No surprise to see ESRI and Intergraph in the top two spots, which has been the generally accepted state of affairs for most industry observers for a long time. From a personal perspective it is nice to also see Smallworld (GE) still in the number 3 spot, where they have been on and off - there are several others vying for that (so my former companies get 2 of the top 3 slots!). The report also says that GE/Smallworld has top position in utilities with 24% of the market - almost as strong as ESRI's position in the overall market.

I think it's interesting though that ESRI's share (which is consistent with previous reports) is actually lower than many people perceive. ESRI clearly is the dominant player in the traditional GIS space, they enjoy an effective monopoly in many markets (for example see Andrew's recent post on a US Air Force sole source bid), and their "mindshare" is pervasive. Many people I talk to, especially those in the "neo" world, assume that "everyone" doing "traditional GIS" is using ESRI. But what this report says is that 70% of people are not using ESRI (well 70% of the revenue comes from people not using ESRI, which is not necessarily the same thing - but nevertheless, a lot of people are not using ESRI). But you have to give ESRI credit for the way they have achieved such "thought domination" with "only" a 30% market share. I often think that there are several interesting comparisons to draw between the dominant positions of ESRI and Microsoft in their respective markets (maybe there is more for a future post there). Microsoft has operating system market share of 90%-ish, and Apple somewhere around 8%, but Apple has arguably more thought leadership and buzz around its offerings than Microsoft does. But there is very little of that sort of alternative thought widely seen in the traditional GIS space, even though Intergraph and Smallworld still have certain areas where they have technical advantages and/or alternative approaches to offer compared to ESRI.

However, we are now seeing more diversity of thought reaching a broader audience from the "neogeography" side of the house, which is a good thing for the industry. And this leads into the other area that I wanted to comment on, which is how it is increasingly hard to do this kind of market share analysis on the geospatial market, as geospatial technology becomes more embedded in mainstream IT. As geospatial data becomes just another data type and an element of many different applications, how do you say that a given application is "geospatial" or not? There is no place on this list for Google, Microsoft or Oracle, for example, all of whom clearly play a major role in the geospatial market these days. And there is nothing that captures the strong growth that I perceive in open source geospatial software (by definition, software revenue is not going to be a good metric for measuring the market share of free software!).

So overall, while there are certainly some interesting tidbits in this Daratech report summary, it is not at all reflective of the overall state of the broader geospatial industry, where it is increasingly hard to even define the market, let alone to measure market share in a quantitative way.

For those that are interested in quantitative analysis of the "traditional geospatial" market, I would also suggest considering the annual Geospatial Technology Report from GITA, which uses an alternative approach based on survey responses from GITA member organizations. There are potential flaws in that methodology too, but in many ways it is less subjective than something based on vendor revenues, and it provides a lot of additional detail on various aspects of how people are using the technology, which I have found interesting in the past. It's also a lot cheaper! (I should add for full disclosure that I am a former member of the board of directors of GITA, but I receive no remuneration for sales of GITA reports!).

Update: Matt Ball has a good post discussing the relevance of these types of industry reports.