I found a site called MapJack via this post at Mapperz last week, but haven't seen much other comment on it. They provide an "immersive photographic" view of the world similar to Google Street View, but they include data from some locations where you can only walk (not drive), and some that are indoors - including a tour of Alcatraz. Currently they just have a beta site with coverage for San Francisco. The user interface incorporates some very nice ideas.
One of the most interesting aspects of this for me is on the "About MapJack" page, where they say:
Mapjack.com showcases a new level of mapping technology. What others have done with NASA budgets and Star Wars-like equipment, we've done on a shoestring budget, along with a few trips to Radio Shack. Specifically, we developed an array of proprietary electronics, hardware and software tools that enable us to capture an entire city’s streets with relative ease and excellent image quality. We have a complete low-cost scalable system encompassing the entire work-flow process needed for Immersive Street-Side Imagery, from picture gathering to post-processing to assembling on a Website.
This is just another example of people finding ways to bring down the cost of relatively specialized and expensive data capture tasks - it made me think of this post on aerial photography by Ed Parsons.
Tuesday, July 31, 2007
GeoWeb report - part 1
Well, I made it back from the big road trip up to Vancouver for the GeoWeb conference. We got back home on Sunday evening, after a total of 3439 miles, going up through Wyoming and Montana and then across the Canadian Rockies through Canmore, Banff, Jasper and Whistler, and coming back in more of a straight line, through Washington, Oregon, Idaho and Utah. The Prius did very well, performing better at high speeds than I had expected.
But anyway, the GeoWeb conference was very good. The venue was excellent, especially the room where the plenary sessions were held, which was "in the round", with microphones, power and network connections for all attendees (it felt a bit like being at the United Nations). This was very good for encouraging audience interaction, even with a fairly large group. See the picture below of the closing panel featuring Michael Jones of Google, Vincent Tao of Microsoft, Ron Lake of Galdos, and me (of no fixed abode).
I will do a couple more posts as I work through my notes, but here are a few of the highlights. In his introductory comments, Ron Lake said that in past years the focus of the conference had primarily been on what the web could bring to "geo", but that now we were also seeing increasing discussion on what "geo" can bring to the web - I thought that this was a good and succinct observation.
Perhaps one of the best examples of the latter was given by Michael Jones in his keynote, where he showed a very interesting example from Google Book Search, which I hadn't come across before. If you do a book search for Around the World in 80 Days, and scroll down to the bottom of the screen, you will see a map with markers showing all the places mentioned in the book. When you click on a marker, you get a list of the pages where this place is mentioned and in some cases can click through to that page.
This adds a powerful spatial dimension to a traditional text-based document. It is not much of a jump to think about incorporating this spatial dimension into the book search capability, and if you can do this on books, why not all documents indexed by Google? Michael said that he expected to see the "modality" of spatial browsing grow significantly in the next year, and he was originally going to show us a different non-public example in regard to this topic, but he couldn't as he had a problem connecting to the Google VPN. My interpretation of all this is that I think we will see some announcements from Google in the not too distant future that combine their traditional search with geospatial capabilities (of course people like MetaCarta have been doing similar things for a while, but as we have seen with Earth and Maps, if Google does it then things take on a whole new dimension).
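To make the idea of adding a spatial dimension to documents a little more concrete, here is a minimal sketch of the kind of thing involved - a naive gazetteer lookup over page text. This is purely illustrative and certainly not how Google Book Search actually does it; the place list and coordinates are made up for the example.

```python
# A toy illustration of "adding a spatial dimension" to a document:
# scan page text against a small gazetteer and record where each place
# is mentioned. (Purely illustrative -- a real pipeline would need
# entity disambiguation, ranking, and a far larger gazetteer.)

GAZETTEER = {
    # place name -> (latitude, longitude); a tiny hand-made sample
    "London": (51.5074, -0.1278),
    "Suez": (29.9668, 32.5498),
    "Bombay": (18.9582, 72.8321),
    "Hong Kong": (22.3193, 114.1694),
    "San Francisco": (37.7749, -122.4194),
}

def extract_places(pages):
    """pages: list of page-text strings. Returns {place: {"coords": ..., "pages": [...]}}."""
    mentions = {}
    for page_number, text in enumerate(pages, start=1):
        for place, coords in GAZETTEER.items():
            if place in text:
                entry = mentions.setdefault(place, {"coords": coords, "pages": []})
                entry["pages"].append(page_number)
    return mentions

if __name__ == "__main__":
    sample_pages = [
        "Phileas Fogg left London on the evening train.",
        "The steamer called at Suez before crossing to Bombay.",
        "From Hong Kong the journey continued towards San Francisco.",
    ]
    for place, info in extract_places(sample_pages).items():
        print(place, info["coords"], "pages:", info["pages"])
```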
Another item of interest that Michael mentioned is that Google is close to reaching an arrangement with the BC (British Columbia) government to publish a variety of their geospatial data via Google Earth and Maps. This was covered in an article in the Vancouver Sun, which has been referenced by various other blogs in the past couple of days (including AnyGeo, The Map Room, and All Points Blog). This could be a very significant development if other government agencies follow suit, which would make a lot of sense - it's a great way for government entities to serve their citizens, by making their data easily available through Google (or Microsoft, or whoever - this is not an exclusive arrangement with Google). There are a few other interesting things Michael mentioned which I'll save for another post.
One other theme which came up quite a lot during the conference was "traditional" geospatial data creation and update versus "user generated" data ("the crowd", "Web 2.0", etc). Several times people commented that we had attendees from two different worlds at the conference, the traditional GIS world and the "neogeography" world, and although events like this are helping to bring the two somewhat closer together, people from the two worlds tend to have differing views on topics like data update. Google's move with BC is one interesting step in bringing these together. Ron Lake also gave a good presentation with some interesting ideas on data update processes which could accommodate elements of both worlds. Important concepts here included the notions of features and observations, and of custodians, observers and subscribers. I may return to this topic in a future post.
As anticipated given the speakers, there were some good keynotes. Vint Cerf, vice president and chief Internet evangelist for Google, and widely known as a "Father of the Internet", kicked things off with an interesting presentation which talked about key architectural principles which he felt had contributed to the success of the Internet, and some thoughts on how some of these might apply to the "GeoWeb" - though as he said, he hadn't had a chance to spend too much time looking specifically at the geospatial area. I will do a separate post on that.
He was followed by Jack Dangermond, who talked on his current theme of "The Geographic Approach" - his presentation was basically a subset of the one he did at the recent ESRI user conference. He was passionate and articulate as always about all that geospatial technology can do for the world. A difference in emphasis between him and speakers from "the other world" is in the focus on the role of "GIS" and "GIS professionals". I agree that there will continue to be a lot of specialized tasks that will need to be done by "GIS professionals" - but what many of the "old guard" still don't realize, or don't want to accept, is that the great majority of useful work that is done with geospatial data will be done by people who are not geospatial professionals and do not have access to "traditional GIS" software. To extend an analogy I've used before, most useful work with numerical data is not done by mathematicians. This is not scary or bad or a knock on mathematicians (I happen to be one by the way), but it does mean that as a society we can leverage the power of numerical information by orders of magnitude more than we could if only a small elite clique of "certified mathematical professionals" were allowed to work with numbers. Substitute "geographical" or "geospatial" as appropriate in this statement to translate this to the current situation in our industry.
For example, one slide in Jack's presentation has the title "GIS servers manage geographic data". This is a true statement, but much more important is the fact that we are now in a world where ANY server can manage geographic data - formats like GeoRSS and KML enable this, together with the fact that all the major database management systems are providing support for spatial data. There is a widely stated "fact" that many people in the geospatial industry have quoted over the years, that something like 85% of data has a geospatial component (I have never seen a source for this claim though - has anyone else?). Whatever the actual number, it certainly seems reasonable to claim that "most" data has a spatial component. So does that mean that 85% of data needs to be stored in special "GIS servers"? Of course not - that is why it is so significant that we really are crossing the threshold to where geospatial data is just another data type, which can be handled by a wide range of information systems, so we can just add that spatial component into existing data where it currently is. Jack also continues to label Google and Microsoft as "consumer" systems when, as I've said before, they are clearly much more than that already, and their role in non-consumer applications will continue to increase rapidly.
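To illustrate the point that any server can publish geographic data without a "GIS server", here is a minimal sketch (using only the Python standard library) that turns a few ordinary records into a GeoRSS feed with the simple georss:point encoding - any blog or web application could serve something like this.

```python
# Minimal sketch: any web server can expose ordinary records as a
# GeoRSS feed -- no "GIS server" required. Uses the simple
# <georss:point>lat lon</georss:point> encoding.
import xml.etree.ElementTree as ET

GEORSS_NS = "http://www.georss.org/georss"
ET.register_namespace("georss", GEORSS_NS)

def records_to_georss(records, feed_title="Example feed"):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = feed_title
    for rec in records:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = rec["title"]
        point = ET.SubElement(item, f"{{{GEORSS_NS}}}point")
        point.text = f'{rec["lat"]} {rec["lon"]}'
    return ET.tostring(rss, encoding="unicode")

if __name__ == "__main__":
    print(records_to_georss([
        {"title": "Office", "lat": 39.7508, "lon": -104.9997},
        {"title": "Coffee shop", "lat": 39.7527, "lon": -105.0003},
    ]))
```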
But anyway, as Ron said in his introduction, it would be hard to get two better qualified people than Jack and Vint to talk about some of the key concepts of "geo" and "web", so it was an excellent opening session. I think that this post is more than long enough by this point, so I'll wrap it up here and save further ramblings for part 2!
Labels: conference, ESRI, geospatial, google, maps, Microsoft
Thursday, July 26, 2007
Microsoft Virtual Earth to support KML
I'm at the GeoWeb conference in Vancouver which has been good so far - I will be posting more when I get time, but it's been hectic so far. However, I just thought I would do a quick post to say that in the Microsoft "vendor spotlight" presentation which just finished, the speaker said that Virtual Earth will support the ability to display KML in a September / October release this year. Maybe I missed something, but I hadn't seen this news elsewhere. I just did a quick Google and this post at Digital Earth Blog says that at Where 2.0 in June they wouldn't comment on support for KML, and I didn't find any other confirmation online of the statement that was made here today, which makes me wonder whether this comment was "officially blessed". Has anyone else heard this from other sources?
This would make a huge amount of sense of course, given the amount of data which is being made available in KML, but nevertheless Microsoft does have something of a track record of trying to impose their own standards :), and they have been reluctant to commit to KML up to this point, so I think this is a very welcome announcement (assuming it's correct), which can only cement KML's position as a de facto standard (I don't think Microsoft could have stopped KML's momentum, but if they had released a competing format it would have been an unfortunate distraction).
Tuesday, July 24, 2007
Sad news about Larry Engelken
On Friday I heard the very sad news that Larry Engelken had been killed in a jet-skiing accident while on vacation in Montana. He was 58. Larry was a founder of Convergent Group (originally UGC), a past president of GITA, and a great person. The Denver Post has an obituary here. My deepest condolences go out to Larry's family.
Friday, July 20, 2007
GeoWeb panel update
As I mentioned previously, I am going to be at the GeoWeb conference in Vancouver next week, and I will now be participating in the closing panel entitled "Future Shock: The GeoWeb Forecast for 2012", together with Ron Lake of Galdos, Michael Jones of Google, Carl Reed of OGC, and Vincent Tao of Microsoft. The abstract is as follows:
This closing panel session features senior visionaries who provide a synthetic take of GeoWeb 2007 and use this as a basis for forecasting the growth, evolution, and direction of the GeoWeb. Specifically, discussants will address:
What will it look like in 2012?
What device(s) will predominate?
What will be the greatest innovation?
What will be the largest impediment?
What market segments will it dominate?
What market segments will it fail to impact?
Each discussant will provide a five- to seven-minute statement touching on each of the questions above. A 30-minute question-and-answer session will follow. Answers will be limited to two minutes; each discussant has the opportunity to respond to each question.
Since I leave tomorrow morning to drive up there, I will have something to think about on the road! It should be a good panel I think.
Tuesday, July 17, 2007
Back from Mexico - some experiences with geotagging
Well hello again, I am back from Mexico and had a great time. I managed to avoid blogging; the following was a more typical activity :)
This was my first vacation where I managed to reliably geotag all my photos - previously I had made a few half-hearted attempts but had failed to charge enough batteries to keep my GPS going consistently, or hadn't taken it everywhere. I used HoudahGeo to geotag the photos on my Mac, and this seemed to work fine, with a couple of minor complaints. One is that when it writes lat-long data to the EXIF metadata in the original images, for some reason iPhoto doesn't realize this, and doesn't show the lat-long when you ask for info on the photo (unless you export and re-import all the photos, in which case it does). The other is that most operations are fast, but for some reason writing the EXIF data to the photos, the final step in the workflow I use, is very slow - it would often take several minutes to do this for several hundred 10 megapixel images (a typical day's shooting for me), when the previous operations had taken a few seconds. But it did the job and all my photos have the lat-long where they were taken safely tucked away in their EXIF metadata.
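For anyone curious what the geotagging step actually involves, here is a rough sketch of the core idea that tools like HoudahGeo and RoboGeo implement: match each photo's capture time against the GPS tracklog and interpolate a position. This is a simplified illustration, not their actual code, and it ignores details like camera clock offsets and time zones.

```python
# A sketch of the core geotagging step: given a GPS tracklog of
# (timestamp, lat, lon) points and a photo's capture time, interpolate
# where the photo was taken. Timestamps are assumed to already be in
# the same time zone; real tools also let you apply a clock offset.
from bisect import bisect_left

def position_at(tracklog, photo_time):
    """tracklog: sorted list of (unix_time, lat, lon); photo_time: unix time."""
    times = [t for t, _, _ in tracklog]
    i = bisect_left(times, photo_time)
    if i == 0:
        return tracklog[0][1:]
    if i == len(tracklog):
        return tracklog[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = tracklog[i - 1], tracklog[i]
    f = (photo_time - t0) / (t1 - t0)  # linear interpolation between fixes
    return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)

if __name__ == "__main__":
    track = [(1000, 20.2110, -87.4290), (1060, 20.2125, -87.4310), (1120, 20.2140, -87.4330)]
    print(position_at(track, 1030))  # roughly halfway between the first two fixes
```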
Flickr provides its own map viewing capability, which uses Yahoo! maps (not surprisingly since Yahoo! owns flickr). If you click here, you can see the live version of the map which is also shown in the following screen shot, showing the locations of my "top 40" pictures from the trip.
If you zoom in to the northeast location, in Cancun, there is some high quality imagery, which gives a good basis for seeing where the photos were taken. However, the imagery is very poor in the two main locations where we stayed, Tulum (to the south) and Chichen Itza (to the west). I looked in Google Earth and Google Maps, and they both had much better imagery in Tulum, though unfortunately not in Chichen Itza. Microsoft didn't have good imagery for either location. So I thought I should poke around a little more to explore the latest options for displaying geotagged photos from flickr in Google Earth or Google Maps.
Flickr actually has some nice native support now for generating feeds of geotagged photos in either KML or GeoRSS. So this link dynamically generates a KML file of my photos with the tags mexico and best (though neither Safari nor Firefox seems to recognize this as a KML file automatically - I have to tell them to use Google Earth and then it works fine). Unfortunately though, flickr feeds seem to return a maximum of 20 photos, and I haven't found any way around this limit. I can work around it by creating separate feeds for the best photos from Tulum, Chichen Itza and Cancun, but obviously that's not a good solution in all cases. These KML files work well in Google Earth, and one nice feature is that they include thumbnail versions of each photo which are directly displayed on the map (and when you click on those, you get a larger version displayed and the option to click again and display a full size photo back at flickr). However, the approach of using thumbnails does obscure the map more than if you use pins or other symbols to show the location of the photo - either approach may be preferred depending on the situation. These files don't display especially well in Google Maps - you get the icon, but the info window doesn't include the larger image or the links back to flickr - here's one example.
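Here is a rough sketch of the workaround I described - fetching several tag-specific KML feeds and merging their placemarks into one file to get around the 20-photo limit. Note that the feed URL pattern and the flickr user id shown are assumptions for illustration only; check flickr's feed documentation for the exact parameters.

```python
# Sketch of a workaround for the ~20-item limit per feed: fetch several
# tag-specific Flickr KML feeds and merge their placemarks into a
# single KML document. The feed URL pattern and FLICKR_ID below are
# assumptions for illustration, not verified against the flickr docs.
import urllib.request
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)
FLICKR_ID = "12345678@N00"   # hypothetical flickr user id

def fetch_placemarks(url):
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    # match Placemark elements regardless of which KML namespace the feed uses
    return [el for el in root.iter() if el.tag.endswith("Placemark")]

def merge_feeds(tag_sets):
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    for tags in tag_sets:
        url = (f"https://api.flickr.com/services/feeds/geo/"
               f"?id={FLICKR_ID}&tags={tags}&format=kml")
        for pm in fetch_placemarks(url):
            doc.append(pm)
    return ET.tostring(kml, encoding="unicode")

if __name__ == "__main__":
    merged = merge_feeds(["mexico,tulum", "mexico,chichenitza", "mexico,cancun"])
    with open("mexico_best.kml", "w") as f:
        f.write(merged)
```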
I looked around a little more and found this post at Ogle Earth which pointed me to this Yahoo Pipe which can be used to create a KML file from flickr. After a bit more messing around (you have to find things like your internal flickr id, which is non-obvious), I managed to produce this KML file which contains all of my "top 40" photos in a single file (you may need to right click the link and save the KML file, then open it in Google Earth). Of course I also needed to upload this file to somewhere accessible on the web, so all in all this involved quite a few steps. This KML file uses pins displayed on the map (with photo titles), rather than thumbnails, and again the info window displays a small version of the photo with an option to click a link back to flickr for larger versions. This KML also includes time stamps, which is interesting - if you are using Google Earth 4, you will see a "timer" bar at the top of the window when you select this layer. To see all the images, make sure you have the whole time window selected (at first this was not the case for me, so it seemed that some of the photos were missing). But if you select a smaller window, you can do an animation to show where the pictures were taken over time, which is also pretty cool.
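If you're curious what the time-stamped KML that drives the Google Earth time slider looks like, here is a small sketch that generates placemarks with TimeStamp elements by hand - just an illustration of the structure, not what the Yahoo Pipe actually produces.

```python
# Sketch of time-stamped KML: each Placemark gets a <TimeStamp><when>
# element with the photo's capture time plus a point geometry.
# Hand-rolled here for clarity rather than using a KML library.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

def photos_to_kml(photos):
    """photos: list of dicts with 'title', 'lat', 'lon', 'when' (ISO 8601)."""
    q = lambda name: f"{{{KML_NS}}}{name}"
    kml = ET.Element(q("kml"))
    doc = ET.SubElement(kml, q("Document"))
    for p in photos:
        pm = ET.SubElement(doc, q("Placemark"))
        ET.SubElement(pm, q("name")).text = p["title"]
        ts = ET.SubElement(pm, q("TimeStamp"))
        ET.SubElement(ts, q("when")).text = p["when"]
        pt = ET.SubElement(pm, q("Point"))
        # KML coordinates are in longitude,latitude order
        ET.SubElement(pt, q("coordinates")).text = f'{p["lon"]},{p["lat"]}'
    return ET.tostring(kml, encoding="unicode")

if __name__ == "__main__":
    print(photos_to_kml([
        {"title": "Tulum ruins", "lat": 20.2147, "lon": -87.4290, "when": "2007-07-10T10:30:00Z"},
        {"title": "El Castillo", "lat": 20.6830, "lon": -88.5686, "when": "2007-07-12T09:05:00Z"},
    ]))
```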
So in general conclusion, the tools to easily geotag your photos are pretty solid now - I have used both HoudahGeo on the Mac, and previously RoboGeo on the PC, and both worked well. The software available to display geotagged photos is getting better, but there's still room for improvement - but I'm sure things will continue to move along quickly in this area. I would like to see flickr add a "KML" button to their map displays, which would be much simpler than the current process!
Labels: flickr, geospatial, geotagging, google, gps, yahoo
Thursday, July 5, 2007
Off on vacation for a week
I'm off to Mexico on vacation for a week from tomorrow morning, so don't expect too much blogging! We're staying in a cabana by the ocean with no electricity for four days, though they say they have a wireless network connection - not sure if that is good or bad :). If they really do then I may have to do one post from the outdoor bathtub before the iPhone batteries expire, just because I can ... but in general things will be quiet for a week or so!
However, I did just buy myself a new Garmin 60CSx handheld GPS (since I still can't get any software I'm happy with to save tracklogs on the BlackBerry), and plan to test out the HoudahGeo application I just bought for the Mac to geocode all my photos (we will be going to the Mayan pyramids at Chichen Itza too, which I'm really looking forward to). I did a quick test of the HoudahGeo application in Denver this afternoon and it seemed to work fine (I had previously used RoboGeo on the PC, but that doesn't work on the Mac).
Cheers for now :).
iPhone after almost a week's experience
Several people asked me to follow up my previous posts about the iPhone again after I'd used it for a little longer, so here's just a quick update before I leave for a week's vacation tomorrow. My overall verdict on the iPhone is definitely a big thumbs up - while of course you can list plenty of features it's missing and lots of areas for improvement, it has so many great features and usability innovations that it's a real pleasure to use.
A few high points:
- The general user experience is great
- Web browsing - especially once you get used to the user interface and learn a few tricks, it's amazing how usable this is on a device with such a small screen. At first I was mainly zooming using the "pinch" technique, but on watching the "iPhone tour" video again I realized that double-clicking zooms you in to the specific area of the page where you double click (so in a page with multiple columns, it will set the display to the width of the column you click on). Though one drawback I hadn't thought about until Dave Stewart from the Microsoft Virtual Earth team mentioned it is that Safari intercepts all double-click and mouse-move events in order to bring you this functionality, which is an issue for many browser-based applications (especially mapping ones).
- Google maps - my concerns about the local search implementation notwithstanding, it's still very useful and fun in a lot of situations. I was just in a bar talking to a friend about my upcoming drive to Vancouver, and he said that he thought that the drive from Vancouver to Whistler was one of the most spectacular that he had done - but we got into a discussion about whether you could just drive on from Whistler to Jasper or would have to backtrack, and in a minute or so Google Maps on the iPhone had resolved the question for us (you can just continue on, so this is added into my plans). And at lunch today I was in a coffee shop and decided to search for local restaurants and look at their web sites, to see how easily I could check out their menus and decide where I'd like to go - this did work pretty well, as each search result included a link to the restaurant web site (and no bogus results were returned in this case)
- The Photo, iPod and Youtube applications are all great
- WiFi support - so far I have been using this most of the time, and get great performance and don't use up minutes on a plan, etc. This will be especially useful when traveling abroad, as the mobile phone companies all really burn you on charging for data over the cell phone networks, so being able to use WiFi will be great.
- It's a chick magnet at parties (my girlfriend Paula's description, not mine!)
And a few areas for improvement:
- Add a GPS, of course
- Improve local search, as I discussed previously
- While the general user experience is great, there are a lot of areas where they could leverage some of the new techniques but didn't. Automatic rotation of applications into landscape mode is one area I would like to see leveraged a lot more. This is used to good effect in the web browser, which I almost always use in landscape mode as things are much more legible. And it is great in the photo and iPod applications (see the video). But when I'm in mail and I get an HTML-formatted email, this is just like looking at a web page and would be far superior in landscape mode, yet this is not supported. As I mentioned previously, maps does not support this when it would be such a natural thing to do. And also, the keyboard is much wider with larger keys when in landscape mode, so it would be great to leverage this in situations where the focus is on data entry - for example in the notes application, or when composing an email, both of which only work in portrait mode. And there are some odd inconsistencies - both photos and maps support the notion of panning and zooming, and both support the same pinching technique for zooming in and out, and dragging with your finger to pan. But in maps, you double tap to zoom in and tap with two fingers to zoom out, whereas in photos, double-click will either zoom in or zoom out depending on the situation (which is the same as in the browser). Maps also needs to support the typing auto-correction - it seems to be the only application which doesn't. These sorts of things should really be ironed out.
Labels: Apple, general technology, geospatial, google, iPhone, maps, review
Tuesday, July 3, 2007
Going to GeoWeb
I just thought I would mention that I am planning to be at the upcoming GeoWeb conference in Vancouver. They have an impressive set of big names lined up for keynotes, with Vint Cerf, Jack Dangermond, Michael Jones, Geoff Zeiss and Vincent Tao, as well as a pretty packed set of technical sessions, three concurrent tracks for two and a half days (and that's excluding pre-conference workshops which I probably won't make it to). And a trip to Vancouver is always something I'm happy to find an excuse for! I look forward to catching up with lots of friends there, and might even end up on a panel if they'll have me (I have volunteered but they are still finalizing plans for those).
I'm actually planning to do a family road trip up there from Denver and take in a bit of scenery along the way. I've been playing a bit with Rand McNally's trip planning site, though haven't used it enough to see if there is any advantage over the new and cool additions to Google Maps routing which the whole world has blogged about - will report back on that if I find anything of particular note. I think we will take in Jasper and Banff en route, if anyone has any other specific suggestions for good places to go through, please let me know!
Local search on Microsoft Local Live
Steve Lombardi from Microsoft emailed me about the local search issues I raised in regard to Google Maps on the iPhone (and other mobile platforms) in my earlier post. In that post I compared the results I got with Google to those I got from MapQuest - I would have tried Microsoft Local Live also, except I had been under the mistaken impression that it didn't work on my Mac ... turns out it doesn't work with Safari (you get a very scaled-back site which is pretty useless), but it works fine with Firefox (for 2D stuff - Virtual Earth in 3D, and Photosynth, are actually the things I miss most in Mac land so far!). So I was pleased to discover that I can at least use the 2D stuff still.
Steve had tried the same tests I did on Local Live, and I tried them for myself and it fared well. With luck, this live link will give you more or less the same thing as the screen shot above. I ran several tests in my previous review, all centered on 1792 Wynkoop St, Denver, CO. For full details see the earlier post, but I'll summarize the earlier results here too. These were the test searches and the results I got with Local Live:
- King Soopers (a local supermarket chain): Google returned 4 non-existent results in the top 10; Microsoft and MapQuest both had no errors but some duplication, in the sense of having multiple addresses close by for the same store. M&M both correctly located the closest store but Google didn't (unless you count manually discarding results with incomplete addresses)
- Tattered Cover (a well known Denver bookstore with 3 locations): Google returned 4 non-existent results in a list of 8, while M&M both returned only the 3 correct store locations.
- Office Depot and Home Depot seemed to work fine with everyone, with no obvious errors.
- A search for grocery yielded 4 out of 10 incorrect results on Google and 10 reasonable-looking results on MapQuest (though I didn't verify them all). With Microsoft the top result was incorrect, an incomplete address which just said "Denver, CO", similar to those which caused a lot of the errors with Google. And the second address on the list was interesting - it was Cowboy Lounge, which is a nightclub which used to be Market 41, which appeared on the Google list also, incorrectly categorized as a grocery store (you can understand how the mistake occurred given the original name). Interesting that Microsoft picked up the name change, which Google didn't, but still has the incorrect categorization. However, one good thing with Microsoft is that I was given the option to provide feedback that the result was incorrect, so I did that and asked that it be removed from the grocery category (and I provided feedback on the previous incorrect result too). The rest of the list appeared to be legitimate establishments, though the Rocky Mountain Chocolate Factory and Cookies by Design are a broad interpretation of "grocery" :) !! So Microsoft did better than Google but not as well as MapQuest on this one. I will be interested to see if and when my feedback on the incorrect categorization gets through the system - I will check it out every so often to find out!
A couple of other quick observations on the Microsoft Local Live implementation. One is that it lets you place a pin at your original location of interest, and also display pins in different colors showing multiple different query results at the same time - this feature is nicely done and not available in Google either online or mobile. The results include some ads, but these are clearly separate from the result listing. The results come back sorted in order of "relevance", which I think is probably in most cases a euphemism for "more or less by distance, but with scope to move sponsored results further up the list" - which is what Google appears to be doing with its mobile maps as I discussed previously. But with one click I can change this to sort by distance, and on both these lists it shows the distance of each result from my starting point. As a user I am quite happy with this approach - it gives the service provider (Microsoft) the chance to monetize their service with ads and preferred placement, which ultimately is necessary otherwise service providers won't be able to continue to provide their service, but it doesn't hamper my ability to easily get the specific information I want (as Google Maps Mobile does).
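For what it's worth, the "sort by distance" option is conceptually very simple - here is a small sketch of how results could be ordered by great-circle distance from the starting point using the haversine formula. The coordinates are rough made-up values for illustration.

```python
# Sketch of "sort by distance": given a starting point and a list of
# results with coordinates, compute great-circle distances with the
# haversine formula and sort on them.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))  # Earth radius ~3959 miles

def sort_by_distance(origin, results):
    """origin: (lat, lon); results: list of dicts with 'name', 'lat', 'lon'."""
    return sorted(
        results,
        key=lambda r: haversine_miles(origin[0], origin[1], r["lat"], r["lon"]),
    )

if __name__ == "__main__":
    origin = (39.7508, -104.9997)  # roughly downtown Denver, for illustration
    stores = [
        {"name": "Store A", "lat": 39.7610, "lon": -105.0100},
        {"name": "Store B", "lat": 39.7520, "lon": -105.0005},
    ]
    for s in sort_by_distance(origin, stores):
        print(s["name"])
```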
Steve tells me that the same local search capabilities are available in Microsoft Live Search for mobile ... but unfortunately there's not much chance of me testing that any time soon, as I am already suffering enough abuse over having both an iPhone and a BlackBerry 8800, so I don't think I can justify adding a Windows-based smartphone to the collection :) !!
Follow up discussion on GE next generation system (about customization and open source)
My two-part post last week on General Electric's next generation system based on Oracle (part 1 and part 2) generated some interesting follow-up comments and questions (mainly after part 2). I got somewhat sidetracked with the whole iPhone extravaganza over the past few days, but wanted to circle back and follow up on a couple of threads.
The first thread related to customization versus configuration, and the fact that there are potential attractions in being able to provide an "off the shelf" system, which is just configured and not customized, for a specific application area - in this case design and asset management for mid-size electric utilities in North America. If you can achieve this, then potentially you can significantly reduce implementation costs and (in particular) ongoing support and upgrade costs. However, the challenge lies in whether you can meet customer needs well enough in this type of complex application area with just configuration. People have tried to do this multiple times in the utility industry, but so far nobody has been successful, and everyone has fallen back to an approach which requires customization. Both Roberto Falco and Jonathan Hartley were skeptical that a pure configuration approach could be successful, and Jonathan makes a good argument in his response that if you implement a very complex configuration system, then you may end up just recreating a customization environment which is less flexible and less standard than if you had just allowed appropriate customization in the first place. I don't disagree with either of them in general - though I would make the comment to Jon that I don't think we're talking about "GIS" in general, it's a specific application of that to electric utilities in a relatively narrow market segment, so that gives you a little more chance of defining the problem specifically enough that a configurable system could make sense. One other observation I would make is that cost is an element of this equation. If an "off the shelf" system meets 90% of an organization's stated requirements and costs 80% of the amount of a fully customized system which meets 100% of their requirements, that is much less compelling than if the off the shelf system cost (say) 20% of the fully customized system, in which case they might be more willing to make a few workarounds or changes to business processes to accommodate such a system. This may be stating the obvious, but I wonder whether organizations trying to develop this "off the shelf" approach go into it thinking that they will have to offer a substantially lower price (or provide other compelling benefits) to persuade customers to adopt it. Finally on this thread, I think it is also worth observing that the "GIS" (if you call it that) in a utility is not a standalone system - it typically requires some sort of integration or interfaces with many other systems, such as customer information, work management, outage management, workforce management, etc. This makes it an even bigger challenge to develop something which does not require customization.
Paul Ramsey raised a different question, that of open source, which I think is interesting in this context. But first I will just answer a couple of other questions that Paul raised. He asked if this was essentially a replacement market, and the answer is yes - especially among the medium and large utilities, and in the more established markets (North America, Western Europe, Japan, Australia / New Zealand, etc), pretty well everyone has a system implemented, and GE/Smallworld, Intergraph and ESRI are the dominant vendors. Because of the customized nature of these systems, and the amount of integration with other business systems, switching to a new vendor is typically a multi-million dollar project, even if the new software is provided for free. So this is obviously a big reason why this is a "sticky" market and customers don't change systems very often. The other clarification is that Paul asks about defining the market of customers "who think that 'classic' Smallworld doesn't cut it anymore", and overall I wouldn't particularly categorize things that way. I think that satisfaction levels are probably as high among the Smallworld customer base as with any of the other vendors; the system is still functionally very competitive; the main concern is probably obtaining and retaining people with the appropriate specialized skills, especially for smaller organizations. But there has been very little movement of utilities from Smallworld to other companies up to this point. Ironically, GE bringing out a next generation system (which they had to do for new business) is something which may cause existing customers to re-evaluate their strategy. Again, though, this is just a standard challenge of moving to a new generation architecture for any software company - you're damned if you do and damned if you don't.
Anyway, back to open source - Paul raises the question of whether it may be an option for GE to do something with open source. I have heard a few people raise this before in regard to Smallworld, and it is an interesting question. There are three major elements to the Smallworld core software. The first is the Magik programming language, which was developed by Smallworld back in the late eighties and is very similar to Python and Ruby. Smallworld was way ahead of its time in recognizing the productivity benefits of this type of language, and this was a key reason for its success. The core Magik virtual machine has been stable for a long time; its original developers left Smallworld a number of years ago, and I suspect that nothing much has changed at this level of the system recently. The second key element is VMDS, the Version Managed Datastore. There is some very complex and low-level code in both of these components which I suspect would make it hard to open source them effectively, given the amount of time it would take people to get into the details of these components (currently known to only a very few people either inside GE, or who have left GE), and the relatively small size of the market for the products would probably be a disincentive for people to make the effort to learn this. However, both these components are very stable and probably don't really need much maintenance or change. The third element is the application code itself: the vast majority of the system is written in Magik, and the bulk of the Magik source code for Smallworld GIS has always been shipped with the product, which has allowed for a hugely flexible customization environment (much more flexible than those of Smallworld's main competitors). There is a pretty large base of technical resources in customers and Smallworld partners who know how to enhance the system using this environment. If GE could manage enhancements at this level of the system in an open source fashion, to leverage the strength of the existing Smallworld technical community, who are on the whole very loyal to Smallworld and have a strong interest in seeing it continue to be successful, this could be a very powerful thing. As I noted in my previous post, one of the challenges for GE is that to develop new applications on two platforms concurrently (Smallworld and the new Oracle/Java based platform) will significantly decrease the amount of new functionality they can develop with a finite set of resources.
Of course there are a lot of challenges for an existing commercial software company in making such a switch. A critical one for GE would be that they could maintain their existing maintenance revenue from the Smallworld user base (at least to a large degree), otherwise it would be a non-starter from a business perspective I think. But it is quite conceivable that this approach could harness the talents of a much larger group of developers than Smallworld currently has working on the product, and produce more enhancements and fixes to the products than customers currently see, so you can see scenarios in which customers would be happy to continue to pay maintenance for support. As Paul points out, Autodesk has made this work and most of their customers still pay maintenance. There are differences in the Autodesk scenario as they open sourced a new product rather than an old one, and I think that a big driver for them was that they saw that it was going to be increasingly hard to make money for basic web mapping products, as that area becomes increasingly commoditized, due to the efforts of both the Googles and Microsofts of the world as well as the open source community. I think that this factor probably helped them decide to make the big leap to an open source approach, and it seems to have been a successful one for them.
Could GE also consider open sourcing portions of the new product, which I think was really Paul's question? That could also be an interesting possibility, to help gain traction in the market. If they open sourced some of the more generic base level components of the product, they could still build their specific business applications on top of these components and charge money for those, but leverage the open source community to provide additional functionality.
So the motivations are somewhat different for GE than for Autodesk, and there are multiple scenarios where open source could play a role, but I could envision an open source version of Smallworld significantly extending the product's life, by harnessing the very good technical resources which exist in the broader Smallworld community. Internal GE resources could then be freed up to work on the new product line (with just a small number focused on managing the open source developments of the established Smallworld product line). There are certainly a number of challenges and a lot of complexity in making something like this fly from a business perspective, and I'm not sure if GE would have the imagination or the boldness to take a risk on it, but it's an interesting thing to speculate about!
The first thread related to customization versus configuration, and the fact that there are potential attractions in being able to provide an "off the shelf" system, which is just configured and not customized, for a specific application area - in this case design and asset management for mid-size electric utilities in North America. If you can achieve this, then potentially you can significantly reduce implementation costs and (in particular) ongoing support and upgrade costs. However, the challenge lies in whether you can meet customer needs well enough in this type of complex application area with just configuration. People have tried to do this multiple times in the utility industry, but so far nobody has been successful, and everyone has fallen back to an approach which requires customization. Both Roberto Falco and Jonathan Hartley were skeptical that a pure configuration approach could be successful, and Jonathan makes a good argument in his response that if you implement a very complex configuration system, then you may end up just recreating a customization environment which is less flexible and less standard than if you had just allowed appropriate customization in the first place. I don't disagree with either of them in general - though I would make the comment to Jon that I don't think we're talking about "GIS" in general, it's a specific application of that to electric utilities in a relatively narrow market segment, so that gives you a little more chance of defining the problem specifically enough that a configurable system could make sense. One other observation I would make is that I think that cost is an element of this equation. If an "off the shelf" system meets 90% of an organization's stated requirements and costs 80% of the amount of a fully customized system which meets 100% of their requirements, that is much less compelling than if the off the shelf system cost (say) 20% of the fully customized system, in which case they might be more willing to make a few workarounds or changes to business processes to accommodate such a system. This may be stating the obvious, but I wonder whether organizations trying to develop this "off the shelf " approach go into it thinking that they will have to offer a substantially lower price (or provide other compelling benefits) to persuade customers to adopt it. Finally on this thread, I think it is also worth observing that the "GIS" (if you call it that) in a utility is not a standalone system - it typically requires some sort of integration or interfaces with many other systems, such as customer information, work management, outage management, workforce management, etc etc. This makes it an even bigger challenge to develop something which does not require customization.
Paul Ramsey raised a different question, that of open source, which I think is interesting in this context. But first I will just answer a couple of other questions that Paul raised. He asked if this was essentially a replacement market, and the answer is yes - especially among the medium and large utilities, and in the more established markets (North America, Western Europe, Japan, Australia / New Zealand, etc), pretty well everyone has a system implemented, and GE/Smallworld, Intergraph and ESRI are the dominant vendors. Because of the customized nature of these systems, and the amount of integration with other business systems, switching to a new vendor is typically a multi-million dollar project, even if the new software is provided for free. So this is obviously a big reason why this is a "sticky" market and customers don't change systems very often. The other clarification is that Paul asks about defining the market of customers "who think that 'classic' Smallworld doesn't cut it anymore", and I wouldn't particularly categorize things that way. I think that satisfaction levels are probably as high among the Smallworld customer base as with any of the other vendors; the system is still functionally very competitive; the main concern is probably obtaining and retaining people with the appropriate specialized skills, especially for smaller organizations. But there has been very little movement of utilities from Smallworld to other companies up to this point. Ironically, GE bringing out a next generation system (which they had to do for new business) is something which may cause existing customers to re-evaluate their strategy. Again, though, this is just the standard challenge of moving to a new generation architecture for any software company - you're damned if you do and damned if you don't.
Anyway, back to open source - Paul raises the question of whether GE doing something with open source may be an option. I have heard a few people raise this before in regard to Smallworld, and it is an interesting question. There are three major elements to the Smallworld core software. The first is the Magik programming language, which was developed by Smallworld back in the late eighties, and it is very similar to Python and Ruby. Smallworld was way ahead of its time in recognizing the productivity benefits of this type of language, and this was a key reason for its success. The core Magik virtual machine has been stable for a long time, the original developers of it left Smallworld a number of years ago, and I suspect that nothing much has changed at this level of the system recently. The second key element is VMDS, the Version Managed Datastore. There is some very complex and low level code in both of these components which I suspect would make it hard to open source them effectively: it would take people a long time to get into the details of these components (currently known to only a very few people, either inside GE or among those who have left), and the relatively small size of the market for the products would probably be a disincentive to make that effort. However, both these components are very stable and probably don't really need much maintenance or change. The third element is the application software itself: the vast majority of the system is written in Magik, and the bulk of the Magik source code for Smallworld GIS has always been shipped with the product, which has allowed for a hugely flexible customization environment (much more flexible than Smallworld's main competitors). There is a pretty large base of technical resources in customers and Smallworld partners who know how to enhance the system using this environment. If GE could manage enhancements at this level of the system in an open source fashion, to leverage the strength of the existing Smallworld technical community, who are on the whole very loyal to Smallworld and have a strong interest in seeing it continue to be successful, this could be a very powerful thing. As I noted in my previous post, one of the challenges for GE is that developing new applications on two platforms concurrently (Smallworld and the new Oracle/Java based platform) will significantly decrease the amount of new functionality they can develop with a finite set of resources.
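For readers who haven't worked with VMDS, here is a toy sketch of what "version managed" means in practice: long transactions are carried out in named alternatives, which are later posted back up to their parent version. This is written in Python rather than Magik, the class and method names are invented, and it bears no relation to the real Smallworld API - it is only meant to illustrate the concept:

```python
# Conceptual illustration of a version-managed datastore. Invented API;
# not how Smallworld VMDS is actually implemented or exposed.

import copy

class VersionedDatastore:
    def __init__(self):
        # Each alternative holds its own view of the data (here, a dict).
        self.alternatives = {"top": {}}
        self.parent = {}  # child alternative name -> parent alternative name

    def create_alternative(self, name, parent="top"):
        """Branch a child alternative from its parent (a 'long transaction')."""
        self.alternatives[name] = copy.deepcopy(self.alternatives[parent])
        self.parent[name] = parent

    def update(self, alternative, key, value):
        """Edit a record within one alternative, invisible to the others."""
        self.alternatives[alternative][key] = value

    def post(self, alternative):
        """Post the alternative's changes up to its parent version."""
        parent = self.parent[alternative]
        self.alternatives[parent].update(self.alternatives[alternative])

ds = VersionedDatastore()
ds.update("top", "pole_123", {"status": "in service"})
ds.create_alternative("design_job_42")
ds.update("design_job_42", "pole_123", {"status": "to be replaced"})
print(ds.alternatives["top"]["pole_123"])   # unchanged until the job is posted
ds.post("design_job_42")
print(ds.alternatives["top"]["pole_123"])   # now reflects the posted design
```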
Of course there are a lot of challenges for an existing commercial software company in making such a switch. A critical one for GE would be whether they could maintain their existing maintenance revenue from the Smallworld user base (at least to a large degree); otherwise it would be a non-starter from a business perspective, I think. But it is quite conceivable that this approach could harness the talents of a much larger group of developers than Smallworld currently has working on the product, and produce more enhancements and fixes to the products than customers currently see, so you can see scenarios in which customers would be happy to continue to pay maintenance for support. As Paul points out, Autodesk has made this work and most of their customers still pay maintenance. There are differences in the Autodesk scenario, as they open sourced a new product rather than an old one, and I think that a big driver for them was that they saw it was going to be increasingly hard to make money from basic web mapping products, as that area becomes commoditized due to the efforts of both the Googles and Microsofts of the world as well as the open source community. I think that this factor probably helped them decide to make the big leap to an open source approach, and it seems to have been a successful one for them.
Could GE also consider open sourcing portions of the new product, which I think was really Paul's question? That could also be an interesting possibility, to help gain traction in the market. If they open sourced some of the more generic base level components of the product, they could still build their specific business applications on top of these components and charge money for those, but leverage the open source community to provide additional functionality.
So the motivations are somewhat different for GE than for Autodesk, and there are multiple scenarios where open source could play a role, but I could envision an open source version of Smallworld significantly extending the product's life, by harnessing the very good technical resources which exist in the broader Smallworld community. Internal GE resources could then be freed up to work on the new product line (with just a small number focused on managing the open source developments of the established Smallworld product line). There are certainly a number of challenges and a lot of complexity in making something like this fly from a business perspective, and I'm not sure if GE would have the imagination or the boldness to take a risk on it, but it's an interesting thing to speculate about!
Labels:
General Electric,
geospatial,
open source,
Smallworld,
utilities
Monday, July 2, 2007
The secret behind iPhone mania - iLaunch
Well, so after all the hype, Apple is estimated to have sold around 500,000 iPhones over the weekend. I only discovered today that this success was largely due to Apple's iLaunch product, announced in the Onion in March - somehow I missed this previously! There is of course plenty of other spoof coverage out there to choose from, like this:
Overall I would say that the reviews were very positive, albeit with various caveats about missing functionality and features - there's a good roundup at Time. I would agree with Hiawatha Bray in the Boston Globe, who says "For it's not just cool; this phone is important, in the same way that Apple's first Macintosh computer was important. The Mac showed us a better way to interact with computers, and forced the entire industry to follow its lead."
Issues with Google local search are on other platforms too
I played around with the local search capabilities on Google Maps Mobile on my BlackBerry 8800, and found that it also has the same issues that I discussed with local search on the iPhone. I guess I just hadn't used local search much with Google on my BlackBerry, as I'd been using Telenav which has richer functionality (in particular voice driving directions using the GPS).
Labels:
Apple,
blackberry,
geospatial,
google,
iPhone,
maps,
mobile
Sunday, July 1, 2007
Google Maps local search on the iPhone really has some serious flaws
Yesterday I posted my main review of Google Maps on the iPhone, as well as my general impressions of the iPhone. To say some positive things first: while I've commented on a few areas for improvement, I really think the iPhone is great - it's definitely a jump forward in terms of user interfaces, and I think it introduces a lot of ideas in this regard which will become much more widely used. It's just a lot of fun to use. On Google Maps specifically, I talked about a lot of things I liked yesterday, and one other nice feature I have discovered is that the Bookmark page also has a "Recents" tab which saves all your recent searches, routes, etc, and will restore those with all the context - this is especially nice for retrieving routes. Oh, and the built in camera, which I had low expectations of, actually isn't too bad - here are a few sample snapshots in case you're interested.
However, I expressed some reservations about the way that local search was implemented yesterday, and having played around with this some more I really feel that there are some serious issues that need addressing here. There are two main issues: one is the way that the functionality has been implemented, and the other is the quality of the data for points of interest.
Let's take the functionality first. Frankly, Google seems to have lost track of the basic user scenario for local search. I suspect that a (conscious or unconscious) contributor to this was the desire to include establishments which paid for placement higher in the list of results, as I commented on yesterday, without it being too obvious that they are doing this. But anyway, the basic scenario for local search is that you identify a location, which might either be where you are now (the most common scenario), or perhaps an address where you plan on being in the future, or where a friend is located. You then search for something close to that place - this might be fairly specific, like "tattered cover" (a bookstore I used in my example yesterday) or might be more generic, like "coffee". In either case, knowing how far away each of the search results is from the specified location is a really crucial piece of information in deciding whether that search result is of interest to you. Sorting the results by distance, and knowing that the closest (or the closest 3, or 5, or whatever) results have been shown to you are fundamental requirements. And then if you decide that one of the search results is of interest to you, by far the most common follow up action is that you want to figure out how to navigate from the point you first specified to the search result you have selected.
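To illustrate what that fundamental requirement amounts to, here is a small Python sketch of a local search that returns the k nearest candidates to the search point, sorted by distance, with the distance shown for each result. This is purely illustrative - the function names and the Denver coordinates are invented, and it does not reflect how Google, TeleNav or anyone else actually implements this:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def local_search(search_point, candidates, k=10):
    """Return the k closest candidates, each annotated with its distance."""
    lat0, lon0 = search_point
    ranked = sorted(
        (haversine_miles(lat0, lon0, lat, lon), name)
        for name, (lat, lon) in candidates.items()
    )
    return ranked[:k]

# Hypothetical points of interest near downtown Denver (coordinates invented).
pois = {
    "Bookstore A": (39.7527, -105.0003),
    "Bookstore B": (39.7312, -104.9880),
    "Bookstore C": (39.6800, -104.9400),
}
for dist, name in local_search((39.7530, -105.0000), pois, k=3):
    print(f"{name}: {dist:.2f} miles")
```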
Most other local search implementations I have used meet these requirements pretty well, but local search on the iPhone fails pretty badly. Using the TeleNav software on my BlackBerry 8800, I choose "Find business" and the first thing it asks me for is the search point, and I have six options for specifying this: current location (which it knows from the GPS, which is of course its big advantage over the iPhone), recent addresses, favorites, key in an address, near airport, or contacts. You then specify the search term, and are immediately given a list of results sorted in order of distance from the search point, and showing how far away each result is. You click on a result and it shows more information including the address, and gives you the option to drive to it, save it, map it, or call it. The one feature which TeleNav does not provide which the iPhone does is the ability to display multiple results on a map at the same time - but it meets the main user scenario much better.
Google Maps online also addresses this user scenario well. You type in the address of the point you are interested in, and hit "Search maps". Then you choose "Find businesses" and you now have two text boxes, one containing the address of the search point, and one for your search term. You hit search and get an ordered list back, by distance, with the distance from the search point displayed for each result. If you click on a search result and ask for directions, the start point is already filled in with the address of the original search point.
But all this goes by the board in Google Maps on the iPhone. You will typically precede a search by centering the screen on a location, most often by typing an address (or retrieving a bookmark). But that point is not explicitly used by the search on the iPhone. Instead, you just get back any ten results which satisfy your search term and which are somewhere within the bounds of the current screen display - except that it's not just any ten, it seems that there is a ranking order which is presumably determined by how much establishments have paid for placement. There is no guarantee that the closest (or closest 3, or 5, etc) are returned. There is no easy way to find out how far away a given search result is from the point you were interested in (the only way I have found is to select a result, display its details, choose "directions to here", and then you have to fill in the start address, which you can do by selecting a bookmark or recent search address, and then calculate a route - and you have to repeat this for every item whose distance you want to know).
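By way of contrast with the distance-sorted sketch above, here is a hedged illustration (again with invented names, ranks and coordinates, not Google's actual logic) of a viewport-bounded query: results are filtered to whatever falls inside the current map bounds and capped by some other ranking, so there is nothing forcing the nearest establishment onto the list at all:

```python
# Illustrative contrast: a viewport-bounded query returns whatever happens
# to fall inside the current map bounds, ordered by some other ranking, so
# the result closest to the search point may rank last or be cut off.

def viewport_search(bounds, candidates, limit=10):
    """Return up to `limit` results inside the bounds, ordered by rank only."""
    (south, west), (north, east) = bounds
    in_view = [
        (rank, name)
        for name, (lat, lon, rank) in candidates.items()
        if south <= lat <= north and west <= lon <= east
    ]
    return [name for rank, name in sorted(in_view)[:limit]]

# Hypothetical candidates: (latitude, longitude, ranking position).
pois = {
    "Closest store (ranked low)":  (39.7532, -105.0005, 9),
    "Farther store (ranked high)": (39.7700, -104.9700, 1),
    "Outside the viewport":        (39.6000, -104.9000, 2),
}
bounds = ((39.7400, -105.0200), (39.7800, -104.9600))  # (south, west), (north, east)
print(viewport_search(bounds, pois))
```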
I tested a couple of scenarios where there were fewer than ten results in the original map display window, and in this case the map window stays the same and displays pins for the results which are in that area, but you still get a list of ten items (not sure if these are guaranteed to be the ten closest in this situation or not). A bad thing in this scenario is that if you select an item from the list which was not in the original map window, it pans you to that location, but now you have no idea how that relates geographically to your original search point (beyond re-entering the original point and calculating a route).
If there are no search results in the original map window, the map window zooms out so that it fits in all the results (at least in the cases I tried). However, there is no symbol on the map showing where your original point was, so you risk losing your orientation in this situation if it's an area you don't know well, unless you calculate a route from your original point again.
So, functionality-wise, I really think someone lost the plot here. Yes, if you are zoomed in close (i.e. at the default scale which is displayed when you search for an address), the system will find you some results that are "sort of close" to the address you typed, and you get a nice pretty display with animated flying pins. But there appears to be no guarantee that the closest result will definitely be returned (though it probably will in most cases, especially if you are zoomed in close), there's no easy way of determining how far away any given result is (you have to calculate routes individually), there is no way at all of sorting results in order of distance, and in order to calculate a route to a search result you have to re-enter the original search point again (which you can admittedly do via a bookmark or recent result, but this still adds at least two or three clicks which shouldn't be necessary). One other thing I noticed while thinking about distances in this context is that there is no scale bar on the map, so you can't even estimate graphically. I know it's not a big screen, but all the in-car navigation systems I have used have a nice compact scale bar, so I think this should be added in there too.
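On the scale bar point, the calculation required is tiny. Assuming the standard Web Mercator tiling scheme with 256-pixel tiles, the ground resolution in meters per pixel at a given zoom level and latitude - which is all a compact scale bar needs - can be computed along these lines (an illustrative sketch, not anyone's actual implementation):

```python
import math

def meters_per_pixel(latitude_deg, zoom, tile_size=256):
    """Approximate ground resolution for Web Mercator tiles at this zoom level."""
    earth_circumference = 2 * math.pi * 6378137.0  # equatorial circumference in meters
    return (earth_circumference * math.cos(math.radians(latitude_deg))
            / (tile_size * 2 ** zoom))

# Example: downtown Denver (roughly 39.75 N) at typical street-level zooms.
for zoom in (14, 16, 18):
    mpp = meters_per_pixel(39.75, zoom)
    print(f"zoom {zoom}: {mpp:.2f} m/pixel, so 100 px is about {mpp * 100:.0f} m")
```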
So anyway, that's the functionality rant, sorry about that :) !! Now for the data rant!
These issues are exacerbated by the fact that Google's point of interest data really seems to have a lot of issues. I mentioned some examples in yesterday's post, but categories of error that I have found in more than half of my test cases so far include incomplete addresses (either no street number, or no street), establishments which have closed, and incorrect categorization (examples below). I tested Google Maps online and it had the same errors (which is at least consistent - you would hope that they use the same data). For comparison, I went to try MapQuest (online) and it did MUCH better. Here are just a few example searches I tried, centered on 1792 Wynkoop St, Denver, CO:
- King Soopers (a local supermarket chain): in the top 10 on Google, there were three entries with an incomplete address (just Denver, CO), all of which show as being closer than the closest real King Soopers, and one entry which says in Google Maps online that it is an "unverified listing" (at 1150 Stout St) - but on the iPhone this just appears in the list without any indication that it is unverified (this one does not appear in MapQuest and I am pretty sure that it is not a real location). So 4 out of 10 results are wrong (non-existent), therefore you would have a 40% chance of ending up somewhere with no King Soopers. On MapQuest, 10 legitimate addresses are returned, though in some cases they appear to have multiple addresses for the same store (e.g. entry points on two different streets) - but this is not a serious error as you would still find a King Soopers.
- Tattered Cover (a well known Denver book store with 3 locations, which I mentioned in my post yesterday). MapQuest returns just 3 results, all correct. Google Maps online returns 8 entries in its initial list, of which two have incomplete addresses (duplicates of entries which do have complete addresses, but they show up at entirely different locations on the map), one is a store which closed a year ago, one says it is unverified and is in Boulder (where there is no Tattered Cover), and one is a duplicate of a correct store. The same 8 entries appear on the iPhone, again with no indication that the Boulder one is unverified - so in this case you have a 50% chance of not showing up at a Tattered Cover.
- Searches for Office Depot and Home Depot were more successful, with no obvious errors - hooray :) !!
- Searching for grocery in Google Maps online returned Market 41, a nightclub which closed a year ago, and four entries with incomplete addresses (two King Soopers which we saw before, and two Safeways). The iPhone returned a different list in this case, but included Market 41, two entries with incomplete addresses (one the same, one different), and Lemon Sisters market, which was a small grocery store but closed several years ago. So a 40% chance of going to the wrong place in this case. The same search on MapQuest did not return Market 41 and had 10 complete addresses, all of which looked legitimate.
So anyway, to finish on a not entirely negative note, let's reiterate a few good things. Overall, the iPhone rocks! Google Maps on the iPhone has a lot of great features - the quality of the map display is excellent, panning and zooming using the touch screen is really fast and intuitive, routing works well, and the way that it saves the full state of complex operations including routing and searches in "Recents" is really nice. BUT, the local search really doesn't match up to the high standards that we all expect from Google. Yes, it will give you what you want in many cases, but it is missing key functionality like displaying and sorting by distance, common workflows like navigating to the point of interest you want to go to are much more complex than they should be, and in my tests I have encountered way too many data errors to be comfortable relying on it. The good news is that fixing the functionality really shouldn't be difficult technically, though I think it will need Google to accept that sponsored results should be displayed separately somehow (as they are with regular Google search). Fixing the data problems may be a bigger effort, but others have done it, as the MapQuest comparison shows.
Fixing local search on Google Maps for the iPhone is now top of my wish list, but I also still think that auto-correction of typing and rotation of the map are high priorities, as I mentioned yesterday. That's enough iPhone reviewing for this weekend I think :) !!
Labels:
Apple,
blackberry,
general technology,
geospatial,
google,
gps,
iPhone,
mapquest,
maps