Well, I made it back from the big road trip up to Vancouver for the GeoWeb conference. We got back home on Sunday evening, after a total of 3439 miles, going up through Wyoming and Montana and then across the Canadian Rockies through Canmore, Banff, Jasper and Whistler, and coming back in more of a straight line, through Washington, Oregon, Idaho and Utah. The Prius did very well, performing better at high speeds than I had expected.
But anyway, the GeoWeb conference was very good. The venue was excellent, especially the room where the plenary sessions were held, which was "in the round", with microphones, power and network connections for all attendees (it felt a bit like being at the United Nations). This was very good for encouraging audience interaction, even with a fairly large group. See the picture below of the closing panel featuring Michael Jones of Google, Vincent Tao of Microsoft, Ron Lake of Galdos, and me (of no fixed abode).
Perhaps one of the best examples was given by Michael Jones in his keynote, where he showed a very interesting feature of Google Book Search which I hadn't come across before. If you do a book search for Around the World in 80 Days and scroll down to the bottom of the screen, you will see a map with markers showing all the places mentioned in the book. When you click on a marker, you get a list of the pages where that place is mentioned, and in some cases you can click through to the page itself.
MetaCarta have been doing similar things for a while, but as we have seen with Earth and Maps, when Google does something it takes on a whole new dimension.
Another item of interest that Michael mentioned is that Google is close to reaching an arrangement with the BC (British Columbia) government to publish a variety of their geospatial data via Google Earth and Maps. This was covered in an article in the Vancouver Sun, which has been referenced by various other blogs in the past couple of days (including AnyGeo, The Map Room, and All Points Blog). This could be a very significant development if other government agencies follow suit, which would make a lot of sense - it's a great way for government entities to serve their citizens, by making their data easily available through Google (or Microsoft, or whoever - this is not an exclusive arrangement with Google). There are a few other interesting things Michael mentioned which I'll save for another post.
One other theme which came up quite a lot during the conference was "traditional" geospatial data creation and update versus "user generated" data ("the crowd", "Web 2.0", etc). Several times people commented that we had attendees from two different worlds at the conference: the traditional GIS world and the "neogeography" world. Although events like this are helping to bring the two somewhat closer together, people from the two worlds tend to have differing views on topics like data update. Google's move with BC is one interesting step in bringing them together. Ron Lake also gave a good presentation with some interesting ideas on data update processes which could accommodate elements of both worlds. Important concepts here included the notions of features and observations, and of custodians, observers and subscribers. I may return to this topic in a future post.
As anticipated given the speakers, there were some good keynotes. Vint Cerf, vice president and chief Internet evangelist for Google, and widely known as a "Father of the Internet", kicked things off with an interesting presentation which talked about key architectural principles which he felt had contributed to the success of the Internet, and some thoughts on how some of these might apply to the "GeoWeb" - though as he said, he hadn't had a chance to spend too much time looking specifically at the geospatial area. I will do a separate post on that.
He was followed by Jack Dangermond, who talked on his current theme of "The Geographic Approach" - his presentation was basically a subset of the one he did at the recent ESRI user conference. He was passionate and articulate as always about all that geospatial technology can do for the world. One difference in emphasis between him and speakers from "the other world" is the focus on the role of "GIS" and "GIS professionals". I agree that there will continue to be a lot of specialized tasks that will need to be done by "GIS professionals" - but what many of the "old guard" still don't realize, or don't want to accept, is that the great majority of useful work done with geospatial data will be done by people who are not geospatial professionals and do not have access to "traditional GIS" software. To extend an analogy I've used before, most useful work with numerical data is not done by mathematicians. This is not scary or bad or a knock on mathematicians (I happen to be one, by the way), but it does mean that as a society we can leverage the power of numerical information by orders of magnitude more than we could if only a small elite clique of "certified mathematical professionals" were allowed to work with numbers. Substitute "geographical" or "geospatial" as appropriate in this statement to translate it to the current situation in our industry.
For example, one slide in Jack's presentation has the title "GIS servers manage geographic data". This is a true statement, but much more important is the fact that we are now in a world where ANY server can manage geographic data - formats like GeoRSS and KML enable this, together with the fact that all the major database management systems are providing support for spatial data. There is a widely stated "fact" that many people in the geospatial industry have quoted over the years, that something like 85% of data has a geospatial component (I have never seen a source for this claim though - has anyone else?). Whatever the actual number, it certainly seems reasonable to claim that "most" data has a spatial component. So does that mean that 85% of data needs to be stored in special "GIS servers"? Of course not - and that is why it is so significant that we really are crossing the threshold to where geospatial data is just another data type, which can be handled by a wide range of information systems, so we can simply add that spatial component to existing data where it currently lives. Jack also continues to label Google and Microsoft as "consumer" systems when, as I've said before, they are clearly much more than that already, and their role in non-consumer applications will continue to increase rapidly.
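To give a sense of just how lightweight these formats are, here is a minimal sketch of producing KML from ordinary application data, using only Python's standard library - no GIS software involved. The place name and coordinates are purely illustrative:

```python
# Minimal sketch: turning a point of interest into a KML document using
# only the Python standard library. Name and coordinates are made up
# for illustration - any row in an ordinary database would do.
import xml.etree.ElementTree as ET

def point_to_kml(name, lon, lat):
    """Return a KML string containing a single Placemark for (lon, lat)."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    placemark = ET.SubElement(doc, "Placemark")
    ET.SubElement(placemark, "name").text = name
    point = ET.SubElement(placemark, "Point")
    # KML coordinates are longitude,latitude - note the order
    ET.SubElement(point, "coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

print(point_to_kml("Vancouver Convention Centre", -123.1139, 49.2889))
```

Any web server that can emit a dozen lines of XML like this can serve geographic data to Google Earth or Maps, which is exactly the point: the spatial component can live alongside the rest of the data, wherever that happens to be.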
But anyway, as Ron said in his introduction, it would be hard to get two better qualified people than Jack and Vint to talk about some of the key concepts of "geo" and "web", so it was an excellent opening session. I think that this post is more than long enough by this point, so I'll wrap it up here and save further ramblings for part 2!