A fierce debate is raging these days on the public-lod list between +Kingsley Idehen and +Hugh Glaser, plus a couple of others. The question is whether the huge efforts to publish mountains of linked data have so far produced any visible and useful applications consuming them. In other words, where are the semantic dogs consuming those heaps of Semantic Web Dog Food? Kingsley of course holds that they have invaded all the avenues of the Web, and Hugh that they are nowhere to be seen. Obviously they are not looking for the same kind of dogs, or they don't agree on what a semantic dog could look like, making me wonder if such dogs might be akin to the dragon of the story.
This debate can be compared with +Amit Sheth's recent post entitled "Data Semantics and Semantic Web - 2012 year-end reflections and prognosis", which suggests, among other things, that although another five years or so may be necessary for Linked Open Data to reach the quality needed to build upon it seriously, things like the Google Knowledge Graph are already opening the way to pervasive semantic applications.
Beyond those ongoing cries of "Publish more linked data!" and "Show me the applications!", why not use this New Year season to think about linked data in a Long Now perspective?
The more I work in this space, the more I see that things are moving forward despite all the known obstacles. This is a deep and slow process, and it's not only about technical applications. It's a way of thinking which is percolating into media and government agencies, large libraries and European institutions, the search industry, social media, crowdsourced databases, research centers, etc. And many people working in those spaces don't think first about making money with the next technological widget; they are more and more aware of taking part in building a new layer of the human knowledge heritage. This layer is worth being considered, in the long now perspective, at the same historical level as, and in the continuity of, the successive inventions of language, writing, and the successive layers of mathematics. Consider how long it took for positional numeration, algebra, trigonometry, calculus and the rest to become the pervasive tools they are today. Did their pioneers think about business models? Did they throw away their work after only ten or twenty years of effort because no application was selling? Fortunately not, and this story has continued for centuries; we still have generations of patient mathematicians trying to solve problems set centuries ago. Are they wasting their time and our money? Should they switch to financial statistics instead?
Kingsley has it right: Turtle is a lot easier to grok than calculus, and certainly easier than high-school trigonometry. Hopefully in a few decades Turtle and SPARQL basics will be taught in high schools, so that anyone with an A-Level will be able to read, publish and query linked data.
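To make that concrete, here is a minimal sketch of those basics, written in Python with the rdflib library so the whole thing runs as a single script: a few Turtle statements describing the debate above, and a basic SPARQL query over them. The ex: vocabulary and the resource names are invented for illustration, not taken from any published dataset.

```python
# A minimal sketch of "Turtle and SPARQL basics", using the rdflib library.
# The ex: vocabulary and resource names below are invented for illustration.
from rdflib import Graph

turtle_data = """
@prefix ex:   <http://example.org/vocab#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:kingsley a foaf:Person ;
    foaf:name  "Kingsley Idehen" ;
    ex:claims  "Semantic applications are everywhere" .

ex:hugh a foaf:Person ;
    foaf:name  "Hugh Glaser" ;
    ex:claims  "Semantic applications are nowhere to be seen" .
"""

# Load the Turtle statements into an in-memory graph.
g = Graph()
g.parse(data=turtle_data, format="turtle")

# A basic SPARQL query: who claims what?
query = """
PREFIX ex:   <http://example.org/vocab#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?name ?claim
WHERE {
    ?person foaf:name ?name ;
            ex:claims ?claim .
}
"""

for name, claim in g.query(query):
    print(f"{name}: {claim}")
```

Nothing here requires more machinery than a secondary-school notion of subject, verb and object, which is precisely the point.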
In conclusion, it seems to me that nobody has wasted time, talent and energy so far in modeling, cleaning, interlinking and publishing our heritage of data. And nobody is wasting time doing the same thing now, as long as they are serious about quality, sustainability and responsibility; and I see around me more and more actors who are very serious about it indeed. Let's continue this patient work, and stop fighting about what a semantic dog (or dragon) looks like.
[Added] I just read today a new post by Mike Bergman, Enterprise-scale Semantic Systems, which goes along the same lines: "Despite the despair of some semantic advocates, the market place is increasingly understanding the concepts and potential of semantic technologies".