Vocabularies are finite, hence ambiguous.

Vocabularies have been for thousands of years our main weapons in the fierce war against ambiguity. The Web has enabled the continuation of this war with new weapons called URI and RDF. This new battlefield has seen an unprecedented proliferation of terms, entities and concepts. Although everyone in this space keeps recommending the reuse of existing concepts and terms, not reinventing the wheel and so on, we all strive for accuracy, and since the existing terms never exactly fit either our data or our view of the world, we feel forced to add to the pile. We reinvent the wheel because our stuff is just so slightly different.
There is no possible end to this process. To achieve perfect accuracy and get rid of all ambiguity, we would need infinite vocabularies. We all know from high school that actual infinity is impossible to achieve, and this is quite simple to understand. But unbounded growth in a very large world, in other words potential infinity, is in practice as difficult to grasp as actual infinity. Both are, to paraphrase Woody Allen, very large near the end, and whatever the ability of the information system to scale (brainware, hardware and software all together), it will break at some point. If you say to someone that the universe is infinite, he's ready to accept it intellectually as a default option, because a universe having limits is in fact more difficult to grasp, not so much because of its weird space-time geometry as because its actual size and proportions, finite but so large, discourage all attempts to achieve an accurate physical or mental representation.
What do we bring home from that? That the finite nature of our vocabularies, even extended by the impressive growth of technologies, means that we have to live with ambiguity forever. Hence we have to consider ambiguity not as a bug, but as a feature of our vocabularies. Unfortunately many people still believe, or act as if they believe, that because they are the domain experts and have worked on it for years, their terms are perfectly accurate and free from ambiguity. Expressing the terms' semantics in formal languages just comforts some of them in this dangerous illusion. And thinking we can achieve non-ambiguity prevents research from focusing on the real issue : how to practically deal with ambiguity with the agility and efficiency of natural language conversation.

[Edited 2014-07-22] For a quite entertaining introduction to the issue, see "How many things are there?" 


Dimensions of online identity (about:me)

The ongoing discussion on how social accounts should be represented in schema.org is quite interesting to follow. I've not yet put my own pinch of salt directly in this soup, just posted a side note on Google+, which triggered a small forking debate. I'm confident enough in the people at schema.org to come out with some pragmatic decision, hence, as +Dan Brickley likes to say these days, I don't worry too much.
But underneath the technical issues arise some good questions about online identity. Some people in that discussion seem to consider that their social accounts do not really identify them. In a recent post, I defended the opposite view : that URIs of social profiles are maybe the most representative of one's online identity, and should be used as primary URIs, at least in contexts where social interaction is at stake. I would like to go a little further in this analysis.
The following diagram is inspired by the work of Fanny Georges, with whom I had a few exchanges back in 2009 on those subjects of online identity. For those who read French, you can find some of the original concepts in this paper (see diagram on page 3), where she introduces the notions of declarative identity, active identity, and computed identity. I came up with a slightly different representation, but I wanted to acknowledge the source of most concepts in the picture.

This diagram defines two dimensions of online identity : a personal-social axis, and a declarative-active one. Each corner of the diagram represents a combination of two poles of those axes. You can easily figure out where any resource somehow linked to you will fit, but let's clarify with some examples :
Bottom left you find your good old '96 web page : been there, done that, my home, my kids, my research papers, my collection of old bikes, whatever. All chosen and made by you. Today you will find there a static online CV, for example.
Upper left contains anything said online about you, if you are (un)happy enough to be a public person : articles about you, photographs and videos, library records of your publications, a Wikipedia article about you if you are notable enough (unless you have had a hand in writing it, in which case it will sit somewhere on the middle left).
Bottom right contains the traces of your individual interactions on the Web : the pages you visit, the searches you perform, the transactions you make etc. This part of your identity is split over many servers. A piece at your bank, a piece on Amazon, a piece at Google etc. This is the most obscure and frightening part of your identity, because you have no real control over it. Many systems know many things about you that you might have forgotten.
Upper right contains all the interactions you have on the social Web : FB wall, comments on your blog posts, retweets, GMail etc.

Orthogonal to those two axes is whatever is computed from those data. Many things have long been computed behind the scenes from your personal activity (bottom right) : cookies on your browser, suggestions from Amazon, and all sorts of adware or malware entering your computer. Things computed from the social-active upper right are suggestions (friends, books ...) and anything Google or Facebook or whoever "thinks" you would like to do, read, buy etc.
The Semantic Web on the other hand, has been interested mainly in computing on declarative identity : DBpedia descriptions (upper left), FOAF profiles (bottom left). 
Google, for the Knowledge Graph, seems to gather stuff from all over the place : what I say and what I do, what others say about me and what others do in interaction with me. And at the end of the day, even if it's scary to see all this stuff put together and crunched by mysterious algorithms on Big G servers, taken all together it might yield a more balanced view of my identity than any single one of its aspects. That's why I take my Google+ URI to be as close as possible to the "about:me" node in the center of the diagram.

[Added 2014-04-14] See also Cybernetics and Conversation. Quote from the reference article (1996) 
Thus we find ourselves being constructed (defined, identified, distinguished) _by_ that conversation. From this point-of-view, our selves emerge as a consequence of conversation. Expressed more fully, conversation and identity arise together.


Query + Entity = Quentity

Neologisms are cool, particularly those of the portmanteau kind. Taking two old words and binding them together into a new hybrid semantic species is indeed as exciting, tricky and risky as tree grafting. And it takes some years, for words as for trees, to figure out success or failure. Will you eventually harvest any fruit, will the hybrid survive at all? Nine years ago I introduced hubjects, and four years before that it was the semantopic map. Neither of those has grown in the expected direction or yielded the expected fruits, although they are both alive and well. Those poor results will not prevent me from trying a new grafting experiment with quentities; let's meet here after 2020 under this new tree, and enjoy the fruits, if any.
So, what is this new semantic graft all about? I ranted here last year about Google not exposing public linked data URIs for its Knowledge Graph entities, and defining linked entities just as yet more queries. A similar criticism applies to the Bing version of the Knowledge Graph I invited yesterday to play in this blog. But thinking twice about it, I now wonder if queries are not the right way, and maybe the best way, to consider entities in the Web of data. After all, many (most) URIs in the linked data cloud actually resolve to a query behind the scenes, even if they look like plain vanilla URIs. URIs at DBpedia, Freebase, VIAF, WorldCat, OBO, Geonames (just to name a few) are dereferenced through some query on some database, which may or may not be a SPARQL query on a triple store.

Let's take this query which you can pass to the DBpedia SPARQL endpoint.
PREFIX dbpprop: <http://dbpedia.org/property/>
SELECT DISTINCT ?quake ?mag
WHERE {
  ?quake  dbpprop:magnitude  ?mag .
  FILTER (contains(str(?quake), "earthquake"))
  FILTER (contains(str(?quake), "/20"))
  FILTER (?mag > 7)
  FILTER (?mag < 10)
}
ORDER BY DESC(?mag)
I've tweaked the filters in order to cope with the quite messy state of earthquake data in DBpedia : no single class nor category for earthquakes, no consolidated datatype in the values of magnitude (hence the max value filter), dates absent or in weird formats, but fortunately a quite standard URI fragment syntax (every 21st century earthquake has a URI starting with http://dbpedia.org/resource/20 and containing "earthquake"). For want of explicit semantic filters, use syntactic ones ... if you know the implicit semantics of the syntax, of course.
Granted, this query is as ugly as the data themselves are, but the result is not that bad and one could proudly call this "List of major earthquakes in the 21st century, sorted by decreasing magnitude".
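To make the "query as entity" idea concrete, here is a minimal Python sketch of the mechanics. The endpoint address and the `format` parameter are those of the public DBpedia (Virtuoso) endpoint; the helper names `quri` and `dereference` are mine, purely illustrative.

```python
import json
import urllib.parse
import urllib.request

# Public DBpedia SPARQL endpoint; the query is the one discussed above.
ENDPOINT = "http://dbpedia.org/sparql"
QUERY = """
PREFIX dbpprop: <http://dbpedia.org/property/>
SELECT DISTINCT ?quake ?mag WHERE {
  ?quake dbpprop:magnitude ?mag .
  FILTER (contains(str(?quake), "earthquake"))
  FILTER (contains(str(?quake), "/20"))
  FILTER (?mag > 7)
  FILTER (?mag < 10)
}
ORDER BY DESC(?mag)
"""

def quri(endpoint, query):
    """Encode a SPARQL query as a single dereferenceable URI (a 'qURI')."""
    params = urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"})
    return endpoint + "?" + params

def dereference(uri):
    """GET the qURI: dereferencing it just runs the query behind the scenes."""
    with urllib.request.urlopen(uri) as resp:
        return json.load(resp)["results"]["bindings"]

uri = quri(ENDPOINT, QUERY)
# for row in dereference(uri):  # requires network access to DBpedia
#     print(row["quake"]["value"], row["mag"]["value"])
```

Passing such a long URI through a URL shortener is all it takes to make it look like a plain vanilla entity URI.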

Now I've encapsulated the query on the DBpedia endpoint into a tiny URI. Does http://bit.ly/1lNkb0R qualify as a URI for an entity in the common meaning of "named entity"? One can argue forever about whether that "List of major earthquakes in the 21st century" is or is not an entity, but in my opinion it is one, no more and no less than every individual earthquake in that list (the ontological status of an individual earthquake is a tricky one too, if you think about it).
One can also argue that this entity is a shaky one, because the result of this query is bound to change. The current list in DBpedia might be inaccurate or incomplete, some instances might escape the filter for all sorts of obvious reasons, and obviously new major earthquakes are bound to happen in this century. Moreover, a stable meaning for this URI depends on the availability and stability of the bit.ly redirection service, on the availability of the DBpedia SPARQL endpoint, and on the stability of the Wikipedia URI policy and the DBpedia ontology. Given all those particularities, let's assume we have a new kind of entity, defined by a query, which I propose to call for this very reason a quentity (shortcut for query entity), with an associated URI which I would gladly call a qURI (shortcut for query URI).

This qURI of course makes sense only as long as the technical context in which you can perform it is available. But is that different for any other kind of URI? To figure out what a URI means in the Web of data, you have to use an HTTP GET, which is nothing more than a query over the global database which is the Web, and what you GET generally depends on the availability and stability of as many pieces of hardware, software and data as in the above example.
Indeed any URI can be seen, no more and no less than the above bit.ly one, as an encapsulated query, making sense only when it's launched across the Web. And is not the elusive entity represented (identified) by this URI better seen as the query itself rather than as the query result? The query is permanent, whereas the query result depends on the ever-changing client-server conversation context.

So, if you want some kind of permanence in what the URI defines or identifies or represents (pick your choice), look at the query itself, not at the result. If you abide by this strange but inescapable conclusion, every entity in the Web is a quentity, and its URI is a qURI.

Follow-up discussion on Google+

Added 2014-04-09 : The G+ discussion introduces another, and certainly better, example to make my point : http://bit.ly/R2e3VV, a SPARQL CONSTRUCT yielding the same list in N3, making clear that the RDF description one GETs from this URI does not, and cannot, include any triple describing the URI itself.


Bing Knowledge Graph

I've added the Bing Knowledge Graph widget to this blog, hoping that the introduction of Microsoft code is not a violation of the terms and conditions of Blogger. I guess Google will provide something similar pretty soon, based on its own Knowledge Graph. I wish there were some equivalent, non-proprietary widget leveraging entities in the Linked Open Data cloud.
The pages in this blog do not include many entities, though, and some results will certainly look funny. But I could not let this new step towards the Web of entities pass without having a try at it. Browsing around the pages, I noticed that the notion of "entities" is quite liberal, since for example "Philosophy" and "Semantic Web" are recognized, which is good news. Just about everything can be an entity, as long as it can be identified.

[2014-07-21 : end of the experiment]


Linked Open Vocabularies, please meet Google+

The Google+ Linked Open Vocabularies community was created more than one year ago. The G+ community feature was new and trendy at the time, and the LOV community quickly gathered over one hundred members; then the hype moved to something else, and the community went more or less dormant. Which is too bad, because Google+ communities could be very effective tools, if really used by their members, and LOV creators, publishers and users definitely need a dedicated community tool. We lately made another step towards better interfacing this Google+ community with the LOV database. Whenever available, we now use in the database the G+ URIs to identify the vocabulary creators and contributors. As of today, we have managed to identify a little more than 60% of LOV creators and contributors this way.
Among those, only a small minority (about 20%) are members of the aforesaid community, which means about 80% of the community members are either lurkers or users of vocabularies. It also means that a good many of the people identified by a G+ profile in LOV still rarely or never use it. One could think that we should then look at other community tools. But there are at least two good reasons to stick to this choice.
Google+ aims at being a reliable identity provider. This was clearly expressed by Google at the very beginning of the service. The recent launch of "custom URIs" such as http://google.com/+BernardVatant, through which a G+ account owner can claim her "real name" in the google.com namespace, is just a confirmation of this intention. "Vanity URLs", as some call them, are not only good for showing off or being cool. My guess is that they have some function in the big picture of the Google indexing scheme, and certainly something to do with the consolidation of the Knowledge Graph.
We need dynamic, social URIs. I already made this point at the end of the previous post. All the more so for URIs of living and active people. Using URIs of social networks will hopefully make obsolete the too-long debate over "URI of Document" vs "URI of Entity". Such URIs are ambiguous, indeed, because we are ambiguous.
The only strong argument against G+ URIs is that using URIs held in a private company's namespace to identify people in an open knowledge project is a bad choice. Indeed, but the alternatives might turn out to be worse.


Content negotiation, and beyond

I had in the past, and for many years, looked at content negotiation with no particular attention, as just one among those hundreds of technical goodies developed to make the Web more user-friendly, along with javascript, ajax, cookies etc. When the httpRange-14 solution proposed by the TAG in 2006 was based on content negotiation, I was among those quite unhappy to see this deep and quasi-metaphysical issue solved by such a technical twist, but three months later I eventually came to a better view of it. Not only was content negotiation indeed the way to go, but this decision can now be seen as a small piece in a much bigger picture.
Content negotiation has become so pervasive we don't even notice it any more. Even if a URI is supposed to have a permanent referent, what I GET when I submit that URI through a protocol depends on a growing number of parameters of the client-server conversation : traditional parameters pushed by the client are language preference, required mime type (the latter being used for the httpRange-14 solution), localisation, various cookies, and user login. Look at how http://google.com/+BernardVatant works. This URI is a reference for me on the Web (at least it's the one I give these days to people wanting a reference), but the representation it yields will depend on the user asking for it : an anonymous request, someone logged on G+ but not in my circles, someone in my circles (and depending on which), someone having me in her circles etc., and of course on the interface (mobile, computer). It will also look different if I call this URI indirectly from another page, like in a snippet etc.
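Under the hood, the server side of such a negotiation is a simple preference match. Here is a minimal sketch, in Python, of how an httpRange-14 style server could pick a representation from the client's Accept header; the helper name and the dispatch are hypothetical, not any actual server's code.

```python
def pick_representation(accept_header, available):
    """Minimal content-negotiation sketch: parse an Accept header with
    q-values and return the best available media type (None if no match)."""
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0  # per HTTP, a missing q-value defaults to 1.0
        for f in fields[1:]:
            f = f.strip()
            if f.startswith("q="):
                q = float(f[2:])
        prefs.append((q, mtype))
    # Try the client's preferred types from highest q down.
    for q, mtype in sorted(prefs, key=lambda p: -p[0]):
        if mtype in available:
            return mtype
        if mtype == "*/*" and available:
            return available[0]
    return None

# An httpRange-14 style dispatch: browsers get the document about the
# entity, RDF-aware clients get the data.
available = ["text/html", "application/rdf+xml"]
pick_representation("application/rdf+xml;q=0.9,text/html;q=0.5", available)
# -> "application/rdf+xml"
```

The point of the sketch is that the "same" URI never commits to one representation: the choice is redone at every GET, inside the conversation.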
This kind of behavior will be tomorrow the rule. Every call to any entity through its URI will result in a chain of operations leading to a different conversation. And not only for profiles in social networks, not only for people, alive or dead, but for every entity on the web : places, products, events, concepts ... 
Imagine the following scenario applied to a VIAF URI, for example. VIAF stores various representations of the same authority : names in various languages, preferred and alternative labels for the matching authority in a given library. I can easily imagine a customized access to VIAF, where I could set my preferences, such as my favourite library or vendor, with default values based on my geolocation (as already today in WorldCat) and/or user experience, and parameters for the selection of works (such as a period in time, only new stuff, only novels ...). A search on a name in VIAF would lead to a disambiguation interface if needed, and once the entity was selected, to a customized representation of the resource identified under the hood.
This kind of customized content negotiation will not necessarily be provided by the original owner of the URI. In fact there certainly are a lot of sustainable business models around such services, which would run on top of existing linked data. A temporal service would extract for any entity a timeline of events involving this entity, e.g., events in the life of a person or enterprise, the various translations and adaptations of a book, the life cycle of a product ... A geographical service would show the locations attached to an entity, like the distribution of a company's offices or its organisational structure. And certainly the most popular way to interact with an entity will be to engage in conversation with it, as we engage in conversation with people. In both pull and push mode. I would not say, like Dominiek ter Heide, that the Semantic Web has failed. But I agree it could. Things on the Web of Data have to go dynamic, join the global conversation, or die of obsolescence.


Thou shalt not take names in vain

This is certainly too serious a subject for a Friday night in the middle of August, but that's a good time for old ideas to be written down. And indeed this has been on my mind for long, at least since I realized that common nouns such as English time, word, windows, apple, caterpillar, shell, bull, French orange, printemps, champion, géant, carrefour, German kinder, and many more, had been "borrowed" from the language commons to become brands. This is in principle forbidden by various trademark legislations, but there are subtle workarounds. I have always considered such practices as unacceptable enclosures of the knowledge commons. They might look anecdotal, leading to rather silly cases, but some borderline practices from major Web actors show that this affair is more important than it could seem at first sight.
One could argue that the market gives words back to the commons : lists of generic or genericized trademarks are easy to find, in a variety of languages. But curiously enough, going the other way round, I could not find systematic lists of common nouns used as trademarks, either in Wikipedia or anywhere else. Not sure if they would get any longer than the former; in any case the lists I started on Wikipedia were proposed for deletion a few minutes after creation by zealous wardens of the Wikipedia Holy Rules, for lack of notability of the subject. Forget about it; I'm now trying to figure out how to query DBpedia to get such a list, but the distinction between a proper name and a common noun is no more explicit in the DBpedia descriptions and ontology than it is in Wikipedia.
Anyway, this is not necessarily the most important aspect of the way information technologies can impact, misuse and abuse our language commons at large. There are quite a lot of rules or guidelines one could imagine for that matter, some already made explicit by law even if tricky to enforce, some yet to be specified, not to mention enforced. There is something deeply anchored in our culture about the fair use of names, coming certainly from the way they are rooted in our religions, hence I have only a slight compunction about taking inspiration below from one of the most holy and ancient sets of rules. Apologies to believers who might read the following as blasphemy uttered by an old agnostic, and a disclaimer to everyone else : those were not cast in stone by any god on any mountain. But if the first and main item in this list seems clearly inspired by the Third Commandment, well, yes it is, and not only in form. The underlying claim is that every word, every name, carries along with it enough history and legacy to be honoured. Those who don't care that much about such religious considerations can read this as pragmatic deontological guidelines for a fair, efficient and sustainable use of names in our information systems at large, and on the Web in particular.

Here goes, ten items of course to stick to the original format. 
  1. Thou shalt not take names in vain
  2. Honour the many meanings of a name, for they belong to the Commons
  3. Acknowledge linguistic and semantic diversity, polysemy and synonymy
  4. Do not steal names from the Commons to be your proper names
  5. Do not sell and buy names, for they belong to the Commons
  6. Do not hide yourself or your products under false names
  7. Do not use names against their common meaning 
  8. Do not enforce your own meanings upon others
  9. Expose your meanings to the Commons, for they will be welcome
  10. Share your own names with the Commons, for they will thrive forever
I won't delve today into the details of each of those; some might look quite cryptic and will need to be expanded in further posts. Just a remark on the first (and most important) one. The "take in vain" used in the King James version of the Bible has been replaced in more recent translations by "misuse". I prefer the former, which conveys the notion that whenever you use a name, it's not for nothing or for something without importance and consequence. When you use a name, you should have thought well about its meaning. In French you would best translate it as "Tu ne prendras pas les noms à la légère."