2016-01-14

Otherwise said in French

With the new year I have started a kind of mirror of this blog in French, a long overdue return to my native language. I hope many readers of in other words will be fluent enough in French to make sense of, and hopefully enjoy, those choses autrement dites. The first posts are listed and linked below, each with a short abstract.
  • Toute chose commence par un trait, on Shitao, the unity of painting, calligraphy and poetry in classical Chinese culture, and how the continuum of nature is divided into things by the single brushstroke.
  • Cosmographie en orange et bleu, a "just-so story" about the separation of heavens and earth as seen and described by the first ontologist in the first days, and what happened to him on the seventh day.
  • L'ontologiste sur le rivage des choses, on the illusion of so-called ontologies (in the modern sense of the term) thinking they have said what things are, when they only define how things differ from each other.
To be continued ... stay tuned!

2016-01-05

Desperately seeking the next scientific revolution

If you still believe in the ambient narrative of the accelerating pace of scientific and technological progress, it's time to read You Call this Progress? on Tom Murphy's excellent blog Do the Math. I'm a bit older than the author, just enough to have seen a few of the past century's last, but not least, scientific achievements happen between my birth and his. Watson and Crick's paper on the structure of DNA was published in Nature a few days after I was born. My childhood saw the discovery of the cosmic microwave background, the general acceptance of the Big Bang theory, and the experimental confirmation and acceptance of plate tectonics. While I was a student the standard model of microphysics was completed. Meanwhile chaos theory, whose mathematical seeds Poincaré had discovered at the very beginning of the century, was setting limits on the predictability of the evolution of natural systems, even under deterministic laws.

This set of discoveries was in a way the grand finale of a golden age of scientific revolutions which contributed to our current vision of the world, starting in the 19th century with thermodynamics, the theory of evolution, the foundation of microbiology, and the unification of electromagnetism, followed at the beginning of the 20th century by relativity and quantum mechanics, two pillars of our current understanding of microphysics and cosmology, from energy production and nucleosynthesis in stars to the structure of galaxies and of the visible universe at large. Put together, those revolutions, spanning about 150 years from 1825 to 1975, laid the basis for the mainstream scientific narrative, giving an awesome but broadly consistent (if you don't drill too deep into the details, see below) account of the history of our universe, from the Big Bang through the formation and evolution of galaxies, stars and planets, to our small Earth and the life at its surface: bacteria, dinosaurs, and you and me. A narrative we've come to like and make ours thanks to excellent popularization. We like to be children of the stars, and to wonder, looking at the night sky, if we are the only ones.

As Tom Murphy clearly argues, this narrative has not substantially changed in forty years, nor has it been seriously challenged by further discoveries. Many details of the story have been clarified, thanks to improved computing power, data acquisition, and space exploration. We discovered thousands of exoplanets as soon as we had the technical ability to detect them, but that did not come as a surprise; in fact, what would really have been disturbing would have been not to discover any. The same lack of surprise greeted gravitational lenses, first observed in 1979 but predicted by general relativity. And no unexpected new particle has been discovered despite the billions of dollars dedicated to the Large Hadron Collider, the largest experimental infrastructure ever built.

Could that mean that the golden age of scientific revolutions is really behind us, and that all we have to do in the future is keep building on top of it an apparently unbounded number of technological applications? In other words, that no radical paradigm shift, similar to those of the 1825-1975 period, is likely to happen? Before making such a bold prediction, it would be wise to remember those now famous for having been proven wrong when they claimed there was nothing new left to discover.

Actually, major issues already known by 1975 are still open. In physics, the unification of interactions requires solving deep inconsistencies between relativity and quantum theory, an issue with which Albert Einstein himself struggled until his death, not to mention the mysterious dark matter and dark energy needed by theory to account for the accelerated expansion of the universe. The latter is actually one of the rare important and unexpected discoveries of the end of the 20th century. In the natural sciences, the process by which life appeared on Earth has still to be clarified, as has the correlative issue of the existence of extraterrestrial life.

The number of scientists and scientific publications has kept growing exponentially since 1975, as has the power of data acquisition, storage and computing technology. Yet there has been no result comparable in importance, for our understanding of the universe, to what Galileo discovered in the single year 1610 simply by turning the first telescope towards the Moon, Venus and Jupiter. The general pattern of science and technology evolution in the past has been that improved technology and instrumentation yield new results, which push towards theoretical revolutions and paradigm shifts. But strangely enough, the unprecedented explosion of technologies over the last half-century has produced nothing of the kind.

Is it really so? Some scientists claim that a revolution actually is going on, but that, as usual, the mainstream science establishment is rejecting it. This is for example the position of Rupert Sheldrake in his 2012 article The New Scientific Revolution. Indeed, the theories Sheldrake defends, such as Morphic Resonance and Morphic Fields, are truly disruptive and alluring, but rejected as non-scientific by the majority of his peers. I'm not a biologist, so I won't venture into this debate, and will let readers make up their own minds about it.

2015-12-17

Two cents of (natural) intelligence

Several months ago, my previous attempt to speak here about artificial intelligence, wondering if computers could participate in the invention of language, met with a total lack of feedback (it's not too late for second thoughts, dear reader). I found that quite frustrating, hence another attempt to venture onto this slippery debating ground.
+Emeka Okoye, in the follow-up of the previous post on facets, makes strong points. When I wonder how much intelligence we want to delegate to machines, and for which tasks, the answer comes as a clear declaration of intent.
We are not delegating "intelligence" to machines rather we are delegating "tasks" ... We can have a master-slave relationship with machines ... We, humans, must be in control.
I appreciate the cautious quote marks in the above. But can it be that simple? Or is it just wishful thinking, as +Gideon Rosenblatt warns us in a post entitled Artificial Intelligence as a Force of Nature? The ecosystem of connected machines, distributed agents, neural networks and the like is likely to evolve into systems (whether to call them intelligent is a moot point) which might soon escape, or have already escaped if we believe some experts on the topic, the initial purpose and tasks assigned by their human creators, to explore totally new and unexpected paths. This hypothesis, not completely new, is backed here by a comparison with the evolution of life, of which the emergent ambient intelligence would be a natural (in all meanings of the term) follow-up.

But the evolution of technologies, from primitive pots, knives and looms up to our sophisticated information systems, is difficult to compare to the evolution of life and intelligence. The latter is very slow, driven by species selection on time scales of millions of years, spanning thousands of generations. Behind each success we witness, each species we marvel at for how perfectly it fits its environment, are zillions of forgotten, miserable failures eliminated by the pitiless struggle for life. Nothing supports the hypothesis of an original design and intention behind such stories.
It's often said, as in this recent Tech Insider article, that comparing natural and artificial intelligence is like comparing birds to planes. I agree, but the article misses an important argument. Birds can fly, but at no moment did Mother Nature sit down at her engineering desk and decide to design animals able to fly. They just happened to evolve that way over millions of years, from awkward feathered dinosaurs jumping and flying better and better, until we got eagles, terns and falcons. Planes, on the contrary, were designed from the beginning with the purpose of flying, and in barely half a century they were able to fly higher and faster than the above natural champions of flight.

To make it short, technology evolves based on purpose and design; life (nature) has neither predefined purpose nor design. Intelligence is no exception. Natural intelligence (ants, dolphins, you and me) is a by-product of evolution, like wings and flight. We were not designed to be intelligent; we just happened to become so, as birds happened to fly. But computers were built with a purpose, even if they now behave beyond their original design and purpose, like many other technologies, because the world is complex, open and interconnected.

Let's make a different hypothesis here. Distributed intelligent agents could escape the original purpose and design of their human creators, maybe. But in that case, they are not likely to emerge as the single super-intelligence some hope for and others fear. Rather, like the prebiotic soup more than three billion years ago, their spontaneous evolution would probably follow the convoluted and haphazard paths of natural evolution, the struggle for survival and all the rest. A recipe for success over billions of years, maybe, but not for tomorrow morning.

2015-12-14

Rage against the mobile

The conversation around the previous post about facets led me to investigate a bit more about mobile, and what it means for the web of text. This is something I had never really considered so far, and thanks to +Aaron Bradley for drawing my attention to it. Bear in mind I'm just an old baby-boomer who has never adopted mobile devices; touchscreens drive me crazy, and I still wonder how people manage to write anything beyond a two-word sentence on such devices. To be honest I do have a mobile phone, but it is as dumb as can be (see below). It's a nice, light, small object, feeling a bit like a pebble in my pocket, but I actually barely use it (by today's standards), just for quick calls and messages. Most of the time I don't even carry it along with me, let alone check messages, to the despair of my family, friends and former colleagues. But they have eventually got used to it.


To make it short, I do not belong to the mobile generation, and my experience of the Web has been from the beginning, is, and is bound to remain a desk activity, even if the desktop has become a laptop over the years. I'm happy with my keyboard and full screen, so why should I change? And when the desk is closed, I'm glad to be offline and unreachable. I wish and hope things can stay that way as long as I'm able to read, think and write.

With such a long disclaimer, what am I entitled to say about mobile? Only to quote what others who seem to know better have already written. In this article, among others, about the so-called mobile tipping point, I read this clear and quite depressing account of the consequences of mobile access on Web content.
The prospect for people who like to read and browse and sample human knowledge, frankly, is of a more precipitous, depressing decline into a black-and-white world without nuance [...] The smaller screens and less nimble navigation on phones lend themselves to consuming directory, video, graphic and podcast content more easily than full sentences. If the text goes much beyond one sentence, it is likely to go unread just because it looks harder to read than the next slice of information on the screen. [...] Visitors who access information via a mobile device don’t stay on sites as long as they do when using a desktop computer. So if you’re counting on people using their smartphones or tablets to take the same deep reading dive into the wonders of your printed or normal Web page messages, you’re probably out of luck.
Given the frantic efforts of Web content providers to keep their audience captive, everything is in place for a demagogic vicious circle of simplification: short sentences, and more and more black-and-white so-called facts. If this is where the Web is heading, count me out. I won't write for mobile any more than I use mobile to read and write.

I still have hope, though, looking at this blog's analytics. Over 80% of the traffic still seems to come from regular (non-mobile) browsers and OS. But I guess many of you visitors also have a mobile (smart)phone that you use elsewhere. I wonder if and how you manage to balance which device you use for which purpose. Are you smart enough to use mobile for apps, and to switch to a proper desk screen when taking the time to read (and write)? I'm curious to know.

2015-12-11

In praise of facets

Follow-up of the previous post, and more on ways to escape the tyranny of entities in search results. In the quick exchange with Aldo Gangemi in the comments of this post, facets were suggested. I won't argue further with Aldo about whether facets at BabelNet are types or topics, because he will win in the end, and such a technical argument would lead us astray, far from the main point I would like to make today. You might be unsure what facets, and faceted search in particular, mean, but you have certainly used them many times when searching e-commerce sites, to filter hundreds of laptop models by price, brand, screen size, memory size, etc. Libraries, enterprise portals, and many more use faceted search; the example below shows the search interface of Europeana for "impressionism", with the results filtered by two facets: media type "image" and providing country "Netherlands".

Faceted search is a very intuitive way to search items in a database. Using faceted search, the user creates at will her own algorithm of filtering, selection and possibly ranking. Compared with the usual general search engine results, two major advantages appear: the search is multidimensional, and the algorithm is transparent to the user. The system does not apply fancy, smart but opaque algorithms based on guesses about what the user is looking for. It provides an interface where the user's natural intelligence can be put into action. In short, faceted search provides a good collaborative environment where artificial and human intelligence work together, the former at the service of the latter.
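
To make the mechanics concrete, here is a minimal sketch in Python of the kind of filtering a faceted interface performs behind the scenes. The laptop records and field names are invented for illustration, not taken from any real catalogue.

```python
# A minimal sketch of faceted filtering over hypothetical catalogue records.
laptops = [
    {"brand": "A", "price": 799, "screen_size": 13, "memory_gb": 8},
    {"brand": "B", "price": 1299, "screen_size": 15, "memory_gb": 16},
    {"brand": "A", "price": 999, "screen_size": 14, "memory_gb": 16},
]

def facet_filter(items, **facets):
    """Keep only the items matching every facet value the user has selected."""
    return [item for item in items
            if all(item.get(key) == value for key, value in facets.items())]

# The user composes the "algorithm" simply by stacking facet selections:
print(facet_filter(laptops, brand="A", memory_gb=16))
# -> [{'brand': 'A', 'price': 999, 'screen_size': 14, 'memory_gb': 16}]
```

Each selection is an explicit, inspectable condition; nothing is guessed on the user's behalf, which is the whole point.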

Given the above, one can wonder why general search engines such as Google do not offer faceted search facilities over their results, instead of a one-dimensional list of ranked results. A technical answer that comes to mind is that such engines do not search items in a collection of objects whose semantic descriptions are stored in a database, but resources indexed by keywords. That used to be true, but the argument does not seem to hold anymore in the current state of affairs. The Knowledge Graph, however it is implemented, is a database where things have declared semantic types and properties which could be used for faceted search. It would be a good way to see the types and properties defined by the schema.org vocabulary put explicitly into action as facets (Creative Work, Person, Place, Event, Business, Intangible ...).
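
As a thought experiment, here is a hypothetical sketch of how declared schema.org types could be turned into facets over a result list. The result records and their "type" field are invented for illustration; they merely stand in for whatever the Knowledge Graph actually stores.

```python
# A hypothetical sketch: deriving facets from schema.org types attached to results.
from collections import Counter

results = [
    {"title": "Gravity (film)", "type": "Movie"},
    {"title": "Gravity (physics)", "type": "Intangible"},
    {"title": "Isaac Newton", "type": "Person"},
    {"title": "Gravity Falls", "type": "CreativeWork"},
]

# Counting results per declared type yields the facet menu offered to the user:
facet_counts = Counter(r["type"] for r in results)
print(facet_counts)
# Counter({'Movie': 1, 'Intangible': 1, 'Person': 1, 'CreativeWork': 1})

# Clicking a facet then restricts the list, transparently:
topics_only = [r for r in results if r["type"] == "Intangible"]
```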

I cannot imagine that Google et al. have never thought about this. There are certainly technical hurdles, but I can't imagine they could not be solved. So I would be curious to hear what they have to say, given that the added value to the search experience would be tremendous. Above all, it would give back to the user the power to define her own filtering of results, and reinstate the habit of doing so, instead of the reductionist Q&A dialogue which in the long run leads to pernicious intellectual laziness, one-track thinking, and jumping to conclusions without further checking. Our world is more and more complex, and offering simplified, one-dimensional answers (presented as facts) to any question does not help to cope with that complexity. Current events show us too many examples of oversimplification and where it leads.

I think of any of my queries to a search engine as a beam of light sent through the night of my ignorance, where possible answers are hiding like so many complex, multi-faceted diamonds. I don't want any one of them, however brilliant and wonderful, to make me blind to the point of missing all the rest. Every faceted answer should reflect back a new and unexpected part of the spectrum, without exhausting the question we should always keep alight.


2015-12-09

Search is not only for entities

The Knowledge Graph is a great achievement, but its systematic use at the top of search results is sometimes counter-productive. Knowledge Graph nodes are mostly named entities (individuals, particulars) such as people, places, works (movies, books, music tracks), products ... and rarely universals (concepts, topics, common nouns). And if an ambiguous search phrase can refer either to particular entities or to universals, the former seem always to float to the top with their fancy Knowledge Graph display, while relevant results about universals are kicked down. The assumption underlying this default behavior is that people search mostly for particular entities (things), not for information about some universal (topic). The hijacking of common nouns as brand names, which we have already pointed out here in the past, adds to the issue, along with the growing number of work titles using common nouns. Add to this the magic of the Knowledge Graph knowing entities by various names in different languages, and you end up with examples like the following.

For a recent post, I was searching for information about the Theory of Everything. If, instead of going straight to the Wikipedia article, I ask Google, here is what I get.


I was searching for information about a theory in physics, and I got everything about a movie which happens to have taken the name of this theory as its title. And since my browser's default language is French, the Knowledge Graph is kind enough to present the movie to me under its French adaptation title "Une merveilleuse histoire du temps", which, as you can guess even if you don't speak much French, is anything but a translation of "The Theory of Everything". The silver lining is that if I search for "Théorie du tout" in French, I don't have the same problem, since the movie is not known in French under this title, which would be the correct translation of the original one. The first result for "Théorie du tout" is the Wikipedia article on the topic, as expected.
You can play the same funny game with "Gravity", "Frenzy" and many more. Given the limited supply of common nouns, and the exponential growth of named entities in the Knowledge Graph, all tapping into the commons for their names and titles, such ambiguities are likely to become the rule rather than the exception. Search engines should provide a simple way to opt out of entities, so that I could ask: "Dear Google, give me resources about the topic called gravity, and I don't care about any individual entity with gravity in its name." And yes, Google, you can do it, I'm sure; just take the example of BabelNet, where you can sort results by entities, concepts, music, media, etc. A bit of typing goes a long way ...
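
To illustrate, here is a sketch of what such an opt-out switch could look like, assuming each result carries a flag saying whether it denotes a named entity or a concept. BabelNet exposes such a distinction; the record format and function below are invented for illustration.

```python
# A sketch of a hypothetical "opt out of entities" switch on search results.
results = [
    {"label": "The Theory of Everything (film)", "kind": "entity"},
    {"label": "Theory of everything (physics)", "kind": "concept"},
]

def search(results, include_entities=True):
    """Return the results, optionally dropping named entities to surface topics."""
    if include_entities:
        return results
    return [r for r in results if r["kind"] != "entity"]

print(search(results, include_entities=False))
# -> [{'label': 'Theory of everything (physics)', 'kind': 'concept'}]
```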

Why do I write, really?

Teodora Petkova strikes again with her new and tiny (her word) Web Writing Guide. Her savvy recommendations on the Whys and Whats of writing on/for the Web made me wonder if I have ever applied a single one of them, in particular on this blog, which has for years been the main place where I write and publish. The rest of my publication track record consists of a handful of conference or journal papers and a chapter in a collaborative book, some of them still available online, but not really written "for the Web". Not to mention hundreds of messages to various community lists and comments on the social Web, but does that really count as Web writing?
It might be too late, and pointless anyway, to consider those recommendations, since I have no tangible reason to keep on writing at all. Retired from business for half a year, no longer participating in the discussions of various communities, not even following them, I have nothing to sell or even to give away here. I could as well forever hold my peace, instead of indulging in more wordy selfies. Nevertheless, I'll do the exercise of going through some of Teodora's recommendations, to see if I have ever met them. Just for the fun of it.
  • Write for people
Of course, who else? But I've never thought of anyone in particular as the target of what I write here, although I know I write better when I think about a potential reader. Somehow, each post on this blog could (should) be read as a personal letter to some unknown reader. To make it short, I have no market, no target audience. I know I have a handful of more or less faithful followers, and I hope the few serendipitous visitors will bring home some food for thought.
  • Write for machines
Believe it or not, I really don't give a damn about that one. I've been a so-called Semantic Web evangelist because I liked the ideas behind it and the conceptual debates it triggered (not to mention that I was paid for it), but I never applied its technology to this blog. I even did everything to stay under the radar of search engines by changing both URI and title several times. No semantic markup either, beyond a few (rather random) tags. I like the idea of these pages being as easy to reach as the places I love in my mountains. Not unreachable, but not much advertised either, with paths not difficult to follow, but not obvious to find. And actually, since I'm not able to define or name what I am about, I prefer search engines to ignore these pages rather than index them under some silly topic.
  • Write for joy
This is certainly the only recommendation I follow. Nothing to add.

But the Whys are not the main difficulty. Regarding the Whats, I must admit I am completely off track.
  • What is it that you really want to say and cannot help but share?
I'm afraid most of the time I don't know before I've finished writing it.
  • What is it that your audience needs?
As said above, I have no audience, and therefore cannot possibly know what it needs.

If I try to apply the following: “The intersection of the answers to these questions is the answer to ‘What to write?’” ... well, I won't say this intersection is empty, but it looks rather undecidable.

Sorry, Teodora, but your recommendations are either useless to me, or they lead to the conclusion that I should not write at all before answering the two questions above. Unless write for joy is enough of an excuse to keep writing.

When I was a child, half a century ago, my school teacher (who happened also to be my father; the supply of teachers is scarce in village schools) was an adept of the texte libre. This is still the writing exercise I prefer. Following the Freinet pedagogy, the original free production was selected and amended by the group, and eventually published in the class journal. The final text was a collective production based on an individual's original idea. Doesn't that sound quite Webby, back in the 1950's, in remote French village schools?