There is no shortage of applications that rely directly on semantics: consider research on automatic summarization or machine translation. Semantics is also involved in robotics, in particular reasoning in natural language and human-machine dialogue. In the language sciences, semantics stands in contrast to syntax.
Syntax is concerned with formal rules, whereas semantics is concerned with meaning. The goal of the Semantic Web is to allow machines to exchange information using the meaning of words, as in natural languages. This ambitious goal requires a great deal of work on languages, systems and ontologies. Within this effort, Topic Maps hold an important place.
For example, traditional search engines operate syntactically: a query on the keyword “sun” fetches from a database the pages containing the string [sun], regardless of the meaning of the word, which of course depends on context. We find the Auberge du Soleil, the Théâtre du Soleil, a catalog of sunglasses … as well as pages about the star Sun. In a Topic Map, we would instead create a topic Sun, defined according to its context. The ties to this topic, via associations and occurrences, are semantic links.
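The contrast can be sketched in a few lines of Python. This is a minimal, hypothetical data model, not the actual Topic Maps standard (XTM): the page contents and the `topic_sun_star` dictionary are invented for illustration.

```python
# A naive keyword search matches raw strings, meaning ignored.
pages = {
    "auberge-du-soleil.html": "Welcome to the Auberge du Soleil restaurant",
    "theatre-du-soleil.html": "The Théâtre du Soleil presents its new play",
    "sunglasses.html": "Our new catalog of sunglasses for the summer",
    "astronomy.html": "The Sun is the star at the center of our solar system",
}

def keyword_search(query):
    """Syntactic search: return every page containing the string."""
    return [name for name, text in pages.items() if query.lower() in text.lower()]

# A topic, by contrast, fixes one meaning ("the star Sun") and links it
# to its resources: occurrences and associations are semantic links.
topic_sun_star = {
    "name": "Sun",
    "scope": "astronomy",                   # the context that fixes the meaning
    "occurrences": ["astronomy.html"],      # resources about this meaning only
    "associations": [("orbited by", "Earth")],  # links to other topics
}

print(keyword_search("sun"))           # string matches, whatever the meaning
print(topic_sun_star["occurrences"])   # resources tied to one meaning
```

The keyword search returns the sunglasses catalog alongside the astronomy page, because both contain the string; the topic's occurrence list contains only the page about the star.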
Language and symbolic capacity are at the heart of research in cognitive science (brain science). Researchers have begun to uncover the biological bases of conscious experience. In addition to the exceptional concentration of nerve cells in the human brain, there is a vast network of connections between cortical areas with different functions.
“The areas that coordinate language must be connected to all the other sensory modalities, because we can recognize and identify an object by touch as well as by vision, hearing or smell. All this information must find its way to the generator of words.”
However, researchers have recently found that intermodal connections between cortical areas are far fewer in animals, whose sensory systems remain more isolated. If an animal detects a stimulus through the visual system, it is not certain that it can generalize it to another modality, such as touch.
Animals generalize much less than humans because they have fewer intermodal connections. The capacity for abstraction and generalization is more advanced in humans (for better or for worse), and this intermodal capacity for abstraction is made possible by the pairing of concept and word (signified and signifier).
Symbolic capacity is the fact that symbols such as “milk” or “juice” refer not only to a physical reality but also to other symbols such as “liquid” or “fruit extract”. It is this ability that allows us to invent imaginary animals and imaginary worlds from the properties of real animals and situations, or to imagine a life after death …
One may wonder what semantics has to do with programming. In fact, the term has been used in mathematics since the 1930s. Mathematical logic is a very precise language, derived from natural language and designed to avoid ambiguity. It includes a formal syntax which says, for example, that the following sentence is well formed: for any natural number x, there is a successor of the number x.
But there is also a formal semantics, which gives the symbols an invariable and precise meaning in terms of the values and truths of set theory. Theorists have extended these methods to the analysis of programming languages. We thus find the term semantics in linguistics, mathematical logic and computer science.
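The successor sentence above can be written out in the formal syntax of first-order logic, with its set-theoretic semantics stated alongside (a standard textbook rendering, not taken from the original text):

```latex
% Formal syntax: a well-formed formula of first-order arithmetic
\forall x \in \mathbb{N}, \; \exists y \in \mathbb{N} : \; y = x + 1

% Formal semantics: the formula is true in the structure (\mathbb{N}, +, 1)
% because for every value assigned to x, the value x + 1 is a witness for y.
```

The syntax only certifies that the formula is well formed; the semantics assigns it a truth value in a particular structure.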
For their part, linguists working in formal semantics have extended these methods to the analysis of conjunctions (and, hence, so …), determiners (all, some, each, several …), negation, verb tenses, etc. Any linguistic semantics, even a more intuitive one, can be reformulated logically.
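The set-theoretic treatment of determiners can be illustrated with a short sketch. In this standard analysis (generalized quantifiers), each determiner denotes a relation between two sets; the example sets below are invented for illustration.

```python
# Each determiner is a relation between a restrictor set A and a scope set B.

def every(A, B):
    """'Every A is B' is true iff A is a subset of B."""
    return A <= B

def some(A, B):
    """'Some A is B' is true iff A and B share at least one element."""
    return bool(A & B)

def no(A, B):
    """'No A is B' is true iff A and B are disjoint."""
    return not (A & B)

dogs = {"rex", "fido"}
mammals = {"rex", "fido", "felix"}
fish = {"nemo"}

print(every(dogs, mammals))  # "Every dog is a mammal" — True
print(some(dogs, fish))      # "Some dog is a fish" — False
print(no(dogs, fish))        # "No dog is a fish" — True
```

This is what it means to reformulate an intuitive semantics logically: the informal meaning of "every" becomes the precise set-theoretic relation of inclusion.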
It is obviously possible to do semantics intuitively. But when reading a semantics, one always asks: what is the link between the intuitive semantics and the logic? The challenge is to make that link between the two (i.e., with a bit of logic and set theory), taking into account the fact that your knowledge of logic may be limited.