According to a recent claim by IBM, 90% of the data available today has been created in the last two years. This uncontrolled, exponential growth of online information has given new life to research in user modelling and personalization, since information about users' preferences, sentiment and opinions can now be obtained by mining data gathered from many heterogeneous sources.
As an example, many recent works rely on the analysis of content posted by people on social networks and micro-blogs to unveil latent information about their interests, automatically extract personality traits, build preference models on the basis of textual reviews, and so on. At the same time, the recent phenomenon of (Linked) Open Data has fueled this research line by making available huge amounts of machine-readable textual data.
All these trends have paved the way for the design of intelligent and personalized systems able to extract real value from the wealth of raw textual content produced on the Web. Examples of such services are online brand monitoring platforms, social recommender systems and smart city applications, such as incident detection systems or personalized city tour planners.
However, fully exploiting such textual streams requires comprehending the information people convey. This, in turn, demands a deep understanding of language, which is far from trivial. The major goal of this workshop is to draw the attention of the scientific community to the aforementioned topics.
The workshop aims to provide a forum for discussing open problems, challenges and innovative research approaches in the area, in order to investigate whether the adoption of techniques for semantic content representation and deep content analytics can be effective in building a new generation of intelligent and personalized services based on the analysis of Social, Big and Linked Open Data.