In this paper we experimentally assess, from both algorithmic and pragmatic perspectives, the adequacy of linguistic descriptions of real data generated by two meta-heuristics: simulated annealing and a genetic algorithm. The descriptions we consider are fuzzy quantified statements (both Zadeh's type-1 and type-2) involving three well-known quantification models (Zadeh's scalar and fuzzy models and Delgado's GD). We conducted an empirical validation using real meteorological observation and prediction data, in which the adequacy of the generated descriptions was assessed both automatically (metrics-based) and manually (by human experts). Results indicate that, overall, the genetic approach outperforms simulated annealing in terms of both description quality and execution time. The significance of this advantage depends on the type of meteorological data and the quantification model selected. Statistical significance tests show that for type-1 descriptions no significant differences exist between the two meta-heuristics in the prediction case. For type-2 descriptions, significant differences exist under Delgado's GD model for both types of data, while for Zadeh's scalar and fuzzy quantification significance depends on the type of data (observation or prediction). Globally, the advantage of the genetic approach over simulated annealing i) is significant in 4 of the 12 scenarios considered (all of them type-2), and ii) is not significant in the remaining 8 scenarios (all type-1 scenarios and two type-2).
The human expert assessment further showed that both meta-heuristics behave similarly for type-1 descriptions, while the genetic algorithm produces more suitable type-2 linguistic descriptions.
Keywords: Linguistic description of data, Data-to-text systems, Computing with words, Natural language generation