News

2016-05-30 software

stringi 1.1.1 released

stringi, which provides a wide range of string processing facilities, is among the top 10 most downloaded R packages. The new release brings a few bug fixes and new features:
* [BUGFIX] #214: allow a regex pattern like `.*` to match an empty string.

* [BUGFIX] #210: `stri_replace_all_fixed(c("1", "NULL"), "NULL", NA)`
now results in `c("1", NA)`.

* [NEW FEATURE] #199: `stri_sub<-` now allows for ignoring `NA` locations
(a new `omit_na` argument added).

* [NEW FEATURE] #207: `stri_sub<-` now allows for substring insertions
(via `length=0`).

* [NEW FUNCTION] #124: `stri_subset<-` functions added.

* [NEW FEATURE] #216: `stri_detect`, `stri_subset`, `stri_subset<-` gained
a `negate` argument.

* [NEW FUNCTION] #175: `stri_join_list` concatenates all strings
in a list of character vectors. Useful with, e.g., `stri_extract_all_regex`,
`stri_extract_all_words`, etc.
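The new features can be illustrated with a short sketch (assuming stringi >= 1.1.1 is installed; the inputs are illustrative, not taken from the package's documentation):

```r
library(stringi)

# negate argument (#216): keep the elements that do NOT match
stri_detect_fixed(c("apple", "kiwi"), "a", negate = TRUE)   # FALSE TRUE

# omit_na in `stri_sub<-` (#199): NA locations are left untouched
x <- c("abc", NA)
stri_sub(x, 1, 1, omit_na = TRUE) <- "X"
x                                                           # "Xbc" NA

# substring insertion via length = 0 (#207)
y <- "abcd"
stri_sub(y, 3, length = 0) <- "--"
y                                                           # "ab--cd"

# stri_join_list (#175): join each character vector in a list
stri_join_list(stri_extract_all_regex(c("a1b2", "c3"), "\\d"), sep = "")
# "12" "3"
```
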
2016-05-09 new paper

Paper on the Genie Clustering Algorithm

The following paper has been accepted for publication in Information Sciences: Gagolewski M., Bartoszuk M., Cena A., Genie: A new, fast, and outlier-resistant hierarchical clustering algorithm, 2016. It describes the Genie algorithm, available through the genie package for R. The article has been assigned the DOI 10.1016/j.ins.2016.05.003.
Abstract. The time needed to apply a hierarchical clustering algorithm is most often dominated by the number of computations of a pairwise dissimilarity measure. Such a constraint, for larger data sets, puts at a disadvantage the use of all the classical linkage criteria but the single linkage one. However, it is known that the single linkage clustering algorithm is very sensitive to outliers, produces highly skewed dendrograms, and therefore usually does not reflect the true underlying data structure – unless the clusters are well-separated. To overcome its limitations, we propose a new hierarchical clustering linkage criterion called Genie. Namely, our algorithm links two clusters in such a way that a chosen economic inequity measure (e.g., the Gini- or Bonferroni-index) of the cluster sizes does not increase drastically above a given threshold. The presented benchmarks indicate a high practical usefulness of the introduced method: it most often outperforms the Ward or average linkage in terms of the clustering quality while retaining the single linkage speed. The Genie algorithm is easily parallelizable and thus may be run on multiple threads to further speed up its execution. Its memory overhead is small: there is no need to precompute the complete distance matrix to perform the computations in order to obtain a desired clustering. It can be applied on arbitrary spaces equipped with a dissimilarity measure, e.g., on real vectors, DNA or protein sequences, images, rankings, informetric data, etc. A reference implementation of the algorithm has been included in the open source genie package for R.
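To give a feel for the inequity measure the linkage criterion relies on, here is the classical Gini coefficient in a few lines of base R (an illustrative sketch; the genie package uses its own optimised internal implementation):

```r
# Classical Gini coefficient of a non-negative vector: the mean absolute
# difference between all pairs, normalised by twice the mean. It is 0 for
# perfectly balanced values and approaches 1 as one value dominates.
gini <- function(x) {
  n <- length(x)
  sum(abs(outer(x, x, "-"))) / (2 * n * sum(x))
}

gini(c(5, 5, 5))   # 0   -- perfectly balanced "cluster sizes"
gini(c(1, 1, 28))  # 0.6 -- one dominating cluster
```

In the Genie linkage, a merge that would push this coefficient of the cluster sizes above a chosen threshold is disallowed, which prevents the formation of one huge cluster surrounded by singletons.
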
2016-03-09 new paper

Proc. IPMU'2016: 3 Papers Accepted

Three papers that I have co-authored have been accepted for the IPMU 2016 conference: Fitting aggregation functions to data: Part I – Linearization and regularization and Fitting aggregation functions to data: Part II – Idempotentization (co-authors: Maciej Bartoszuk, Gleb Beliakov, Simon James), as well as Fuzzy k-minpen clustering and k-nearest-minpen classification procedures incorporating generic distance-based penalty minimizers (co-author: Anna Cena).

1st paper:

Abstract. The use of supervised learning techniques for fitting weights and/or generator functions of weighted quasi-arithmetic means – a special class of idempotent and nondecreasing aggregation functions – to empirical data has already been considered in a number of papers. Nevertheless, there are still some important issues that have not been discussed in the literature yet. In the first part of this two-part contribution we deal with the concept of regularization, a quite standard technique from machine learning applied so as to increase the fit quality on test and validation data samples. Due to the constraints on the weighting vector, it turns out that quite different methods can be used in the current framework, as compared to regression models. Moreover, it is worth noting that so far fitting weighted quasi-arithmetic means to empirical data has only been performed approximately, via the so-called linearization technique. In this paper we consider exact solutions to such special optimization tasks and indicate cases where linearization leads to much worse solutions.

Keywords. Aggregation functions, weighted quasi-arithmetic means, least squares fitting, regularization, linearization
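For readers unfamiliar with the notion, a weighted quasi-arithmetic mean ginv(sum(w * g(x))) can be sketched in a few lines of base R (g is a continuous, strictly monotone generator and w a weighting vector summing to 1; this is an illustrative example, not code from the paper):

```r
# A weighted quasi-arithmetic mean: apply the generator g, take the
# weighted sum, and map back through the inverse generator ginv.
wqamean <- function(x, w, g, ginv) ginv(sum(w * g(x)))

# g = identity gives the weighted arithmetic mean:
wqamean(c(2, 4), c(0.5, 0.5), identity, identity)  # 3

# g = log gives the weighted geometric mean:
wqamean(c(1, 4), c(0.5, 0.5), log, exp)            # 2

# idempotence: aggregating identical inputs returns that very value
wqamean(c(7, 7, 7), rep(1/3, 3), log, exp)         # 7
```

Fitting such a function to data means choosing the weights w (and possibly the generator g) so that the aggregated outputs match empirical observations, subject to the constraints that w is non-negative and sums to 1.
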

2nd paper:

Abstract. The use of supervised learning techniques for fitting weights and/or generator functions of weighted quasi-arithmetic means – a special class of idempotent and nondecreasing aggregation functions – to empirical data has already been considered in a number of papers. Nevertheless, there are still some important issues that have not been discussed in the literature yet. In the second part of this two-part contribution we deal with a quite common situation in which we have inputs coming from different sources, describing a similar phenomenon, but which have not been properly normalized. In such a case, idempotent and nondecreasing functions cannot be used to aggregate them unless proper pre-processing is performed. The proposed idempotization method, based on the notion of B-splines, allows for an automatic calibration of independent variables. The introduced technique is applied in an R source code plagiarism detection system.

Keywords. Aggregation functions, weighted quasi-arithmetic means, least squares fitting, idempotence

3rd paper:

Abstract. We discuss a generalization of the fuzzy (weighted) k-means clustering procedure and point out its relationships with data aggregation in spaces equipped with arbitrary dissimilarity measures. In the proposed setting, a data set partitioning is performed based on the notion of points' proximity to generic distance-based penalty minimizers. Moreover, a new data classification algorithm, resembling the k-nearest neighbors scheme but less demanding in terms of computation time and memory, is introduced. Rich examples in complex data domains indicate the usability of the methods and aggregation theory in general.

Keywords. Fuzzy k-means algorithm, clustering, classification, fusion functions, penalty minimizers

2016-03-07 software

The genie Package for R

Genie, a new, fast, and outlier-resistant hierarchical clustering algorithm, is now available via the genie package for R (co-authors: Maciej Bartoszuk and Anna Cena). A detailed description of the algorithm will appear in a forthcoming paper of ours.
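A minimal usage sketch, assuming the package's hclust2() interface with its thresholdGini argument as documented on CRAN (argument names may differ across versions; the data set and cut level are illustrative):

```r
library(genie)

# Genie is a drop-in replacement for stats::hclust(): it takes a
# dissimilarity object and returns an object of class "hclust".
X <- as.matrix(iris[, 1:4])
h <- hclust2(dist(X), thresholdGini = 0.3)

# The resulting tree works with the usual tooling, e.g. cutree():
labels <- cutree(h, k = 3)
table(labels, iris$Species)
```
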
2015-12-31 new book

Data Fusion Book Now Available

My book Data Fusion: Theory, Methods, and Applications is now available.
2015-12-13 new paper

Accepted Paper in IEEE TFS

A short paper entitled H-index and other Sugeno integrals: Some defects and their compensation, by Radko Mesiar and Marek Gagolewski, has been accepted for publication in IEEE Transactions on Fuzzy Systems.
Abstract: The famous Hirsch index was introduced only ca. 10 years ago. Despite that, it is already widely used in many decision making tasks, e.g., in the evaluation of individual scientists, research grant allocation, or even production planning. It is known that the h-index is related to the discrete Sugeno integral and the Ky Fan metric introduced in the 1940s. The aim of this paper is to propose a few modifications of this index, as well as of other fuzzy integrals -- also on bounded chains -- that lead to a better discrimination of some types of data that are to be aggregated. All of the suggested compensation methods try to retain the simplicity of the original measure.
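As a reminder of the definition under discussion, the h-index of a vector of citation counts can be computed in one line of base R (an illustrative sketch, not code from the paper):

```r
# h-index: the largest h such that at least h papers have >= h citations.
# Sort the counts decreasingly and count how many positions i still
# satisfy citations[i] >= i.
h_index <- function(citations)
  sum(sort(citations, decreasing = TRUE) >= seq_along(citations))

h_index(c(10, 8, 5, 4, 3))  # 4: four papers with at least 4 citations each
h_index(c(25, 8, 5, 3, 3))  # 3
```
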
2015-12-01 new paper

Accepted Paper in European Physical Journal B

Agent-based model for the h-index – Exact solution by Żogała-Siudem B., Siudem G., Cena A., and Gagolewski M. has been accepted for publication in European Physical Journal B (DOI: 10.1140/epjb/e2015-60757-1).
Abstract: Hirsch's h-index is perhaps the most popular citation-based measure of scientific excellence. In 2013, G. Ionescu and B. Chopard proposed an agent-based model for this index that describes the publication and citation generation process in an abstract scientific community. With such an approach, one can simulate a single scientist's activity and, by extension, investigate the whole community of researchers. Even though this approach predicts the h-index from bibliometric data quite well, only a simulation-based solution was given. In this paper, we complete their results with exact, analytic formulas. What is more, thanks to our exact solution we are able to simplify the Ionescu-Chopard model, which allows us to obtain a compact formula for the h-index. Moreover, a simulation study designed to compare both the approximate and the exact solutions is included. The last part of this paper presents an evaluation of the obtained results on a real-world data set.
2015-11-28

IPMU 2016 Special Session:
Computational Aspects of Data Aggregation and Complex Data Fusion

We are happy to invite you to submit your contribution(s) to the special session entitled Computational Aspects of Data Aggregation and Complex Data Fusion within the 16th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2016) that will be held on June 20-24, 2016 in Eindhoven, The Netherlands.

Important dates:

  • Paper submission: January 8, 2016
  • Notification of acceptance/rejection: March 1, 2016
  • Camera-ready papers: March 31, 2016

The proceedings of IPMU 2016 will be published in the Communications in Computer and Information Science (CCIS) series with Springer. Papers must be prepared in the LNCS/CCIS one-column page format and may be up to 12 pages long in this LaTeX2e format. Submission details are available on the conference website.

Please feel free to disseminate this information to other researchers who may be interested in the session. Details on the session are available on the conference website.