2019-02-14 software

R package stringi 1.3.1 released

A new release of the R package stringi (one of the most frequently downloaded extensions on CRAN) is available. Check out the changelog for more information.


* [BACKWARD INCOMPATIBILITY] #335: A fix to #314 (by design) prevented the use
of the system ICU if the library had been compiled with `U_CHARSET_IS_UTF8=1`.
However, this is the default setting in `libicu`>=61. From now on, in such
cases the system ICU is used more eagerly, but `stri_enc_set()` issues
a warning stating that the default (UTF-8) encoding cannot be changed.

* [NEW FEATURE] #232: All `stri_detect_*` functions now have a `max_count`
argument that allows for, e.g., stopping at the first pattern occurrence.

* [NEW FEATURE] #338: `stri_sub_replace()` is now an alias for `stri_sub<-()`
which makes it much more easily pipable (@yutannihilation, @BastienFR).

* [NEW FEATURE] #334: Added missing `icudt61b.dat` to support big-endian
platforms (thanks to Dimitri John Ledkov @xnox).

* [BUGFIX] #296: The out-of-the-box build used to fail on CentOS 6;
`./configure` now falls back to `--disable-cxx11` more eagerly, at an early stage.

* [BUGFIX] #341: Fixed possible buffer overflows when calling `strncpy()`
from within ICU 61.

* [BUGFIX] #325: Made `./configure` more portable so that it works
under `/bin/dash` now.

* [BUGFIX] #319: Fixed overflow in `stri_rand_shuffle()`.

* [BUGFIX] #337: Empty search patterns in search functions (e.g.,
`stri_split_regex()` and `stri_count_fixed()`) used to raise
too many warnings.

2019-02-14 new paper

Piecewise linear approximation of fuzzy numbers: algorithms, arithmetic operations and stability of characteristics

A paper by Lucian Coroianu, Przemyslaw Grzegorzewski and myself, entitled Piecewise linear approximation of fuzzy numbers: algorithms, arithmetic operations and stability of characteristics, has been accepted for publication in Soft Computing.

Abstract. The problem of the piecewise linear approximation of fuzzy numbers giving outputs nearest to the inputs with respect to the Euclidean metric is discussed. The results given in Coroianu et al. (Fuzzy Sets Syst 233:26–51, 2013) for the 1-knot fuzzy numbers are generalized for arbitrary n-knot (n>=2) piecewise linear fuzzy numbers. Some results on the existence and properties of the approximation operator are proved. Then, the stability of some fuzzy number characteristics under approximation as the number of knots tends to infinity is considered. Finally, a simulation study concerning the computer implementations of arithmetic operations on fuzzy numbers is provided. The suggested concepts are illustrated by examples and algorithms ready for practical use. In this way, we build a bridge between theory and the applications that are much needed in real-world problems.

2019-01-16 new paper

Supervised Learning to Aggregate Data with the Sugeno Integral

Supervised Learning to Aggregate Data with the Sugeno Integral, co-authored with Simon James and Gleb Beliakov, will appear in IEEE Trans. Fuzzy Systems.

Abstract. The problem of learning symmetric capacities (or fuzzy measures) from data is investigated toward applications in data analysis and prediction as well as decision making. Theoretical results regarding the solution minimizing the mean absolute error are exploited to develop an exact branch-refine-and-bound-type algorithm for fitting Sugeno integrals (weighted lattice polynomial functions, max-min operators) with respect to symmetric capacities. The proposed method turns out to be particularly suitable for acting on ordinal data. In addition to providing a model that can be used for the general data regression task, the results can be used, among others, to calibrate generalized h-indices to bibliometric data.
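For a symmetric capacity, the measure of a set depends only on its cardinality, which gives the Sugeno integral a simple closed form: sort the inputs decreasingly and take the maximal min of each value and the capacity of the corresponding top-i set. A minimal sketch of that formula (the function name and the h-index example are illustrative, not taken from the paper):

```python
def sugeno_symmetric(x, w):
    """Sugeno integral of x w.r.t. a symmetric capacity.

    For a symmetric capacity, the measure of a set depends only on its
    cardinality: w[i] is the capacity of any subset of size i + 1.
    """
    xs = sorted(x, reverse=True)  # x_(1) >= x_(2) >= ... >= x_(n)
    return max(min(xs[i], w[i]) for i in range(len(xs)))

# The classical h-index is a special case: with the counting measure
# w = [1, 2, ..., n], the integral of a citation vector equals h.
citations = [10, 8, 5, 4, 3]
h = sugeno_symmetric(citations, list(range(1, len(citations) + 1)))  # h == 4
```

The paper's contribution lies in the converse problem, i.e., fitting the capacity values `w` to data so as to minimize the mean absolute error; the sketch only shows the evaluation step.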

2018-12-11 new Ph.D.

Anna Cena's Ph.D. defense

My Ph.D. student, Anna Cena, has defended her doctoral thesis, Adaptive hierarchical clustering algorithms based on data aggregation methods. Yay!

2018-10-26 new Ph.D.

Maciej Bartoszuk's Ph.D. defense

My Ph.D. student, Maciej Bartoszuk, has defended his doctoral thesis (cum laude!), A source code similarity assessment system for functional programming languages based on machine learning and data aggregation methods. Congratulations!

2018-07-02 new paper

The efficacy of league formats in ranking teams

The efficacy of league formats in ranking teams has been accepted for publication in Statistical Modelling. Joint work with Jan Lasek.

Abstract. The efficacy of different league formats in ranking teams according to their true latent strength is analysed. To this end, a new approach for estimating attacking and defensive strengths based on the Poisson regression for modelling match outcomes is proposed. Various performance metrics are estimated reflecting the agreement between the latent teams' strength parameters and their final rank in the league table. The tournament designs studied here are used in the majority of European top-tier association football competitions. Based on numerical experiments, it turns out that a two-stage league format comprising a triple round-robin tournament together with an extra single round-robin is the most efficacious setting. In particular, it is the most accurate in selecting the best team as the winner of the league. Its efficacy can be enhanced by setting the number of points allocated for a win to two (instead of the three currently in effect in association football).
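The generative side of such a model can be sketched as follows: each team i has an attacking strength att_i and a defensive strength def_j, and the number of goals it scores against team j is Poisson-distributed with rate exp(att_i - def_j), plus a home-advantage term. A toy double round-robin simulator under these assumptions (all names and parameter values below are illustrative, not taken from the paper):

```python
import math
import random

def rpois(lam, rng):
    # Poisson sampling via Knuth's multiplication method
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_league(att, dfn, home_adv=0.3, win_pts=3, seed=0):
    """Double round-robin: every ordered pair (i, j), with i hosting j."""
    rng = random.Random(seed)
    n = len(att)
    pts = [0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            gi = rpois(math.exp(att[i] - dfn[j] + home_adv), rng)  # home goals
            gj = rpois(math.exp(att[j] - dfn[i]), rng)             # away goals
            if gi > gj:
                pts[i] += win_pts
            elif gj > gi:
                pts[j] += win_pts
            else:
                pts[i] += 1
                pts[j] += 1
    return pts

pts = simulate_league(att=[0.8, 0.3, -0.2, -0.6], dfn=[0.5, 0.1, -0.1, -0.4])
```

The paper's estimation goes in the opposite direction, fitting the strength parameters to observed scores via Poisson regression; the sketch only shows the simulation step used to compare league formats.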

2018-05-23 software

Python package genieclust 0.1a2 released

An alpha release of the Python package implementing our fast and robust Genie clustering algorithm is now available on PyPI. Check out the GitHub repository for more information and tutorials.

2018-05-11 invited talk

Invited Plenary Lecture @ ISCAMI 2018

Today, at the International Student Conference on Applied Mathematics and Informatics – ISCAMI 2018 held in Malenovice, Czechia, I gave a lecture entitled Clustering on MSTs.

Abstract. Cluster analysis is one of the most commonly applied unsupervised machine learning techniques. Its aim is to automatically discover an underlying structure of a data set represented by a partition of its elements: mutually disjoint and nonempty subsets are determined in such a way that observations within each group are "similar" and entities in distinct clusters "differ" as much as possible from each other.

It turns out that two state-of-the-art clustering algorithms, namely the Genie and HDBSCAN* methods, can be computed based on the minimum spanning tree (MST) of the pairwise dissimilarity graph. Both are not only resistant to outliers and produce high-quality partitions, but are also relatively fast to compute.
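The MST connection is easiest to see on the simplest member of this family: single-linkage clustering, which amounts to computing the MST and removing its k-1 heaviest edges (Genie additionally constrains the merges by a Gini index of cluster sizes, and HDBSCAN* works on mutual-reachability distances; neither is reproduced here). A minimal, illustrative sketch on Euclidean data:

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; returns (d, i, j) edges."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    best = {j: (dist(0, j), 0) for j in range(1, n)}  # cheapest link into the tree
    edges = []
    while len(in_tree) < n:
        j = min(best, key=lambda v: best[v][0])
        d, i = best.pop(j)
        edges.append((d, i, j))
        in_tree.add(j)
        for v in best:  # relax distances through the newly added vertex
            dv = dist(j, v)
            if dv < best[v][0]:
                best[v] = (dv, j)
    return edges

def single_linkage(points, k):
    """Drop the k-1 heaviest MST edges; connected components become clusters."""
    edges = sorted(mst_edges(points))[:len(points) - k]
    parent = list(range(len(points)))
    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for _, i, j in edges:
        parent[find(i)] = find(j)
    return [find(v) for v in range(len(points))]

labels = single_linkage([(0, 0), (0, 1), (10, 0), (10, 1)], k=2)
```

Here the two tight pairs of points end up in separate clusters because the single long inter-pair edge of the MST is the one that gets cut.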

The aim of this tutorial is to discuss some key issues of hierarchical clustering and explore their relations with graph and data aggregation theory.