publications
2025
- [arXiv] gfnx: Fast and Scalable Library for Generative Flow Networks in JAX
  Daniil Tiapkin, Artem Agarkov, Nikita Morozov, and 4 more authors
  arXiv preprint arXiv:2511.16592, 2025
In this paper, we present gfnx, a fast and scalable package for training and evaluating Generative Flow Networks (GFlowNets), written in JAX. gfnx provides an extensive set of environments and metrics for benchmarking, accompanied by single-file implementations of the core objectives for training GFlowNets. We include synthetic hypergrids, multiple sequence generation environments with various editing regimes, and specialized reward designs for molecular generation, phylogenetic tree construction, Bayesian structure learning, and sampling from the Ising model energy. Across different tasks, gfnx achieves significant wall-clock speedups over PyTorch-based baselines (such as the torchgfn library) and author implementations: for example, up to a 55x speedup on CPU-based sequence generation environments and up to an 80x speedup on the GPU-based Bayesian network structure learning setup. Our package provides a diverse set of benchmarks and aims to standardize empirical evaluation and accelerate research on and applications of GFlowNets.
@article{tiapkin2025gfnx,
  title   = {gfnx: Fast and Scalable Library for Generative Flow Networks in JAX},
  author  = {Tiapkin, Daniil and Agarkov, Artem and Morozov, Nikita and Maksimov, Ian and Tsyganov, Askar and Gritsaev, Timofei and Samsonov, Sergey},
  year    = {2025},
  url     = {https://arxiv.org/abs/2511.16592},
  journal = {arXiv preprint arXiv:2511.16592},
}

- [AAAI] Matrix-Free Two-to-Infinity and One-to-Two Norms Estimation
  Askar Tsyganov, Evgeny Frolov, Sergey Samsonov, and 1 more author
  arXiv preprint arXiv:2508.04444, 2025
In this paper, we propose new randomized algorithms for estimating the two-to-infinity and one-to-two norms in a matrix-free setting, using only matrix-vector multiplications. Our methods are based on appropriate modifications of Hutchinson’s diagonal estimator and its Hutch++ version. We provide oracle complexity bounds for both modifications. We further illustrate the practical utility of our algorithms for Jacobian-based regularization in deep neural network training on image classification tasks. We also demonstrate that our methodology can be applied to mitigate the effect of adversarial attacks in the domain of recommender systems.
@article{tsyganov2025matrix,
  title   = {Matrix-Free Two-to-Infinity and One-to-Two Norms Estimation},
  author  = {Tsyganov, Askar and Frolov, Evgeny and Samsonov, Sergey and Rakhuba, Maxim},
  year    = {2025},
  url     = {https://arxiv.org/abs/2508.04444},
  journal = {arXiv preprint arXiv:2508.04444},
}
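To illustrate the matrix-free setting described in the abstract, here is a minimal sketch of the classical Hutchinson diagonal estimator applied to M = A Aᵀ: since the two-to-infinity norm of A equals the largest row 2-norm, i.e. the square root of the largest diagonal entry of A Aᵀ, it can be estimated from matrix-vector products alone. This is a generic textbook-style sketch, not the algorithm from the paper (the paper's modifications and its Hutch++ variant are not reproduced here); the function name and signature are illustrative.

```python
import numpy as np

def two_to_infinity_estimate(matvec, rmatvec, n_rows, num_probes=2000, seed=None):
    """Matrix-free estimate of ||A||_{2->inf}, the largest row 2-norm of A.

    Uses Hutchinson's diagonal estimator on M = A A^T: for Rademacher
    probes v, E[v * (M v)] = diag(M), and ||A||_{2->inf} = sqrt(max_i M_ii).
    Only matvec (x -> A x) and rmatvec (y -> A^T y) are required.
    """
    rng = np.random.default_rng(seed)
    diag_est = np.zeros(n_rows)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n_rows)  # Rademacher probe
        diag_est += v * matvec(rmatvec(v))        # v * (A A^T v), elementwise
    diag_est /= num_probes
    # diag(A A^T) is nonnegative; clip small negative estimation noise.
    return np.sqrt(np.max(np.clip(diag_est, 0.0, None)))
```

The one-to-two norm (largest column 2-norm) is handled symmetrically by probing Aᵀ A, i.e. swapping the roles of `matvec` and `rmatvec`.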