Issue № 1/2026
Information Technology and Security
1 OPTIMIZING THE PERFORMANCE OF NODE.JS SERVICES UNDER HIGH NETWORK LOAD
Aluev A.
Abstract: This article examines the architecture and execution characteristics of Node.js services operating under high network load. Typical performance degradation scenarios associated with event loop saturation, exhaustion of connection pools, and increased contention for external resources are analyzed. The role of monitoring and profiling tools, as well as the formalization of operational requirements through Service Level Objectives and Service Level Agreements, is emphasized. Particular attention is paid to optimization techniques, including reducing serialization overhead, applying backpressure, offloading execution flows, and adopting asynchronous models. Methods for verification in load-testing environments and reproducibility criteria used in industrial practice are also described. The presented overview summarizes applicable techniques for improving the resilience and predictability of Node.js services as load increases.
Keywords: Node.js, performance optimization, event loop, Service Level Objectives, Service Level Agreements, high-load systems.
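The backpressure technique listed in the abstract can be sketched in a few lines (an illustrative example, not code from the article; the tiny highWaterMark is an assumption chosen to make the effect trigger quickly):

```typescript
// A slow Writable sink: write() returns false once its internal buffer
// fills past highWaterMark, signaling the producer to pause (backpressure).
import { Writable } from "node:stream";

const slowSink = new Writable({
  highWaterMark: 4, // deliberately tiny buffer (in bytes) for the demo
  write(_chunk, _encoding, callback) {
    setTimeout(callback, 10); // simulate a slow downstream consumer
  },
});

let backpressureSignals = 0;
for (let i = 0; i < 10; i++) {
  // A well-behaved producer would stop here and resume on the 'drain'
  // event; this demo only counts how often the sink pushed back.
  if (!slowSink.write("x")) backpressureSignals++;
}
console.log(backpressureSignals > 0); // true: the sink asked us to slow down
```

Ignoring the `false` return value is one of the degradation scenarios the abstract alludes to: buffered chunks accumulate in memory while the consumer lags.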
2 DATA REPLICATION IN DISTRIBUTED FINANCIAL SYSTEMS: THE CDC PATTERN AS A TOOL FOR ENHANCING FLEXIBILITY AND ECONOMIC SCALABILITY FOR FINANCIAL ORGANIZATIONS
Kovalenko A.
Abstract: This article examines the use of the Change Data Capture (CDC) pattern for implementing efficient and scalable data replication in distributed financial systems. It analyzes CDC architecture based on PostgreSQL, Apache Kafka, and Debezium within the context of high-throughput payment platforms. The study highlights that CDC enables near real-time data synchronization between system components without placing additional load on the primary transactional database. This approach reduces latency and improves data consistency. Special attention is given to the economic benefits of adopting CDC, including reduced total cost of ownership, simplified system maintenance, and the elimination of resource-intensive ETL solutions.
Keywords: CDC, Kafka, Debezium, data replication, payment systems, scalability, distributed architectures.
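A CDC pipeline of the kind described above is typically wired by registering a Debezium connector with Kafka Connect; the sketch below is illustrative only (host, database, table, and slot names are placeholders, and exact property names vary by Debezium version):

```json
{
  "name": "payments-cdc-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "pg-primary.internal",
    "database.port": "5432",
    "database.user": "cdc_reader",
    "database.password": "********",
    "database.dbname": "payments",
    "topic.prefix": "payments",
    "table.include.list": "public.transactions,public.ledger_entries",
    "slot.name": "debezium_payments"
  }
}
```

Because Debezium reads a PostgreSQL logical replication slot rather than querying tables, change events reach Kafka without adding query load to the primary, which is the property the abstract highlights.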
3 MANAGEMENT OF ARCHITECTURAL DECISIONS IN LONG-LIVED R&D PROJECTS: EXPERIENCE OF INDUSTRIAL EMBEDDED SYSTEMS
Kolesnikova D.
Abstract: The article examines approaches to managing technical decisions within R&D projects focused on the development of industrial embedded systems with long life cycles. The impact of changes in software and hardware components on architectural stability is analyzed. The importance of an integrated approach to decision selection, documentation, and traceability throughout the entire operational period is emphasized. Control mechanisms are described, including technical reviews, the use of Architecture Decision Records, and the application of quantitative metrics to assess the maturity of the project model. Particular attention is paid to issues of support, modernization, and continuity of engineering practices under conditions of long-term maintenance. The study concludes that formalization and integration of architectural management processes are essential elements of the system life cycle.
Keywords: system architecture, embedded systems, R&D projects, development, architectural decisions, technical documentation.
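Architecture Decision Records such as those mentioned above commonly follow Michael Nygard's lightweight template; the sketch below is a hypothetical example (the decision and its content are invented for illustration):

```markdown
# ADR-012: Pin the real-time kernel version for the control unit

## Status
Accepted

## Context
The firmware must remain buildable and certifiable over a multi-year
service life; uncontrolled kernel upgrades have previously broken
driver ABIs in the field.

## Decision
Freeze the kernel at a long-term-support release; upgrades require a
technical review and a regression pass on the hardware-in-the-loop rig.

## Consequences
Security patches must be backported manually; in exchange, the build
stays reproducible and the certification baseline remains valid.
```

Kept under version control next to the code, such records provide the traceability over the operational period that the abstract emphasizes.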
4 CODE REVIEW PRACTICES AS A TOOL FOR ENSURING SOFTWARE QUALITY IN AGILE TEAMS: AN ANALYTICAL REVIEW
Shevchenko V.
Abstract: This article analyzes code review practices as a key tool for ensuring software quality in agile teams. The growing complexity of web products, the widespread adoption of CI/CD and DevOps approaches, and the rapid integration of AI tools into the development process underscore the relevance of this work. The novelty of this article lies in its comprehensive analytical review of modern empirical research on peer code review, with a focus on agile contexts, web product architecture, and AI-supported review tools. The article describes the evolution of modern code review practices, the place of code review in the development lifecycle, and its relationship with quality metrics and technical debt. The results of industrial and academic research, as well as the experience of implementing LLM assistants, are examined. Particular attention is paid to the impact of systematic code review on maintainability, architectural decision management, and the resilience of CI/CD pipelines. The goal of this work is to summarize existing practices and empirical data, and to formulate recommendations for agile teams developing web products. The article will be helpful for architects, team leads, and quality engineers responsible for building code review processes integrated with testing, DevOps practices, and AI tools.
Keywords: code review, peer review, agile development, software quality, technical debt, modern code review practices.
5 ARCHITECTURAL AND SYSTEM-LEVEL ASPECTS OF LINUX ENVIRONMENT OPTIMIZATION FOR HIGH-FREQUENCY TRADING SYSTEMS
Otkidach I.
Abstract: This paper investigates the requirements for an operating system when running applications with extremely low latency constraints typical of high-frequency trading systems. The influence of hardware and architectural characteristics of the computing platform is studied. The effects of power management mechanisms, the use of huge pages, memory pinning, and data locality as means of reducing memory access latency are considered. System-level mechanisms of Linux that affect execution determinism are analyzed, including CPU isolation techniques and interrupt distribution. Tools for latency measurement and profiling based on perf, ftrace, and bpftrace are also examined.
Keywords: high-frequency trading, low latency computing, deterministic execution, Linux optimization, operating system tuning, CPU isolation.
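The CPU-isolation and memory techniques surveyed above might look like this on a Linux host (a hedged sketch: core numbers, the IRQ number, and the binary name are placeholders, and all commands require root):

```shell
# Kernel boot parameters (e.g. appended to GRUB_CMDLINE_LINUX): reserve
# cores 2-5 for the latency-critical process, silence their scheduler tick
# and offload their RCU callbacks.
#   isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5

# Reserve 1024 huge pages to reduce TLB pressure on hot data structures.
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages

# Steer a NIC interrupt (IRQ 24 is a placeholder) onto a housekeeping core.
echo 0 | sudo tee /proc/irq/24/smp_affinity_list

# Pin the critical binary to an isolated core with a real-time FIFO priority.
sudo taskset -c 2 chrt -f 80 ./trading-engine

# Inspect scheduling latency with perf's scheduler subcommands.
sudo perf sched record -- sleep 10 && sudo perf sched latency
```

Each knob trades general-purpose throughput for determinism, which is why the abstract frames these as system-level mechanisms to be measured rather than applied blindly.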
6 METHODOLOGICAL FOUNDATIONS OF DESIGNING AI PLATFORMS FOR CORPORATE B2B USERS
Miloserdov A.
Abstract: This article presents a methodological framework for designing AI platforms for corporate B2B users, based on a 12-month deployment across 40 healthcare organizations. The framework integrates four principles (platform orientation, managed evolution, transparency and accountability, and measurable applied value), operationalized through a five-level AI Platform Maturity Framework with quantifiable acceptance criteria. It formalizes requirements under high regulatory pressure, distributed responsibility, and reproducibility constraints, and proposes a structured sequence for user scenario and interface design. Comparative analysis shows that SAFe, TOGAF, and MLOps address these challenges only partially, whereas the proposed approach offers an integrated solution. Outcomes include a 32% reduction in diagnostic decision time, a 20% decrease in false diagnoses, and revenue growth from zero to $2.5M ARR within 12 months, confirming practical validity.
Keywords: AI platform, corporate users, B2B, design methodology, digital transformation, user scenarios, platform maturity, empirical validation, regulated environments.
7 THE ROLE OF MODERN LIBRARIES, FRAMEWORKS, AND TOOLING SOLUTIONS IN ACCELERATING ANDROID APPLICATION DEVELOPMENT
Ponomarev E.
Abstract: The article examines the role of modern libraries and frameworks in accelerating Android application development. It analyzes the theoretical and technological preconditions for improving engineering process productivity in the context of the growing complexity of the mobile ecosystem. It is shown that the use of modern tools contributes to reducing boilerplate code, accelerating build processes, simplifying user interface design, and increasing the predictability of architectural solutions. Particular attention is paid to the impact of libraries and frameworks on development process productivity, as well as to the limitations and risks associated with their use. It is concluded that these tools act as a system-forming factor in improving the efficiency of the full lifecycle of Android application development and maintenance.
Keywords: Android development, libraries, frameworks, mobile applications, software architecture, development productivity.
8 APPLICATION OF ARTIFICIAL INTELLIGENCE TO USER EXPERIENCE PERSONALIZATION IN E-COMMERCE SYSTEMS
Perelekhov I.
Abstract: The article examines the application of artificial intelligence to user experience personalization in e-commerce systems. It analyzes the theoretical foundations of personalization, architectural models for implementing AI solutions, and their impact on key business metrics. It is shown that intelligent personalization covers interface design, content, search results, recommendation mechanisms, and user communications, contributing to higher conversion rates, greater browsing depth, increased repeat purchases, and stronger audience retention. Particular attention is paid to the conditions for the effective implementation of AI solutions, including data quality, the maturity of digital architecture, model interpretability, and the economic feasibility of their use. It is concluded that AI personalization should be integrated into e-commerce platforms in a controlled and testable manner.
Keywords: artificial intelligence, e-commerce, personalization, user experience, recommendation systems, digital platform, customer retention.
9 METHODS FOR IMPROVING THE EFFICIENCY OF AI CLUSTER OPERATIONS BASED ON TELEMETRY ANALYSIS, SLI/SLO, AND INCIDENT MANAGEMENT
Khlystun M.
Abstract: The paper explores approaches to improving the efficiency and resilience of AI cluster operations based on the use of telemetry, SLI/SLO, and incident management mechanisms. It is shown that the specificity of AI clusters is determined by the combination of heterogeneous workloads (training, fine-tuning, and inference), strong interdependence of compute, network, and storage subsystems, and sensitivity to local infrastructure disruptions. It is substantiated that telemetry and SLI/SLO provide a measurable basis for assessing service quality and early detection of deviations, while incident management and operational analytics ensure reduced recovery time, lower failure recurrence, and improved platform resilience. The practical significance of the study lies in proposing stages for implementing an integrated reliability management framework for AI clusters.
Keywords: AI clusters, telemetry, service level indicators, service level objectives, incident management, operational analytics, computing platform reliability.
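The SLI/SLO mechanism described above reduces, in its simplest form, to comparing a measured indicator against an objective and tracking the remaining error budget (a generic textbook formulation, not the paper's own model; the numbers are invented):

```typescript
// Compute an availability SLI from event counts and report what fraction
// of the SLO's error budget is still unspent.
function errorBudgetRemaining(
  goodEvents: number,
  totalEvents: number,
  sloTarget: number // e.g. 0.999 for a 99.9% availability objective
): number {
  const sli = goodEvents / totalEvents; // measured service level indicator
  const budget = 1 - sloTarget;         // failure fraction the SLO permits
  const burned = 1 - sli;               // failure fraction actually observed
  return (budget - burned) / budget;    // fraction of the budget left
}

// 10,000 inference requests, 5 failed, against a 99.9% SLO:
console.log(errorBudgetRemaining(9995, 10000, 0.999)); // ≈ 0.5: half remains
```

A negative result means the objective has been breached, which is exactly the early-deviation signal the abstract assigns to SLI/SLO monitoring.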
10 ARCHITECTURAL CHALLENGES OF SEMANTIC INTEROPERABILITY IN THE INTEGRATION OF HETEROGENEOUS ELECTRONIC MEDICAL RECORD SYSTEMS
Joshi M.
Abstract: The article examines architectural issues related to semantic interoperability in the integration of heterogeneous electronic medical record systems within a fragmented digital healthcare infrastructure. The study is relevant because formal compatibility among medical information systems does not ensure the preservation of the clinical meaning of data. The purpose of the article is to identify the key architectural obstacles that hinder the achievement of sustainable semantic interoperability in the integration of heterogeneous electronic medical records. The scientific novelty of the work lies in the systematization of the principal architectural challenges and in the proposal of an architectural approach to overcoming them. This approach is based on the use of a canonical data model, preliminary normalization, governed terminological harmonization, traceability of transformations, and the phased incorporation of changes into an already established clinical picture. Among the main conclusions, it is demonstrated that the most significant risks are associated with terminological heterogeneity, differences in data granularity, loss of context, temporal ambiguity, and the conflation of technical updates with actual clinical events. The article will be useful to researchers in medical informatics, architects of digital healthcare platforms, developers of medical information systems, and specialists in clinical data governance.
Keywords: semantic interoperability, electronic medical records, data integration, heterogeneous systems, canonical data model.
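The canonical-data-model approach proposed above can be hinted at with a minimal normalization step (all field names, the mapping table, and the terminology code shown are illustrative assumptions, not the article's model):

```typescript
// A canonical observation record: harmonized code, explicit unit, resolved
// timestamp, and provenance for traceability of the transformation.
interface CanonicalObservation {
  code: string;          // harmonized terminology code (e.g. a LOINC code)
  value: number;
  unit: string;
  effectiveTime: string; // ISO 8601, reducing temporal ambiguity
  sourceSystem: string;  // provenance: which EMR produced the raw record
}

function normalize(
  raw: { test: string; result: string; when: string },
  system: string
): CanonicalObservation {
  // A governed terminology mapping would live here; one assumed entry shown.
  const codeMap: Record<string, { code: string; unit: string }> = {
    GLU: { code: "2345-7", unit: "mg/dL" }, // hypothetical local-to-LOINC map
  };
  const mapped = codeMap[raw.test];
  if (!mapped) throw new Error(`no canonical mapping for ${raw.test}`);
  return {
    code: mapped.code,
    value: parseFloat(raw.result),
    unit: mapped.unit,
    effectiveTime: new Date(raw.when).toISOString(),
    sourceSystem: system,
  };
}
```

Rejecting unmapped local codes instead of passing them through is one concrete way the governed harmonization described in the abstract prevents silent loss of clinical meaning.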