The article assesses the need to synthesize realistic terrain landscapes for training complexes (TC) in order to improve the training of military equipment crews and the quality of interaction between the operator and the TC. Aspects that argue for conducting training on a synthesized landscape of a given area are identified. The source data containing geospatial information, and methods of working with them, are analyzed. The GDAL/OGR library is used for processing the geospatial data, and the restriction area is specified in the KML format. The cartographic projections used to display geospatial data are described. The digital elevation model (DEM) employed is the product of the Shuttle Radar Topography Mission (SRTM), a radar survey covering most of the globe; SRTM data are organized as individual raster cells, or tiles. For training unit personnel, a terrain section smaller than the area covered by a single SRTM tile is usually sufficient.
An algorithm is developed and implemented as a program that processes the DEM data and produces an SRTM heightmap in the Mercator projection (elliptical or Web). For expert verification, the heightmap is superimposed on an image of the Earth's surface in a raster editor; practical examples show that the overlaid satellite image coincides with the transformed heightmap, confirming the correctness of the described method and its implementation.
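The core of the reprojection step can be sketched without GDAL, since the forward spherical ("Web") Mercator formulas and the SRTM tile layout are standard; the article's own program relies on GDAL/OGR (where a warp to EPSG:3857 with a KML cutline would typically be done via `gdal.Warp`). The function names below are illustrative, not the article's.

```python
import math
import struct

R = 6378137.0  # semi-major axis used by spherical ("Web") Mercator, EPSG:3857

def lonlat_to_webmercator(lon, lat):
    """Forward Web Mercator projection of geographic coordinates (degrees)."""
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

def srtm1_height(tile_bytes, lat, lon, lat0, lon0):
    """Elevation (m) from a 1-arc-second SRTM tile: 3601x3601 big-endian
    int16 samples, rows running from north to south; (lat0, lon0) is the
    tile's south-west corner."""
    row = round((lat0 + 1 - lat) * 3600)
    col = round((lon - lon0) * 3600)
    (height,) = struct.unpack_from(">h", tile_bytes, 2 * (row * 3601 + col))
    return height

print(lonlat_to_webmercator(0.0, 0.0))  # approximately (0, 0)
```

The elliptical variant of the Mercator projection differs only in the latitude formula, which accounts for the WGS84 eccentricity.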
The article analyzes the main provisions of the General Data Protection Regulation (GDPR) and compares them with the Law of Ukraine 'On Personal Data Protection'. The advantages of the Regulation are identified, along with the need to adopt a new Ukrainian law that would regulate data protection at the European level. The main provisions that should be implemented in the future legal act are defined: personal data, controller, data processor, adequacy of protection, etc. It is also necessary to specify the rights, obligations, functions, competence and purpose of activity that follow from introducing the notions of controller and processor. An important aspect is fines for violating personal data protection legislation: the Regulation provides for several types of violations and the corresponding financial sanctions, and liability for breaching personal data protection standards ensures an effective mechanism for implementing the Regulation. The article also analyzes the notion of 'adequacy of protection': the list of sources checked for 'adequacy' is identified, the main 'adequacy' criteria that a state has to meet are specified, and the term for undergoing the compliance inspection is defined, together with the criteria states should satisfy in order to undergo it. It is also necessary to define a control mechanism in the sphere of personal data protection; the Regulation provides clear criteria for such a mechanism, which should be implemented not only on the territory of Ukraine. The existing provisions of the Law are analyzed, a comparative analysis of the term 'privacy policy' is carried out, gaps in Ukrainian legislation are studied, and the compliance of a standard Ukrainian consent to personal data processing with the Regulation is assessed.
Big Data technologies have great prospects for application not only in the economy and business, but also in public administration. The article shows the possibility of using Big Data technologies both to analyze the performance of government bodies at different levels and to ensure effective management of the spatial and economic development of a region or city. In the first case, Big Data is used to assess public opinion about the effectiveness of measures implementing socio-economic policy and to monitor public opinion about the activities of state authorities. In the second case, it serves the development of infrastructure in the transition from quantitative to qualitative development of territories: determining the basic needs and attitudes of the population, monitoring the use of budget funds in support of particular industries, evaluating migration processes, optimizing various spheres of the regional economy, improving urban life by optimizing transport flows, identifying threats to the economic security of the region, and identifying and taking into account the possible consequences of changes in the natural and socio-economic environment of the region and their impact. Six stages of the Big Data application process are highlighted and described for identifying priority areas of entrepreneurial activity in territories of long-term development. The process of forming and adopting managerial decisions on the spatial development of a region (city) using Big Data technologies is complex and multilateral and consists of several stages. A general scheme is described, with a detailed account of the three levels of Big Data infrastructure: data collection; data storage and processing; and analysis and use.
A detailed description is given of the wastewater generated at various primary oil refining units (electric dehydrators, atmospheric and vacuum pipe stills), and the compliance of the effluent quality characteristics with the requirements of process regulations and the current regulatory documents governing the activities of oil refineries is assessed. Monitoring and field measurements have established that pollutant concentrations in the wastewater of primary oil processing units often exceed the established in-plant standards and fall short of the quality indicators of the best available technologies and the recommendations specified in the current regulatory documents. It was found that identical technological processes, even within the same refinery processing the same raw materials, differ significantly in specific wastewater discharge, which indicates the imperfection of the process units and the need for their modernization based on "best practices" in the oil refining industry. It is substantiated that the existing primary oil refining processes have significant potential for reducing the amount of effluent and improving the level of its treatment and reuse. The data obtained make it possible to assess the influence of the discharge sources of individual process units on the quality of water entering the plant's general treatment facilities, and to outline practical measures whose implementation will bring refinery wastewater discharge into compliance with legislative requirements in the most economical way and, on the whole, improve the economic performance and environmental safety of the oil refining industry.
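The monitoring comparison described above, measured concentrations checked against in-plant limits, can be sketched as a simple data check. All pollutant names and numbers below are hypothetical, not taken from the study.

```python
# Illustrative check of measured pollutant concentrations against in-plant
# limits, in the spirit of the monitoring described above. The parameter
# names and all numeric limits are invented for this sketch.
IN_PLANT_LIMITS_MG_L = {"oil products": 25.0, "phenols": 0.05, "sulfides": 1.0}

def exceedances(measured_mg_l, limits=IN_PLANT_LIMITS_MG_L):
    """Return {pollutant: measured/limit} for every limit that is exceeded."""
    return {p: round(c / limits[p], 2)
            for p, c in measured_mg_l.items()
            if p in limits and c > limits[p]}

sample = {"oil products": 40.0, "phenols": 0.04, "sulfides": 2.5}
print(exceedances(sample))  # {'oil products': 1.6, 'sulfides': 2.5}
```

A report built this way directly shows by what factor each unit's effluent exceeds its standard, which is the comparison the field measurements support.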
Modern digital innovation technologies such as blockchain, artificial intelligence and social networks, combined with methods of data collection and processing, open new opportunities at several levels of economic relations. In the context of the digitalization of the economy, new forms of cooperation are becoming relevant, with a simultaneous reduction in the role of intermediaries. The growing number of electronic communication devices will lead to new forms of cooperation in which borders and distances are no obstacle to combining human efforts and resources, while information exchange increases and data quality requirements become stricter. Decentralized autonomous organizations (hereinafter DAO) are becoming a reliable and effective substitute for the institution of economic mediation and traditional forms of commercial cooperation. Despite the attractive optimization of a number of processes due to digital freedom and the absence of bureaucracy, issues related to protocol security, legal uncertainty and the current complexity of the internal mechanism remain unresolved. The capabilities of blockchain technology for working with digital data enable a new type of economic cooperation in which smart contracts, as a structural part of a DAO, provide a transparent, efficient, fair and democratic system of collaboration. In this article, we offer an overview of DAO technology as digital collaboration for big data, focusing on current approaches, opportunities and future directions. We give a brief overview of blockchain and DAO technology and the characteristics they need for working with big data, of various DAO services for data collection, storage, analysis, confidentiality and use, as well as of open problems and directions for further research.
Blockchain technologies have the potential for effective big data analytics through enhanced security and privacy, an effective management system, with ideas fully protected by blockchain, and joint ...
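The governance mechanism sketched above, smart contracts executing token-weighted member decisions, can be illustrated with a toy model. This is a conceptual sketch, not real smart-contract code; the class name, quorum and balances are invented.

```python
# A toy sketch of DAO-style proposal voting: token-weighted votes, with the
# proposal passing once more than a quorum share of all tokens votes in
# favor. Real DAOs implement this as an on-chain smart contract.
class ToyDAO:
    def __init__(self, token_balances, quorum=0.5):
        self.balances = dict(token_balances)   # member -> voting tokens held
        self.quorum = quorum
        self.votes = {}                        # member -> True (for) / False

    def vote(self, member, in_favor):
        if member in self.balances:            # only token holders may vote
            self.votes[member] = in_favor

    def passed(self):
        total = sum(self.balances.values())
        yes = sum(self.balances[m] for m, v in self.votes.items() if v)
        return yes / total > self.quorum       # token-weighted majority rule

dao = ToyDAO({"alice": 60, "bob": 30, "carol": 10})
dao.vote("alice", True)
dao.vote("bob", False)
print(dao.passed())  # True: 60 of 100 tokens in favor exceeds the 50% quorum
```

The transparency claimed for DAOs comes from the fact that the balances, votes and the `passed` rule are all publicly verifiable on-chain rather than enforced by an intermediary.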
Big data is a component of the Fourth Industrial Revolution. The deep penetration of digital technology has turned data into an essential component of the production process. Data are generated automatically by machines in the course of operation and during interactions with humans. This paper describes the concept and composition of big data. Most big data are unstructured and include text, audio and video files, images, emails, log files, etc. Statisticians are more interested in structured data presented in a pre-defined database model. Big data offer new sources and opportunities that cannot be discounted. However, the use of big data requires proper assessment in terms of quality dimensions such as accuracy, comparability and methodological soundness. Against the backdrop of arguments regarding big data, some users view big data as a replacement for official statistics. Such a conclusion is premature for at least two reasons: first, only a small part of big data can be used for decision-making; second, theory and practice prove that a small sample based on scientific methods can yield much more reliable and accurate estimates than the results obtained from processing large amounts of unstructured data. The paper assesses the possibility of using big data for monitoring the Sustainable Development Goals (SDG), which is a nationally owned process in which national statistical offices (NSOs) are accountable for the SDG data they report. If the data are derived from a big data source, irrespective of the level of technical sophistication used in data transformation, the reliability of such data might be questioned by the national institutions. The paper concludes that the reliability of data obtained from big data sources hinges on the quality of the tools and methods applied to data transformation. Statisticians can play an important role in alerting society, government decision-making bodies and businesses to the reliability of information derived from different sources.
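The paper's second argument, that a small scientific sample can beat a much larger but non-random data source, is easy to demonstrate by simulation. The population, sample size and the form of selection bias below are synthetic, chosen only to illustrate the effect.

```python
import random

# A small random sample vs. a large but biased "big data" source.
# The bias here is coverage bias: the big source never sees the bottom
# 20% of the population, a typical failure mode of found data.
random.seed(42)
population = [random.gauss(100, 15) for _ in range(100_000)]
true_mean = sum(population) / len(population)

small_sample = random.sample(population, 500)    # scientific random sampling
big_biased = sorted(population)[20_000:]         # 80,000 records, all biased up

est_sample = sum(small_sample) / len(small_sample)
est_biased = sum(big_biased) / len(big_biased)

# The 500-unit random sample lands within ~1-2 units of the true mean,
# while the 80,000-record biased source is off by roughly +5.
print(abs(est_sample - true_mean) < abs(est_biased - true_mean))  # True
```

No amount of additional biased records fixes the second estimator; this is why transformation and validation of big data sources matter more than their volume.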
The article deals with the problem of constructing a model and an algorithm for decision support in self-government bodies using machine learning. Multiple linear regression was chosen as the machine learning method for processing the training sample. In the training sample, the independent variables consist of numerical parametric estimates of self-government bodies in three areas of activity: education, social environment and crime. The dependent variable consists of generalized expert assessments of self-government bodies, also in numerical form. A model and an algorithm of the decision support process using multiple linear regression are constructed. Based on the constructed model and the proposed algorithm, the coefficients of the decision support function are identified. Using this model, a generalized expert assessment is determined for a new self-government body in numerical form, which is interpreted as a proposed solution for improving the condition of the object.
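The regression step can be sketched with a minimal least-squares fit via the normal equations. The three predictors stand in for the education, social-environment and crime scores, but all numbers below are synthetic, and this pure-stdlib solver is an illustration, not the article's implementation.

```python
# Multiple linear regression via the normal equations (X^T X) b = X^T y,
# solved by Gaussian elimination with partial pivoting. Each row of X
# starts with a 1 for the intercept term.
def fit_linear(X, y):
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]                                  # X^T X
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]  # X^T y
    for i in range(n):                     # forward elimination with pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):           # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# Training sample: [1, education, social, crime] -> generalized assessment.
X = [[1, 7, 6, 2], [1, 5, 5, 4], [1, 9, 8, 1], [1, 4, 3, 6], [1, 6, 7, 3]]
y = [2 + 0.5 * r[1] + 0.3 * r[2] - 0.4 * r[3] for r in X]   # exact linear target
coef = fit_linear(X, y)
print([round(c, 2) for c in coef])  # [2.0, 0.5, 0.3, -0.4]
```

Once the coefficients are identified, the assessment for a new self-government body is just the dot product of its scores with `coef`, which is the prediction step the article describes.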
Statistical agencies are the main providers of data on the state of the economy at the macroeconomic level, and most economic decisions on a national scale are based on statistical data. Data processing is a key business process for statistical agencies. At the same time, the quality of the statistical data supplied by Rosstat is not always high enough: data are revised, and discrepancies between data sets describing the same economic phenomenon are revealed. The purpose of this work is to describe methods of collecting and processing statistical information that will help improve the quality of the presented data. From an informational point of view, a statistical agency organizes information exchange between data providers and consumers and acts as a data aggregator. Organizing information exchange within a community requires a semantic space that ensures the meaningful content of the data, and the main role in this semantic space is played by object identifiers. The article considers unified identifiers of statistical accounting objects as a method of collecting and processing statistical information and improving its quality. International statistical practice uses methods of standardizing the turnover of statistical data: information standards are designed to unify identifiers and namespaces for the participants of statistical information turnover and to provide a single semantic space. With unified identifiers, the procedures for processing statistical data become transparent, allowing grouping by different dimensions as well as decomposition of aggregated data into components. The results of the work are recommendations on the use of core components of the information infrastructure for the collection and analysis of statistical data.
The existing information infrastructure of the Russian digital economy contains a number of data sources whose use will improve the quality of collection and processing of statistical data. For creating a semantic space of statistical data in the Russian Federation, the most important section is the registers of core components. The use of registers will make it possible to link statistical data from different domains and to connect aggregated data with microdata. Significant progress is observed in the labeling of goods, which makes it possible to track an object's movement through all stages of its life cycle, as well as its location. The government of the Russian Federation has initiated a project on the labeling of goods, and this information provides a clear picture of a significant part of the economy. An additional source of statistical data can be the corporate sector, where tracking systems that monitor goods, vehicles, containers and warehousing are actively used. Conclusion: there are several options for creating the semantic space for statistical data. World experience is guided by the use of Web architecture, which relies on technological identifiers. The semantics of statistical data can be ensured by using the potential of the existing information infrastructure, which will solve a number of problems of statistical accounting.
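The role of unified identifiers can be sketched concretely: with a shared register, data sets from different domains join on the same identifier, aggregates can be grouped by any register attribute, and each aggregate remains decomposable into its components. The identifiers and figures below are invented for illustration.

```python
# A core-component register maps unified identifiers to reference data.
# Two surveys from different domains then join on the shared identifier.
registry = {
    "ORG001": {"name": "Plant A", "region": "R1"},
    "ORG002": {"name": "Plant B", "region": "R1"},
    "ORG003": {"name": "Plant C", "region": "R2"},
}
output = {"ORG001": 120, "ORG002": 80, "ORG003": 50}   # production survey
staff = {"ORG001": 300, "ORG002": 150, "ORG003": 90}   # employment survey

# Join on the unified identifier, then aggregate by a register attribute:
by_region = {}
for org_id, meta in registry.items():
    r = by_region.setdefault(meta["region"], {"output": 0, "staff": 0, "ids": []})
    r["output"] += output[org_id]
    r["staff"] += staff[org_id]
    r["ids"].append(org_id)   # keeps the aggregate decomposable into components

print(by_region["R1"]["output"])  # 200
print(by_region["R1"]["ids"])     # ['ORG001', 'ORG002']
```

Without the shared identifier the two surveys could only be matched by fuzzy name comparison, which is exactly the source of the discrepancies between data sets the article describes.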
The authors describe processing and interpretation results of very-high-resolution seismic data acquired during the RV "Akademik Mstislav Keldysh" cruise to Bennett Island in the Russian Arctic. The purpose of the investigation was to study the underwater geological structures around the island and the presence or absence of their connection to the ancient rocks exposed on the island's surface. Interpretation of the seismic data gives some clues to these questions.
The use of citizens' and legal entities' personal information is crucial for the protection of human rights at the current stage of development of society and information technology. On the one hand, a high-quality system for processing and using this information can increase the level of citizen participation in government decision-making and assist in creating a human-centric approach in public administration, where the state has all the information necessary to serve citizens (provision of all administrative services online, use of analytics to improve infrastructure, etc.). On the other hand, misuse of and unauthorized access to personal data can lead to violations of citizens' rights, corruption and excessive bureaucracy in administrative procedures. At the same time, approaches to the protection of personal data should be not formal but practical: state policy should be based on solving real problems, which often requires a complete revision of systemic approaches, in this case of the infrastructure of state registers. Modern information technologies make it possible to implement large-scale changes in the business processes of collecting, storing and processing citizens' personal data; moreover, such approaches make it possible to reduce bureaucratic burdens and minimize corruption risks. Distributed registry technology is a popular solution for improving the performance of government processes and enhancing eGovernment in a country. However, its application requires a proper assessment of the problem and its origins, as well as of the outcomes that should be achieved.
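The integrity property that distributed-registry technology brings to state registers, that stored records cannot be silently altered, can be shown with a minimal hash-linked log. This is a single-node sketch with invented record contents; real systems add replication and consensus on top of the same chaining idea.

```python
import hashlib
import json

# An append-only log where each record commits to the previous record's
# hash, so any tampering with a stored record breaks every later link.
def add_record(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash from the stored payloads; detect any tampering."""
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"payload": rec["payload"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"citizen_id": "XXX", "action": "address_update"})
add_record(chain, {"citizen_id": "YYY", "action": "passport_issue"})
print(verify(chain))                          # True
chain[0]["payload"]["action"] = "forged"      # simulate unauthorized edit
print(verify(chain))                          # False
```

This is the mechanism behind the anti-corruption argument: an official who edits a register record after the fact leaves cryptographic evidence rather than a clean trail.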
The main changes in consumer behavior concerning information search caused by the development of information and digital technologies are analyzed. It is shown that the consumer privacy situation has deteriorated substantially due to the progress of artificial intelligence algorithms, and a set of possible measures aimed at improving personal data security is outlined. Problem setting. The increased role of information in the modern economy is recognized within information economics, which represents a fundamental change in the prevailing paradigm of economic science. The fast development of information and digital technologies in recent decades has opened unprecedented opportunities for consumers to find the goods and services they need online. At the same time, it has caused serious problems concerning consumer privacy and personal data security. Recent research and publications analysis. G. Stigler and J. McCall were the first to develop formal models of information search in economics. Current research in this sphere focuses on the characteristics of online consumer search and on the key problems of consumer privacy protection and data security arising in the digital economy; these problems are strongly exacerbated by the rapid progress of artificial intelligence. Paper objective. The purpose of the article is to trace the basic changes in consumer search behavior in the age of the Internet and to consider different facets of consumer privacy under current conditions. Paper's main body. The avalanche-like growth of modern information and digital technologies has radically reduced information search costs and called standard theoretical models into question. On the other hand, the extremely rich and detailed datasets collected by some online platforms have provided excellent opportunities for comprehensive empirical analysis.
According to recent studies, consumer search usually follows intuitive patterns, lasts considerably longer than earlier suggested, and has a funnel shape: it is initially conducted along broad categories and then becomes more focused on buying a good at the lowest price for a given cost of search. During the last decade many online platforms have accumulated huge volumes of data about their customers, including rather sensitive information on wants, preferences, interests, incomes, reserve prices, etc. Such data are more and more often treated as a valuable business asset that can be used to target services, offers and advertising, or be traded with other interested parties. If possession of information means power, then control over personal data can have a crucial effect on the redistribution of economic power between market participants. As large-scale data breaches have become an almost weekly routine, consumer privacy today is seriously endangered. The problem of personal data safety is seriously complicated by the progress of artificial intelligence algorithms, which open unprecedented opportunities to identify individual users. Nowadays artificial intelligence can be used for both individualized and personalized targeting of consumers. Even in the first of these cases consumers can suffer harm, as algorithm developers pursue their own interests, which may contradict the best interests of their customers. In the wrong hands, artificial intelligence can become a powerful instrument for manipulating consumer choice and for the mass production of fraud and deception. Conclusions of the research. Fundamental advances in information and digital technologies have essentially reduced information search costs, while decreasing even further the costs of collecting, storing, processing and using data. As a result, consumer privacy protection and personal data safety have become vital challenges for the modern economy.
The solution can be found by combining efforts in several directions: creating the right incentives for firms to choose consumer-friendly data practices, adopting adequate legislation, introducing industry self-regulation, and using artificial intelligence to control data access. Keywords: information economy, information economics, information search, search cost, information asymmetry, consumer privacy, data breaches, personal data security.
The construction of the Tsimlyansk reservoir in 1952 and subsequent climate change led to a significant reduction of spring water flow and of the area of the Lower Don River spawning grounds. About 300 thousand hectares of spawning lands were flooded in 1963, but no more than 130 thousand hectares in 1979-1994, and in later years floodwater often did not reach the floodplain at all. As a result, the fishing industry of the Lower Don degraded, and the spawning grounds were ploughed or built over. The spawning grounds of the Lower Don were the basis for the reproduction of anadromous and semi-anadromous fish of the Azov-Don basin, which has already lost its commercial value for sturgeon and herring. The article presents the results of mapping the modern structure of the floodplain lands of the Lower Don River within the areas flooded in different years. The areas of the 1979-2018 floods were determined by processing Landsat satellite images; the modern structure of their land use was mapped from Sentinel-2 data, and land cover types were also analyzed using global data sets (GLC30 and ESRI). Within the area flooded in 1979-1994 there are 0.7 thousand hectares of built-up territories, about 30 thousand hectares of arable land and 10 thousand hectares of fallow lands. Because of the threat of flooding of buildings and other infrastructure, floods in the Lower Don area are now perceived as a negative phenomenon. It is necessary to impose restrictions on the economic use of floodplain lands and to ban construction and the storage of oil products, fertilizers and pesticides within the 1994 flooding area. Satellite monitoring technologies are a promising tool for assessing the economic development of floodplain lands and identifying flooded lands.
In the future it will be possible to create an automated system for monitoring the floodplains of the Don and other rivers, based among other sources on satellite microwave radiometry data. The advantage of such data is their independence from cloud cover, which impedes satellite observations in the visible range.
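A common way to delineate flooded areas in multispectral imagery of the kind described above is a water index such as NDWI (McFeeters): NDWI = (green − NIR) / (green + NIR), with water pixels giving positive values. The 3x3 "image" below is synthetic; whether the article's authors used NDWI or another classification method is not stated, so this is only an illustrative sketch. Real processing would read the Landsat or Sentinel-2 bands with a geospatial library.

```python
# Flooded-area fraction from synthetic green and near-infrared bands,
# using the NDWI water index. All reflectance values are invented.
def ndwi(green, nir):
    return [[(g - n) / (g + n) if g + n else 0.0
             for g, n in zip(grow, nrow)]
            for grow, nrow in zip(green, nir)]

def flooded_fraction(green, nir, threshold=0.0):
    idx = ndwi(green, nir)
    cells = [v for row in idx for v in row]
    return sum(v > threshold for v in cells) / len(cells)

green = [[0.30, 0.28, 0.10], [0.25, 0.09, 0.08], [0.11, 0.10, 0.09]]
nir   = [[0.05, 0.06, 0.30], [0.04, 0.28, 0.31], [0.30, 0.29, 0.27]]
print(round(flooded_fraction(green, nir), 2))  # 0.33: 3 of 9 pixels read as water
```

Multiplying the water-pixel count by the pixel area gives the flooded area in hectares, which is how per-year flood extents like those in the article are tabulated.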
One of the methods of collective decision-making is the nominal group technique. It can be used to identify and rank problems at the initial stage of the management cycle and to make recommendations to the decision-maker. According to the author of the article, spreadsheet technologies can help reduce the time needed for data processing, presentation of results, and preparation of the reporting document.
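The tallying step of the nominal group technique, each participant ranks the identified problems, the ranks are summed, and the lowest total wins, is exactly the kind of computation a spreadsheet handles with SUM and RANK formulas. The problems and rankings below are invented for illustration.

```python
# Nominal group technique tallying: sum each problem's ranks across
# participants; the smallest rank sum is the highest-priority problem.
rankings = {                         # participant -> problems, best first
    "P1": ["staffing", "funding", "logistics"],
    "P2": ["funding", "staffing", "logistics"],
    "P3": ["staffing", "logistics", "funding"],
}

totals = {}
for order in rankings.values():
    for rank, problem in enumerate(order, start=1):
        totals[problem] = totals.get(problem, 0) + rank

ranked = sorted(totals, key=totals.get)   # lowest rank sum = top priority
print(ranked)  # ['staffing', 'funding', 'logistics']
```

In a spreadsheet the same table would have one column per participant and a rank-sum column, which is the time saving the author has in mind.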