The understudy company is a multinational company with three manufacturing plants located in the Biñan and Canlubang Industrial Parks in Laguna, Philippines. As a Japanese company in which continuous improvement (Kaizen) is treated as a never-ending process, it routinely uses Kaizen as a strategy for solving problems. One proof of this Kaizen implementation is its in-house training for Mechatronics engineers and technicians: the company maintains its own training facilities and programs where employees can practice through experiments and research. Employees take the training seriously because it forms part of their evaluation for competence and for qualification for promotion. The training is used to hone their skills in different areas, including Japanese language development and skills-based development, in this case Mechatronics Technology. After years of this practice, the training departments, together with the researcher, identified a problem in the system of implementation. The process began with developing a program suited to the company's needs, followed by the training itself, and then by an assessment. The process seemed complete, yet only in 2012, after years of implementation, did the company begin to ask: who evaluates our training program against industry standards? Are the trainers qualified to train engineers and technicians adequately and to the same level? Is our assessment acceptable by local and international standards? Conducting every step internally invites bias, because no external entity verifies whether the program is heading in the right direction. The understudy company therefore asked the researcher to help develop a training and assessment program in which an external entity would validate the program's effectiveness.
The researcher then offered several options, such as ISO, ANTA, and TESDA, as external standard-setting bodies with the mandate to issue certifications as proof of effectiveness and excellence in implementing training programs. With an open mind, management decided to give a locally mandated government institution, the Technical Education and Skills Development Authority (TESDA), a try. The understudy company and the researcher immediately formed a core group of twenty (20) engineers and technicians who pioneered the use of the ZarCen Training Model, a training strategy conceptualized and developed by the researcher to provide a permanent solution to the problem the understudy company had encountered. After forming the group, the goal was set: develop certified Mechatronics technicians and engineers who would undergo Mechatronics training at NC Levels II, III, and IV for the understudy company, and at the same time test the effectiveness of the ZarCen Training Model. To establish integrity and credibility in measuring that effectiveness, two evaluative procedures were used: the external TESDA National Assessment and the researcher's own evaluation instrument. Through this process, the trainees developed their skills competence and attained a 100% passing rate on TESDA Mechatronics Certification Levels II, III, and IV, the first group in the Philippines to achieve such performance with the same set of people, providing evidence of the effectiveness of the ZarCen Training Model. On the second evaluative measure, the ZarCen Training Model as a strategy received the following composite mean ratings from the panels of evaluators: 4.77 for Mechatronics NC II training, 4.78 for Mechatronics NC III training, and 4.84 for Mechatronics NC IV training. All three composite ratings carried a descriptive rating of "Excellent".
The ZarCen Training Model also received composite means, combining the panel of evaluators' responses, of 4.79 for Content, 4.72 for Organization, 4.80 for Mechanics, 4.82 for Comprehensibility, and 4.78 for Workability, all corresponding to an excellent rating. In addition, the ZarCen Training Model as a strategy earned a grand mean of 4.78, with an equivalent descriptive rating of excellent, showing that the model is consistently effective. Following the program's success, management decided to establish a Mechatronics Department in which all employees would undergo assessment: at least Level II for operators, Level III for technicians, and Level IV for engineers. Under this standing order, another two batches composed of operators and technicians passed Mechatronics Servicing Levels II and III, which again demonstrated the consistency and effectiveness of the ZarCen Training Model. Finally, after acquiring the relevant skills, the pioneering engineers and technicians are now localizing the development of automation machines, so they no longer need to wait for parts from the main plant in Japan, bringing significant benefits to the understudy company's operations, logistics, and revenue.
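The reported panel statistics follow from simple averaging. As a sanity check, the short sketch below reproduces the grand mean from the five per-criterion composites; the 5-point scale and the 4.50 cutoff for an "Excellent" descriptive rating are assumptions for illustration, not taken from the study itself.

```python
def composite_mean(ratings):
    """Average a list of panel ratings (assumed 1-5 Likert scale)."""
    return sum(ratings) / len(ratings)

def descriptive_rating(mean):
    """Map a mean score to a descriptive rating (hypothetical cutoffs)."""
    cutoffs = ((4.50, "Excellent"), (3.50, "Very Good"),
               (2.50, "Good"), (1.50, "Fair"))
    for threshold, label in cutoffs:
        if mean >= threshold:
            return label
    return "Poor"

# Per-criterion composite means as reported for the ZarCen Training Model
criteria = {"Content": 4.79, "Organization": 4.72, "Mechanics": 4.80,
            "Comprehensibility": 4.82, "Workability": 4.78}

grand_mean = composite_mean(list(criteria.values()))
print(round(grand_mean, 2))            # 4.78, matching the reported grand mean
print(descriptive_rating(grand_mean))  # Excellent
```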
The design and development of large and complex systems, such as Systems of Systems (SoS) with autonomous constituent systems, is gaining increasing importance in many application domains (e.g., medical, aerospace, military, and transportation). The wide scale and huge number of interactions involved in an SoS make it difficult to model and analyse the SoS architecture through state-of-the-art techniques for the modelling, architecture development, and analysis of monolithic systems. SoS are large-scale concurrent and distributed systems comprising autonomous constituent systems with operational and managerial independence. These characteristics result in various challenges during system design: emergent behaviour, an evolutionary development process, and an increased state space must all be considered. In particular, these challenges remain unsolved when moving from directed SoS, where the system has central control, towards virtual SoS with no central management and no common purpose ascribed to the constituent systems. In addition to meeting functional requirements, a major challenge is the design space exploration and optimisation of non-functional properties among heterogeneous constituent systems, such as reliability, timeliness, safety, and security. Many SoS are real-time systems in which timing requirements are central to the development process. Failure to meet these requirements in the late stages of SoS development can be avoided by introducing them already during architecture development. Likewise, many SoS represent critical infrastructures that provide safety-relevant services to users and the environment. Therefore, the specified reliability of the SoS must be guaranteed despite failures of individual constituent systems, using fault-tolerance mechanisms.
State-of-the-art modelling approaches describe the SoS using architecture frameworks such as the US Department of Defense Architecture Framework (DoDAF), the British Ministry of Defence Architecture Framework (MODAF), and the NATO Architecture Framework (NAF). However, these frameworks do not offer a precise connection between the different views of the SoS, which is needed to enable design exploration techniques, reusability, and design automation using integrated tool chains. In addition, significant non-functional system properties (e.g., real time, reliability) are not addressed in these state-of-the-art modelling frameworks. This thesis addresses the research gap by establishing a modelling framework that extends the current SoS frameworks to link the different views and to satisfy real-time and reliability requirements. We provide a methodology that uses design exploration techniques and tools for SoS architecture optimisation. The use of architecture patterns is introduced to enhance model reusability and facilitate model evolution. Scientific contributions beyond the state of the art include a Model-Based Systems Engineering (MBSE) methodology for SoS architecting based on the Unified Profile for DoDAF and MODAF (UPDM), with extensions to support significant SoS properties such as timing requirements and reliability. The methodology supports an architecture optimisation process based on Mixed Integer Linear Programming (MILP) connected to architecture patterns that enhance model reusability. The developed modelling language and design methods are evaluated using simulations and example scenarios. ; Systems of Systems (SoS) are large, complex systems that are gaining increasing importance in numerous application domains (e.g., medicine, aerospace, military, and transportation systems).
The large scale and the high number of interactions in an SoS complicate its modelling and analysis with state-of-the-art techniques from monolithic systems. SoS are large-scale, concurrent, distributed systems whose autonomous constituent systems are operationally and administratively independent. This peculiarity leads to manifold challenges in the design of these systems. Emergent behaviour, an evolutionary development process, and an enlarged state space must be taken into account. These challenges are unsolved in particular in virtual SoS, in which no central management and no common goal are provided for the constituent systems. In addition to fulfilling functional requirements, a major challenge lies in optimising the design space with respect to the non-functional properties of the heterogeneous constituent systems (e.g., reliability, real-time capability, and safety). Many SoS are real-time systems in which timing requirements are a central part of the development process. The costly failure to meet these requirements in late phases of development can be avoided by incorporating the real-time requirements early in the architecture development. SoS are often also critical infrastructures for safety-relevant services. The specified reliability of the SoS must therefore be guaranteed through fault-tolerance mechanisms, independently of the reliability of the individual constituent systems. In state-of-the-art modelling approaches, SoS are described using architecture frameworks such as the US Department of Defense Architecture Framework (DoDAF), the British Ministry of Defence Architecture Framework (MODAF), and the NATO Architecture Framework (NAF).
However, these frameworks offer no precise connections between the different views of an SoS, which would be necessary to enable design exploration techniques, reusability, and design automation through integrated tool chains. Furthermore, essential non-functional system properties such as real-time capability and reliability are not addressed. This doctoral thesis addresses that research gap by developing a modelling framework that extends the current SoS frameworks. The framework establishes a connection between the different views and supports real-time and reliability requirements. In addition, design exploration techniques and tools for the optimisation of SoS architectures are introduced. Architecture patterns facilitate the reusability of the models and improve their evolution. The scientific contributions beyond the state of the art include model-based systems engineering methods for the construction of SoS architectures based on the Unified Profile for DoDAF and MODAF (UPDM), with extensions to support real-time capability and reliability. The methodology supports an architecture optimisation process based on Mixed Integer Linear Programming (MILP) in combination with architecture patterns. The developed modelling language and design methods were evaluated using simulations and example scenarios.
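As a loose illustration of the kind of design-space exploration the MILP-based optimisation performs, the sketch below enumerates candidate implementations per constituent system and selects the cheapest architecture that meets system-level reliability and end-to-end latency requirements. All components, numbers, and the serial composition model are invented for illustration; the thesis formulates this as a MILP rather than the brute-force enumeration used here.

```python
from itertools import product

# Hypothetical candidate implementations for three constituent systems.
candidates = {
    "sensor":    [{"cost": 2, "rel": 0.95,   "lat_ms": 5},
                  {"cost": 5, "rel": 0.999,  "lat_ms": 8}],
    "processor": [{"cost": 4, "rel": 0.99,   "lat_ms": 10},
                  {"cost": 9, "rel": 0.9999, "lat_ms": 6}],
    "actuator":  [{"cost": 3, "rel": 0.98,   "lat_ms": 4},
                  {"cost": 7, "rel": 0.999,  "lat_ms": 4}],
}

REL_MIN, LAT_MAX = 0.99, 20  # assumed system-level requirements

best = None
for combo in product(*candidates.values()):
    rel = 1.0
    for c in combo:
        rel *= c["rel"]                    # serial reliability model
    cost = sum(c["cost"] for c in combo)
    lat = sum(c["lat_ms"] for c in combo)  # serial end-to-end latency
    if rel >= REL_MIN and lat <= LAT_MAX and (best is None or cost < best[0]):
        best = (cost, rel, lat, combo)

print(best[0], round(best[1], 4), best[2])  # 21 0.9979 18
```

A real MILP solver would handle the same selection with binary decision variables and linearised constraints, scaling far beyond exhaustive enumeration.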
The issues addressed in this article arose from the objective need for research aimed at improving the efficiency of the regional-level tax system. The state cannot spend large sums on tax collection, so both the tax structure itself and the state tax administration system (SNA) that ensures collection must deliver high performance at minimal cost, which is impossible without a good information-analytical system supporting decision-making. Building such a system presupposes integrating all structural units of the tax and other state bodies into a single information space. In recent years, the necessary single information space was created in the form of the electronic data processing system developed by the Federal Tax Service of the Russian Federation. This system, which is the foundation of automation in the work of the tax authorities, is described in practically all textbooks of Russian economics universities. However, the system has one bottleneck: its analytical block is weakly formalized. Its core should be a mathematical model for analysing taxpayers' financial and economic condition, detecting violations of tax legislation in declarations, and synthesising an optimal plan of field tax audits. Such models should provide a sufficiently reliable and objective basis for supporting managerial decisions. A powerful mathematical toolkit now exists: neural networks, a universal approximator and clusterer. Using neural networks requires training them on examples, which, from a mathematical point of view, is an inverse problem of the interpretation type that is ill-posed in the sense of Hadamard. This raises the dilemma of finding a compromise between the model's generalization error and its robustness (stability under variation of the data within a given range).
No research has been carried out on regularising neural networks in inverse problems as applied to neural-network models of the tax administration system, which are characterised by strongly noisy data, aggravated in some cases by a shortage of observations. The works of V.K. Ivanov [8] give a rigorous mathematical justification of two methods for solving ill-posed problems, provided that additional a priori information about the desired solution is available. For the case where the solution is known to be an element of a given compact set, he developed the method of quasi-solutions; in this case an error estimate for the approximate solution is also possible. The article develops the idea of a new computer technology for preliminary (desk) tax audits of taxpaying enterprises, proposed on the basis of neural-network modelling. The use of these models creates a basis for increasing the reliability and objectivity of tax control in the tax authorities and for improving the effectiveness of field tax audits. ; The issues covered in this article arose from an objective need for research aimed at improving the efficiency of the tax system at the regional level. The state cannot spend large sums on the collection of taxes, so the structure of taxes and the state tax administration system that ensures their collection must provide high performance at minimum cost, which is impossible without a good information-analytical decision-support system. The construction of such a system involves integrating all structural units of the tax and other state bodies into a single information space. The required common information space has been created in recent years in the form of the electronic data processing system developed by the Federal Tax Service of the Russian Federation. This system, which is the basis for the automation of the tax authorities, is described in almost all textbooks of Russian economics institutions.
However, this system has one bottleneck: its analytical unit is poorly formalized. Its core should be a mathematical model for analysing the financial condition of taxpayers, detecting violations of tax legislation in declarations, and synthesising an optimal plan of field tax audits. Such models should serve as a sufficiently reliable and objective basis for management decision support. Currently, a powerful mathematical tool exists: neural networks, a universal approximator and clusterer. Using neural networks requires training them on examples, which from a mathematical point of view is an inverse problem of the interpretation type, ill-posed in the sense of Hadamard. This raises the dilemma of finding a compromise between the model's generalization error and its robustness (resistance to variations of the data within a given range). As applied to neural-network models of the tax administration system, which feature very noisy data, aggravated in some cases by a shortage of observations, research on the regularization of neural networks in inverse problems has not been carried out. The works of V.K. Ivanov [8] give a rigorous mathematical justification of two methods for solving ill-posed problems, provided that additional a priori information about the desired solution is available. If the solution is known to be an element of a given compact set, his method of quasi-solutions applies; in this case an error estimate for the approximate solution is also possible. The article develops the idea of a new computer technology for preliminary (desk) tax audits of taxpayer companies, proposed on the basis of neural network modelling. The use of these models provides a basis for improving the reliability and objectivity of tax control in the tax authorities and for increasing the effectiveness of field tax audits.
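The generalization-versus-robustness dilemma can be illustrated on the simplest possible model. The sketch below uses a one-parameter Tikhonov-style (ridge) penalty rather than the article's neural-network models: increasing the regularization weight shrinks the fitted parameter, trading some fit accuracy for stability under data variation. The data and penalty weights are invented for illustration.

```python
def ridge_slope(xs, ys, lam):
    """Closed-form minimiser of sum((y - w*x)^2) + lam * w^2."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Noisy observations scattered around the true relation y = 2x
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.3, 7.8, 10.4]

# A larger penalty weight lam yields a smaller (more conservative) slope:
for lam in (0.0, 1.0, 10.0):
    print(lam, round(ridge_slope(xs, ys, lam), 3))
```

In the neural-network setting the same idea appears as weight decay or as Ivanov-style constrained (quasi-solution) formulations, where the penalty or constraint controls the compromise between fitting noisy observations and keeping the model stable.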
An objective Build-Operate-Transfer (BOT) contract evaluation at the conceptual stage, in countries facing budget constraints, will lead to undertaking projects that are anticipated to be viable in the future. An objective analysis of the various risk variables and their impact on a BOT project's future outcome requires studying and integrating many likely scenarios into the contract terms, which is complicated and time-consuming. If the process of examining the financial parameters and uncertainties of a BOT project could be automated, this would be a milestone in objective decision-making from various stakeholders' points of view. A soft computing model would let the user analyze many probable scenarios more accurately. In this study, two soft computing methods, artificial neural network (ANN) and gene expression programming (GEP), are applied to two distinct BOT case studies to illustrate the automation of their assessment processes. First, a case study of a BOT model for dormitory projects in Cyprus is analyzed. An ANN model with a correlation coefficient of 0.9064 is developed to model the relationship between important project parameters and risk variables. The significant factors used in ANN model development were extracted from sensitivity analysis and Monte Carlo simulation results obtained from conventional spreadsheet data. The resulting consensus based on this model would yield fair contractual agreements for both the government and the concession company. Second, the financial viability of undertaking a BOT contract for sewer and water projects in California, USA, is analyzed, and risk parameters are identified with the aid of sensitivity analysis. The sensitivity analysis results demonstrated that the project construction cost factor determines the financial viability of undertaking a BOT contract.
Therefore, reliable construction cost prediction based on limited information at the early stages of the project planning phase is crucial for the development of an objective BOT agreement. This study utilized gene expression programming (GEP), a derivative of the genetic algorithm (GA) and genetic programming (GP), and developed a prediction model with a correlation coefficient of 0.8467 for estimating the construction cost of water and sewer rehabilitation/replacement projects. This thesis contributes to knowledge by exploiting the ANN model's capability to incorporate many scenarios to develop an automated tool that defines concession terms considering potential risks, and by utilizing the GEP model's ability to create an explicit equation to develop a formula for project construction cost prediction that helps improve the objective financial appraisal of a BOT project. Author keywords: Public-Private-Partnership; Build-Operate-Transfer; Monte Carlo simulation; Contracts; Cost Estimation; Artificial Neural Network; Gene Expression Programming; Dormitory Projects; Water and Sewer Replacement/Rehabilitation Projects. ; In countries facing budget constraints, the objective evaluation of Build-Operate-Transfer (BOT) contracts at the conceptual stage will lead to the implementation of projects with positive value in the future. Objectively analysing the various risk variables and their effects on a BOT project's future outcomes is complicated and time-consuming, because it requires the integration of many probable scenarios into the contract terms. If the process of examining a BOT project's financial parameters and various uncertainties can be automated, this approach can be a milestone for objective decision-making from the perspectives of many stakeholders. Because soft computing models allow the user to analyse more scenarios, they enable objective decision-making.
In this study, two soft computing methods, Artificial Neural Networks (ANN) and Gene Expression Programming (GEP), were applied to determine the influential parameters of projects. First, a case study of the BOT model on dormitory projects in Cyprus was analysed. A model with a correlation coefficient of 0.9064 was developed to model the relationship between important project parameters and risk variables. The significant factors used in the model were derived from the results of sensitivity analysis and Monte Carlo simulation performed on conventional spreadsheet data. The consensus arising from this model would create a fair contracting environment for both the government and the concession company. The study also analysed the financial capacity of a BOT contract for sewer and water projects in California, USA. In addition, risk parameters were identified with the aid of sensitivity analysis. The sensitivity analysis results showed that the project's construction cost is the factor determining financial viability. Therefore, reliable construction cost estimation based on the limited information available in the project planning phase is very important for the development of an objective BOT contract. The Gene Expression Programming (GEP) model used in this study produced a prediction model with a correlation coefficient of 0.8467 for estimating the construction cost of water and sewer rehabilitation/replacement projects. The contribution of this thesis to knowledge is that, by using the ANN model's ability to incorporate many scenarios and taking potential risks into account, it developed an automated tool for defining contract terms; and by using the GEP model's ability to create a simple equation, it developed a formula for construction cost estimation to aid the financial appraisal of a BOT project. Keywords: Public-Private Partnership; Build-Operate-Transfer (BOT); Monte Carlo simulation; Contracts; Cost Estimation; Artificial Neural Networks (ANN); Gene Expression Programming (GEP); Dormitory Projects; Water and Sewer Replacement/Rehabilitation Projects.
; Doctor of Philosophy in Civil Engineering. Thesis (Ph.D.)--Eastern Mediterranean University, Faculty of Engineering, Dept. of Civil Engineering, 2015. Supervisor: Prof. Dr. Tahir Çelik.
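The spreadsheet-based Monte Carlo analysis the thesis builds on can be sketched in a few lines. The following is a hypothetical illustration, not the thesis's actual model: all distributions, cost and revenue figures, and the 8% discount rate are invented. It samples uncertain construction cost and annual revenue, computes the concession-period NPV, and estimates the probability of a viable (positive-NPV) outcome.

```python
import random

def npv(rate, cashflows):
    """Net present value of a cashflow series; cashflows[0] is at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate(n=10_000, rate=0.08, years=20, seed=42):
    """Fraction of sampled scenarios with positive NPV (illustrative figures)."""
    rng = random.Random(seed)  # seeded for reproducibility
    viable = 0
    for _ in range(n):
        cost = rng.gauss(100, 15)   # construction cost, $M (hypothetical)
        revenue = rng.gauss(12, 2)  # annual net revenue, $M (hypothetical)
        cashflows = [-cost] + [revenue] * years
        if npv(rate, cashflows) > 0:
            viable += 1
    return viable / n

print(simulate())
```

A sensitivity analysis in this style would vary one input distribution at a time (e.g. widening the construction-cost spread) and observe how strongly the viability probability responds, which is how dominant risk factors such as construction cost are identified.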
Concrete, steel, artificial light, complete technological automation, near-complete sensory deprivation, and total isolation - these are the basic conditions of supermaximum security prisons in the United States. "Supermax" prisoners remain alone twenty-three to twenty-four hours a day, under fluorescent lights that are never turned off. Meals arrive through a small slot in an automated cell door. Prisoners have little to no human contact for months, years, or even decades at a time, save brief interactions with correctional officers, who place hand, ankle, and waist cuffs on each prisoner before removing him from his cell. Prisoners only leave their cells four or five times per week, for showers or for brief, solitary exercise periods in "dog runs" - concrete pens with roofs at least partially open to natural light. In sum, supermax prisons across the United States detain thousands in long-term solitary confinement, under conditions of extreme sensory deprivation. They are prisons within prisons, confining the prisoners correctional administrators deem a threat to the general prison population. Arizona opened the first supermax, in 1986, and California opened the second, in 1989. Over the subsequent ten years, almost every state, along with the federal prison system, either built a supermax or retrofitted an existing facility to create a supermax unit. This dissertation examines the birth of the supermax in Arizona and California, and the spread of the institutional innovation across the United States. The research presented here draws on three major categories of data: archival materials, including legal decisions and case files, local news reporting, and legislative records; oral history interviews with more than thirty key informants, including correctional administrators, legal professionals, architects, and former prisoners; and quantitative data maintained by the California Department of Corrections and Rehabilitation.
This is the first study specifically focused on unearthing the administrative and political processes that underwrote the supermax. The dissertation begins with an overview of the idea of solitary confinement and the term "supermax," leading to a working definition of supermaxes as institutions that are: bounded in time, administratively overseen, characterized by extremely restrictive conditions, and large in both scale of beds and durations of confinement. Chapter Two uses this definition to create a working list of supermax institutions in the United States. Chapter Three frames supermaxes as a product of mass incarceration, and introduces the core region (the Sunbelt) and core case study (California) on which the dissertation will focus. Chapters Four through Six document how and why correctional administrators decided to build California's first supermax, Pelican Bay State Prison. These chapters trace how socio-political changes in the 1970s - including structural changes to sentencing laws; race-based social organizing, both in and out of prison; and increasing judicial attention to prisons - paved the way for the supermax innovation in the 1980s. Chapter Four argues that critical power shifts between the executive, legislative, and judicial branches of government left a power vacuum, which correctional administrators quickly filled, in part through building supermaxes. Chapter Five describes the legislative process that nominally underwrote the supermax, and argues that California legislators ceded both planning and moral authority to state correctional administrators. Chapter Six describes the processes of architectural design and technological innovation that created the physical supermaxes themselves. Chapters Seven through Nine explore the implications of the administrative origins of the supermax.
These chapters document how the institution has never had a clearly articulated purpose (Chapter Seven), and how it has been operated in contradiction to its articulated goals (Chapters Eight and Nine). Chapter Eight shows that supermaxes have detained prisoners for decades longer than intended, and have disproportionately impacted minority prisoners. Chapter Nine argues that courts have largely deferred to correctional administrators' claims about the necessity of the supermax, even though little empirical evidence exists to support these claims. Together, these chapters demonstrate that correctional bureaucrats not only made the initial supermax design decisions in California, but also progressively concentrated their power over supermax prisoners through hyper-automated buildings and internal prison classification procedures. Chapter Ten further explores the relationship between supermaxes and the federal judiciary, arguing that federal court decisions setting minimum conditions and standards for prisons in the 1970s and 1980s shaped the supermax innovation that followed. The conformity of supermaxes to these minimum constitutional standards has rendered the institutions resistant to legal challenges. The Conclusion discusses the implications of recent supermax-related trends, including prisoner protests about conditions of confinement, and state-based movements to reduce supermax populations. In sum, because supermaxes manifest, both physically and legally, the outer limits of non-death penalty punishment policy, they are critical for understanding both U.S. prison expansion, and the impacts of U.S. tough-on-crime policies.
The work carried out within the PhD was divided among the development of three different automation systems for the livestock industry (cattle, sheep, and goats). Study and development of an integrated automatic traceability system for the bovine meat chain. In Italy, the bovine meat production chain is extremely complex in both its structural and organizational profile, owing to the high number of operators involved, the large fragmentation of the agricultural and industrial phases, the remarkable import flows of animals and meat, and the complexity of the commercial channels. Traceability can easily be achieved in each separate step of the food chain (breeding, slaughtering, packaging, and selling); problems arise in integrating each productive-process subsystem. The aim of the project was to study and develop an integrated automatic traceability system for the bovine meat chain based on RFID technologies. RFID tags were used to identify animals, the hooks used in slaughtering for carcass transport and storage, and anatomical cuts. RFID readers were installed at farm livestock gates, the slaughterhouse entrance, each working station in the dissection area, and the stock rooms. The whole system was controlled by software divided into three main sections: livestock, slaughtering, and stock room/shop. The system entailed the following phases: - Reading the animal ear tag at farm level to identify and record each productive step in the "Titvlvs" software; - Reading the animal ear tag through a fixed antenna in the slaughterhouse standstill corridor to send information to the management software; - Combining the animal ear tag and hook microchips in the evisceration area; - Attributing the carcass number and writing the label in the weighing area; - Recording carcass entry into the dissection area using a photocell-spar system that identifies hooks; - Adding a rewritable microchip to each vacuum-packed anatomical cut.
- Recording every selling action in the store/shop area, using weighing systems connected to web software able to perform a return control on the sold meat. The system was used in a cooperative production organization in Northern Italy. Tests showed the functionality of the different components and the full integration of the subsystems. Study and development of an integrated system for slurry management monitoring and electronic report drawing-up in cattle breeding. In recent years a new concept, known as "precision farming", has been introduced to improve field and farm management from agronomic, technical, environmental, and economic perspectives through the use of new technologies such as global positioning systems (GPS), sensors, satellite images, and geographical information systems (GIS). Environmental and food-safety regulations are becoming more and more strict, requiring farmers to increase their organizational and technical skills for well-integrated agro-territorial management. The aim of the research activity was to implement an integrated system for monitoring animal slurry management and drawing up electronic reports in cattle breeding, in order to optimize fertilization management and reduce agricultural impacts.
The developed system was based on the integration of the following mechanical, electronic, and computational components: - A slurry production control subsystem, based on ultrasound sensors applied to slurry pits and pressure sensors applied to a slurry spreader; - A slurry spreading subsystem, consisting of a specifically developed electronic card and a GPS/GSM device applied to tractors, able to automatically recognize the linkage with a slurry spreader and to report its position in the field; - A recording and spreading-data visualization subsystem, based on a GIS able to elaborate slurry distribution maps with the relative distributed quota and spreading periods; - A palm PC with specific software (FarmWorks®) for recording field operations and transferring data to the farm PC; - Management software (FarmWorks®) for filling in field data and drawing up electronic reports. The developed system, tested on a Northern Italy farm during 2008, showed full functionality, enabling the monitoring of animal slurry management (storage, collecting, spreading) and the optimization of agricultural operations and fertilization management from an economic and environmental point of view. Study and development of a prototype collar equipped with GPS technology to monitor livestock herds and the exploitation of grazing areas and to prevent cattle stealing. Under Italian legislation, cattle stealing is an aggravated form of theft, consisting of the removal of three or more head of cattle or sheep in order to profit from it (Article 625 n. 8, Penal Code). Since 2000, however, livestock theft has made a huge comeback, affecting the entire country; in 2008 one hundred thousand head of cattle were stolen and destined for illegal slaughterhouses. It is, therefore, an offense that goes beyond the direct interests of agriculture, affecting the whole community and, more specifically, product quality and public health.
The approaches taken so far to counter the cattle theft phenomenon have been limited to territorial surveillance, which is not always feasible, especially in grazing areas often characterized by difficult access and remoteness from the farm.
The Israeli military has employed an artificial intelligence-driven "kill list" to select over 30,000 targets in Gaza with minimal human input, fueling civilian casualties in the war-torn strip, according to an explosive new investigation from +972 Magazine. Especially in the early days of the Gaza war, Israel Defense Forces (IDF) personnel ignored the AI's 10% false positive rate and intentionally targeted alleged militants in their homes with unguided "dumb bombs" despite an increased likelihood of civilian harm, according to IDF sources who spoke with +972 Magazine. The investigation sheds light on the myriad ways in which cutting-edge AI tech, combined with lax rules of engagement from IDF commanders on the ground, has fueled staggering rates of civilian harm in Gaza. At least 33,000 Palestinians have died due to Israel's campaign, which followed a Hamas attack that killed 1,200 Israelis last October. The AI targeting software, known as "Lavender," reportedly relies on sprawling surveillance networks and assigns a 1-100 score to every Gazan that estimates the likelihood that they are a Hamas militant. Soldiers then input this information into software known as "Where's Daddy," which uses AI to warn when an alleged militant has returned to their home. Previous reporting from +972 Magazine revealed the existence of a similar AI system for targeting houses used by militants, called "The Gospel." In both cases, the IDF said +972 Magazine exaggerated the role and impact of these high-tech tools. "The doomsday scenario of killer algorithms is already unfolding in Gaza," argues Brianna Rosen, a senior fellow at Just Security and the University of Oxford who previously worked at the National Security Council during the Obama administration. RS spoke with Rosen to get her take on the latest revelations about Israel's use of AI in Gaza, how AI is changing war, and what U.S. policymakers should do to regulate military tech.
The following conversation has been edited for length and clarity.

RS: What does this new reporting from +972 Magazine tell us about how Israel has used AI in Gaza?

Rosen: The first thing that I want to stress is that it's not just +972 Magazine. The IDF itself has actually commented on these systems as well. A lot of people claimed that the report overstates some of the claims about AI systems, but Israel itself has made a number of comments that support some of these facts. The report substantiates a trend that we've seen since December with Israel's use of AI in Gaza, which is that AI is increasing the pace of targeting in war and expanding the scope of war. As the IDF itself has acknowledged, it's using AI to accelerate targeting, and the facts are bearing this out. In the first two months of the conflict, Israel attacked roughly 25,000 targets — more than four times as many as previous wars in Gaza. And they're actioning more targets than they ever have in the past. At the same time that the pace of targeting is accelerating, AI is also expanding the scope of war, or the pool of potential targets that are actioned for elimination. They're targeting more junior operatives than they ever have before. In previous campaigns, Israel would run out of known combatants or legitimate military objectives. But this latest reporting [shows] that's not seemingly a barrier to killing anymore. AI is acting, in Israel's own words, as a force multiplier, meaning that it's removing the resource constraints that in the past would prevent the IDF from identifying enough targets.
Now they're able to go after significantly lower-level targets with tenuous or no connections at all to Hamas even though, normally, they wouldn't pursue those targets because of the minimal impact of their death on military objectives. In short, AI is increasing the tempo of operations and expanding the pool of targets, which makes target verification and other precautionary obligations required under international law much harder to fulfill. All of this increases the risk that civilians will be misidentified and mistakenly targeted, contributing to the enormous civilian harm that we've seen thus far.

RS: How does this relate to the idea of having a human "in the loop" for AI-driven decisions?

Rosen: This is what is so concerning. The debate on military AI has for so long been focused on the wrong question. It's been focused on banning lethal autonomous weapons systems, or "killer robots," without recognizing that AI has already become a pervasive feature of war. Israel and other states, including the United States, are already integrating AI into military operations. They're saying that they're doing it in a responsible way with humans fully "in the loop." But the fear that I have, and which I think we're seeing play out here in Gaza, is that even with a human fully in the loop, there's significant civilian harm because the human reviews of machine decisions are essentially perfunctory. With this report that was released today, there's a claim that there is human verification of the outputs that the AI systems are generating but that the human verification was done in only 20 seconds, just long enough to see whether the target was male or female before authorizing the bombings. Regardless of whether that particular claim is actually borne out, there have been numerous academic studies about the risk of automation bias with AI, which I think is clearly at play here.
Because the machine is so smart and has all of these data streams and intelligence streams being fed into it, there's a risk that humans don't sufficiently question its output. This risk of automation bias means that even if humans are approving the targets, they could be simply rubber stamping the decision to use force rather than thoroughly looking at the data that the machine has produced and going back and vetting the targets very carefully. That's just not being done, and it might not even be possible given the problems with explainability and traceability for humans to really understand how AI systems are generating these outputs. This is one of the questions that I asked, by the way, in my article in Just Security in December. Policymakers and the public need to press Israel on this question: What does the human review process really look like for these operations? Is this just rubber stamping the decision to use force, or is there serious review?

RS: In this case, it seems like the impact of AI was amplified by the IDF's use of loose rules of engagement. Can you tell me a little bit more about the relationship between emerging tech and practical policy decisions about how to use it?

Rosen: That's the other problem here. First of all, you have the problem of Israel's interpretation of international law, which is, in some ways, much more permissive than how other states interpret basic principles like proportionality. On top of that, there are inevitably going to be errors made with AI systems, which contributes to civilian harm. This latest report claims that the Lavender system, for example, was wrong 10% of the time. That margin of error could, in fact, be much greater depending on how Israel is classifying individuals as Hamas militants. The AI systems are trained on data, and Israel has identified certain characteristics of people who they claim are Hamas or Palestinian Islamic Jihad operatives, and then they feed that data into the machine.
But what if the features that they are identifying are overly broad — such as carrying a weapon, being in a WhatsApp group with someone linked to Hamas, or even just moving house a lot, which everyone, of course, is doing now because it's a whole country of refugees? If these characteristics are fed into AI systems to identify militants, then that's a big concern because the system is going to take that data and misidentify civilians a great part of the time. Israel can say that it's following international law and that there's human review of all of these decisions, and all of that can be true. But again, it's Israel's interpretation of international law. And it's how they're defining who counts as a combatant in this war and how that data is fed into the AI systems. All of that compounds in a way that can create really serious harm. I also want to point out that all the well-documented problems with AI in the domestic context — from underlying biases in the algorithms to the problem of hallucination — are certainly going to persist in war, and they are going to be compounded because of the pace of decision making. None of this is going to be reviewed in a very careful way. For example, we know that Israel has a massive surveillance system in the Gaza Strip and that all of this data is being fed into the AI systems to contribute to these targeting outputs. Any underlying biases in those systems will feed into and compound errors in the final targeting output. If human review is perfunctory, then the result will be significant civilian harm, which is what we have seen.

RS: The U.S. is interested in AI for lots of military applications, including automated swarms of lethal drones. What does Israel's experience tell us about how American policymakers should approach this tech?

Rosen: It tells us that U.S. policymakers have to be extremely circumspect about the use of AI in both intelligence and military operations.
The White House and the Department of Defense and other agencies have put forth a number of statements about responsible AI, particularly in a military context. But these have all been very much at the level of principles. Everything depends on how these broad principles for the responsible use of military AI are operationalized in practice, and, of course, we haven't really had a case yet where we've seen the U.S. in a public way relying on these tools in their conflicts. But that's definitely coming, and the U.S. should use this time now to not only learn all the lessons of what's happening in Gaza, but to be very proactive in operationalizing those broad principles for responsible use of military AI, socializing them among other states, and really leading the world in signing on to these principles for military AI. They have to a certain extent, but the progress has been very, very slow. That's what's desperately needed right now.
Modern requirements for the design process of any technical object are very strict. Focus and independence, reduction of the time needed for the process itself, maximum automation of the process, complexity - all these are integral characteristics of modern design. Therefore, approaches to the optimal design of technical systems are quite widespread. For technical objects, it is advisable to use the term optimal-rational design. In this paper, mechanical step transmissions are considered. It should be noted that mechanical step transmissions are the most common type and cover all classes of vehicles, from light vehicles, through trucks, to military vehicles and special-purpose equipment. This prevalence has led to a wide range of layout schemes and different aggregate saturation. These aspects have led engineers to formulate a number of specific requirements for mechanical step transmissions, which resulted in certain quality characteristics - criteria - and in the use of various concepts, techniques and design approaches. Mechanical step transmissions of wheeled vehicles are of particular interest to the authors, as a large segment of transport equipment. Thus, a description of the methods, approaches and directions of the optimal-rational design of mechanical step transmissions of wheeled vehicles is an urgent task that will reveal the "blank spots" of this issue, determine the advantages and disadvantages of existing areas, help to choose a vector for further scientific thought, and determine the approaches and methods for the optimal-rational design of mechanical step transmissions of wheeled vehicles. Several general directions of wheeled-vehicle mechanical step transmission design are highlighted: the choice of transmission parameters, taking into account the dynamics of torsional vibrations, rational distribution of torque between the drive wheels, power distribution in the transmission, selection of rational shift phases, etc.
The main objects of research and common methods for solving design problems are described. A graphical representation of the presented material is provided, which allows the reader to quickly navigate the issue and, if necessary, to modify existing tasks or synthesize new ones. Keywords: transmission, optimal design, research directions
The object of research is the military radio communication system. One of the most problematic areas in military radio resource management is the interference immunity of military radio communication systems and facilities. A number of works have been devoted to studying ways to increase the noise immunity of military radio communication systems and facilities. However, the known works contain only individual research results aimed at increasing the noise immunity of military radio communication systems and devices, and they lack a comprehensive approach. This work solves the problem of substantiating ways to increase the noise immunity of military radio communication systems and facilities. The scientific problem is solved by substantiating the methodological principles of increasing the noise immunity of military radio communication systems under conditions of a priori uncertainty. During the research, the authors used the main provisions of queuing theory, automation theory, and the theory of complex technical systems, as well as general scientific methods of cognition, namely analysis and synthesis. The novelty is that in the course of the work: – the purpose of the functioning of the noise protection system of military radio communication systems is formulated; – indicators and criteria for increasing the noise immunity of military radio communication systems are determined; – the solution of this problem is decomposed into tasks depending on the signal and noise situation. For the functional description of military radio communication systems, an approach is used based on the hierarchical decomposition of the networks' functional structure, whose behavior is described by high-dimensional stochastic differential (or difference) state equations, into a number of interconnected but simpler functional structures.
The proposed approach to describing the behavior of military radio systems will make it possible to decompose the military radio system state and increase the efficiency of decision-making for adjusting operating modes and parameters in real time. The results of the research should be used at the stage of operational management of the parameters and operating modes of these systems.
Governments around the world are facing increasing constraints on their resources, yet they must provide better public services. At the same time, there are growing concerns about the mismanagement of funds, lack of transparency, and prevalence of corruption. As part of efforts to tackle these challenges, the World Bank is supporting countries in modernizing their public financial management (PFM) and implementing financial management information systems (FMIS).
A recently released World Bank Operational Guidance Note provides policymakers with operational guidance on how to ensure that FMIS projects better achieve the desired improvements in PFM outcomes while contributing to good governance. It draws on an extensive body of diagnostic and analytical work and, more importantly, on the lessons learned from FMIS implementation in more than 80 countries over the last 30 years.
Given its extensive coverage of the three phases of FMIS projects, i.e. the diagnostic, systems development life cycle, and coverage and utilization phases, the Note can be used by policymakers and practitioners to develop their strategies for any stage of FMIS implementation. It includes detailed guidance on how to avoid mistakes in procurement and contract management. It also discusses the potential use of disruptive technologies to maximize returns on existing investments.
Here are some key messages:
An adequate diagnosis of all aspects of budget management – not just accounting and reporting – is fundamental. This review should be undertaken to identify the needs that the system is intended to address before procuring and implementing a new FMIS.
The policy and institutional framework under which FMIS will operate is very important. The effectiveness of an FMIS as a budget management tool depends on its technical robustness as well as the policy and institutional environment, including a comprehensive single treasury account and the accompanying banking arrangements for government funds. It also depends on an appropriate budget classification structure and financial regulations that ensure budgetary compliance. According to the 2016 World Development Report on Digital Dividends, FMIS also needs analog complements to make them effective and protect against downside risks.
Strong government commitment must be sustained throughout the process. This can be fostered through well-designed project management structures, complemented with adequate considerations for training and change management.
System design should be cognizant of larger budget management issues and follow the functional and business process requirements of government. System designs driven predominantly by technical considerations will be less effective in solving budget management problems. Implementation strategies should take a phased approach rather than simultaneously rolling out a wide set of functionalities that may overstretch client capacity. A modular approach can be more cost-effective, and could prioritize budget execution and reporting to achieve significant progress on budgetary control and cash management.
Transaction processing through FMIS needs to be comprehensive to ensure credible and complete information for financial operations and management reporting. The benefits from an FMIS pertain only to transactions processed through it.
It is important to understand the transaction ecosystem. While ultimately all transactions should be processed through FMIS, first targeting high-value transactions in system deployment will strengthen fiscal discipline. The following principles could help achieve ample coverage (and expenditure control):
All transactions generated at the central Ministry of Finance, such as fiscal transfers, subsidies, and debt service payments, should be processed through FMIS;
All payroll and civil service pension payments calculated by a central system should be processed through FMIS (these would likely constitute some 30-40% of the total budget);
All payments from line ministries or spending units above the transaction threshold should be processed through FMIS;
While low-value payments should also be processed through FMIS, they can be disbursed through innovative FinTech products such as mobile money or smart cards.
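The routing principles above amount to a simple decision rule. The following sketch is purely illustrative: the function name, the threshold value, and the category labels are assumptions for demonstration, not part of the Guidance Note.

```python
# Illustrative sketch of the FMIS transaction-routing principles.
# The threshold value and category labels are hypothetical assumptions.

FMIS_THRESHOLD = 10_000  # assumed per-transaction value threshold

CENTRAL_MOF_CATEGORIES = {"fiscal_transfer", "subsidy", "debt_service"}
CENTRAL_PAYROLL_CATEGORIES = {"payroll", "civil_service_pension"}


def route_transaction(category: str, amount: float) -> str:
    """Decide whether a payment is processed through FMIS directly
    or recorded in FMIS but disbursed through a FinTech channel."""
    if category in CENTRAL_MOF_CATEGORIES:
        return "FMIS"  # all central Ministry of Finance payments
    if category in CENTRAL_PAYROLL_CATEGORIES:
        return "FMIS"  # centrally calculated payroll and pensions
    if amount >= FMIS_THRESHOLD:
        return "FMIS"  # high-value line-ministry/spending-unit payments
    # Low-value payments: still captured in FMIS, but disbursed via
    # FinTech products such as mobile money or smart cards.
    return "FMIS via FinTech disbursement"


print(route_transaction("subsidy", 500))
print(route_transaction("office_supplies", 2_500))
```

The point of the sketch is that coverage is universal (everything is recorded through FMIS), while the disbursement channel varies with transaction value.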
Accountability and budget compliance are necessary for FMIS to be effective in managing public expenditures. This requires significant political commitment to overcome resistance from vested interests.
Governments can take advantage of disruptive technologies and FMIS innovations. There are tremendous opportunities to deploy technologies such as cloud computing, big data and machine learning, and robotic process automation to improve budget management. When adopting disruptive technologies, it is important to follow good GovTech principles, such as a citizen-centric approach and a whole-of-government approach rather than ministry-specific solutions.
Ed Olowo-Okere is a Senior Advisor in the Equitable Growth, Finance, and Institutions (EFI) Vice Presidency at the World Bank.
Due to the popularity of the Internet and the networked services that it facilitates, networked devices have become increasingly common in both the workplace and everyday life in recent years—following the trail blazed by smartphones. The data provided by these devices allow for the creation of rich user profiles. As a result, the collection, processing and exchange of such personal data have become drivers of economic growth. History shows that the adoption of new technologies is likely to influence both individual and societal concepts of privacy. Research into privacy has therefore been confronted with continuously changing concepts due to technological progress. From a legal perspective, privacy laws that reflect social values are sought. Privacy enhancing technologies are developed or adapted to take account of technological development. Organizations must also identify protective measures that are effective in terms of scalability and automation. Similarly, research is being conducted from the perspective of Human-Computer Interaction (HCI) to explore design spaces that empower individuals to manage their protection needs with regard to novel data, which they may perceive as sensitive. Taking such an HCI perspective with regard to understanding privacy management on the Internet of Things (IoT), this research mainly focuses on three interrelated goals across the fields of application: 1. Exploring and analyzing how people make sense of data, especially when managing privacy and data disclosure; 2. Identifying, framing and evaluating potential resources for designing sense-making processes; and 3. Exploring the fitness of the identified concepts for inclusion in legal and technical perspectives on supporting decisions regarding privacy on the IoT. Although this work's point of departure is the HCI perspective, it emphasizes the importance of the interrelationships among seemingly independent perspectives. 
Their interdependence is therefore also emphasized and taken into account by subscribing to a user-centered design process throughout this study. More specifically, this thesis adopts a design case study approach, which makes it possible to conduct full user-centered design lifecycles in a concrete application case with participants in the context of everyday life. Based on this approach, it was possible to investigate several currently relevant domains of the IoT, namely smart metering, smartphones, smart homes and connected cars. The results show that the participants were less concerned about (raw) data than about the information that could potentially be derived from it. Against the background of the constant collection of highly technical and abstract data, the content of which only becomes visible through the application of complex algorithms, this study indicates that people should learn to explore and understand these data flexibly, and provides insights into how to design in support of this aim. From the point of view of designing usable privacy protection measures, the information provided to users about data disclosure should focus on the consequences thereof for users' environments and lives. A related concept from law is "informed consent," which I propose should be further developed in order to implement usable mechanisms for individual privacy protection in the era of the IoT. Finally, this thesis demonstrates how research on HCI can be methodologically embedded in a regulative process that will inform both the development of technology and the drafting of legislation.