The following links lead to the full text from the respective local libraries:
Alternatively, you can try to access the desired document yourself via your local library catalog.
If you have access problems, please contact us.
13 results
Sort by:
In: Modern simulation & training: MS & T ; the international training journal, Issue 5, p. 12-19
ISSN: 0937-6348
In: Materials & Design, Volume 8, Issue 1, p. 41-45
In: African affairs: the journal of the Royal African Society, Volume 77, Issue 306, p. 131-132
ISSN: 1468-2621
In: Journal of ethnic and migration studies: JEMS, Volume 5, Issue 4, p. 448-451
ISSN: 1469-9451
In: African affairs: the journal of the Royal African Society, Volume 76, Issue 302, p. 118-119
ISSN: 1468-2621
In: International African Library
In: IAL
Frontmatter -- CONTENTS -- LIST OF MAPS -- LIST OF TABLES, FIGURES AND GRAPHS -- NOTE ON PHOTOGRAPHS -- PREFACE -- NOTE ON NAMES, ORTHOGRAPHY AND PRONUNCIATION -- ABBREVIATIONS -- INTRODUCTION -- PART I MEDICINE MURDER: HISTORICAL BACKGROUND, POLITICAL CONTEXT AND CASE STUDIES -- 1 BASUTOLAND: 'A VERY PRICKLY HEDGEHOG' -- Case Study 1 THE CASE OF THE COBBLER'S HEAD: MORIJA, 1945 -- 2 MEDICINE MURDER: BELIEF AND INCIDENCE -- Case Study 2 'THE CHIEFS OF TODAY HAVE TURNED AGAINST THE PEOPLE': KOMA-KOMA, 1948 -- 3 MEDICINE MURDER: THE DEBATES OF THE LATE 1940s -- Case Study 3 THE 'BATTLE OF THE MEDICINE HORNS': 'MAMATHE'S, LATE 1940s -- 4 NARRATIVE AND COUNTER-NARRATIVE: EXPLAINING MEDICINE MURDER -- Case Study 4 'A MOST UNSAVOURY STATE OF AFFAIRS': MOKHOTLONG, 1940s-50s -- 5 DIAGNOSES AND RESOLUTIONS: FROM FAILURE TO RECRIMINATION TO SILENCE -- INTERLUDE MEDICINE MURDER AND THE LITERARY IMAGINATION -- PART II MEDICINE MURDER: AN ANALYSIS OF PROCESS -- 6 MURDERERS AND THEIR MOTIVES -- 7 PLOTS, MURDERS, MUTILATIONS AND MEDICINE -- 8 POLICE INVESTIGATIONS -- 9 THE JUDICIAL PROCESS -- AFTERMATH -- CONCLUSION -- ADDENDUM: TOWARDS FRAMEWORKS OF COMPARISON -- APPENDIX -- NOTES -- SOURCES -- INDEX
In: Economic analysis and policy, Volume 21, p. 47-78
ISSN: 0313-5926
In: Network science, Volume 8, Issue 4, p. 543-550
ISSN: 2050-1250
R-MAT (for Recursive MATrix) is a simple, widely used model for generating graphs with a power-law degree distribution, a small diameter, and community structure. It is particularly attractive for generating very large graphs because edges can be generated independently by an arbitrary number of processors. However, current R-MAT generators need time logarithmic in the number of nodes to generate an edge, since they produce the node IDs of the connected nodes one bit at a time, at constant time per bit. We achieve constant time per edge by precomputing pieces of node IDs of logarithmic length. Using an alias table data structure, these pieces can then be sampled in constant time. This simple technique leads to practical improvements by an order of magnitude. This further pushes the limits of attainable graph size and makes generation overhead negligible in most situations.
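The constant-time scheme described in this abstract can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it assumes R-MAT quadrant probabilities a, b, c, d and a piece length k, enumerates all 4^k quadrant sequences as "pieces" (each contributing k source bits and k destination bits), and samples pieces in O(1) with Walker's alias method.

```python
import random
from itertools import product

def build_alias_table(probs):
    """Walker's alias method: O(n) setup, then O(1) per sample."""
    n = len(probs)
    prob = [p * n for p in probs]          # scale so the average weight is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                        # s's leftover mass points to l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    for i in small + large:                 # numerical leftovers round to 1
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng):
    """Draw one index in O(1): pick a slot, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

def rmat_piece_table(a, b, c, d, k):
    """Precompute all 4**k quadrant sequences of length k with their
    probabilities; each yields k source-ID bits and k destination-ID bits."""
    quad_bits = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (src bit, dst bit)
    quad_p = [a, b, c, d]
    pieces, probs = [], []
    for seq in product(range(4), repeat=k):
        p, src, dst = 1.0, 0, 0
        for q in seq:
            p *= quad_p[q]
            src = (src << 1) | quad_bits[q][0]
            dst = (dst << 1) | quad_bits[q][1]
        pieces.append((src, dst))
        probs.append(p)
    return pieces, probs

def rmat_edge(pieces, prob, alias, num_pieces, k, rng):
    """Assemble one edge from num_pieces sampled pieces: node IDs have
    num_pieces * k bits, and each piece costs O(1) instead of O(k)."""
    u = v = 0
    for _ in range(num_pieces):
        s, t = pieces[sample(prob, alias, rng)]
        u = (u << k) | s
        v = (v << k) | t
    return u, v
```

For example, with a = b = c = d = 0.25 and k = 2, three pieces per edge yield uniformly random edges over a 64-node graph; skewed quadrant probabilities (e.g. a = 0.57) produce the power-law structure. The point of the paper's trick is visible in `rmat_edge`: the per-edge loop runs over pieces, not over individual bits.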
In: Habitat international: a journal for the study of human settlements, Volume 49, p. 547-558
Field-programmable gate arrays (FPGAs) can offer invaluable computational performance for many compute-intensive algorithms. However, to justify their purchase and administration costs it is necessary to maximize resource utilization over their expected lifetime. Making FPGAs available in a cloud environment would make them attractive to new types of users and applications and help democratize this increasingly popular technology. However, there currently exists no satisfactory technique for offering FPGAs as cloud resources and sharing them between multiple tenants. We propose FPGA groups, which are seen by their clients as a single virtual FPGA, and which aggregate the computational power of multiple physical FPGAs. FPGA groups are elastic, and they may be shared among multiple tenants. We present an autoscaling algorithm to maximize FPGA groups' resource utilization and reduce user-perceived computation latencies. FPGA groups incur a low overhead in the order of 0.09 ms per submitted task. When faced with a challenging workload, the autoscaling algorithm increases resource utilization from 52% to 61% compared to a static resource allocation, while reducing task execution latencies by 61%.
BASE