
Performance Evaluation and Benchmarking: 14th TPC Technology Conference, TPCTC 2022, Sydney, NSW, Australia, September 5, 2022, Revised Selected Papers

✍ Scribed by Raghunath Nambiar, Meikel Poess (eds.)


Publisher
Springer
Year
2023
Tongue
English
Leaves
159
Series
Lecture Notes in Computer Science, 13860
Category
Library


✦ Synopsis


This book constitutes the refereed post-conference proceedings of the 14th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2022, which was held in Sydney, NSW, Australia, on September 5, 2022.

The 5 revised full papers presented were carefully selected from 12 submissions. The contributions cover topics including mixed isolation levels, benchmarking considerations for trustworthy and responsible AI (panel), preliminary scaling characterization of TPCx-AI, and new initiatives in the TPC.

✦ Table of Contents


Preface
TPCTC 2022 Organization
Contents
Pick & Mix Isolation Levels: Mixed Serialization Graph Testing
1 Introduction
2 Serialization Graph Testing
2.1 Protocol Description
2.2 Algorithmic Adjustments
2.3 Many-Core Optimizations
3 Mixing in the Wild
4 Mixing Theory
4.1 System Model
4.2 Weak Isolation Levels
4.3 Mixing of Isolation Levels
5 Mixed Serialization Graph Testing
5.1 Protocol Design
5.2 Discussion
5.3 Implementation Details
6 Evaluation
6.1 Isolation
6.2 Contention
6.3 Update Rate
6.4 Scalability
7 Conclusion
References
BoDS: A Benchmark on Data Sortedness
1 Introduction
2 Data Sortedness Metrics
3 Generating (K,L)-Sorted Data
4 The Benchmark on Data Sortedness
5 BoDS in Action
5.1 Raw Ingestion Performance
5.2 Mixed Workload Performance
6 Toward Sortedness Awareness
7 Conclusion
References
Disaggregated Database Management Systems
1 Introduction
2 Hardware Disaggregation
2.1 Fungible's DPU-Based Disaggregation
2.2 Liqid's Composable Disaggregated Infrastructure (CDI)
3 Memory Disaggregation
4 Disaggregated Database Management Systems
4.1 AlloyDB
4.2 Rockset
4.3 Nova-LSM
5 Future Research Directions
References
TPCx-AI on NVIDIA Jetsons
1 Introduction
2 Background
2.1 Resource-Aware Machine Learning
2.2 System-on-Chip Devices
2.3 TPCx-AI Benchmark
3 Related Work
4 Experimental Methodology and Setup
4.1 Systems
4.2 Metrics
4.3 Benchmark Suite Modifications
5 Results
5.1 Whole Benchmark Run
5.2 Time-Breakdown per Use Case
5.3 Use Case 8
6 Discussion
6.1 Machine Learning on Jetsons
6.2 TPCx-AI Benchmark for Edge Devices
7 Conclusion
References
More the Merrier: Comparative Evaluation of TPCx-AI and MLPerf Benchmarks for AI
1 Introduction
2 AI Benchmarking Tools
3 MLPerf AI Benchmarks
3.1 MLPerf Training
3.2 MLPerf Inference
4 TPCx-AI Benchmark
4.1 TPCx-AI Metric
4.2 TPCx-AI Metric Analysis
5 MLPerf vs TPCx-AI
5.1 Scope and Scoring
5.2 Results Review
5.3 Code License
5.4 Cost
5.5 Efficiency Scores
5.6 Accelerators
5.7 When to Use Each Tool
6 Summary and Conclusions
References
Preliminary Scaling Characterization of TPCx-AI
1 Introduction
2 Related Work
3 TPCx-AI Kit
3.1 Licensing and Setup
3.2 Configuration
3.3 Benchmark Execution
4 Performance Results
4.1 System Under Test (SUT)
4.2 Single-node Implementation
4.3 Multinode Implementation
5 Conclusions and Future Work
References
4mbench: Performance Benchmark of Manufacturing Business Database
1 Introduction
2 Description of the Manufacturing Business Based on the 4m Model
3 4mbench
3.1 Database Schema
3.2 Business Case and Dataset Generation
3.3 Test Queries
4 Experimental Study
4.1 Dataset Generation and Loading
4.2 Test Queries
4.3 4mQ.3 on Different Settings
5 Related Work
6 Conclusion
A Experiment Configuration
A.1 SQL Description of Test Query (4mQ.3)
A.2 [DATE] Variable Substitution
A.3 Configuration Parameters of PostgreSQL
References
Benchmarking Considerations for Trustworthy and Responsible AI (Panel)
1 Introduction
2 Current State of AI Benchmarking
3 Deconstructing Trust and Responsibility in AI
4 Metrics for Trust and Responsibility
5 Challenges and Opportunities for Benchmarking Trust and Interpretability
6 Summary and Conclusions
References
TPCx-AI: First Adopter’s Experience Report
1 TTA
1.1 Background of TTA
1.2 Test Description
References
New Initiatives in the TPC
1 Introduction
1.1 Venturing into New Benchmark Domains
1.2 TPC's Benchmark Development Model Until 2013
1.3 Express Benchmarks™, a Benchmark Model for Rapid Benchmark Development
2 TPC Express Benchmarks™ Becoming Reality
3 Big Data Benchmarks
3.1 Virtualization Benchmarks, TPCx-V and TPCx-HCI
3.2 TPCx-IoT
3.3 Enterprise and Express Class Publications
4 TPC Derived Benchmarks
4.1 Initiative to Allow the Use of TPC Benchmark Material in Non-TPC Benchmarks
4.2 TPC's Open Source Initiative
5 TPCx-AI™, First End-to-End AI Benchmark Standard
5.1 Allowing Cloud Based Benchmark Publications
6 Conclusion
References
Author Index


📜 SIMILAR VOLUMES


Performance Evaluation and Benchmarking:
✍ Raghunath Nambiar; Meikel Poess 📂 Library 📅 2023 🏛 Springer Nature 🌐 English

This book constitutes the refereed post-conference proceedings of the 14th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2022, which was held in Sydney, NSW, Australia, on September 5, 2022. The 5 revised full papers presented were carefully selected from 12 submissions. T

Selected Topics in Performance Evaluatio
✍ Raghunath Nambiar, Meikel Poess (auth.), Raghunath Nambiar, Meikel Poess (eds.) 📂 Library 📅 2013 🏛 Springer-Verlag Berlin Heidelberg 🌐 English

This book constitutes the refereed proceedings of the 4th TPC Technology Conference, TPCTC 2012, held in Istanbul, Turkey, in August 2012. It contains 10 selected peer-reviewed papers, 2 invited talks, a report from the TPC Public Relations Committee, and a report from the workshop on B

Performance Evaluation and Benchmarking:
✍ Raghunath Nambiar (editor), Meikel Poess (editor) 📂 Library 📅 2021 🏛 Springer 🌐 English

This book constitutes the refereed post-conference proceedings of the 12th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2020, held in August 2020. The 8 papers presented were carefully reviewed and cover the following topics: testing ACID compliance in the LDBC so

Performance Evaluation and Benchmarking:
✍ Raghunath Nambiar (editor), Meikel Poess (editor) 📂 Library 📅 2022 🏛 Springer 🌐 English

This book constitutes the refereed post-conference proceedings of the 13th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2021, held in August 2021. The 9 papers presented were carefully reviewed and selected from numerous submissions. The TPC

Performance Characterization and Benchma
✍ Raghunath Nambiar, Meikel Poess (auth.), Raghunath Nambiar, Meikel Poess (eds.) 📂 Library 📅 2014 🏛 Springer International Publishing 🌐 English

This book constitutes the refereed post-proceedings of the 5th TPC Technology Conference, TPCTC 2013, held in Trento, Italy, in August 2013. It contains 7 selected peer-reviewed papers, a report from the TPC Public Relations Committee and one invited paper. The papers present novel ideas and meth

Performance Evaluation and Benchmarking.
✍ Raghunath Nambiar, Meikel Poess (eds.) 📂 Library 📅 2017 🏛 Springer International Publishing 🌐 English

This book constitutes the thoroughly refereed post-conference proceedings of the 8th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2016, held in conjunction with the 41st International Conference on Very Large Databases (VLDB 2016) in New Delhi, India, in September