References

Li, Mu, David G. Andersen, Alexander J. Smola, and Kai Yu. 2014. “Communication Efficient Distributed Machine Learning with the Parameter Server.” In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, edited by Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, 19–27. https://proceedings.neurips.cc/paper/2014/hash/1ff1de774005f8da13f42943881c655f-Abstract.html.
Han, Song, Jeff Pool, John Tran, and William J. Dally. 2015. “Learning Both Weights and Connections for Efficient Neural Networks.” arXiv Preprint arXiv:1506.02626. http://arxiv.org/abs/1506.02626.
Abadi, Martín, Ashish Agarwal, Paul Barham, et al. 2015. “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems.” Google Brain.
Abadi, Martín, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, et al. 2016. “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems.” arXiv Preprint arXiv:1603.04467, March. http://arxiv.org/abs/1603.04467v2.
Abadi, Martín, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. “TensorFlow: A System for Large-Scale Machine Learning.” In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 265–83. USENIX Association. https://www.usenix.org/conference/osdi16/technical-sessions/presentation/abadi.
Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318.
Abdelkader, Ahmed, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, and Chen Zhu. 2020. “Headless Horseman: Adversarial Attacks on Transfer Learning Models.” In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3087–91. IEEE. https://doi.org/10.1109/icassp40776.2020.9053181.
Abdelkhalik, Hamdy, Yehia Arafa, Nandakishore Santhi, and Abdel-Hameed A. Badawy. 2022. “Demystifying the Nvidia Ampere Architecture Through Microbenchmarking and Instruction-Level Analysis.” In 2022 IEEE High Performance Extreme Computing Conference (HPEC). IEEE. https://doi.org/10.1109/hpec55821.2022.9926299.
Addepalli, Sravanti, B. S. Vivek, Arya Baburaj, Gaurang Sriramanan, and R. Venkatesh Babu. 2020. “Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1020–29. IEEE. https://doi.org/10.1109/cvpr42600.2020.00110.
Agarwal, Alekh, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna M. Wallach. 2018. “A Reductions Approach to Fair Classification.” In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, edited by Jennifer G. Dy and Andreas Krause, 80:60–69. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v80/agarwal18a.html.
Agrawal, Dakshi, Selcuk Baktir, Deniz Karakoyunlu, Pankaj Rohatgi, and Berk Sunar. 2007. “Trojan Detection Using IC Fingerprinting.” In 2007 IEEE Symposium on Security and Privacy (SP ’07), 296–310. IEEE. https://doi.org/10.1109/sp.2007.36.
Ahmadilivani, Mohammad Hasan, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, and Maksim Jenihhin. 2024. “A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks.” ACM Computing Surveys 56 (6): 1–39. https://doi.org/10.1145/3638242.
Ahmed, Reyan, Greg Bodwin, Keaton Hamm, Stephen Kobourov, and Richard Spence. 2021. “On Additive Spanners in Weighted Graphs with Local Error.” arXiv Preprint arXiv:2103.09731, March. http://arxiv.org/abs/2103.09731.
Akidau, Tyler, Robert Bradshaw, Craig Chambers, Slava Chernyak, Rafael J. Fernández-Moctezuma, Reuven Lax, Sam McVeety, et al. 2015. “The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing.” Proceedings of the VLDB Endowment 8 (12): 1792–1803. https://doi.org/10.14778/2824032.2824076.
Alghamdi, Wael, Hsiang Hsu, Haewon Jeong, Hao Wang, Peter Michalák, Shahab Asoodeh, and Flávio P. Calmon. 2022. “Beyond Adult and COMPAS: Fair Multi-Class Prediction via Information Projection.” In NeurIPS, 35:38747–60. http://papers.nips.cc/paper_files/paper/2022/hash/fd5013ea0c3f96931dec77174eaf9d80-Abstract-Conference.html.
Altayeb, Moez, Marco Zennaro, and Marcelo Rovai. 2022. “Classifying Mosquito Wingbeat Sound Using TinyML.” In Proceedings of the 2022 ACM Conference on Information Technology for Social Good, 132–37. ACM. https://doi.org/10.1145/3524458.3547258.
Amershi, Saleema, Andrew Begel, Christian Bird, Robert DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi, and Thomas Zimmermann. 2019. “Software Engineering for Machine Learning: A Case Study.” In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 291–300. IEEE. https://doi.org/10.1109/icse-seip.2019.00042.
Amiel, Frederic, Christophe Clavier, and Michael Tunstall. 2006. “Fault Analysis of DPA-Resistant Algorithms.” In Fault Diagnosis and Tolerance in Cryptography, 223–36. Springer Berlin Heidelberg. https://doi.org/10.1007/11889700_20.
Amodei, Dario, Danny Hernandez, et al. 2018. “AI and Compute.” OpenAI Blog. https://openai.com/research/ai-and-compute.
Andrae, Anders, and Tomas Edler. 2015. “On Global Electricity Usage of Communication Technology: Trends to 2030.” Challenges 6 (1): 117–57. https://doi.org/10.3390/challe6010117.
Anthony, Lasse F. Wolff, Benjamin Kanding, and Raghavendra Selvan. 2020. “Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models.” ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems.
Antonakakis, Manos, Tim April, Michael Bailey, Matt Bernhard, Elie Bursztein, Jaime Cochran, Zakir Durumeric, et al. 2017. “Understanding the Mirai Botnet.” In 26th USENIX Security Symposium (USENIX Security 17), 1093–1110.
Ardila, Rosana, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. “Common Voice: A Massively-Multilingual Speech Corpus.” In Proceedings of the Twelfth Language Resources and Evaluation Conference, 4218–22. Marseille, France: European Language Resources Association. https://aclanthology.org/2020.lrec-1.520.
Arifeen, Tooba, Abdus Sami Hassan, and Jeong-A Lee. 2020. “Approximate Triple Modular Redundancy: A Survey.” IEEE Access 8: 139851–67. https://doi.org/10.1109/access.2020.3012673.
Asonov, D., and R. Agrawal. 2004. “Keyboard Acoustic Emanations.” In IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004, 3–11. IEEE. https://doi.org/10.1109/secpri.2004.1301311.
Ateniese, Giuseppe, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici. 2015. “Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers.” International Journal of Security and Networks 10 (3): 137. https://doi.org/10.1504/ijsn.2015.071829.
Attia, Zachi I., Alan Sugrue, Samuel J. Asirvatham, Michael J. Ackerman, Suraj Kapa, Paul A. Friedman, and Peter A. Noseworthy. 2018. “Noninvasive Assessment of Dofetilide Plasma Concentration Using a Deep Learning (Neural Network) Analysis of the Surface Electrocardiogram: A Proof of Concept Study.” PLOS ONE 13 (8): e0201059. https://doi.org/10.1371/journal.pone.0201059.
Aygun, Sercan, Ece Olcay Gunes, and Christophe De Vleeschouwer. 2021. “Efficient and Robust Bitstream Processing in Binarised Neural Networks.” Electronics Letters 57 (5): 219–22. https://doi.org/10.1049/ell2.12045.
Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. “Layer Normalization.” arXiv Preprint arXiv:1607.06450, July. http://arxiv.org/abs/1607.06450v1.
Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. “Neural Machine Translation by Jointly Learning to Align and Translate.” arXiv Preprint arXiv:1409.0473, September. http://arxiv.org/abs/1409.0473v7.
Bai, Tao, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. 2021. “Recent Advances in Adversarial Training for Adversarial Robustness.” arXiv Preprint arXiv:2102.01356, February. http://arxiv.org/abs/2102.01356v5.
Bamoumen, Hatim, Anas Temouden, Nabil Benamar, and Yousra Chtouki. 2022. “How TinyML Can Be Leveraged to Solve Environmental Problems: A Survey.” In 2022 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 338–43. IEEE. https://doi.org/10.1109/3ict56508.2022.9990661.
Banbury, Colby R., Vijay Janapa Reddi, Max Lam, William Fu, Amin Fazel, Jeremy Holleman, Xinyuan Huang, et al. 2020. “Benchmarking TinyML Systems: Challenges and Direction.” arXiv Preprint arXiv:2003.04821. https://arxiv.org/abs/2003.04821.
Banbury, Colby, Emil Njor, Andrea Mattia Garavagno, Matthew Stewart, Pete Warden, Manjunath Kudlur, Nat Jeffries, Xenofon Fafoutis, and Vijay Janapa Reddi. 2024. “Wake Vision: A Tailored Dataset and Benchmark Suite for TinyML Computer Vision Applications,” May. http://arxiv.org/abs/2405.00892v4.
Banbury, Colby, Vijay Janapa Reddi, Peter Torelli, Jeremy Holleman, Nat Jeffries, Csaba Kiraly, Pietro Montino, et al. 2021. “MLPerf Tiny Benchmark.” arXiv Preprint arXiv:2106.07597, June. http://arxiv.org/abs/2106.07597v4.
Bannon, Pete, Ganesh Venkataramanan, Debjit Das Sarma, and Emil Talpes. 2019. “Computer and Redundancy Solution for the Full Self-Driving Computer.” In 2019 IEEE Hot Chips 31 Symposium (HCS), 1–22. IEEE. https://doi.org/10.1109/hotchips.2019.8875645.
Baraglia, David, and Hokuto Konno. 2019. “On the Bauer-Furuta and Seiberg-Witten Invariants of Families of 4-Manifolds.” arXiv Preprint arXiv:1903.01649, March. http://arxiv.org/abs/1903.01649v3.
Bardenet, Rémi, Olivier Cappé, Gersende Fort, and Balázs Kégl. 2015. “Adaptive MCMC with Online Relabeling.” Bernoulli 21 (3). https://doi.org/10.3150/13-bej578.
Barenghi, Alessandro, Guido M. Bertoni, Luca Breveglieri, Mauro Pellicioli, and Gerardo Pelosi. 2010. “Low Voltage Fault Attacks to AES.” In 2010 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), 7–12. IEEE. https://doi.org/10.1109/hst.2010.5513121.
Barroso, Luiz André, Jimmy Clidaras, and Urs Hölzle. 2013. The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Springer International Publishing. https://doi.org/10.1007/978-3-031-01741-4.
Barroso, Luiz André, and Urs Hölzle. 2007. “The Case for Energy-Proportional Computing.” Computer 40 (12): 33–37. https://doi.org/10.1109/mc.2007.443.
Barroso, Luiz André, Urs Hölzle, and Parthasarathy Ranganathan. 2019. The Datacenter as a Computer: Designing Warehouse-Scale Machines. Springer International Publishing. https://doi.org/10.1007/978-3-031-01761-2.
Bau, David, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. “Network Dissection: Quantifying Interpretability of Deep Visual Representations.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3319–27. IEEE. https://doi.org/10.1109/cvpr.2017.354.
Baydin, Atilim Gunes, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. 2017. “Automatic Differentiation in Machine Learning: A Survey.” Journal of Machine Learning Research 18 (153): 1–43. https://jmlr.org/papers/v18/17-468.html.
Beaton, Albert E., and John W. Tukey. 1974. “The Fitting of Power Series, Meaning Polynomials, Illustrated on Band-Spectroscopic Data.” Technometrics 16 (2): 147. https://doi.org/10.2307/1267936.
Beck, Nathaniel, and Simon Jackman. 1998. “Beyond Linearity by Default: Generalized Additive Models.” American Journal of Political Science 42 (2): 596. https://doi.org/10.2307/2991772.
Bedford Taylor, Michael. 2017. “The Evolution of Bitcoin Hardware.” Computer 50 (9): 58–66. https://doi.org/10.1109/mc.2017.3571056.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. ACM. https://doi.org/10.1145/3442188.3445922.
Bengio, Emmanuel, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. 2015. “Conditional Computation in Neural Networks for Faster Models.” arXiv Preprint arXiv:1511.06297, November. http://arxiv.org/abs/1511.06297v2.
Bengio, Yoshua, Nicholas Léonard, and Aaron Courville. 2013. “Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation.” arXiv Preprint arXiv:1308.3432, August. http://arxiv.org/abs/1308.3432v1.
Ben-Nun, Tal, and Torsten Hoefler. 2019. “Demystifying Parallel and Distributed Deep Learning: An in-Depth Concurrency Analysis.” ACM Computing Surveys 52 (4): 1–43. https://doi.org/10.1145/3320060.
Berger, Vance W., and YanYan Zhou. 2014. “Kolmogorov–Smirnov Test: Overview.” In Wiley StatsRef: Statistics Reference Online. Wiley. https://doi.org/10.1002/9781118445112.stat06558.
Bergstra, James, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. “Theano: A CPU and GPU Math Compiler in Python.” In Proceedings of the 9th Python in Science Conference, 4:18–24. 1. SciPy. https://doi.org/10.25080/majora-92bf1922-003.
Beyer, Lucas, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. 2020. “Are We Done with ImageNet?” arXiv Preprint arXiv:2006.07159, June. http://arxiv.org/abs/2006.07159v1.
Bhagoji, Arjun Nitin, Warren He, Bo Li, and Dawn Song. 2018. “Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms.” In Computer Vision – ECCV 2018, 158–74. Springer International Publishing. https://doi.org/10.1007/978-3-030-01258-8_10.
Bhamra, Ran, Adrian Small, Christian Hicks, and Olimpia Pilch. 2024. “Impact Pathways: Geopolitics, Risk and Ethics in Critical Minerals Supply Chains.” International Journal of Operations & Production Management, September. https://doi.org/10.1108/ijopm-03-2024-0228.
Biega, Asia J., Peter Potash, Hal Daumé, Fernando Diaz, and Michèle Finck. 2020. “Operationalizing the Legal Principle of Data Minimization for Personalization.” In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, edited by Jimmy Huang, Yi Chang, Xueqi Cheng, Jaap Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, 399–408. ACM. https://doi.org/10.1145/3397271.3401034.
Biggio, Battista, Blaine Nelson, and Pavel Laskov. 2012. “Poisoning Attacks Against Support Vector Machines.” In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress. http://icml.cc/2012/papers/880.pdf.
Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Springer.
Blackwood, Jayden, Frances C. Wright, Nicole J. Look Hong, and Anna R. Gagliardi. 2019. “Quality of DCIS Information on the Internet: A Content Analysis.” Breast Cancer Research and Treatment 177 (2): 295–305. https://doi.org/10.1007/s10549-019-05315-8.
Bohr, Adam, and Kaveh Memarzadeh. 2020. “The Rise of Artificial Intelligence in Healthcare Applications.” In Artificial Intelligence in Healthcare, 25–60. Elsevier. https://doi.org/10.1016/b978-0-12-818438-7.00002-2.
Bolchini, Cristiana, Luca Cassano, Antonio Miele, and Alessandro Toschi. 2023. “Fast and Accurate Error Simulation for CNNs Against Soft Errors.” IEEE Transactions on Computers 72 (4): 984–97. https://doi.org/10.1109/tc.2022.3184274.
Bondi, Elizabeth, Ashish Kapoor, Debadeepta Dey, James Piavis, Shital Shah, Robert Hannaford, Arvind Iyer, Lucas Joppa, and Milind Tambe. 2018. “Near Real-Time Detection of Poachers from Drones in AirSim.” In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, edited by Jérôme Lang, 5814–16. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2018/847.
Bourtoule, Lucas, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. “Machine Unlearning.” In 2021 IEEE Symposium on Security and Privacy (SP), 141–59. IEEE. https://doi.org/10.1109/sp40001.2021.00019.
Bradbury, James, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, et al. 2018. “JAX: Composable Transformations of Python+NumPy Programs.” http://github.com/google/jax.
Google Brain. 2020. “XLA: Optimizing Compiler for Machine Learning.” TensorFlow Blog. https://tensorflow.org/xla.
———. 2022. TensorFlow Documentation. https://www.tensorflow.org/.
Breier, Jakub, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, and Yang Liu. 2018. “DeepLaser: Practical Fault Attack on Deep Neural Networks.” arXiv Preprint arXiv:1806.05859, June. http://arxiv.org/abs/1806.05859v2.
Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. 2020. “Language Models Are Few-Shot Learners.” Advances in Neural Information Processing Systems (NeurIPS) 33: 1877–1901.
Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” arXiv Preprint arXiv:2005.14165, May. http://arxiv.org/abs/2005.14165v4.
Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, 1st Edition. W. W. Norton Company.
Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability and Transparency, 77–91. PMLR. http://proceedings.mlr.press/v81/buolamwini18a.html.
Burnet, David, and Richard Thomas. 1989. “Spycatcher: The Commodification of Truth.” Journal of Law and Society 16 (2): 210. https://doi.org/10.2307/1410360.
Bushnell, Michael L., and Vishwani D. Agrawal. 2002. “Built-in Self-Test.” In Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits, 489–548.
Buyya, Rajkumar, Anton Beloglazov, and Jemal Abawajy. 2010. “Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges,” June. http://arxiv.org/abs/1006.0308v1.
Cai, Carrie J., Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, et al. 2019. “Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14. ACM. https://doi.org/10.1145/3290605.3300234.
Cai, Han, Chuang Gan, and Song Han. 2020. “Once-for-All: Train One Network and Specialize It for Efficient Deployment.” In International Conference on Learning Representations.
Cai, Han, Chuang Gan, Ligeng Zhu, and Song Han. 2020. “TinyTL: Reduce Memory, Not Parameters for Efficient on-Device Learning.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/81f7acabd411274fcf65ce2070ed568a-Abstract.html.
Calvo, Rafael A., Dorian Peters, Karina Vold, and Richard M. Ryan. 2020. “Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry.” In Ethics of Digital Well-Being, 31–54. Springer International Publishing. https://doi.org/10.1007/978-3-030-50585-1_2.
Carlini, Nicholas, Pratyush Mishra, Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, David A. Wagner, and Wenchao Zhou. 2016. “Hidden Voice Commands.” In 25th USENIX Security Symposium (USENIX Security 16), 513–30. https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/carlini.
Carlini, Nicolas, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace. 2023. “Extracting Training Data from Diffusion Models.” In 32nd USENIX Security Symposium (USENIX Security 23), 5253–70.
Carta, Salvatore, Alessandro Sebastian Podda, Diego Reforgiato Recupero, and Roberto Saia. 2020. “A Local Feature Engineering Strategy to Improve Network Anomaly Detection.” Future Internet 12 (10): 177. https://doi.org/10.3390/fi12100177.
Cavoukian, Ann. 2009. “Privacy by Design.” Office of the Information and Privacy Commissioner.
Cenci, Marcelo Pilotto, Tatiana Scarazzato, Daniel Dotto Munchen, Paula Cristina Dartora, Hugo Marcelo Veit, Andrea Moura Bernardes, and Pablo R. Dias. 2021. “Eco‐friendly Electronics—a Comprehensive Review.” Advanced Materials Technologies 7 (2): 2001263. https://doi.org/10.1002/admt.202001263.
Chandola, Varun, Arindam Banerjee, and Vipin Kumar. 2009. “Anomaly Detection: A Survey.” ACM Computing Surveys 41 (3): 1–58. https://doi.org/10.1145/1541880.1541882.
Chapelle, O., B. Schölkopf, and A. Zien, eds. 2009. “Semi-Supervised Learning (Chapelle, O. et al., Eds.; 2006) [Book Review].” IEEE Transactions on Neural Networks 20 (3): 542. https://doi.org/10.1109/tnn.2009.2015974.
Chen, Chaofan, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan Su. 2019. “This Looks Like That: Deep Learning for Interpretable Image Recognition.” In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, edited by Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, 8928–39. https://proceedings.neurips.cc/paper/2019/hash/adf7ee2dcf142b0e11888e72b43fcb75-Abstract.html.
Chen, Emma, Shvetank Prakash, Vijay Janapa Reddi, David Kim, and Pranav Rajpurkar. 2023. “A Framework for Integrating Artificial Intelligence for Clinical Care with Continuous Therapeutic Monitoring.” Nature Biomedical Engineering, November. https://doi.org/10.1038/s41551-023-01115-0.
Chen, H.-W. 2006. “Gallium, Indium, and Arsenic Pollution of Groundwater from a Semiconductor Manufacturing Area of Taiwan.” Bulletin of Environmental Contamination and Toxicology 77 (2): 289–96. https://doi.org/10.1007/s00128-006-1062-3.
Chen, Mark, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, et al. 2021. “Evaluating Large Language Models Trained on Code.” arXiv Preprint arXiv:2107.03374, July. http://arxiv.org/abs/2107.03374v2.
Chen, Mia Xu, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, et al. 2018. “The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 76–86. Association for Computational Linguistics. https://doi.org/10.18653/v1/p18-1008.
Chen, Tianqi, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. “MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems.” arXiv Preprint arXiv:1512.01274, December. http://arxiv.org/abs/1512.01274v1.
Chen, Tianqi, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, et al. 2018. “TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.” In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 578–94.
Chen, Tianqi, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. “Training Deep Nets with Sublinear Memory Cost.” arXiv Preprint arXiv:1604.06174, April. http://arxiv.org/abs/1604.06174v2.
Chen, Yu-Hsin, Joel Emer, and Vivienne Sze. 2017. “Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks.” IEEE Micro, 1–1. https://doi.org/10.1109/mm.2017.265085944.
Chen, Yu-Hsin, Tushar Krishna, Joel S. Emer, and Vivienne Sze. 2016. “Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks.” IEEE Journal of Solid-State Circuits 51 (1): 186–98. https://doi.org/10.1109/JSSC.2015.2488709.
Chen, Zhiyong, and Shugong Xu. 2023. “Learning Domain-Heterogeneous Speaker Recognition Systems with Personalized Continual Federated Learning.” EURASIP Journal on Audio, Speech, and Music Processing 2023 (1): 33. https://doi.org/10.1186/s13636-023-00299-2.
Chen, Zitao, Guanpeng Li, Karthik Pattabiraman, and Nathan DeBardeleben. 2019. “BinFI: An Efficient Fault Injector for Safety-Critical Machine Learning Systems.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 1–23. SC ’19. New York, NY, USA: ACM. https://doi.org/10.1145/3295500.3356177.
Chen, Zitao, Niranjhana Narayanan, Bo Fang, Guanpeng Li, Karthik Pattabiraman, and Nathan DeBardeleben. 2020. “TensorFI: A Flexible Fault Injection Framework for TensorFlow Applications.” In 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE), 426–35. IEEE. https://doi.org/10.1109/issre5003.2020.00047.
Cheng, Eric, Shahrzad Mirkhani, Lukasz G. Szafaryn, Chen-Yong Cher, Hyungmin Cho, Kevin Skadron, Mircea R. Stan, et al. 2016. “CLEAR: Cross-Layer Exploration for Architecting Resilience - Combining Hardware and Software Techniques to Tolerate Soft Errors in Processor Cores.” In Proceedings of the 53rd Annual Design Automation Conference, 1–6. ACM. https://doi.org/10.1145/2897937.2897996.
Cheng, Yu, et al. 2022. “Memory-Efficient Deep Learning: Advances in Model Compression and Sparsification.” ACM Computing Surveys.
Chetlur, Sharan, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. 2014. “cuDNN: Efficient Primitives for Deep Learning.” arXiv Preprint arXiv:1410.0759, October. http://arxiv.org/abs/1410.0759v3.
Cho, Kyunghyun, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. “On the Properties of Neural Machine Translation: Encoder-Decoder Approaches.” In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8), 103–11. Association for Computational Linguistics.
Choi, Jungwook, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. 2018. “PACT: Parameterized Clipping Activation for Quantized Neural Networks.” arXiv Preprint arXiv:1805.06085, May. http://arxiv.org/abs/1805.06085v2.
Chollet, François, et al. 2015. “Keras.” GitHub Repository. https://github.com/fchollet/keras.
Chollet, François. 2018. “Introduction to Keras.” March 9th.
Choudhary, Tejalal, Vipul Mishra, Anurag Goswami, and Jagannathan Sarangapani. 2020. “A Comprehensive Survey on Model Compression and Acceleration.” Artificial Intelligence Review 53: 5113–55. https://doi.org/10.1007/s10462-020-09816-7.
Chowdhery, Aakanksha, Anatoli Noy, Gaurav Misra, Zhuyun Dai, Quoc V. Le, and Jeff Dean. 2021. “Edge TPU: An Edge-Optimized Inference Accelerator for Deep Learning.” In International Symposium on Computer Architecture.
Christiano, Paul F., Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. “Deep Reinforcement Learning from Human Preferences.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4299–4307. https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html.
Chu, Grace, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, and Andrew Howard. 2021. “Discovering Multi-Hardware Mobile Models via Architecture Search.” In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 3016–25. IEEE. https://doi.org/10.1109/cvprw53098.2021.00337.
Chua, L. 1971. “Memristor-the Missing Circuit Element.” IEEE Transactions on Circuit Theory 18 (5): 507–19. https://doi.org/10.1109/tct.1971.1083337.
Chung, Jae-Won, Yile Gu, Insu Jang, Luoxi Meng, Nikhil Bansal, and Mosharaf Chowdhury. 2023. “Reducing Energy Bloat in Large Model Training.” arXiv Preprint arXiv:2312.06902, December. http://arxiv.org/abs/2312.06902v3.
Cohen, Maxime C., Ruben Lobel, and Georgia Perakis. 2016. “The Impact of Demand Uncertainty on Consumer Subsidies for Green Technology Adoption.” Management Science 62 (5): 1235–58. https://doi.org/10.1287/mnsc.2015.2173.
Coleman, Cody, Edward Chou, Julian Katz-Samuels, Sean Culatana, Peter Bailis, Alexander C. Berg, Robert Nowak, Roshan Sumbaly, Matei Zaharia, and I. Zeki Yalniz. 2022. “Similarity Search for Efficient Active Learning and Search of Rare Concepts.” Proceedings of the AAAI Conference on Artificial Intelligence 36 (6): 6402–10. https://doi.org/10.1609/aaai.v36i6.20591.
Constantinescu, Cristian. 2008. “Intermittent Faults and Effects on Reliability of Integrated Circuits.” In 2008 Annual Reliability and Maintainability Symposium, 370–74. IEEE. https://doi.org/10.1109/rams.2008.4925824.
Contro, Filippo, Marco Crosara, Mariano Ceccato, and Mila Dalla Preda. 2021. “EtherSolve: Computing an Accurate Control-Flow Graph from Ethereum Bytecode.” arXiv Preprint arXiv:2103.09113, March. http://arxiv.org/abs/2103.09113v1.
Cooper, Tom, Suzanne Fallender, Joyann Pafumi, Jon Dettling, Sebastien Humbert, and Lindsay Lessard. 2011. “A Semiconductor Company’s Examination of Its Water Footprint Approach.” In Proceedings of the 2011 IEEE International Symposium on Sustainable Systems and Technology, 1–6. IEEE. https://doi.org/10.1109/issst.2011.5936865.
Cope, Gord. 2009. “Pure Water, Semiconductors and the Recession.” Global Water Intelligence 10 (10).
Intel Corporation. 2021. oneDNN: Intel’s Deep Learning Neural Network Library. https://github.com/oneapi-src/oneDNN.
NVIDIA Corporation. 2017. “GPU-Accelerated Machine Learning and Deep Learning.” Technical Report.
———. 2021. NVIDIA cuDNN: GPU Accelerated Deep Learning. https://developer.nvidia.com/cudnn.
Thinking Machines Corporation. 1992. CM-5 Technical Summary. Thinking Machines Corporation.
Costa, Tiago, Chen Shi, Kevin Tien, and Kenneth L. Shepard. 2019. “A CMOS 2D Transmit Beamformer with Integrated PZT Ultrasound Transducers for Neuromodulation.” In 2019 IEEE Custom Integrated Circuits Conference (CICC), 1–4. IEEE. https://doi.org/10.1109/cicc.2019.8780236.
Courbariaux, Matthieu, Yoshua Bengio, and Jean-Pierre David. 2015. “BinaryConnect: Training Deep Neural Networks with Binary Weights During Propagations.” Advances in Neural Information Processing Systems (NeurIPS) 28: 3123–31.
Courbariaux, Matthieu, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. “Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1.” arXiv Preprint arXiv:1602.02830, February. http://arxiv.org/abs/1602.02830v3.
Crankshaw, Daniel, Xin Wang, Guilio Zhou, Michael J. Franklin, Joseph E. Gonzalez, and Ion Stoica. 2017. “Clipper: A Low-Latency Online Prediction Serving System.” In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), 613–27.
Cui, Hongyi, Jiajun Li, Peng Xie, et al. 2019. “A Survey on Machine Learning Compilers: Taxonomy, Challenges, and Future Directions.” ACM Computing Surveys 52 (4): 1–39.
Curnow, H. J. 1976. “A Synthetic Benchmark.” The Computer Journal 19 (1): 43–49. https://doi.org/10.1093/comjnl/19.1.43.
Cybenko, G. 1992. “Approximation by Superpositions of a Sigmoidal Function.” Mathematics of Control, Signals, and Systems 5 (4): 455. https://doi.org/10.1007/bf02134016.
D’Ignazio, Catherine, and Lauren F. Klein. 2020. “Seven Intersectional Feminist Principles for Equitable and Actionable COVID-19 Data.” Big Data &amp; Society 7 (2): 2053951720942544. https://doi.org/10.1177/2053951720942544.
Dally, William J., Stephen W. Keckler, and David B. Kirk. 2021. “Evolution of the Graphics Processing Unit (GPU).” IEEE Micro 41 (6): 42–51. https://doi.org/10.1109/mm.2021.3113475.
Darvish Rouhani, Bita, Azalia Mirhoseini, and Farinaz Koushanfar. 2017. “TinyDL: Just-in-Time Deep Learning Solution for Constrained Embedded Systems.” In 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1–4. IEEE. https://doi.org/10.1109/iscas.2017.8050343.
Davarzani, Samaneh, David Saucier, Purva Talegaonkar, Erin Parker, Alana Turner, Carver Middleton, Will Carroll, et al. 2023. “Closing the Wearable Gap: Foot–Ankle Kinematic Modeling via Deep Learning Models Based on a Smart Sock Wearable.” Wearable Technologies 4. https://doi.org/10.1017/wtc.2023.3.
David, Robert, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, et al. 2021. “TensorFlow Lite Micro: Embedded Machine Learning for TinyML Systems.” Proceedings of Machine Learning and Systems 3: 800–811.
Davies, Martin. 2011. “Endangered Elements: Critical Thinking.” In Study Skills for International Postgraduates, 111–30. Macmillan Education UK. https://doi.org/10.1007/978-0-230-34553-9\_8.
Davies, Mike, et al. 2021. “Advancing Neuromorphic Computing with Sparse Networks.” Nature Electronics.
Davis, Jacqueline, Daniel Bizo, Andy Lawrence, Owen Rogers, and Max Smolaks. 2022. “Uptime Institute Global Data Center Survey 2022.” Uptime Institute.
Dayarathna, Miyuru, Yonggang Wen, and Rui Fan. 2016. “Data Center Energy Consumption Modeling: A Survey.” IEEE Communications Surveys &amp; Tutorials 18 (1): 732–94. https://doi.org/10.1109/comst.2015.2481183.
Dean, Jeffrey, and Sanjay Ghemawat. 2008. “MapReduce: Simplified Data Processing on Large Clusters.” Communications of the ACM 51 (1): 107–13. https://doi.org/10.1145/1327452.1327492.
Dean, Jeffrey, David Patterson, and Cliff Young. 2018. “A New Golden Age in Computer Architecture: Empowering the Machine-Learning Revolution.” IEEE Micro 38 (2): 21–29.
Deng, Jia, Wei Dong, R. Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. “ImageNet: A Large-Scale Hierarchical Image Database.” In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–55. IEEE. https://doi.org/10.1109/cvprw.2009.5206848.
Deng, Li. 2012. “The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web].” IEEE Signal Processing Magazine 29 (6): 141–42. https://doi.org/10.1109/msp.2012.2211477.
Desai, Tanvi, Felix Ritchie, Richard Welpton, et al. 2016. “Five Safes: Designing Data Access for Research.” Economics Working Paper Series 1601: 28.
Dettmers, Tim, and Luke Zettlemoyer. 2019. “Sparse Networks from Scratch: Faster Training Without Losing Performance.” arXiv Preprint arXiv:1907.04840, July. http://arxiv.org/abs/1907.04840v2.
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv Preprint arXiv:1810.04805, October. http://arxiv.org/abs/1810.04805v2.
Dhar, Sauptik, Junyao Guo, Jiayi (Jason) Liu, Samarth Tripathi, Unmesh Kurup, and Mohak Shah. 2021. “A Survey of on-Device Machine Learning: An Algorithms and Learning Theory Perspective.” ACM Transactions on Internet of Things 2 (3): 1–49. https://doi.org/10.1145/3450494.
Domingos, Pedro. 2016. “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World.” Choice Reviews Online 53 (07): 53–3100. https://doi.org/10.5860/choice.194685.
Dongarra, Jack J., Jeremy Du Croz, Sven Hammarling, and Richard J. Hanson. 1988. “An Extended Set of FORTRAN Basic Linear Algebra Subprograms.” ACM Transactions on Mathematical Software 14 (1): 1–17. https://doi.org/10.1145/42288.42291.
Dosovitskiy, Alexey, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, et al. 2020. “An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale.” International Conference on Learning Representations (ICLR), October. http://arxiv.org/abs/2010.11929v2.
Dosovitskiy, Alexey, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, et al. 2021. “An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale.” International Conference on Learning Representations.
Duarte, Javier, Nhan Tran, Ben Hawks, Christian Herwig, Jules Muhizi, Shvetank Prakash, and Vijay Janapa Reddi. 2022b. “FastML Science Benchmarks: Accelerating Real-Time Scientific Edge Machine Learning,” July. http://arxiv.org/abs/2207.07958v1.
———. 2022a. “FastML Science Benchmarks: Accelerating Real-Time Scientific Edge Machine Learning.” arXiv Preprint arXiv:2207.07958, July. http://arxiv.org/abs/2207.07958v1.
Duisterhof, Bardienus P., Shushuai Li, Javier Burgues, Vijay Janapa Reddi, and Guido C. H. E. de Croon. 2021. “Sniffy Bug: A Fully Autonomous Swarm of Gas-Seeking Nano Quadcopters in Cluttered Environments.” In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 9099–9106. IEEE. https://doi.org/10.1109/iros51168.2021.9636217.
Dwork, Cynthia. n.d. “Differential Privacy: A Survey of Results.” In Theory and Applications of Models of Computation, 1–19. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-79228-4\_1.
Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. “Calibrating Noise to Sensitivity in Private Data Analysis.” In Theory of Cryptography, edited by Shai Halevi and Tal Rabin, 265–84. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/11681878\_14.
Dwork, Cynthia, and Aaron Roth. 2013. “The Algorithmic Foundations of Differential Privacy.” Foundations and Trends® in Theoretical Computer Science 9 (3-4): 211–407. https://doi.org/10.1561/0400000042.
Ebrahimi, Khosrow, Gerard F. Jones, and Amy S. Fleischer. 2014. “A Review of Data Center Cooling Technology, Operating Conditions and the Corresponding Low-Grade Waste Heat Recovery Opportunities.” Renewable and Sustainable Energy Reviews 31 (March): 622–38. https://doi.org/10.1016/j.rser.2013.12.007.
Egwutuoha, Ifeanyi P., David Levy, Bran Selic, and Shiping Chen. 2013. “A Survey of Fault Tolerance Mechanisms and Checkpoint/Restart Implementations for High Performance Computing Systems.” The Journal of Supercomputing 65 (3): 1302–26. https://doi.org/10.1007/s11227-013-0884-0.
Eisenman, Assaf, Kiran Kumar Matam, Steven Ingram, Dheevatsa Mudigere, Raghuraman Krishnamoorthi, Krishnakumar Nair, Misha Smelyanskiy, and Murali Annavaram. 2022. “Check-n-Run: A Checkpointing System for Training Deep Learning Recommendation Models.” In 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22), 929–43. https://www.usenix.org/conference/nsdi22/presentation/eisenman.
Eldan, Ronen, and Mark Russinovich. 2023. “Who’s Harry Potter? Approximate Unlearning in LLMs.” ArXiv Preprint abs/2310.02238 (October). http://arxiv.org/abs/2310.02238v2.
Elman, Jeffrey L. 2002. “Finding Structure in Time.” In Cognitive Modeling, 14:257–88. The MIT Press. https://doi.org/10.7551/mitpress/1888.003.0015.
Elsen, Erich, Marat Dukhan, Trevor Gale, and Karen Simonyan. 2020. “Fast Sparse ConvNets.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14617–26. IEEE. https://doi.org/10.1109/cvpr42600.2020.01464.
Elsken, Thomas, Jan Hendrik Metzen, and Frank Hutter. 2019a. “Neural Architecture Search.” In Automated Machine Learning, 63–77. Springer International Publishing. https://doi.org/10.1007/978-3-030-05318-5\_3.
———. 2019b. “Neural Architecture Search.” In Automated Machine Learning, 63–77. Springer International Publishing. https://doi.org/10.1007/978-3-030-05318-5\_3.
Denton, Emily, Rob Fergus, and Soumith Chintala. 2014. “Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation.” In Advances in Neural Information Processing Systems (NeurIPS), 1269–77.
Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. 2017. “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks.” Nature 542 (7639): 115–18. https://doi.org/10.1038/nature21056.
Everingham, Mark, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. 2009. “The Pascal Visual Object Classes (VOC) Challenge.” International Journal of Computer Vision 88 (2): 303–38. https://doi.org/10.1007/s11263-009-0275-4.
Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2017. “Robust Physical-World Attacks on Deep Learning Models.” ArXiv Preprint abs/1707.08945 (July). http://arxiv.org/abs/1707.08945v5.
Farwell, James P., and Rafal Rohozinski. 2011. “Stuxnet and the Future of Cyber War.” Survival 53 (1): 23–40. https://doi.org/10.1080/00396338.2011.555586.
Fedus, William, Barret Zoph, and Noam Shazeer. 2021. “Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity.” Journal of Machine Learning Research.
Fei-Fei, Li, R. Fergus, and P. Perona. n.d. “Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories.” In 2004 Conference on Computer Vision and Pattern Recognition Workshop. IEEE. https://doi.org/10.1109/cvpr.2004.383.
Feldman, Andrew, Sean Lie, Michael James, et al. 2020. “The Cerebras Wafer-Scale Engine: Opportunities and Challenges of Building an Accelerator at Wafer Scale.” IEEE Micro 40 (2): 20–29. https://doi.org/10.1109/MM.2020.2975796.
Ferentinos, Konstantinos P. 2018. “Deep Learning Models for Plant Disease Detection and Diagnosis.” Computers and Electronics in Agriculture 145 (February): 311–18. https://doi.org/10.1016/j.compag.2018.01.009.
Feurer, Matthias, Aaron Klein, Katharina Eggensperger, Jost Tobias Springenberg, Manuel Blum, and Frank Hutter. 2019. “Auto-Sklearn: Efficient and Robust Automated Machine Learning.” In Automated Machine Learning, 113–34. Springer International Publishing. https://doi.org/10.1007/978-3-030-05318-5\_6.
Fisher, Lawrence D. 1981. “The 8087 Numeric Data Processor.” IEEE Computer 14 (7): 19–29. https://doi.org/10.1109/MC.1981.1653991.
Flynn, M. J. 1966. “Very High-Speed Computing Systems.” Proceedings of the IEEE 54 (12): 1901–9. https://doi.org/10.1109/proc.1966.5273.
Francalanza, Adrian, Luca Aceto, Antonis Achilleos, Duncan Paul Attard, Ian Cassar, Dario Della Monica, and Anna Ingólfsdóttir. 2017. “A Foundation for Runtime Monitoring.” In Runtime Verification, 8–29. Springer International Publishing. https://doi.org/10.1007/978-3-319-67531-2\_2.
Friedman, Batya. 1996. “Value-Sensitive Design.” Interactions 3 (6): 16–23. https://doi.org/10.1145/242485.242493.
Fursov, Ivan, Matvey Morozov, Nina Kaploukhaya, Elizaveta Kovtun, Rodrigo Rivera-Castro, Gleb Gusev, Dmitry Babaev, Ivan Kireev, Alexey Zaytsev, and Evgeny Burnaev. 2021. “Adversarial Attacks on Deep Models for Financial Transaction Records.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery &amp; Data Mining, 2868–78. ACM. https://doi.org/10.1145/3447548.3467145.
Gale, Trevor, Erich Elsen, and Sara Hooker. 2019a. “The State of Sparsity in Deep Neural Networks.” arXiv Preprint arXiv:1902.09574, February. http://arxiv.org/abs/1902.09574v1.
———. 2019b. “The State of Sparsity in Deep Neural Networks.” arXiv Preprint arXiv:1902.09574, February. http://arxiv.org/abs/1902.09574v1.
Gandolfi, Karine, Christophe Mourtel, and Francis Olivier. 2001. “Electromagnetic Analysis: Concrete Results.” In Cryptographic Hardware and Embedded Systems — CHES 2001, 251–61. Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-44709-1\_21.
Gao, Yansong, Said F. Al-Sarawi, and Derek Abbott. 2020. “Physical Unclonable Functions.” Nature Electronics 3 (2): 81–91. https://doi.org/10.1038/s41928-020-0372-5.
Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021b. “Datasheets for Datasets.” Communications of the ACM 64 (12): 86–92. https://doi.org/10.1145/3458723.
———. 2021a. “Datasheets for Datasets.” Communications of the ACM 64 (12): 86–92. https://doi.org/10.1145/3458723.
Geiger, Atticus, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. “Causal Abstractions of Neural Networks.” In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual, edited by Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, 9574–86. https://proceedings.neurips.cc/paper/2021/hash/4f5c422f4d49a5a807eda27434231040-Abstract.html.
Gholami, Amir, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021a. “A Survey of Quantization Methods for Efficient Neural Network Inference.” arXiv Preprint arXiv:2103.13630, March. http://arxiv.org/abs/2103.13630v3.
———. 2021b. “A Survey of Quantization Methods for Efficient Neural Network Inference.” arXiv Preprint arXiv:2103.13630, March. http://arxiv.org/abs/2103.13630v3.
Gholami, Amir, Zhewei Yao, Sehoon Kim, Coleman Hooper, Michael W. Mahoney, and Kurt Keutzer. 2024. “AI and Memory Wall.” IEEE Micro 44 (3): 33–39. https://doi.org/10.1109/mm.2024.3373763.
Gnad, Dennis R. E., Fabian Oboril, and Mehdi B. Tahoori. 2017. “Voltage Drop-Based Fault Attacks on FPGAs Using Valid Bitstreams.” In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), 1–7. IEEE. https://doi.org/10.23919/fpl.2017.8056840.
Goldberg, David. 1991. “What Every Computer Scientist Should Know about Floating-Point Arithmetic.” ACM Computing Surveys 23 (1): 5–48. https://doi.org/10.1145/103162.103163.
Golub, Gene H., and Charles F. Van Loan. 1996. Matrix Computations. Johns Hopkins University Press.
Gong, Ruihao, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. 2019. “Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks.” arXiv Preprint arXiv:1908.05033, August. http://arxiv.org/abs/1908.05033v1.
Goodfellow, Ian J., Aaron Courville, and Yoshua Bengio. 2013a. “Scaling up Spike-and-Slab Models for Unsupervised Feature Learning.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8): 1902–14. https://doi.org/10.1109/tpami.2012.273.
———. 2013b. “Scaling up Spike-and-Slab Models for Unsupervised Feature Learning.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8): 1902–14. https://doi.org/10.1109/tpami.2012.273.
———. 2013c. “Scaling up Spike-and-Slab Models for Unsupervised Feature Learning.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (8): 1902–14. https://doi.org/10.1109/tpami.2012.273.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Communications of the ACM 63 (11): 139–44. https://doi.org/10.1145/3422622.
Google. n.d. “XLA: Optimizing Compiler for Machine Learning.” https://www.tensorflow.org/xla.
Gordon, Mitchell, Kevin Duh, and Nicholas Andrews. 2020. “Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning.” In Proceedings of the 5th Workshop on Representation Learning for NLP. Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.repl4nlp-1.18.
Gou, Jianping, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. 2021. “Knowledge Distillation: A Survey.” International Journal of Computer Vision 129 (6): 1789–819. https://doi.org/10.1007/s11263-021-01453-z.
Gräfe, Ralf, Qutub Syed Sha, Florian Geissler, and Michael Paulitsch. 2023. “Large-Scale Application of Fault Injection into PyTorch Models: An Extension to PyTorchFI for Validation Efficiency.” In 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S), 56–62. IEEE. https://doi.org/10.1109/dsn-s58398.2023.00025.
Graphcore. 2020. “The Colossus MK2 IPU Processor.” Graphcore Technical Paper.
Greengard, Samuel. 2021. The Internet of Things. The MIT Press. https://doi.org/10.7551/mitpress/13937.001.0001.
Groeneveld, Dirk, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, et al. 2024. “OLMo: Accelerating the Science of Language Models.” arXiv Preprint arXiv:2402.00838, February. http://arxiv.org/abs/2402.00838v4.
Grossman, Elizabeth. 2007. High Tech Trash: Digital Devices, Hidden Toxics, and Human Health. Island Press.
Gruslys, Audrunas, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. 2016. “Memory-Efficient Backpropagation Through Time.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 4125–33. https://proceedings.neurips.cc/paper/2016/hash/a501bebf79d570651ff601788ea9d16d-Abstract.html.
Gu, Ivy. 2023. “Deep Learning Model Compression (II).” Medium. https://ivygdy.medium.com/deep-learning-model-compression-ii-546352ea9453.
Gudivada, Venkat N., Dhana Rao, et al. 2017. “Data Quality Considerations for Big Data and Machine Learning: Going Beyond Data Cleaning and Transformations.” IEEE Transactions on Knowledge and Data Engineering.
Gujarati, Arpan, Reza Karimi, Safya Alzayat, Wei Hao, Antoine Kaufmann, Ymir Vigfusson, and Jonathan Mace. 2020. “Serving DNNs Like Clockwork: Performance Predictability from the Bottom Up.” In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), 443–62. https://www.usenix.org/conference/osdi20/presentation/gujarati.
Gulshan, Varun, Lily Peng, Marc Coram, Martin C. Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, et al. 2016. “Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.” JAMA 316 (22): 2402. https://doi.org/10.1001/jama.2016.17216.
Guo, Yutao, Hao Wang, Hui Zhang, Tong Liu, Zhaoguang Liang, Yunlong Xia, Li Yan, et al. 2019. “Mobile Photoplethysmographic Technology to Detect Atrial Fibrillation.” Journal of the American College of Cardiology 74 (19): 2365–75. https://doi.org/10.1016/j.jacc.2019.08.019.
Gupta, Maanak, Charankumar Akiri, Kshitiz Aryal, Eli Parker, and Lopamudra Praharaj. 2023. “From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy.” IEEE Access 11: 80218–45. https://doi.org/10.1109/access.2023.3300381.
Gupta, Maya R., Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Robert Canini, Alexander Mangylov, Wojtek Moczydlowski, and Alexander Van Esbroeck. 2016. “Monotonic Calibrated Interpolated Look-up Tables.” Journal of Machine Learning Research 17 (1): 109:1–47. https://jmlr.org/papers/v17/15-243.html.
Gupta, Suyog, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. 2015. “Deep Learning with Limited Numerical Precision.” In International Conference on Machine Learning, 1737–46. PMLR.
Gupta, Udit, Mariam Elgamal, Gage Hills, Gu-Yeon Wei, Hsien-Hsin S. Lee, David Brooks, and Carole-Jean Wu. 2022. “ACT: Designing Sustainable Computer Systems with an Architectural Carbon Modeling Tool.” In Proceedings of the 49th Annual International Symposium on Computer Architecture, 784–99. ACM. https://doi.org/10.1145/3470496.3527408.
Hamming, R. W. 1950. “Error Detecting and Error Correcting Codes.” Bell System Technical Journal 29 (2): 147–60. https://doi.org/10.1002/j.1538-7305.1950.tb00463.x.
Han, Song, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. 2016. “EIE: Efficient Inference Engine on Compressed Deep Neural Network.” In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), 243–54. IEEE. https://doi.org/10.1109/isca.2016.30.
Han, Song, Huizi Mao, and William J. Dally. 2015. “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.” arXiv Preprint arXiv:1510.00149, October. http://arxiv.org/abs/1510.00149v5.
———. 2016. “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.” International Conference on Learning Representations (ICLR).
Handlin, Oscar. 1965. “Science and Technology in Popular Culture.” Daedalus, 156–70.
Hardt, Moritz, Eric Price, and Nati Srebro. 2016. “Equality of Opportunity in Supervised Learning.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 3315–23. https://proceedings.neurips.cc/paper/2016/hash/9d2682367c3935defcb1f9e247a97c0d-Abstract.html.
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. “Deep Residual Learning for Image Recognition.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. IEEE. https://doi.org/10.1109/cvpr.2016.90.
———. 2016b. “Deep Residual Learning for Image Recognition.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. IEEE. https://doi.org/10.1109/cvpr.2016.90.
He, Xuzhen. 2023a. “Accelerated Linear Algebra Compiler for Computationally Efficient Numerical Models: Success and Potential Area of Improvement.” PLOS ONE 18 (2): e0282265. https://doi.org/10.1371/journal.pone.0282265.
———. 2023b. “Accelerated Linear Algebra Compiler for Computationally Efficient Numerical Models: Success and Potential Area of Improvement.” PLOS ONE 18 (2): e0282265. https://doi.org/10.1371/journal.pone.0282265.
He, Yi, Prasanna Balaprakash, and Yanjing Li. 2020. “FIdelity: Efficient Resilience Analysis Framework for Deep Learning Accelerators.” In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 270–81. IEEE. https://doi.org/10.1109/micro50266.2020.00033.
He, Yihui, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. 2018. “AMC: AutoML for Model Compression and Acceleration on Mobile Devices.” In Computer Vision – ECCV 2018, 815–32. Springer International Publishing. https://doi.org/10.1007/978-3-030-01234-2\_48.
He, Yi, Mike Hutton, Steven Chan, Robert De Gruijl, Rama Govindaraju, Nishant Patil, and Yanjing Li. 2023. “Understanding and Mitigating Hardware Failures in Deep Learning Training Systems.” In Proceedings of the 50th Annual International Symposium on Computer Architecture, 1–16. ACM. https://doi.org/10.1145/3579371.3589105.
Hébert-Johnson, Úrsula, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. 2018. “Multicalibration: Calibration for the (Computationally-Identifiable) Masses.” In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, edited by Jennifer G. Dy and Andreas Krause, 80:1944–53. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v80/hebert-johnson18a.html.
Henderson, Peter, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. “Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning.” CoRR abs/2002.05651 (February). http://arxiv.org/abs/2002.05651v2.
Hendrycks, Dan, and Thomas Dietterich. 2019. “Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.” arXiv Preprint arXiv:1903.12261, March. http://arxiv.org/abs/1903.12261v1.
Hennessy, John L., and David A. Patterson. 2019. “A New Golden Age for Computer Architecture.” Communications of the ACM 62 (2): 48–60. https://doi.org/10.1145/3282307.
Hennessy, John L., and David A. Patterson. 2003. Computer Architecture: A Quantitative Approach. Morgan Kaufmann.
Hernandez, Danny, Tom B. Brown, et al. 2020. “Measuring the Algorithmic Efficiency of Neural Networks.” OpenAI Blog. https://openai.com/research/ai-and-efficiency.
Hernandez, Danny, and Tom B. Brown. 2020. “Measuring the Algorithmic Efficiency of Neural Networks.” arXiv Preprint arXiv:2007.03051, May. https://doi.org/10.48550/arxiv.2005.04305.
Heyndrickx, Wouter, Lewis Mervin, Tobias Morawietz, Noé Sturm, Lukas Friedrich, Adam Zalewski, Anastasia Pentina, et al. 2023. “MELLODDY: Cross-Pharma Federated Learning at Unprecedented Scale Unlocks Benefits in QSAR Without Compromising Proprietary Information.” Journal of Chemical Information and Modeling 64 (7): 2331–44. https://pubs.acs.org/doi/10.1021/acs.jcim.3c00799.
Himmelstein, Gracie, David Bates, and Li Zhou. 2022. “Examination of Stigmatizing Language in the Electronic Health Record.” JAMA Network Open 5 (1): e2144967. https://doi.org/10.1001/jamanetworkopen.2021.44967.
Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. 2015a. “Distilling the Knowledge in a Neural Network.” arXiv Preprint arXiv:1503.02531, March. http://arxiv.org/abs/1503.02531v1.
———. 2015b. “Distilling the Knowledge in a Neural Network,” March. http://arxiv.org/abs/1503.02531v1.
Hirschberg, Julia, and Christopher D. Manning. 2015. “Advances in Natural Language Processing.” Science 349 (6245): 261–66. https://doi.org/10.1126/science.aaa8685.
Hochreiter, Sepp. 1998. “The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions.” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 06 (02): 107–16. https://doi.org/10.1142/s0218488598000094.
Hochreiter, Sepp, and Jürgen Schmidhuber. 1997. “Long Short-Term Memory.” Neural Computation 9 (8): 1735–80. https://doi.org/10.1162/neco.1997.9.8.1735.
Hoefler, Torsten, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. 2021. “Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks.” arXiv Preprint arXiv:2102.00554 22 (January): 1–124. http://arxiv.org/abs/2102.00554v1.
Hoefler, Torsten, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. 2021. “Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks.” Journal of Machine Learning Research 22 (241): 1–124.
Hong, Sanghyun, Nicholas Carlini, and Alexey Kurakin. 2023. “Publishing Efficient on-Device Models Increases Adversarial Vulnerability.” In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 271–90. IEEE. https://doi.org/10.1109/satml54575.2023.00026.
Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. 1989. “Multilayer Feedforward Networks Are Universal Approximators.” Neural Networks 2 (5): 359–66. https://doi.org/10.1016/0893-6080(89)90020-8.
Horowitz, Mark. 2014. “1.1 Computing’s Energy Problem (and What We Can Do about It).” In 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC). IEEE. https://doi.org/10.1109/isscc.2014.6757323.
Hosseini, Hossein, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. “Deceiving Google’s Perspective API Built for Detecting Toxic Comments.” ArXiv Preprint abs/1702.08138 (February). http://arxiv.org/abs/1702.08138v1.
Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017a. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” April. http://arxiv.org/abs/1704.04861v1.
———. 2017b. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv Preprint abs/1704.04861 (April). http://arxiv.org/abs/1704.04861v1.
Howard, Jeremy, and Sylvain Gugger. 2020. “Fastai: A Layered API for Deep Learning.” Information 11 (2): 108. https://doi.org/10.3390/info11020108.
Hsiao, Yu-Shun, Zishen Wan, Tianyu Jia, Radhika Ghosal, Abdulrahman Mahmoud, Arijit Raychowdhury, David Brooks, Gu-Yeon Wei, and Vijay Janapa Reddi. 2023. “MAVFI: An End-to-End Fault Analysis Framework with Anomaly Detection and Recovery for Micro Aerial Vehicles.” In 2023 Design, Automation &amp; Test in Europe Conference &amp; Exhibition (DATE), 1–6. IEEE. https://doi.org/10.23919/date56975.2023.10137246.
Hsu, Liang-Ching, Ching-Yi Huang, Yen-Hsun Chuang, Ho-Wen Chen, Ya-Ting Chan, Heng Yi Teah, Tsan-Yao Chen, Chiung-Fen Chang, Yu-Ting Liu, and Yu-Min Tzou. 2016. “Accumulation of Heavy Metals and Trace Elements in Fluvial Sediments Received Effluents from Traditional and Semiconductor Industries.” Scientific Reports 6 (1): 34250. https://doi.org/10.1038/srep34250.
Hu, Bowen, Zhiqiang Zhang, and Yun Fu. 2021. “Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference.” Advances in Neural Information Processing Systems 34: 18537–50.
Huang, Wei, Jie Chen, and Lei Zhang. 2023. “Adaptive Neural Networks for Real-Time Processing in Autonomous Systems.” IEEE Transactions on Intelligent Transportation Systems.
Huang, Yanping, et al. 2019. “GPipe: Efficient Training of Giant Neural Networks Using Pipeline Parallelism.” In Advances in Neural Information Processing Systems (NeurIPS).
Hubara, Itay, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2018. “Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations.” Journal of Machine Learning Research (JMLR) 18: 1–30.
Hutter, Frank, Lars Kotthoff, and Joaquin Vanschoren. 2019b. Automated Machine Learning: Methods, Systems, Challenges. Springer International Publishing. https://doi.org/10.1007/978-3-030-05318-5.
———. 2019a. Automated Machine Learning: Methods, Systems, Challenges. Springer International Publishing. https://doi.org/10.1007/978-3-030-05318-5.
Hutter, Michael, Jorn-Marc Schmidt, and Thomas Plos. 2009. “Contact-Based Fault Injections and Power Analysis on RFID Tags.” In 2009 European Conference on Circuit Theory and Design, 409–12. IEEE. https://doi.org/10.1109/ecctd.2009.5275012.
Hwu, Wen-mei W. 2011. “Introduction.” In GPU Computing Gems Emerald Edition, xix–xx. Elsevier. https://doi.org/10.1016/b978-0-12-384988-5.00064-4.
Iandola, Forrest N., Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. 2016. “SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size,” February. http://arxiv.org/abs/1602.07360v4.
Tesla, Inc. 2021. “Tesla AI Day: D1 Dojo Chip.” Tesla AI Day Presentation.
Inmon, W. H. 2005. Building the Data Warehouse. John Wiley &amp; Sons.
Ioffe, Sergey, and Christian Szegedy. 2015a. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” International Conference on Machine Learning, 448–56.
———. 2015b. “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” International Conference on Machine Learning (ICML), February, 448–56. http://arxiv.org/abs/1502.03167v3.
Ippolito, Daphne, Florian Tramer, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher Choquette Choo, and Nicholas Carlini. 2023. “Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy.” In Proceedings of the 16th International Natural Language Generation Conference, 28–53. Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.inlg-main.3.
Irimia-Vladu, Mihai. 2014. “‘Green’ Electronics: Biodegradable and Biocompatible Materials and Devices for Sustainable Future.” Chem. Soc. Rev. 43 (2): 588–610. https://doi.org/10.1039/c3cs60235d.
Jacob, Benoit, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018b. “Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2704–13. IEEE. https://doi.org/10.1109/cvpr.2018.00286.
———. 2018a. “Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2704–13. IEEE. https://doi.org/10.1109/cvpr.2018.00286.
———. 2018c. “Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2704–13. IEEE. https://doi.org/10.1109/cvpr.2018.00286.
Jacobs, David, Bas Rokers, Archisman Rudra, and Zili Liu. 2002. “Fragment Completion in Humans and Machines.” In Advances in Neural Information Processing Systems 14, 35:27–34. The MIT Press. https://doi.org/10.7551/mitpress/1120.003.0008.
Jaech, Aaron, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, et al. 2024. “OpenAI o1 System Card.” CoRR. https://doi.org/10.48550/ARXIV.2412.16720.
Janapa Reddi, Vijay et al. 2022. “MLPerf Mobile V2.0: An Industry-Standard Benchmark Suite for Mobile Machine Learning.” In Proceedings of Machine Learning and Systems, 4:806–23.
Janapa Reddi, Vijay, Alexander Elium, Shawn Hymel, David Tischler, Daniel Situnayake, Carl Ward, Louis Moreau, et al. 2023. “Edge Impulse: An MLOps Platform for Tiny Machine Learning.” Proceedings of Machine Learning and Systems 5.
Jha, A. R. 2014. Rare Earth Materials: Properties and Applications. CRC Press. https://doi.org/10.1201/b17045.
Jha, Saurabh, Subho Banerjee, Timothy Tsai, Siva K. S. Hari, Michael B. Sullivan, Zbigniew T. Kalbarczyk, Stephen W. Keckler, and Ravishankar K. Iyer. 2019. “ML-Based Fault Injection for Autonomous Vehicles: A Case for Bayesian Fault Injection.” In 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 112–24. IEEE. https://doi.org/10.1109/dsn.2019.00025.
Jia, Xianyan, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie, et al. 2018. “Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes.” arXiv Preprint arXiv:1807.11205, July. http://arxiv.org/abs/1807.11205v1.
Jia, Xu, Bert De Brabandere, Tinne Tuytelaars, and Luc Van Gool. 2016. “Dynamic Filter Networks.” Advances in Neural Information Processing Systems 29.
Jia, Yangqing, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. “Caffe: Convolutional Architecture for Fast Feature Embedding.” In Proceedings of the 22nd ACM International Conference on Multimedia, 675–78. ACM. https://doi.org/10.1145/2647868.2654889.
Jia, Zhihao, Matei Zaharia, and Alex Aiken. 2018. “Beyond Data and Model Parallelism for Deep Neural Networks.” arXiv Preprint arXiv:1807.05358, July. http://arxiv.org/abs/1807.05358v1.
Jia, Ziheng, Nathan Tillman, Luis Vega, Po-An Ouyang, Matei Zaharia, and Joseph E. Gonzalez. 2019. “Optimizing DNN Computation with Relaxed Graph Substitutions.” Conference on Machine Learning and Systems (MLSys).
Jiao, Xiaoqi, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. “TinyBERT: Distilling BERT for Natural Language Understanding.” In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.372.
Jin, Yilun, Xiguang Wei, Yang Liu, and Qiang Yang. 2020. “Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective.” arXiv Preprint arXiv:2002.11545, February. http://arxiv.org/abs/2002.11545v2.
Johnson-Roberson, Matthew, Charles Barto, Rounak Mehta, Sharath Nittur Sridhar, Karl Rosaen, and Ram Vasudevan. 2017. “Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?” In 2017 IEEE International Conference on Robotics and Automation (ICRA), 746–53. Singapore, Singapore: IEEE. https://doi.org/10.1109/icra.2017.7989092.
Jones, Gareth A. 2018. “Joining Dessins Together.” arXiv Preprint arXiv:1810.03960, October. http://arxiv.org/abs/1810.03960v1.
Jordan, T. L. 1982. “A Guide to Parallel Computation and Some Cray-1 Experiences.” In Parallel Computations, 1–50. Elsevier. https://doi.org/10.1016/b978-0-12-592101-5.50006-3.
Joulin, Armand, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. “Bag of Tricks for Efficient Text Classification.” In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 18:1–42. Association for Computational Linguistics. https://doi.org/10.18653/v1/e17-2068.
Jouppi, Norman P. et al. 2017. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA).
Jouppi, Norman P., Doe Hyun Yoon, Matthew Ashcraft, Mark Gottscho, Thomas B. Jablin, George Kurian, James Laudon, et al. 2021b. “Ten Lessons from Three Generations Shaped Google’s TPUv4i: Industrial Product.” In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 1–14. IEEE. https://doi.org/10.1109/isca52012.2021.00010.
———, et al. 2021a. “Ten Lessons from Three Generations Shaped Google’s TPUv4i: Industrial Product.” In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 1–14. IEEE. https://doi.org/10.1109/isca52012.2021.00010.
Jouppi, Norman P., Doe Hyun Yoon, George Kurian, Sheng Li, Nishant Patil, James Laudon, Cliff Young, and David Patterson. 2020. “A Domain-Specific Supercomputer for Training Deep Neural Networks.” Communications of the ACM 63 (7): 67–78. https://doi.org/10.1145/3360307.
Jouppi, Norman P., Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, et al. 2017b. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ACM. https://doi.org/10.1145/3079856.3080246.
———, et al. 2017c. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ACM. https://doi.org/10.1145/3079856.3080246.
———, et al. 2017a. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ACM. https://doi.org/10.1145/3079856.3080246.
Joye, Marc, and Michael Tunstall. 2012. Fault Analysis in Cryptography. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29656-7.
Kairouz, Peter, Sewoong Oh, and Pramod Viswanath. 2015. “Secure Multi-Party Differential Privacy.” In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, edited by Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, 2008–16. https://proceedings.neurips.cc/paper/2015/hash/a01610228fe998f515a72dd730294d87-Abstract.html.
Kannan, Harish, Pradeep Dubey, and Mark Horowitz. 2023. “Chiplet-Based Architectures: The Future of AI Accelerators.” IEEE Micro 43 (1): 46–55. https://doi.org/10.1109/MM.2022.1234567.
Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. “Scaling Laws for Neural Language Models.” ArXiv Preprint abs/2001.08361 (January). http://arxiv.org/abs/2001.08361v1.
Karargyris, Alexandros, Renato Umeton, Micah J. Sheller, Alejandro Aristizabal, Johnu George, Anna Wuest, Sarthak Pati, et al. 2023. “Federated Benchmarking of Medical Artificial Intelligence with MedPerf.” Nature Machine Intelligence 5 (7): 799–810. https://doi.org/10.1038/s42256-023-00652-2.
Kaur, Harmanpreet, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. 2020. “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, edited by Regina Bernhaupt, Florian ’Floyd’Mueller, David Verweij, Josh Andres, Joanna McGrenere, Andy Cockburn, Ignacio Avellino, et al., 1–14. ACM. https://doi.org/10.1145/3313831.3376219.
Kawazoe Aguilera, Marcos, Wei Chen, and Sam Toueg. 1997. “Heartbeat: A Timeout-Free Failure Detector for Quiescent Reliable Communication.” In Distributed Algorithms, 126–40. Springer Berlin Heidelberg. https://doi.org/10.1007/bfb0030680.
Khan, Mohammad Emtiyaz, and Siddharth Swaroop. 2021. “Knowledge-Adaptation Priors.” In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual, edited by Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, 19757–70. https://proceedings.neurips.cc/paper/2021/hash/a4380923dd651c195b1631af7c829187-Abstract.html.
Kiela, Douwe, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, et al. 2021. “Dynabench: Rethinking Benchmarking in NLP.” In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 9:418–34. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.naacl-main.324.
Kim, Jungrae, Michael Sullivan, and Mattan Erez. 2015. “Bamboo ECC: Strong, Safe, and Flexible Codes for Reliable Computer Memory.” In 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), 101–12. IEEE. https://doi.org/10.1109/hpca.2015.7056025.
Kim, Sunju, Chungsik Yoon, Seunghon Ham, Jihoon Park, Ohun Kwon, Donguk Park, Sangjun Choi, Seungwon Kim, Kwonchul Ha, and Won Kim. 2018. “Chemical Use in the Semiconductor Manufacturing Industry.” International Journal of Occupational and Environmental Health 24 (3-4): 109–18. https://doi.org/10.1080/10773525.2018.1519957.
Kingma, Diederik P., and Jimmy Ba. 2014. “Adam: A Method for Stochastic Optimization.” ICLR, December. http://arxiv.org/abs/1412.6980v9.
Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, et al. 2017. “Overcoming Catastrophic Forgetting in Neural Networks.” Proceedings of the National Academy of Sciences 114 (13): 3521–26. https://doi.org/10.1073/pnas.1611835114.
Kleppmann, Martin. 2016. Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems. O’Reilly Media. http://shop.oreilly.com/product/0636920032175.do.
Ko, Yohan. 2021. “Characterizing System-Level Masking Effects Against Soft Errors.” Electronics 10 (18): 2286. https://doi.org/10.3390/electronics10182286.
Kocher, Paul, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, et al. 2019a. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP), 1–19. IEEE. https://doi.org/10.1109/sp.2019.00002.
———, et al. 2019b. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP), 1–19. IEEE. https://doi.org/10.1109/sp.2019.00002.
Kocher, Paul, Joshua Jaffe, and Benjamin Jun. 1999. “Differential Power Analysis.” In Advances in Cryptology — CRYPTO’ 99, 388–97. Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-48405-1_25.
Kocher, Paul, Joshua Jaffe, Benjamin Jun, and Pankaj Rohatgi. 2011. “Introduction to Differential Power Analysis.” Journal of Cryptographic Engineering 1 (1): 5–27. https://doi.org/10.1007/s13389-011-0006-y.
Koh, Pang Wei, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020. “Concept Bottleneck Models.” In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, 119:5338–48. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v119/koh20a.html.
Koh, Pang Wei, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, et al. 2021. “WILDS: A Benchmark of in-the-Wild Distribution Shifts.” In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, edited by Marina Meila and Tong Zhang, 139:5637–64. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v139/koh21a.html.
Koizumi, Yuma, Shoichiro Saito, Hisashi Uematsu, Noboru Harada, and Keisuke Imoto. 2019. “ToyADMOS: A Dataset of Miniature-Machine Operating Sounds for Anomalous Sound Detection.” In 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 313–17. IEEE. https://doi.org/10.1109/waspaa.2019.8937164.
Krishnamoorthi, Raghuraman. 2018. “Quantizing Deep Convolutional Networks for Efficient Inference: A Whitepaper.” arXiv Preprint arXiv:1806.08342, June. http://arxiv.org/abs/1806.08342v1.
Krishnan, Rayan, Pranav Rajpurkar, and Eric J. Topol. 2022. “Self-Supervised Learning in Medicine and Healthcare.” Nature Biomedical Engineering 6 (12): 1346–52. https://doi.org/10.1038/s41551-022-00914-1.
Krizhevsky, Alex. 2009. “Learning Multiple Layers of Features from Tiny Images.”
Krizhevsky, Alex, Geoffrey Hinton, et al. 2009. “Learning Multiple Layers of Features from Tiny Images.”
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2017a. “ImageNet Classification with Deep Convolutional Neural Networks.” Communications of the ACM 60 (6): 84–90. https://doi.org/10.1145/3065386.
———. 2017b. “ImageNet Classification with Deep Convolutional Neural Networks.” Edited by F. Pereira, C. J. Burges, L. Bottou, and K. Q. Weinberger. Communications of the ACM 60 (6): 84–90. https://doi.org/10.1145/3065386.
———. 2017c. “ImageNet Classification with Deep Convolutional Neural Networks.” Communications of the ACM 60 (6): 84–90. https://doi.org/10.1145/3065386.
Kuchaiev, Oleksii, Boris Ginsburg, Igor Gitman, Vitaly Lavrukhin, Carl Case, and Paulius Micikevicius. 2018. “OpenSeq2Seq: Extensible Toolkit for Distributed and Mixed Precision Training of Sequence-to-Sequence Models.” In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), 41–46. Association for Computational Linguistics. https://doi.org/10.18653/v1/w18-2507.
Kuhn, Max, and Kjell Johnson. 2013. Applied Predictive Modeling. Springer New York. https://doi.org/10.1007/978-1-4614-6849-3.
Kung, H. T. 1982. “Why Systolic Architectures?” IEEE Computer 15 (1): 37–46. https://doi.org/10.1109/MC.1982.1653825.
Kung, Hsiang Tsung, and Charles E. Leiserson. 1979. “Systolic Arrays (for VLSI).” In Sparse Matrix Proceedings 1978, 1:256–82. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.
Kurth, Thorsten, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, and Anima Anandkumar. 2023. “FourCastNet: Accelerating Global High-Resolution Weather Forecasting Using Adaptive Fourier Neural Operators.” In Proceedings of the Platform for Advanced Scientific Computing Conference, 1–11. ACM. https://doi.org/10.1145/3592979.3593412.
Kwon, Young D., Rui Li, Stylianos I. Venieris, Jagmohan Chauhan, Nicholas D. Lane, and Cecilia Mascolo. 2023. “TinyTrain: Resource-Aware Task-Adaptive Sparse Training of DNNs at the Data-Scarce Edge.” ArXiv Preprint abs/2307.09988 (July). http://arxiv.org/abs/2307.09988v2.
Labarge, Isaac E. n.d. “Neural Network Pruning for ECG Arrhythmia Classification.” Thesis, California Polytechnic State University. https://doi.org/10.15368/theses.2020.76.
Lai, Liangzhen, Naveen Suda, and Vikas Chandra. 2018. “CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-m CPUs.” ArXiv Preprint abs/1801.06601 (January). http://arxiv.org/abs/1801.06601v1.
Lakkaraju, Himabindu, and Osbert Bastani. 2020. “‘How Do I Fool You?’: Manipulating User Trust via Misleading Black Box Explanations.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 79–85. ACM. https://doi.org/10.1145/3375627.3375833.
Lam, Monica D., Edward E. Rothberg, and Michael E. Wolf. 1991. “The Cache Performance and Optimizations of Blocked Algorithms.” In Proceedings of the Fourth International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS-IV, 63–74. ACM Press. https://doi.org/10.1145/106972.106981.
Lam, Remi, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Ferran Alet, Suman Ravuri, et al. 2023. “Learning Skillful Medium-Range Global Weather Forecasting.” Science 382 (6677): 1416–21. https://doi.org/10.1126/science.adi2336.
Lange, Klaus-Dieter. 2009. “Identifying Shades of Green: The SPECpower Benchmarks.” Computer 42 (3): 95–97. https://doi.org/10.1109/mc.2009.84.
Lannelongue, Loïc, Jason Grealey, and Michael Inouye. 2021. “Green Algorithms: Quantifying the Carbon Footprint of Computation.” Advanced Science 8 (12): 2100707. https://doi.org/10.1002/advs.202100707.
Lattner, Chris, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques Pienaar, River Riddle, Tatiana Shpeisman, Nicolas Vasilache, and Oleksandr Zinenko. 2020. “MLIR: A Compiler Infrastructure for the End of Moore’s Law.” arXiv Preprint arXiv:2002.11054, February. http://arxiv.org/abs/2002.11054v2.
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015a. “Deep Learning.” Nature 521 (7553): 436–44. https://doi.org/10.1038/nature14539.
———. 2015b. “Deep Learning.” Nature 521 (7553): 436–44. https://doi.org/10.1038/nature14539.
LeCun, Yann, Leon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. 1998. “Efficient BackProp.” In Neural Networks: Tricks of the Trade, 1524:9–50. Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-49430-8_2.
LeCun, Yann, John S. Denker, and Sara A. Solla. 1989. “Optimal Brain Damage.” In Advances in Neural Information Processing Systems, 2:598–605. Morgan-Kaufmann. http://papers.nips.cc/paper/250-optimal-brain-damage.
LeCun, Y., B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. 1989. “Backpropagation Applied to Handwritten Zip Code Recognition.” Neural Computation 1 (4): 541–51. https://doi.org/10.1162/neco.1989.1.4.541.
LeCun, Y., L. Bottou, Y. Bengio, and P. Haffner. 1998. “Gradient-Based Learning Applied to Document Recognition.” Proceedings of the IEEE 86 (11): 2278–2324. https://doi.org/10.1109/5.726791.
Lee, Minwoong, Namho Lee, Huijeong Gwon, Jongyeol Kim, Younggwan Hwang, and Seongik Cho. 2022. “Design of Radiation-Tolerant High-Speed Signal Processing Circuit for Detecting Prompt Gamma Rays by Nuclear Explosion.” Electronics 11 (18): 2970. https://doi.org/10.3390/electronics11182970.
Lepikhin, Dmitry et al. 2020. “GShard: Scaling Giant Models with Conditional Computation.” In Proceedings of the International Conference on Learning Representations.
LeRoy Poff, N., M. M. Brinson, and J. W. Day. 2002. “Aquatic Ecosystems & Global Climate Change.” Pew Center on Global Climate Change.
Li, Fengfu, Bin Liu, Xiaoxing Wang, Bo Zhang, and Junchi Yan. 2016. “Ternary Weight Networks.” arXiv Preprint, May. http://arxiv.org/abs/1605.04711v3.
Li, Guanpeng, Siva Kumar Sastry Hari, Michael Sullivan, Timothy Tsai, Karthik Pattabiraman, Joel Emer, and Stephen W. Keckler. 2017. “Understanding Error Propagation in Deep Learning Neural Network (DNN) Accelerators and Applications.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 1–12. ACM. https://doi.org/10.1145/3126908.3126964.
Li, Jingzhen, Igbe Tobore, Yuhang Liu, Abhishek Kandwal, Lei Wang, and Zedong Nie. 2021. “Non-Invasive Monitoring of Three Glucose Ranges Based on ECG by Using DBSCAN-CNN.” IEEE Journal of Biomedical and Health Informatics 25 (9): 3340–50. https://doi.org/10.1109/jbhi.2021.3072628.
Li, Lisha, Kevin G. Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. 2017. “Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization.” J. Mach. Learn. Res. 18: 185:1–52. https://jmlr.org/papers/v18/16-558.html.
Li, Qinbin, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Yuan Li, Xu Liu, and Bingsheng He. 2023. “A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection.” IEEE Transactions on Knowledge and Data Engineering 35 (4): 3347–66. https://doi.org/10.1109/tkde.2021.3124599.
Li, Tian, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. 2020. “Federated Learning: Challenges, Methods, and Future Directions.” IEEE Signal Processing Magazine 37 (3): 50–60. https://doi.org/10.1109/msp.2020.2975749.
Li, Xiang, Tao Qin, Jian Yang, and Tie-Yan Liu. 2016. “LightRNN: Memory and Computation-Efficient Recurrent Neural Networks.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 4385–93. https://proceedings.neurips.cc/paper/2016/hash/c3e4035af2a1cde9f21e1ae1951ac80b-Abstract.html.
Li, Zhuohan, Lianmin Zheng, Yinmin Zhong, Vincent Liu, Ying Sheng, Xin Jin, Yanping Huang, et al. 2023. “AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving.” In 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23), 663–79.
Liang, Percy, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, et al. 2022. “Holistic Evaluation of Language Models.” arXiv Preprint arXiv:2211.09110, November. http://arxiv.org/abs/2211.09110v2.
Lin, Ji, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, and Song Han. 2020. “MCUNet: Tiny Deep Learning on IoT Devices.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/86c51678350f656dcc7f490a43946ee5-Abstract.html.
Lin, Jiong, Qing Gao, Yungui Gong, Yizhou Lu, Chao Zhang, and Fengge Zhang. 2020. “Primordial Black Holes and Secondary Gravitational Waves from k/g Inflation.” arXiv Preprint arXiv:2001.05909, January. http://arxiv.org/abs/2001.05909v2.
Lin, Ji, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2023. “AWQ: Activation-Aware Weight Quantization for LLM Compression and Acceleration.” arXiv Preprint arXiv:2306.00978, June. http://arxiv.org/abs/2306.00978v5.
Lin, Ji, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, and Song Han. 2022. “On-Device Training Under 256KB Memory.” Advances in Neural Information Processing Systems 35: 22941–54.
Lin, Ji, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, and Song Han. 2023. “Tiny Machine Learning: Progress and Futures [Feature].” IEEE Circuits and Systems Magazine 23 (3): 8–34. https://doi.org/10.1109/mcas.2023.3302182.
Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. “Microsoft COCO: Common Objects in Context.” In Computer Vision – ECCV 2014, 740–55. Springer International Publishing. https://doi.org/10.1007/978-3-319-10602-1_48.
Lindgren, Simon. 2023. Handbook of Critical Studies of Artificial Intelligence. Edward Elgar Publishing.
Lindholm, Andreas, Dave Zachariah, Petre Stoica, and Thomas B. Schon. 2019. “Data Consistency Approach to Model Validation.” IEEE Access 7: 59788–96. https://doi.org/10.1109/access.2019.2915109.
Lindholm, Erik, John Nickolls, Stuart Oberman, and John Montrym. 2008. “NVIDIA Tesla: A Unified Graphics and Computing Architecture.” IEEE Micro 28 (2): 39–55. https://doi.org/10.1109/mm.2008.31.
Liu, C, G Bellec, B Vogginger, D Kappel, J Partzsch, F Neumärker, S Höppner, et al. 2018. “Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype.” Frontiers in Neuroscience 12: 840. https://doi.org/10.3389/fnins.2018.00840.
Liu, Yanan, Xiaoxia Wei, Jinyu Xiao, Zhijie Liu, Yang Xu, and Yun Tian. 2020. “Energy Consumption and Emission Mitigation Prediction Based on Data Center Traffic and PUE for Global Data Centers.” Global Energy Interconnection 3 (3): 272–82. https://doi.org/10.1016/j.gloei.2020.07.008.
Liu, Yingcheng, Guo Zhang, Christopher G. Tarolli, Rumen Hristov, Stella Jensen-Roberts, Emma M. Waddell, Taylor L. Myers, et al. 2022. “Monitoring Gait at Home with Radio Waves in Parkinson’s Disease: A Marker of Severity, Progression, and Medication Response.” Science Translational Medicine 14 (663): eadc9669. https://doi.org/10.1126/scitranslmed.adc9669.
Lopez-Paz, David, and Marc’Aurelio Ranzato. 2017. “Gradient Episodic Memory for Continual Learning.” In Advances in Neural Information Processing Systems, 30:6467–76. https://proceedings.neurips.cc/paper/2017/hash/f87522788a2be2d171666752f97ddebb-Abstract.html.
Lou, Yin, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013. “Accurate Intelligible Models with Pairwise Interactions.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, edited by Inderjit S. Dhillon, Yehuda Koren, Rayid Ghani, Ted E. Senator, Paul Bradley, Rajesh Parekh, Jingrui He, Robert L. Grossman, and Ramasamy Uthurusamy, 623–31. ACM. https://doi.org/10.1145/2487575.2487579.
Lowy, Andrew, Sina Baharlouei, Rakesh Pavan, Meisam Razaviyayn, and Ahmad Beirami. 2021. “A Stochastic Optimization Framework for Fair Risk Minimization.” CoRR abs/2102.12586 (February). http://arxiv.org/abs/2102.12586v5.
Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4765–74. https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html.
Lyons, Richard G. 2011. Understanding Digital Signal Processing. 3rd ed. Prentice Hall.
Ma, Dongning, Fred Lin, Alban Desmaison, Joel Coburn, Daniel Moore, Sriram Sankar, and Xun Jiao. 2024. “Dr. DNA: Combating Silent Data Corruptions in Deep Learning Using Distribution of Neuron Activations.” In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 239–52. ACM. https://doi.org/10.1145/3620666.3651349.
Maas, Martin, David G. Andersen, Michael Isard, Mohammad Mahdi Javanmard, Kathryn S. McKinley, and Colin Raffel. 2024. “Combining Machine Learning and Lifetime-Based Resource Management for Memory Allocation and Beyond.” Communications of the ACM 67 (4): 87–96. https://doi.org/10.1145/3611018.
Madry, Aleksander, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. “Towards Deep Learning Models Resistant to Adversarial Attacks.” arXiv Preprint arXiv:1706.06083, June. http://arxiv.org/abs/1706.06083v4.
Mahmoud, Abdulrahman, Neeraj Aggarwal, Alex Nobbe, Jose Rodrigo Sanchez Vicarte, Sarita V. Adve, Christopher W. Fletcher, Iuri Frosio, and Siva Kumar Sastry Hari. 2020. “PyTorchFI: A Runtime Perturbation Tool for DNNs.” In 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), 25–31. IEEE. https://doi.org/10.1109/dsn-w50199.2020.00014.
Mahmoud, Abdulrahman, Siva Kumar Sastry Hari, Christopher W. Fletcher, Sarita V. Adve, Charbel Sakr, Naresh Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, and Stephen W. Keckler. 2021. “Optimizing Selective Protection for CNN Resilience.” In 2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE), 127–38. IEEE. https://doi.org/10.1109/issre52982.2021.00025.
Mahmoud, Abdulrahman, Thierry Tambe, Tarek Aloui, David Brooks, and Gu-Yeon Wei. 2022. “GoldenEye: A Platform for Evaluating Emerging Numerical Data Formats in DNN Accelerators.” In 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 206–14. IEEE. https://doi.org/10.1109/dsn53405.2022.00031.
Martin, C. Dianne. 1993. “The Myth of the Awesome Thinking Machine.” Communications of the ACM 36 (4): 120–33. https://doi.org/10.1145/255950.153587.
Marulli, Fiammetta, Stefano Marrone, and Laura Verde. 2022. “Sensitivity of Machine Learning Approaches to Fake and Untrusted Data in Healthcare Domain.” Journal of Sensor and Actuator Networks 11 (2): 21. https://doi.org/10.3390/jsan11020021.
Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, et al. 2023. “Artificial Intelligence Index Report 2023.” ArXiv Preprint abs/2310.03715 (October). http://arxiv.org/abs/2310.03715v1.
Maslej, Nestor, Loredana Fattorini, C. Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, et al. 2024. “Artificial Intelligence Index Report 2024.” CoRR. https://doi.org/10.48550/ARXIV.2405.19522.
Mattson, Peter, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, et al. 2020. “MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance.” IEEE Micro 40 (2): 8–16. https://doi.org/10.1109/mm.2020.2974843.
Mazumder, Mark, Sharad Chitlangia, Colby Banbury, Yiping Kang, Juan Manuel Ciro, Keith Achorn, Daniel Galvez, et al. 2021. “Multilingual Spoken Words Corpus.” In Thirty-Fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
McAuliffe, Michael, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. “Montreal Forced Aligner: Trainable Text-Speech Alignment Using Kaldi.” In Interspeech 2017, 498–502. ISCA. https://doi.org/10.21437/interspeech.2017-1386.
McCarthy, John. 1981. “Epistemological Problems of Artificial Intelligence.” In Readings in Artificial Intelligence, 459–65. Elsevier. https://doi.org/10.1016/b978-0-934613-03-3.50035-0.
McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017b. “Communication-Efficient Learning of Deep Networks from Decentralized Data.” In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, edited by Aarti Singh and Xiaojin (Jerry) Zhu, 54:1273–82. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v54/mcmahan17a.html.
———. 2017a. “Communication-Efficient Learning of Deep Networks from Decentralized Data.” In Artificial Intelligence and Statistics, 1273–82. PMLR. http://proceedings.mlr.press/v54/mcmahan17a.html.
Mellempudi, Naveen, Sudarshan Srinivasan, Dipankar Das, and Bharat Kaul. 2019. “Mixed Precision Training with 8-Bit Floating Point.” arXiv Preprint arXiv:1905.12334.
Merity, Stephen, Caiming Xiong, James Bradbury, and Richard Socher. 2016. “Pointer Sentinel Mixture Models.” arXiv Preprint arXiv:1609.07843, September. http://arxiv.org/abs/1609.07843v1.
Micikevicius, Paulius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, et al. 2017b. “Mixed Precision Training.” arXiv Preprint arXiv:1710.03740, October. http://arxiv.org/abs/1710.03740v3.
———, et al. 2017a. “Mixed Precision Training.” arXiv Preprint arXiv:1710.03740, October. http://arxiv.org/abs/1710.03740v3.
Micikevicius, Paulius, Dusan Stosic, Neil Burgess, Marius Cornea, Pradeep Dubey, Richard Grisenthwaite, Sangwon Ha, et al. 2022. “FP8 Formats for Deep Learning.” arXiv Preprint arXiv:2209.05433. https://arxiv.org/abs/2209.05433.
Miller, Charlie. 2019. “Lessons Learned from Hacking a Car.” IEEE Design & Test 36 (6): 7–9. https://doi.org/10.1109/mdat.2018.2863106.
Miller, Charlie, and Chris Valasek. 2015. “Remote Exploitation of an Unaltered Passenger Vehicle.” Black Hat USA 2015 (S 91): 1–91.
Mills, Andrew, and Stephen Le Hunte. 1997. “An Overview of Semiconductor Photocatalysis.” Journal of Photochemistry and Photobiology A: Chemistry 108 (1): 1–35. https://doi.org/10.1016/s1010-6030(97)00118-4.
Mirhoseini, Azalia, et al. 2017. “Device Placement Optimization with Reinforcement Learning.” In International Conference on Machine Learning (ICML).
Mohanram, K., and N. A. Touba. n.d. “Partial Error Masking to Reduce Soft Error Failure Rate in Logic Circuits.” In Proceedings. 16th IEEE Symposium on Computer Arithmetic, 433–40. IEEE Computer Society. https://doi.org/10.1109/dftvs.2003.1250141.
Monyei, Chukwuka G., and Kirsten E. H. Jenkins. 2018. “Electrons Have No Identity: Setting Right Misrepresentations in Google and Apple’s Clean Energy Purchasing.” Energy Research & Social Science 46 (December): 48–51. https://doi.org/10.1016/j.erss.2018.06.015.
Moore, Gordon. 2021. “Cramming More Components onto Integrated Circuits (1965).” In Ideas That Created the Future, 261–66. The MIT Press. https://doi.org/10.7551/mitpress/12274.003.0027.
Moore, Sean S., Kevin J. O’Sullivan, and Francesco Verdecchia. 2015. “Shrinking the Supply Chain for Implantable Coronary Stent Devices.” Annals of Biomedical Engineering 44 (2): 497–507. https://doi.org/10.1007/s10439-015-1471-8.
Moshawrab, Mohammad, Mehdi Adda, Abdenour Bouzouane, Hussein Ibrahim, and Ali Raad. 2023. “Reviewing Federated Learning Aggregation Algorithms: Strategies, Contributions, Limitations and Future Perspectives.” Electronics 12 (10): 2287. https://doi.org/10.3390/electronics12102287.
Mukherjee, S. S., J. Emer, and S. K. Reinhardt. n.d. “The Soft Error Problem: An Architectural Perspective.” In 11th International Symposium on High-Performance Computer Architecture, 243–47. IEEE. https://doi.org/10.1109/hpca.2005.37.
Myllyaho, Lalli, Mikko Raatikainen, Tomi Männistö, Jukka K. Nurminen, and Tommi Mikkonen. 2022. “On Misbehaviour and Fault Tolerance in Machine Learning Systems.” Journal of Systems and Software 183 (January): 111096. https://doi.org/10.1016/j.jss.2021.111096.
Nagel, Markus, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, and Tijmen Blankevoort. 2021a. “A White Paper on Neural Network Quantization.” arXiv Preprint arXiv:2106.08295, June. http://arxiv.org/abs/2106.08295v1.
———. 2021b. “A White Paper on Neural Network Quantization.” arXiv Preprint arXiv:2106.08295, June. http://arxiv.org/abs/2106.08295v1.
Narayanan, Arvind, and Vitaly Shmatikov. 2006. “How to Break Anonymity of the Netflix Prize Dataset.” CoRR. http://arxiv.org/abs/cs/0610105.
Narayanan, Deepak, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Anand Korthikanti, Dmitri Vainbrand, et al. 2021a. “Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM.” arXiv Preprint arXiv:2104.04473, April. http://arxiv.org/abs/2104.04473v5.
Narayanan, Deepak, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, et al. 2021b. “Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 1–15. ACM. https://doi.org/10.1145/3458817.3476209.
Nayak, Prateeth, Takuya Higuchi, Anmol Gupta, Shivesh Ranjan, Stephen Shum, Siddharth Sigtia, Erik Marchi, et al. 2022. “Improving Voice Trigger Detection with Metric Learning.” arXiv Preprint arXiv:2204.02455, April. http://arxiv.org/abs/2204.02455v2.
Ng, Davy Tsz Kit, Jac Ka Lok Leung, Kai Wah Samuel Chu, and Maggie Shen Qiao. 2021. “AI Literacy: Definition, Teaching, Evaluation and Ethical Issues.” Proceedings of the Association for Information Science and Technology 58 (1): 504–9. https://doi.org/10.1002/pra2.487.
Ngo, Richard, Lawrence Chan, and Sören Mindermann. 2022. “The Alignment Problem from a Deep Learning Perspective.” ArXiv Preprint abs/2209.00626 (August). http://arxiv.org/abs/2209.00626v6.
Nguyen, Ngoc-Bao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, and Ngai-Man Cheung. 2023. “Re-Thinking Model Inversion Attacks Against Deep Neural Networks.” In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16384–93. IEEE. https://doi.org/10.1109/cvpr52729.2023.01572.
Nishigaki, Shinsuke. 2024. “Eigenphase Distributions of Unimodular Circular Ensembles.” arXiv Preprint arXiv:2401.09045, January. http://arxiv.org/abs/2401.09045v2.
Norrie, Thomas, Nishant Patil, Doe Hyun Yoon, George Kurian, Sheng Li, James Laudon, Cliff Young, Norman Jouppi, and David Patterson. 2021. “The Design Process for Google’s Training Chips: TPUv2 and TPUv3.” IEEE Micro 41 (2): 56–63. https://doi.org/10.1109/mm.2021.3058217.
Northcutt, Curtis G., Anish Athalye, and Jonas Mueller. 2021. “Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks.” arXiv Preprint arXiv:2103.14749. https://doi.org/10.48550/arXiv.2103.14749.
NVIDIA. 2021. “TensorRT: High-Performance Deep Learning Inference Library.” NVIDIA Developer Blog. https://developer.nvidia.com/tensorrt.
Oakden-Rayner, Luke, Jared Dunnmon, Gustavo Carneiro, and Christopher Re. 2020. “Hidden Stratification Causes Clinically Meaningful Failures in Machine Learning for Medical Imaging.” In Proceedings of the ACM Conference on Health, Inference, and Learning, 151–59. ACM. https://doi.org/10.1145/3368555.3384468.
Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–53. https://doi.org/10.1126/science.aax2342.
OECD. 2023. “A Blueprint for Building National Compute Capacity for Artificial Intelligence.” 350. Organisation for Economic Co-operation and Development (OECD). https://doi.org/10.1787/876367e3-en.
OECD.AI. 2021. “Measuring the Geographic Distribution of AI Computing Capacity.” https://oecd.ai/en/policy-circle/computing-capacity.
Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. 2020. “Zoom in: An Introduction to Circuits.” Distill 5 (3): e00024–001. https://doi.org/10.23915/distill.00024.001.
Oliynyk, Daryna, Rudolf Mayer, and Andreas Rauber. 2023. “I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences.” ACM Computing Surveys 55 (14s): 1–41. https://doi.org/10.1145/3595292.
Oprea, Alina, Anoop Singhal, and Apostol Vassilev. 2022. “Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?” Computer 55 (11): 94–99. https://doi.org/10.1109/mc.2022.3190787.
Owens, J. D., M. Houston, D. Luebke, S. Green, J. E. Stone, and J. C. Phillips. 2008. “GPU Computing.” Proceedings of the IEEE 96 (5): 879–99. https://doi.org/10.1109/jproc.2008.917757.
Palmer, John F. 1980. “The INTEL® 8087 Numeric Data Processor.” In Proceedings of the May 19-22, 1980, National Computer Conference on - AFIPS ’80, 887. ACM Press. https://doi.org/10.1145/1500518.1500674.
Pan, Sinno Jialin, and Qiang Yang. 2010. “A Survey on Transfer Learning.” IEEE Transactions on Knowledge and Data Engineering 22 (10): 1345–59. https://doi.org/10.1109/tkde.2009.191.
Panda, Priyadarshini, Indranil Chakraborty, and Kaushik Roy. 2019. “Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks.” IEEE Access 7: 70157–68. https://doi.org/10.1109/access.2019.2919463.
Papadimitriou, George, and Dimitris Gizopoulos. 2021. “Demystifying the System Vulnerability Stack: Transient Fault Effects Across the Layers.” In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 902–15. IEEE. https://doi.org/10.1109/isca52012.2021.00075.
Papernot, Nicolas, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. “Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks.” In 2016 IEEE Symposium on Security and Privacy (SP), 582–97. IEEE. https://doi.org/10.1109/sp.2016.41.
Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. “BLEU: A Method for Automatic Evaluation of Machine Translation.” In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics - ACL ’02, 311. Association for Computational Linguistics. https://doi.org/10.3115/1073083.1073135.
Park, Daniel S., William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.” arXiv Preprint arXiv:1904.08779, April. http://arxiv.org/abs/1904.08779v3.
Parrish, Alicia, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, et al. 2023. “Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models.” ArXiv Preprint abs/2305.14384 (May). http://arxiv.org/abs/2305.14384v1.
Paszke, Adam, Sam Gross, Francisco Massa, et al. 2019. “PyTorch: An Imperative Style, High-Performance Deep Learning Library.” Advances in Neural Information Processing Systems (NeurIPS) 32: 8026–37.
Paszke, Adam, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, et al. 2019. “PyTorch: An Imperative Style, High-Performance Deep Learning Library.” In Advances in Neural Information Processing Systems, 8026–37.
Patel, Jay M. 2020. “Introduction to Common Crawl Datasets.” In Getting Structured Data from the Internet, 277–324. Apress. https://doi.org/10.1007/978-1-4842-6576-5_6.
Patterson, David A., and John L. Hennessy. 2021a. Computer Architecture: A Quantitative Approach. 6th ed. Morgan Kaufmann.
———. 2021b. Computer Organization and Design RISC-v Edition: The Hardware Software Interface. 2nd ed. San Francisco, CA: Morgan Kaufmann.
———. 2021c. Computer Organization and Design: The Hardware/Software Interface. 5th ed. Morgan Kaufmann.
Patterson, David, Joseph Gonzalez, Urs Holzle, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. 2022. “The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink.” Computer 55 (7): 18–28. https://doi.org/10.1109/mc.2022.3148714.
Patterson, David, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021a. “Carbon Emissions and Large Neural Network Training.” arXiv Preprint arXiv:2104.10350.
———. 2021b. “Carbon Emissions and Large Neural Network Training.” arXiv Preprint arXiv:2104.10350, April. http://arxiv.org/abs/2104.10350v3.
Penedo, Guilherme, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, and Thomas Wolf. 2024. “The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale.” arXiv Preprint arXiv:2406.17557, June. http://arxiv.org/abs/2406.17557v2.
Peters, Dorian, Rafael A. Calvo, and Richard M. Ryan. 2018. “Designing for Motivation, Engagement and Wellbeing in Digital Experience.” Frontiers in Psychology 9 (May): 797. https://doi.org/10.3389/fpsyg.2018.00797.
Phillips, P. Jonathon, Carina A. Hahn, Peter C. Fontana, David A. Broniatowski, and Mark A. Przybocki. 2020. “Four Principles of Explainable Artificial Intelligence.” Gaithersburg, Maryland: National Institute of Standards and Technology (NIST). https://doi.org/10.6028/nist.ir.8312-draft.
Pineau, Joelle, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d’Alché-Buc, Emily Fox, and Hugo Larochelle. 2021. “Improving Reproducibility in Machine Learning Research (a Report from the Neurips 2019 Reproducibility Program).” Journal of Machine Learning Research 22 (164): 1–20.
Plank, James S. 1997. “A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-Like Systems.” Software: Practice and Experience 27 (9): 995–1012. https://doi.org/10.1002/(sici)1097-024x(199709)27:9<995::aid-spe111>3.0.co;2-6.
Pont, Michael J, and Royan HL Ong. 2002. “Using Watchdog Timers to Improve the Reliability of Single-Processor Embedded Systems: Seven New Patterns and a Case Study.” In Proceedings of the First Nordic Conference on Pattern Languages of Programs, 159–200. Citeseer.
Prakash, Shvetank, Tim Callahan, Joseph Bushagour, Colby Banbury, Alan V. Green, Pete Warden, Tim Ansell, and Vijay Janapa Reddi. 2023. “CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (TinyML) Acceleration on FPGAs.” In 2023 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 157–67. IEEE. https://doi.org/10.1109/ispass57527.2023.00024.
Prakash, Shvetank, Matthew Stewart, Colby Banbury, Mark Mazumder, Pete Warden, Brian Plancher, and Vijay Janapa Reddi. 2023. “Is TinyML Sustainable? Assessing the Environmental Impacts of Machine Learning on Microcontrollers.” ArXiv Preprint abs/2301.11899 (January). http://arxiv.org/abs/2301.11899v3.
Psoma, Sotiria D., and Chryso Kanthou. 2023. “Wearable Insulin Biosensors for Diabetes Management: Advances and Challenges.” Biosensors 13 (7): 719. https://doi.org/10.3390/bios13070719.
Pushkarna, Mahima, Andrew Zaldivar, and Oddur Kjartansson. 2022. “Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI.” In 2022 ACM Conference on Fairness, Accountability, and Transparency, 1776–826. ACM. https://doi.org/10.1145/3531146.3533231.
Putnam, Andrew, Adrian M. Caulfield, Eric S. Chung, Derek Chiou, Kypros Constantinides, John Demme, Hadi Esmaeilzadeh, et al. 2014. “A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services.” ACM SIGARCH Computer Architecture News 42 (3): 13–24. https://doi.org/10.1145/2678373.2665678.
Qi, Chen, Shibo Shen, Rongpeng Li, Zhifeng Zhao, Qing Liu, Jing Liang, and Honggang Zhang. 2021. “An Efficient Pruning Scheme of Deep Neural Networks for Internet of Things Applications.” EURASIP Journal on Advances in Signal Processing 2021 (1): 31. https://doi.org/10.1186/s13634-021-00744-4.
Qi, Xuan, Burak Kantarci, and Chen Liu. 2017. “GPU-Based Acceleration of SDN Controllers.” In Network as a Service for Next Generation Internet, 339–56. Institution of Engineering and Technology. https://doi.org/10.1049/pbte073e_ch14.
R. V., Rashmi, and Karthikeyan A. 2018. “Secure Boot of Embedded Applications - a Review.” In 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), 291–98. IEEE. https://doi.org/10.1109/iceca.2018.8474730.
Radosavovic, Ilija, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollar. 2020. “Designing Network Design Spaces.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10428–36. IEEE. https://doi.org/10.1109/cvpr42600.2020.01044.
Rajbhandari, Samyam, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. “ZeRO: Memory Optimization Towards Training Trillion Parameter Models.” Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC). https://doi.org/10.5555/3433701.3433721.
Rajpurkar, Pranav, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” arXiv Preprint arXiv:1606.05250, June, 2383–92. https://doi.org/10.18653/v1/d16-1264.
Ramaswamy, Vikram V., Sunnie S. Y. Kim, Ruth Fong, and Olga Russakovsky. 2023a. “UFO: A Unified Method for Controlling Understandability and Faithfulness Objectives in Concept-Based Explanations for CNNs.” ArXiv Preprint abs/2303.15632 (March). http://arxiv.org/abs/2303.15632v1.
———. 2023b. “Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability.” In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10932–41. IEEE. https://doi.org/10.1109/cvpr52729.2023.01052.
Ramcharan, Amanda, Kelsee Baranowski, Peter McCloskey, Babuali Ahmed, James Legg, and David P. Hughes. 2017. “Deep Learning for Image-Based Cassava Disease Detection.” Frontiers in Plant Science 8 (October): 1852. https://doi.org/10.3389/fpls.2017.01852.
Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. “Zero-Shot Text-to-Image Generation.” In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, edited by Marina Meila and Tong Zhang, 139:8821–31. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v139/ramesh21a.html.
Ranganathan, Parthasarathy, and Urs Hölzle. 2024. “Twenty Five Years of Warehouse-Scale Computing.” IEEE Micro 44 (5): 11–22. https://doi.org/10.1109/mm.2024.3409469.
Rashid, Layali, Karthik Pattabiraman, and Sathish Gopalakrishnan. 2012. “Intermittent Hardware Errors Recovery: Modeling and Evaluation.” In 2012 Ninth International Conference on Quantitative Evaluation of Systems, 220–29. IEEE. https://doi.org/10.1109/qest.2012.37.
———. 2015. “Characterizing the Impact of Intermittent Hardware Faults on Programs.” IEEE Transactions on Reliability 64 (1): 297–310. https://doi.org/10.1109/tr.2014.2363152.
Rastegari, Mohammad, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. 2016. “XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks.” In Computer Vision – ECCV 2016, 525–42. Springer International Publishing. https://doi.org/10.1007/978-3-319-46493-0_32.
Ratner, Alex, Braden Hancock, Jared Dunnmon, Roger Goldman, and Christopher Ré. 2018. “Snorkel MeTaL: Weak Supervision for Multi-Task Learning.” In Proceedings of the Second Workshop on Data Management for End-to-End Machine Learning. ACM. https://doi.org/10.1145/3209889.3209898.
Reagen, Brandon, Robert Adolf, Paul Whatmough, Gu-Yeon Wei, and David Brooks. 2017. Deep Learning for Computer Architects. Springer International Publishing. https://doi.org/10.1007/978-3-031-01756-8.
Reagen, Brandon, Udit Gupta, Lillian Pentecost, Paul Whatmough, Sae Kyu Lee, Niamh Mulholland, David Brooks, and Gu-Yeon Wei. 2018. “Ares: A Framework for Quantifying the Resilience of Deep Neural Networks.” In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac.2018.8465834.
Real, Esteban, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2019a. “Regularized Evolution for Image Classifier Architecture Search.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 4780–89. https://doi.org/10.1609/aaai.v33i01.33014780.
———. 2019b. “Regularized Evolution for Image Classifier Architecture Search.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 4780–89. https://doi.org/10.1609/aaai.v33i01.33014780.
Reddi, Vijay Janapa, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, et al. 2019. “MLPerf Inference Benchmark.” arXiv Preprint arXiv:1911.02549, November, 446–59. https://doi.org/10.1109/isca45697.2020.00045.
Reddi, Vijay Janapa, and Meeta Sharma Gupta. 2013. Resilient Architecture Design for Voltage Variation. Springer International Publishing. https://doi.org/10.1007/978-3-031-01739-1.
Reis, G. A., J. Chang, N. Vachharajani, R. Rangan, and D. I. August. n.d. “SWIFT: Software Implemented Fault Tolerance.” In International Symposium on Code Generation and Optimization, 243–54. IEEE. https://doi.org/10.1109/cgo.2005.34.
Microsoft Research. 2021. DeepSpeed: Extreme-Scale Model Training for Everyone.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. ACM.
Richter, Joel D., and Xinyu Zhao. 2021. “The Molecular Biology of FMRP: New Insights into Fragile x Syndrome.” Nature Reviews Neuroscience 22 (4): 209–22. https://doi.org/10.1038/s41583-021-00432-0.
Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10674–85. IEEE. https://doi.org/10.1109/cvpr52688.2022.01042.
Romero, Francisco, Qian Li, Neeraja J. Yadwadkar, and Christos Kozyrakis. 2021. “INFaaS: Automated Model-Less Inference Serving.” In 2021 USENIX Annual Technical Conference (USENIX ATC 21), 397–411. https://www.usenix.org/conference/atc21/presentation/romero.
Rosenblatt, F. 1958. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review 65 (6): 386–408. https://doi.org/10.1037/h0042519.
Rudin, Cynthia. 2019. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1 (5): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. “Learning Representations by Back-Propagating Errors.” Nature 323 (6088): 533–36. https://doi.org/10.1038/323533a0.
Russell, Stuart. 2021. “Human-Compatible Artificial Intelligence.” In Human-Like Machine Intelligence, 3–23. Oxford University Press. https://doi.org/10.1093/oso/9780198862536.003.0001.
Ryan, Richard M., and Edward L. Deci. 2000. “Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being.” American Psychologist 55 (1): 68–78. https://doi.org/10.1037/0003-066x.55.1.68.
Sabour, Sara, Nicholas Frosst, and Geoffrey E Hinton. 2017. “Dynamic Routing Between Capsules.” In Advances in Neural Information Processing Systems. Vol. 30.
Sambasivan, Nithya, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021a. “‘Everyone Wants to Do the Model Work, Not the Data Work’: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–15. ACM. https://doi.org/10.1145/3411764.3445518.
———. 2021b. “‘Everyone Wants to Do the Model Work, Not the Data Work’: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–15. ACM. https://doi.org/10.1145/3411764.3445518.
Sangchoolie, Behrooz, Karthik Pattabiraman, and Johan Karlsson. 2017. “One Bit Is (Not) Enough: An Empirical Study of the Impact of Single and Multiple Bit-Flip Errors.” In 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 97–108. IEEE. https://doi.org/10.1109/dsn.2017.30.
Sanh, Victor, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. “DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter.” arXiv Preprint arXiv:1910.01108, October. http://arxiv.org/abs/1910.01108v4.
Scardapane, Simone, Ye Wang, and Massimo Panella. 2020. “Why Should I Trust You? A Survey of Explainability of Machine Learning for Healthcare.” Pattern Recognition Letters 140: 47–57.
Schäfer, Mike S. 2023. “The Notorious GPT: Science Communication in the Age of Artificial Intelligence.” Journal of Science Communication 22 (02): Y02. https://doi.org/10.22323/2.22020402.
Schwartz, Daniel, Jonathan Michael Gomes Selman, Peter Wrege, and Andreas Paepcke. 2021. “Deployment of Embedded Edge-AI for Wildlife Monitoring in Remote Regions.” In 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), 1035–42. IEEE. https://doi.org/10.1109/icmla52953.2021.00170.
Schwartz, Roy, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. “Green AI.” Communications of the ACM 63 (12): 54–63. https://doi.org/10.1145/3381831.
Seide, Frank, and Amit Agarwal. 2016. “CNTK: Microsoft’s Open-Source Deep-Learning Toolkit.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2135–35. ACM. https://doi.org/10.1145/2939672.2945397.
Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.” In 2017 IEEE International Conference on Computer Vision (ICCV), 618–26. IEEE. https://doi.org/10.1109/iccv.2017.74.
Seong, Nak Hee, Dong Hyuk Woo, Vijayalakshmi Srinivasan, Jude A. Rivers, and Hsien-Hsin S. Lee. 2010. “SAFER: Stuck-at-Fault Error Recovery for Memories.” In 2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture, 115–24. IEEE. https://doi.org/10.1109/micro.2010.46.
Settles, Burr. 2012b. Active Learning. Springer International Publishing. https://doi.org/10.1007/978-3-031-01560-1.
———. 2012a. Active Learning. Springer International Publishing. https://doi.org/10.1007/978-3-031-01560-1.
Shalev-Shwartz, Shai, Shaked Shammah, and Amnon Shashua. 2017. “On a Formal Model of Safe and Scalable Self-Driving Cars.” ArXiv Preprint abs/1708.06374 (August). http://arxiv.org/abs/1708.06374v6.
Shallue, Christopher J., Jaehoon Lee, et al. 2019. “Measuring the Effects of Data Parallelism on Neural Network Training.” Journal of Machine Learning Research 20: 1–49. http://jmlr.org/papers/v20/18-789.html.
Shan, Shawn, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, and Ben Y. Zhao. 2023. “Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models.” ArXiv Preprint abs/2310.13828 (October). http://arxiv.org/abs/2310.13828v3.
Shang, J., G. Wang, and Y. Liu. 2018. “Accelerating Genomic Data Analysis with Domain-Specific Architectures.” IEEE Transactions on Computers 67 (7): 965–78. https://doi.org/10.1109/TC.2018.2799212.
Shazeer, Noam, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, et al. 2018. “Mesh-TensorFlow: Deep Learning for Supercomputers.” arXiv Preprint arXiv:1811.02084, November. http://arxiv.org/abs/1811.02084v1.
Shazeer, Noam, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.” arXiv Preprint arXiv:1701.06538, January. http://arxiv.org/abs/1701.06538v1.
Shazeer, Noam, Azalia Mirhoseini, Krzysztof Maziarz, et al. 2017. “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.” In International Conference on Learning Representations.
Sheaffer, Jeremy W., David P. Luebke, and Kevin Skadron. 2007. “A Hardware Redundancy and Recovery Mechanism for Reliable Scientific Computation on Graphics Processors.” In Graphics Hardware, 55–64. Citeseer. https://doi.org/10.2312/EGGH/EGGH07/055-064.
Shehabi, Arman, Sarah Smith, Dale Sartor, Richard Brown, Magnus Herrlin, Jonathan Koomey, Eric Masanet, Nathaniel Horner, Inês Azevedo, and William Lintner. 2016. “United States Data Center Energy Usage Report.” Office of Scientific and Technical Information (OSTI). https://doi.org/10.2172/1372902.
Shen, Sheng, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2019. “Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT.” Proceedings of the AAAI Conference on Artificial Intelligence 34 (05): 8815–21. https://doi.org/10.1609/aaai.v34i05.6409.
Sheng, Victor S., and Jing Zhang. 2019. “Machine Learning with Crowdsourcing: A Brief Summary of the Past Research and Future Directions.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 9837–43. https://doi.org/10.1609/aaai.v33i01.33019837.
Shi, Hongrui, and Valentin Radu. 2022. “Data Selection for Efficient Model Update in Federated Learning.” In Proceedings of the 2nd European Workshop on Machine Learning and Systems, 72–78. ACM. https://doi.org/10.1145/3517207.3526980.
Shneiderman, Ben. 2020. “Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems.” ACM Transactions on Interactive Intelligent Systems 10 (4): 1–31. https://doi.org/10.1145/3419764.
———. 2022. Human-Centered AI. Oxford University Press.
Shoeybi, Mohammad, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019b. “Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism.” arXiv Preprint arXiv:1909.08053, September. http://arxiv.org/abs/1909.08053v4.
———. 2019a. “Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism.” arXiv Preprint arXiv:1909.08053, September. http://arxiv.org/abs/1909.08053v4.
Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. “Membership Inference Attacks Against Machine Learning Models.” In 2017 IEEE Symposium on Security and Privacy (SP), 3–18. IEEE. https://doi.org/10.1109/sp.2017.41.
Siddik, Md Abu Bakar, Arman Shehabi, and Landon Marston. 2021. “The Environmental Footprint of Data Centers in the United States.” Environmental Research Letters 16 (6): 064017. https://doi.org/10.1088/1748-9326/abfba1.
Silvestro, Daniele, Stefano Goria, Thomas Sterner, and Alexandre Antonelli. 2022. “Improving Biodiversity Protection Through Artificial Intelligence.” Nature Sustainability 5 (5): 415–24. https://doi.org/10.1038/s41893-022-00851-6.
Singh, Narendra, and Oladele A. Ogunseitan. 2022. “Disentangling the Worldwide Web of e-Waste and Climate Change Co-Benefits.” Circular Economy 1 (2): 100011. https://doi.org/10.1016/j.cec.2022.100011.
Skorobogatov, Sergei. 2009. “Local Heating Attacks on Flash Memory Devices.” In 2009 IEEE International Workshop on Hardware-Oriented Security and Trust, 1–6. IEEE. https://doi.org/10.1109/hst.2009.5225028.
Skorobogatov, Sergei P., and Ross J. Anderson. 2003. “Optical Fault Induction Attacks.” In Cryptographic Hardware and Embedded Systems - CHES 2002, 2–12. Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-36400-5_2.
Smilkov, Daniel, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. 2017. “SmoothGrad: Removing Noise by Adding Noise.” ArXiv Preprint abs/1706.03825 (June). http://arxiv.org/abs/1706.03825v1.
Smith, Steven W. 1997. The Scientist and Engineer’s Guide to Digital Signal Processing. California Technical Publishing. https://www.dspguide.com/.
Sodani, Avinash. 2015. “Knights Landing (KNL): 2nd Generation Intel® Xeon Phi Processor.” In 2015 IEEE Hot Chips 27 Symposium (HCS), 1–24. IEEE. https://doi.org/10.1109/hotchips.2015.7477467.
Sokolova, Marina, and Guy Lapalme. 2009. “A Systematic Analysis of Performance Measures for Classification Tasks.” Information Processing & Management 45 (4): 427–37. https://doi.org/10.1016/j.ipm.2009.03.002.
Stephens, Nigel, Stuart Biles, Matthias Boettcher, Jacob Eapen, Mbou Eyole, Giacomo Gabrielli, Matt Horsnell, et al. 2017. “The ARM Scalable Vector Extension.” IEEE Micro 37 (2): 26–39. https://doi.org/10.1109/mm.2017.35.
Strassen, Volker. 1969. “Gaussian Elimination Is Not Optimal.” Numerische Mathematik 13 (4): 354–56. https://doi.org/10.1007/bf02165411.
Strickland, Eliza. 2019. “IBM Watson, Heal Thyself: How IBM Overpromised and Underdelivered on AI Health Care.” IEEE Spectrum 56 (4): 24–31. https://doi.org/10.1109/mspec.2019.8678513.
Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2019. “Energy and Policy Considerations for Deep Learning in NLP.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–50. Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1355.
Sudhakar, Soumya, Vivienne Sze, and Sertac Karaman. 2023. “Data Centers on Wheels: Emissions from Computing Onboard Autonomous Vehicles.” IEEE Micro 43 (1): 29–39. https://doi.org/10.1109/mm.2022.3219803.
Sullivan, Gary J., Jens-Rainer Ohm, Woo-Jin Han, and Thomas Wiegand. 2012. “Overview of the High Efficiency Video Coding (HEVC) Standard.” IEEE Transactions on Circuits and Systems for Video Technology 22 (12): 1649–68. https://doi.org/10.1109/tcsvt.2012.2221191.
Sun, Siqi, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. “Patient Knowledge Distillation for BERT Model Compression.” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-1441.
Cerebras Systems. 2021a. “The Wafer-Scale Engine 2: Scaling AI Compute Beyond GPUs.” Cerebras White Paper. https://cerebras.ai/product-chip/.
———. 2021b. “Wafer-Scale Deep Learning Acceleration with the Cerebras CS-2.” Cerebras Technical Paper.
Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel Emer. 2017a. “Efficient Processing of Deep Neural Networks: A Tutorial and Survey.” Proceedings of the IEEE 105 (12): 2295–2329. https://doi.org/10.1109/jproc.2017.2761740.
Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. 2017b. “Efficient Processing of Deep Neural Networks: A Tutorial and Survey.” Proceedings of the IEEE 105 (12): 2295–2329. https://doi.org/10.1109/jproc.2017.2761740.
Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. “Intriguing Properties of Neural Networks.” arXiv Preprint arXiv:1312.6199, December. http://arxiv.org/abs/1312.6199v4.
Tambe, Thierry, En-Yu Yang, Zishen Wan, Yuntian Deng, Vijay Janapa Reddi, Alexander Rush, David Brooks, and Gu-Yeon Wei. 2020. “Algorithm-Hardware Co-Design of Adaptive Floating-Point Encodings for Resilient Deep Learning Inference.” In 2020 57th ACM/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac18072.2020.9218516.
Tan, Mingxing, and Quoc V. Le. 2019a. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” In International Conference on Machine Learning (ICML), 6105–14.
———. 2019b. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” In International Conference on Machine Learning.
———. 2019c. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” In Proceedings of the International Conference on Machine Learning (ICML), 6105–14.
Tarun, Ayush K., Vikram S. Chundawat, Murari Mandal, and Mohan Kankanhalli. 2022. “Deep Regression Unlearning.” arXiv Preprint arXiv:2210.08196, October. http://arxiv.org/abs/2210.08196v2.
Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, et al. 2016. “Theano: A Python Framework for Fast Computation of Mathematical Expressions.” arXiv Preprint arXiv:1605.02688, May. http://arxiv.org/abs/1605.02688v1.
Teerapittayanon, Surat, Bradley McDanel, and H. T. Kung. 2017. “BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks.” arXiv Preprint arXiv:1709.01686, September. http://arxiv.org/abs/1709.01686v1.
United Nations. 2018. The Sustainable Development Goals Report 2018. New York: United Nations. https://doi.org/10.18356/7d014b41-en.
Thompson, Neil C., Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso. 2021. “Deep Learning’s Diminishing Returns: The Cost of Improvement Is Becoming Unsustainable.” IEEE Spectrum 58 (10): 50–55. https://doi.org/10.1109/mspec.2021.9563954.
Thornton, James E. 1965. “Design of a Computer: The Control Data 6600.” Communications of the ACM 8 (6): 330–35.
Chen, Tianqi, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Q. Yan, Haichen Shen, Meghan Cowan, et al. 2018a. “TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.” In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 578–94. https://www.usenix.org/conference/osdi18/presentation/chen.
———. 2018b. “TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.” In OSDI, 578–94. https://www.usenix.org/conference/osdi18/presentation/chen.
Till, Aaron, Andrew L. Rypel, Andrew Bray, and Samuel B. Fey. 2019. “Fish Die-Offs Are Concurrent with Thermal Extremes in North Temperate Lakes.” Nature Climate Change 9 (8): 637–41. https://doi.org/10.1038/s41558-019-0520-y.
Tirtalistyani, Rose, Murtiningrum Murtiningrum, and Rameshwar S. Kanwar. 2022. “Indonesia Rice Irrigation System: Time for Innovation.” Sustainability 14 (19): 12477. https://doi.org/10.3390/su141912477.
Tramèr, Florian, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, and Dan Boneh. 2019. “AdVersarial: Perceptual Ad Blocking Meets Adversarial Machine Learning.” In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2005–21. ACM. https://doi.org/10.1145/3319535.3354222.
Tsai, Min-Jen, Ping-Yi Lin, and Ming-En Lee. 2023. “Adversarial Attacks on Medical Image Classification.” Cancers 15 (17): 4228. https://doi.org/10.3390/cancers15174228.
Tsai, Timothy, Siva Kumar Sastry Hari, Michael Sullivan, Oreste Villa, and Stephen W. Keckler. 2021. “NVBitFI: Dynamic Fault Injection for GPUs.” In 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 284–91. IEEE. https://doi.org/10.1109/dsn48987.2021.00041.
Tschand, Arya, Arun Tejusve Raghunath Rajan, Sachin Idgunji, Anirban Ghosh, Jeremy Holleman, Csaba Kiraly, Pawan Ambalkar, et al. 2024. “MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from Microwatts to Megawatts for Sustainable AI.” arXiv Preprint arXiv:2410.12032, October. http://arxiv.org/abs/2410.12032v2.
Uddin, Mueen, and Azizah Abdul Rahman. 2012. “Energy Efficiency and Low Carbon Enabler Green IT Framework for Data Centers Considering Green Metrics.” Renewable and Sustainable Energy Reviews 16 (6): 4078–94. https://doi.org/10.1016/j.rser.2012.03.014.
Umuroglu, Yaman, Nicholas J. Fraser, Giulio Gambardella, Michaela Blott, Philip Leong, Magnus Jahre, and Kees Vissers. 2017. “FINN: A Framework for Fast, Scalable Binarized Neural Network Inference.” In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 65–74. ACM. https://doi.org/10.1145/3020078.3021744.
UN and World Economic Forum. 2019. A New Circular Vision for Electronics: Time for a Global Reboot. PACE - Platform for Accelerating the Circular Economy. https://www3.weforum.org/docs/WEF_A_New_Circular_Vision_for_Electronics.pdf.
Van Noorden, Richard. 2016. “ArXiv Preprint Server Plans Multimillion-Dollar Overhaul.” Nature 534 (7609): 602. https://doi.org/10.1038/534602a.
Vangal, Sriram, Somnath Paul, Steven Hsu, Amit Agarwal, Saurabh Kumar, Ram Krishnamurthy, Harish Krishnamurthy, James Tschanz, Vivek De, and Chris H. Kim. 2021. “Wide-Range Many-Core SoC Design in Scaled CMOS: Challenges and Opportunities.” IEEE Transactions on Very Large Scale Integration (VLSI) Systems 29 (5): 843–56. https://doi.org/10.1109/tvlsi.2021.3061649.
Vanschoren, Joaquin. 2018. “Meta-Learning: A Survey.” arXiv Preprint arXiv:1810.03548, October. http://arxiv.org/abs/1810.03548v1.
Velazco, Raoul, Gilles Foucard, and Paul Peronnard. 2010. “Combining Results of Accelerated Radiation Tests and Fault Injections to Predict the Error Rate of an Application Implemented in SRAM-Based FPGAs.” IEEE Transactions on Nuclear Science 57 (6): 3500–3505. https://doi.org/10.1109/tns.2010.2087355.
Verma, Swapnil (Team Dual_Boot). 2022. “Elephant AI.” Hackster.io. https://www.hackster.io/dual_boot/elephant-ai-ba71e9.
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” SSRN Electronic Journal 31: 841. https://doi.org/10.2139/ssrn.3063289.
Wald, Peter H., and Jeffrey R. Jones. 1987. “Semiconductor Manufacturing: An Introduction to Processes and Hazards.” American Journal of Industrial Medicine 11 (2): 203–21. https://doi.org/10.1002/ajim.4700110209.
Wan, Zishen, Aqeel Anwar, Yu-Shun Hsiao, Tianyu Jia, Vijay Janapa Reddi, and Arijit Raychowdhury. 2021. “Analyzing and Improving Fault Tolerance of Learning-Based Navigation Systems.” In 2021 58th ACM/IEEE Design Automation Conference (DAC), 841–46. IEEE. https://doi.org/10.1109/dac18074.2021.9586116.
Wan, Zishen, Yiming Gan, Bo Yu, Shaoshan Liu, Arijit Raychowdhury, and Yuhao Zhu. 2023. “VPP: The Vulnerability-Proportional Protection Paradigm Towards Reliable Autonomous Machines.” In Proceedings of the 5th International Workshop on Domain Specific System Architecture (DOSSA), 1–6.
Wang, Alex, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. “SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems.” arXiv Preprint arXiv:1905.00537, May. http://arxiv.org/abs/1905.00537v3.
Wang, Alex, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding.” arXiv Preprint arXiv:1804.07461, April. http://arxiv.org/abs/1804.07461v3.
Wang, LingFeng, and YaQing Zhan. 2019. “A Conceptual Peer Review Model for arXiv and Other Preprint Databases.” Learned Publishing 32 (3): 213–19. https://doi.org/10.1002/leap.1229.
Wang, Tianlu, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and Vicente Ordonez. 2019. “Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations.” In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 5309–18. IEEE. https://doi.org/10.1109/iccv.2019.00541.
Wang, Xin, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E. Gonzalez. 2018. “SkipNet: Learning Dynamic Routing in Convolutional Networks.” In Computer Vision – ECCV 2018, 420–36. Springer International Publishing. https://doi.org/10.1007/978-3-030-01261-8_25.
Wang, Y., and P. Kanwar. 2019. “BFloat16: The Secret to High Performance on Cloud TPUs.” Google Cloud Blog.
Wang, Yu Emma, Gu-Yeon Wei, and David Brooks. 2019. “Benchmarking TPU, GPU, and CPU Platforms for Deep Learning.” arXiv Preprint arXiv:1907.10701.
Warden, Pete. 2018. “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition.” arXiv Preprint arXiv:1804.03209, April. http://arxiv.org/abs/1804.03209v1.
Weicker, Reinhold P. 1984. “Dhrystone: A Synthetic Systems Programming Benchmark.” Communications of the ACM 27 (10): 1013–30. https://doi.org/10.1145/358274.358283.
Werchniak, Andrew, Roberto Barra Chicote, Yuriy Mishchenko, Jasha Droppo, Jeff Condal, Peng Liu, and Anish Shah. 2021. “Exploring the Application of Synthetic Audio in Training Keyword Spotters.” In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 7993–96. IEEE. https://doi.org/10.1109/icassp39728.2021.9413448.
Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation: As Machines Learn They May Develop Unforeseen Strategies at Rates That Baffle Their Programmers.” Science 131 (3410): 1355–58. https://doi.org/10.1126/science.131.3410.1355.
Wilkening, Mark, Vilas Sridharan, Si Li, Fritz Previlon, Sudhanva Gurumurthi, and David R. Kaeli. 2014. “Calculating Architectural Vulnerability Factors for Spatial Multi-Bit Transient Faults.” In 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture, 293–305. IEEE. https://doi.org/10.1109/micro.2014.15.
Winkler, Harald, Franck Lecocq, Hans Lofgren, Maria Virginia Vilariño, Sivan Kartha, and Joana Portugal-Pereira. 2022. “Examples of Shifting Development Pathways: Lessons on How to Enable Broader, Deeper, and Faster Climate Action.” Climate Action 1 (1). https://doi.org/10.1007/s44168-022-00026-1.
Witten, Ian H., and Eibe Frank. 2002. “Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations.” ACM SIGMOD Record 31 (1): 76–77. https://doi.org/10.1145/507338.507355.
Wolpert, D. H., and W. G. Macready. 1997. “No Free Lunch Theorems for Optimization.” IEEE Transactions on Evolutionary Computation 1 (1): 67–82. https://doi.org/10.1109/4235.585893.
Wu, Bichen, Kurt Keutzer, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, and Yangqing Jia. 2019. “FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search.” In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10726–34. IEEE. https://doi.org/10.1109/cvpr.2019.01099.
Wu, Carole-Jean, David Brooks, Kevin Chen, Douglas Chen, Sy Choudhury, Marat Dukhan, Kim Hazelwood, et al. 2019. “Machine Learning at Facebook: Understanding Inference at the Edge.” In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), 331–44. IEEE. https://doi.org/10.1109/hpca.2019.00048.
Wu, Carole-Jean, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, et al. 2022. “Sustainable AI: Environmental Implications, Challenges and Opportunities.” Proceedings of Machine Learning and Systems 4: 795–813.
Wu, Hao, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius. 2020. “Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation.” arXiv Preprint arXiv:2004.09602, April. http://arxiv.org/abs/2004.09602v1.
Wu, Jian, Hao Cheng, and Yifan Zhang. 2019. “Fast Neural Networks: Efficient and Adaptive Computation for Inference.” In Advances in Neural Information Processing Systems.
Wu, Jiaxiang, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. 2016. “Quantized Convolutional Neural Networks for Mobile Devices.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4820–28. IEEE. https://doi.org/10.1109/cvpr.2016.521.
Huang, Xingyu, et al. 2019. “Addressing the Memory Bottleneck in AI Accelerators.” IEEE Micro.
Xu, Ruijie, Zengzhi Wang, Run-Ze Fan, and Pengfei Liu. 2024. “Benchmarking Benchmark Leakage in Large Language Models.” arXiv Preprint arXiv:2404.18824, April. http://arxiv.org/abs/2404.18824v1.
Xu, Zheng, Yanxiang Zhang, Galen Andrew, Christopher A. Choquette-Choo, Peter Kairouz, H. Brendan McMahan, Jesse Rosenstock, and Yuanbo Zhang. 2023. “Federated Learning of Gboard Language Models with Differential Privacy.” arXiv Preprint arXiv:2305.18465, May. http://arxiv.org/abs/2305.18465v2.
Yang, Le, Yizeng Han, Xi Chen, Shiji Song, Jifeng Dai, and Gao Huang. 2020. “Resolution Adaptive Networks for Efficient Inference.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2366–75. IEEE. https://doi.org/10.1109/cvpr42600.2020.00244.
Yang, Tien-Ju, Yonghui Xiao, Giovanni Motta, Françoise Beaufays, Rajiv Mathews, and Mingqing Chen. 2023. “Online Model Compression for Federated Learning with Large Models.” In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. https://doi.org/10.1109/icassp49357.2023.10097124.
Yao, Zhewei, Amir Gholami, Sheng Shen, Kurt Keutzer, and Michael W. Mahoney. 2021. “HAWQ-V3: Dyadic Neural Network Quantization.” In Proceedings of the 38th International Conference on Machine Learning (ICML), 11875–86. PMLR.
Yeh, Y. C. 1996. “Triple-Triple Redundant 777 Primary Flight Computer.” In 1996 IEEE Aerospace Applications Conference. Proceedings, 1:293–307. IEEE. https://doi.org/10.1109/aero.1996.495891.
Yosinski, Jason, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. “How Transferable Are Features in Deep Neural Networks?” Advances in Neural Information Processing Systems 27.
You, Jie, Jae-Won Chung, and Mosharaf Chowdhury. 2023. “Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training.” In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23), 119–39. Boston, MA: USENIX Association. https://www.usenix.org/conference/nsdi23/presentation/you.
Yu, Jun, Peng Li, and Zhenhua Wang. 2023. “Efficient Early Exiting Strategies for Neural Network Acceleration.” IEEE Transactions on Neural Networks and Learning Systems.
Zafrir, Ofir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. “Q8BERT: Quantized 8Bit BERT.” In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS), 36–39. IEEE. https://doi.org/10.1109/emc2-nips53020.2019.00016.
Zeghidour, Neil, Olivier Teboul, Félix de Chaumont Quitry, and Marco Tagliasacchi. 2021. “LEAF: A Learnable Frontend for Audio Classification.” arXiv Preprint arXiv:2101.08596, January. http://arxiv.org/abs/2101.08596v1.
Zhang, Chengliang, Minchen Yu, Wei Wang, and Feng Yan. 2019. “MArk: Exploiting Cloud Services for Cost-Effective, SLO-Aware Machine Learning Inference Serving.” In 2019 USENIX Annual Technical Conference (USENIX ATC 19), 1049–62. https://www.usenix.org/conference/atc19/presentation/zhang-chengliang.
Zhang, Dongxia, Xiaoqing Han, and Chunyu Deng. 2018. “Review on the Research and Practice of Deep Learning and Reinforcement Learning in Smart Grids.” CSEE Journal of Power and Energy Systems 4 (3): 362–70. https://doi.org/10.17775/cseejpes.2018.00520.
Zhang, Hongyu. 2008. “On the Distribution of Software Faults.” IEEE Transactions on Software Engineering 34 (2): 301–2. https://doi.org/10.1109/tse.2007.70771.
Zhang, Jeff Jun, Tianyu Gu, Kanad Basu, and Siddharth Garg. 2018. “Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator.” In 2018 IEEE 36th VLSI Test Symposium (VTS), 1–6. IEEE. https://doi.org/10.1109/vts.2018.8368656.
Zhang, Jeff, Kartheek Rangineni, Zahra Ghodsi, and Siddharth Garg. 2018. “ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Learning Accelerators.” In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac.2018.8465918.
Zhang, Qingxue, Dian Zhou, and Xuan Zeng. 2017. “Highly Wearable Cuff-Less Blood Pressure and Heart Rate Monitoring with Single-Arm Electrocardiogram and Photoplethysmogram Signals.” BioMedical Engineering OnLine 16 (1): 23. https://doi.org/10.1186/s12938-017-0317-z.
Zhang, Yi, Jianlei Yang, Linghao Song, Yiyu Shi, Yu Wang, and Yuan Xie. 2021. “Learning-Based Efficient Sparsity and Quantization for Neural Network Compression.” IEEE Transactions on Neural Networks and Learning Systems 32 (9): 3980–94.
Zhang, Y., J. Li, and H. Ouyang. 2020. “Optimizing Memory Access for Deep Learning Workloads.” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 39 (11): 2345–58.
Zhao, Jiawei, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, and Yuandong Tian. 2024. “GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection,” March. http://arxiv.org/abs/2403.03507v2.
Zhao, Mark, and G. Edward Suh. 2018. “FPGA-Based Remote Power Side-Channel Attacks.” In 2018 IEEE Symposium on Security and Privacy (SP), 229–44. IEEE. https://doi.org/10.1109/sp.2018.00049.
Zhao, Yue, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. “Federated Learning with Non-IID Data.” arXiv Preprint arXiv:1806.00582, June. http://arxiv.org/abs/1806.00582v2.
Zheng, Lianmin, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, et al. 2020. “Ansor: Generating High-Performance Tensor Programs for Deep Learning.” In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), 863–79.
Zhou, Bolei, Yiyou Sun, David Bau, and Antonio Torralba. 2018. “Interpretable Basis Decomposition for Visual Explanation.” In Computer Vision – ECCV 2018, 122–38. Springer International Publishing. https://doi.org/10.1007/978-3-030-01237-3_8.
Zhou, Peng, Xintong Han, Vlad I. Morariu, and Larry S. Davis. 2018. “Learning Rich Features for Image Manipulation Detection.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1053–61. IEEE. https://doi.org/10.1109/cvpr.2018.00116.
Zhu, Chenzhuo, Song Han, Huizi Mao, and William J. Dally. 2017. “Trained Ternary Quantization.” International Conference on Learning Representations (ICLR).
Zhuang, Fuzhen, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. 2021. “A Comprehensive Survey on Transfer Learning.” Proceedings of the IEEE 109 (1): 43–76. https://doi.org/10.1109/jproc.2020.3004555.
Zoph, Barret, and Quoc V. Le. 2017a. “Neural Architecture Search with Reinforcement Learning.” In International Conference on Learning Representations (ICLR).
———. 2017b. “Neural Architecture Search with Reinforcement Learning.” In International Conference on Learning Representations.