
Distributed Machine Learning and Gradient Optimization

-20% with coupon code: BOOKS
Price: 203.26
Regular price: 254.08
Offer ends: 2025-03-09
Free delivery to parcel lockers within 11-15 working days for orders from 10.00

Book description

This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. As such, implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that can speed up large-scale gradient optimization through both algorithmic improvements and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm to train a distributed machine learning model: parallel strategy, data compression, and synchronization protocol. Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
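To make the three design axes named above concrete, here is a toy, single-process sketch (not taken from the book) of data-parallel SGD: the parallel strategy is data parallelism over shards, the data compression is top-k gradient sparsification, and the synchronization protocol is bulk-synchronous (all simulated workers finish a step before the averaged update is applied). All function names and parameters are illustrative assumptions.

```python
import numpy as np

def worker_gradient(w, X, y):
    """Least-squares gradient computed on one worker's data shard."""
    residual = X @ w - y
    return X.T @ residual / len(y)

def top_k_sparsify(g, k):
    """Data compression: keep only the k largest-magnitude entries."""
    idx = np.argsort(np.abs(g))[-k:]
    sparse = np.zeros_like(g)
    sparse[idx] = g[idx]
    return sparse

def train(num_workers=4, steps=200, lr=0.1, k=2):
    """Simulate synchronous data-parallel SGD with top-k compression."""
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0, 0.5, 1.0])
    X = rng.normal(size=(400, 4))
    y = X @ true_w
    # Parallel strategy: each simulated worker holds one shard of the data.
    shards = np.array_split(np.arange(400), num_workers)
    w = np.zeros(4)
    for _ in range(steps):
        # Synchronization protocol: bulk-synchronous -- every worker's
        # (compressed) gradient is collected before the shared model updates.
        grads = [top_k_sparsify(worker_gradient(w, X[s], y[s]), k)
                 for s in shards]
        w -= lr * np.mean(grads, axis=0)
    return w, true_w
```

Running `train()` recovers a vector close to `true_w`; in a real system each shard would live on a separate machine and the averaging step would be an all-reduce over the compressed gradients.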

Information

Authors: Jiawei Jiang, Ce Zhang, Bin Cui
Series: Big Data Management
Publisher: Springer Nature Singapore
Publication year: 2022
Number of pages: 184
ISBN-10: 981163419X
ISBN-13: 9789811634192
Format: 241 x 160 x 16 mm, hardcover
Language: English
