CPU vs. GPU

CPU: best summarized as a small number of complex computations

GPU: best summarized as a large number of simple operations

CUDA

While GPUs were originally designed for graphics rendering, their massively parallel architecture makes them well suited to general-purpose computing, not just simple graphics operations. For certain workloads, this parallelism lets a GPU handle computationally intensive tasks far more efficiently than a traditional CPU.

CUDA allows developers to harness the parallel processing capabilities of NVIDIA GPUs for general-purpose computing, including machine learning tasks.
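As a concrete illustration (not from the original notes), here is a minimal sketch of a CUDA kernel written in Python via Numba; it assumes `numba`, NumPy, and an NVIDIA GPU with a working CUDA driver are installed. Each GPU thread performs one trivial addition, which is exactly the "large number of simple operations" pattern described above:

```python
import numpy as np
from numba import cuda

# Sketch only: assumes numba and a CUDA-capable NVIDIA GPU are available.
@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)      # absolute index of this GPU thread
    if i < out.size:      # guard threads launched past the array end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# Numba copies the NumPy arrays to the GPU, launches one thread per
# element, and copies the result back.
vector_add[blocks, threads_per_block](a, b, out)
```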

If CUDA isn’t available, or your machine learning framework or library isn’t built with CUDA support, your project will fall back to running on the CPU by default. Most frameworks, such as TensorFlow and PyTorch, ship CPU implementations of their operators, so the same code still runs, just without GPU acceleration.
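For example, a common PyTorch idiom (a minimal sketch, assuming PyTorch is installed) selects the GPU when CUDA is available and otherwise falls back to the CPU transparently:

```python
import torch

# Pick the GPU if CUDA is available; otherwise every operator below
# runs on its CPU implementation instead.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(8, 32, device=device)
layer = torch.nn.Linear(32, 16).to(device)
y = layer(x)  # executes on the GPU if present, otherwise on the CPU
print(f"computed on: {y.device}")
```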

Running machine learning tasks on a CPU, however, can be significantly slower than on a GPU, especially for large-scale deep learning models and datasets: the GPU’s parallel architecture maps naturally onto the matrix and vector operations that dominate machine learning workloads.
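A rough way to see this for yourself (a hedged sketch; the actual numbers depend entirely on your hardware, and the GPU path is skipped when no CUDA device is present) is to time a large matrix multiplication on both devices with PyTorch:

```python
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
a @ b                                  # matmul on the CPU
print(f"CPU matmul: {time.perf_counter() - t0:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()           # wait for host-to-device copies
    t0 = time.perf_counter()
    a_gpu @ b_gpu                      # matmul on the GPU
    torch.cuda.synchronize()           # CUDA launches are asynchronous
    print(f"GPU matmul: {time.perf_counter() - t0:.3f}s")
```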
