Why GPU instances are better for hashcat
GPUs are better suited than CPUs for this workload because they are designed to perform work in parallel. When there are many identical jobs to perform (like running a password hashing function against millions of candidates), a GPU scales much better. That made me curious to benchmark Hashcat on the AWS EC2 p3 and g4 instance families.
To take advantage of the GPU capabilities of these EC2 instances, we need to:
- Run a supported Linux distribution.
- Install the NVIDIA driver.
- Optionally, install the CUDA toolkit.
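If you prefer to provision a stock Ubuntu instance yourself, the steps above can be sketched roughly as follows (package names and driver versions vary by distribution and release; treat this as an illustration rather than the exact procedure):

```shell
# On a fresh Ubuntu instance with an NVIDIA GPU attached:
sudo apt-get update

# Let Ubuntu select and install a suitable NVIDIA driver for the detected GPU
sudo ubuntu-drivers autoinstall

# Optional: install the CUDA toolkit from the Ubuntu repositories
sudo apt-get install -y nvidia-cuda-toolkit

# Reboot, then verify the driver can see the GPU(s)
nvidia-smi
```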
Instead of re-inventing the wheel, we can use the Deep Learning AMI (Ubuntu 18.04) provided by AWS. While these AMIs were created for machine learning, they are also great for Hashcat. The AMI comes prepackaged with the NVIDIA GPU driver (v26 of the AMI includes driver version 418.87.01) and the latest version of the CUDA SDK. It also ships with NVIDIA Docker, which lets us run Hashcat in a container that has access to the GPUs.
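As a sketch, launching such an instance with the AWS CLI might look like the following. The AMI ID, key pair, and security group below are placeholders; look up the current Deep Learning AMI ID for your region before running this:

```shell
# Placeholder values: replace ami-xxxxxxxx, my-key, and sg-xxxxxxxx
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type g4dn.12xlarge \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx \
  --count 1
```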
Running Hashcat in Docker
After spinning up the instance, run the Docker container below to start a Hashcat benchmark:
nvidia-docker run javydekoning/hashcat:latest hashcat -b
You can find both CUDA and OpenCL containers here:
The benchmarks below were run using OpenCL; performance on CUDA is nearly identical.
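If you are on Docker 19.03 or newer, the same benchmark can also be started with Docker's native GPU support instead of the nvidia-docker wrapper, assuming the NVIDIA container runtime is installed on the host:

```shell
# --gpus all exposes every GPU on the host to the container
docker run --gpus all javydekoning/hashcat:latest hashcat -b

# Or benchmark a single hash mode instead of the full suite, e.g. MD5 (mode 0)
docker run --gpus all javydekoning/hashcat:latest hashcat -b -m 0
```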
| p3.16xlarge   | 441.1 GH/s   | 60613.2 MH/s | 31588.0 MH/s |
| g4dn.12xlarge | 80492.2 MH/s | 11860.4 MH/s | 5466.9 MH/s  |
- The on-demand price-performance of the two instances is almost equal, with the g4 instances being slightly more cost-effective (based on us-east-1 pricing).
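To make the cost comparison concrete, here is a rough back-of-the-envelope calculation using the first benchmark column above. The on-demand prices are assumptions based on us-east-1 pricing at the time of writing and will drift over time:

```shell
# Assumed us-east-1 on-demand prices (USD/hour): p3.16xlarge ~24.48, g4dn.12xlarge ~3.912
awk 'BEGIN {
  p3_rate  = 441.1;   p3_price = 24.48   # GH/s, USD/hour
  g4_rate  = 80.4922; g4_price = 3.912   # 80492.2 MH/s = 80.4922 GH/s
  printf "p3.16xlarge:   %.1f GH/s per USD/hour\n", p3_rate / p3_price
  printf "g4dn.12xlarge: %.1f GH/s per USD/hour\n", g4_rate / g4_price
}'
```

With these assumed prices, the g4dn.12xlarge comes out around 20.6 GH/s per dollar-hour versus roughly 18.0 for the p3.16xlarge, which matches the observation above.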
Find a full list of benchmarks on my GitHub page here: