For my project, I decided to work with Zopfli (https://github.com/google/zopfli). I chose Zopfli mostly because it specifically says it takes longer to perform compression, so I thought it would have more opportunities for optimization. Zopfli is data compression software that compresses into the gzip format, and it takes a significant amount of time to do its compression.
The build process for Zopfli was very simple: I cloned the repository and just ran the makefile, with no complications. I experimented with various file sizes to get a test case that took around 4 minutes to run, using /dev/urandom to generate a large amount of junk data.
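For reference, the setup looked roughly like the following. This is a sketch rather than the exact commands I ran; the file name and the 100 MB size are placeholders, since the size needed to hit a roughly 4 minute runtime will vary from machine to machine.

    # Clone and build Zopfli (the makefile needs no configuration)
    git clone https://github.com/google/zopfli.git
    cd zopfli
    make

    # Generate junk data from /dev/urandom; size here is a placeholder
    dd if=/dev/urandom of=test.bin bs=1M count=100

    # Compress the test file and time it; zopfli writes test.bin.gz
    time ./zopfli test.bin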
x86_64
Benchmarking -O3 compiler optimizations (default in the makefile)
Most runs of the program at -O3 took within 3-4 seconds of the time above (4 minutes and 6 seconds). The big thing that stuck out to me was how long it took: after testing a few other programs, Zopfli took by far the longest of any of them.
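To benchmark the lower optimization levels below, the build has to be repeated with a different -O flag. A minimal sketch of that step, assuming the optimization level lives in the makefile's CFLAGS variable; the exact flag set here is illustrative, and whatever other flags the makefile normally sets should be carried over:

    # A variable given on the make command line overrides the makefile's
    # own CFLAGS, so repeat any flags the makefile normally passes
    rm -f zopfli          # force a rebuild of the binary
    make CFLAGS="-O2 -W -Wall -lm"
    time ./zopfli test.bin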
Benchmarking -O2 compiler optimizations
Again, most runs at this setting took within a few seconds of the time above (4 minutes and 15 seconds), a small decrease in performance compared to the -O3 optimizations.
Benchmarking -O1 compiler optimizations
Most runs took within a few seconds of the time above (4 minutes and 22 seconds). Interestingly, there is less of a performance swing between the -O1 and -O2 settings than between -O2 and -O3. Overall, there is a difference of around 16 seconds between the -O1 and -O3 run times. I would be interested to see what kind of performance increases could be achieved on larger file compressions.
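Since individual runs drift by a few seconds, averaging several timed runs would make the comparison between levels more trustworthy. A rough sketch of that, reusing the placeholder test file from above:

    # Time five compression runs back to back; removing the output file
    # between runs keeps each invocation identical
    for i in 1 2 3 4 5; do
        rm -f test.bin.gz
        time ./zopfli test.bin
    done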