Parallelized processes

make -j8

The performance gained from a multi-core processor depends strongly on the software's algorithms and implementation. In particular, the possible gains are limited by the fraction of the software that can be parallelized to run on multiple cores simultaneously.
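That is Amdahl's law in a nutshell. As a back-of-the-envelope sketch (the 95% parallel fraction and the 8 CPUs are assumed numbers for illustration, not measurements from my box), even a job that is 95% parallelizable tops out at roughly a 5.9x speedup on 8 CPUs:

awk 'BEGIN { p = 0.95; n = 8; print 1 / ((1 - p) + p / n) }'   # prints 5.92593

The serial 5% dominates long before you run out of cores.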

Since I have a Linux rig whose Intel i7 shows up as 8 CPUs, I wondered how best to take advantage of its power.

htop shows what the CPUs are doing, and most of the time they're bone idle!

I attended a concurrent programming class at school and learnt that writing threaded programs is hard.

So how does one instead parallelize a bunch of tasks?

Use a Makefile! Make has an incredible switch, -j, which "specifies the number of jobs (commands) to run simultaneously".

So here is an example Makefile that takes my DVD *.vob dumps and converts them into .ogv files for use in my HTML5 video collection.

# Convert every */*.vob DVD dump into an .ogv alongside it
.SUFFIXES: .vob .ogv

# Every VOB one directory deep, and the matching .ogv targets
SRC = ${shell ls */*.vob}
OBJ = ${SRC:.vob=.ogv}
# The converter doing the real work
CC = ffmpeg2theora

all: ${OBJ}
	@echo Converted ${OBJ}

# Suffix rule: build foo.ogv from foo.vob, hiding ffmpeg2theora's chatter
.vob.ogv:
	@echo ${CC} $<
	@${CC} $< 2>/dev/null

clean:
	@echo ${OBJ}
	@rm -f ${OBJ}

So running make -j8 puts my i7 powerhouse to full use on a cold winter's day.
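For the curious, the invocation is nothing more than this (the hard-coded 8 matches my CPU count; GNU coreutils' nproc can report the number for you if you'd rather not guess):

time make -j8            # eight jobs, one per logical CPU
time make -j"$(nproc)"   # or let nproc supply the CPU count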

Update: Thanks to a comment, I've discovered I could do this simply using xargs, like so: time ls *.mov | xargs -n 8 -P 8 -Is ffmpeg2theora s. It's a little unwieldy though, isn't it?
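For completeness, here is a sketch of the same xargs trick adapted to the */*.vob layout the Makefile expects (the -P 8 is again just my CPU count):

find . -name '*.vob' -print0 | xargs -0 -n 1 -P 8 ffmpeg2theora

Unlike the Makefile, this re-encodes everything on each run rather than skipping files whose .ogv already exists.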
