Thursday, December 11, 2008

Some more WiFi

Recently, I tried a simple experiment. In WiFi you can choose to transmit each packet at a different bit rate; e.g. for 802.11b, your options are 1, 2, 5.5, and 11 Mbps. There's a lot of research on getting the best throughput by controlling the Tx rate, usually to adapt to the flaky medium. I didn't find any research on the power consumption of these rate control algorithms. So here's what I did and found out ..

Linux kernel 2.6.22.15, hostAP driver, Prism3 chipset based PCMCIA adapter, and a really old laptop (P3 something or other, doesn't matter). I implemented ARF, AARF and SampleRate and compared their power consumption.
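For reference, here's a toy sketch of the ARF idea (not my actual hostAP code, which lives in the kernel in C): move up a rate after a run of consecutive successes, drop down after a run of failures. The thresholds and rate table are illustrative; real drivers tune these and AARF additionally doubles the success threshold after a failed probe.

```python
# Simplified ARF (Auto Rate Fallback) sketch. After SUCCESS_THRESHOLD
# consecutive successes we probe the next higher rate; after
# FAILURE_THRESHOLD consecutive failures we fall back one rate.
# Thresholds and rates are illustrative, not the actual driver values.

RATES = [1, 2, 5.5, 11]  # 802.11b rates in Mbps
SUCCESS_THRESHOLD = 10
FAILURE_THRESHOLD = 2

class Arf:
    def __init__(self):
        self.idx = 0          # start at the lowest rate
        self.successes = 0
        self.failures = 0

    @property
    def rate(self):
        return RATES[self.idx]

    def tx_result(self, ok):
        if ok:
            self.successes += 1
            self.failures = 0
            if self.successes >= SUCCESS_THRESHOLD and self.idx < len(RATES) - 1:
                self.idx += 1     # probe the next higher rate
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= FAILURE_THRESHOLD and self.idx > 0:
                self.idx -= 1     # fall back
                self.failures = 0

arf = Arf()
for _ in range(10):
    arf.tx_result(True)
print(arf.rate)  # ten successes in a row -> moved up to 2 Mbps
```

AARF's only change is to make SUCCESS_THRESHOLD adaptive, which cuts down on wasted probe packets when the higher rate keeps failing.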

Results:
This confirmed the theory that Bicket states in his thesis: lower bit rates don't necessarily give better delivery probabilities. Lower bit rates in fact consume more power. Intuitively, they take longer to transmit, so they keep the Tx circuit alive for longer. Given a flaky medium, these lower bit rates are major power hogs because of the packet retries involved. The following graphs show these results ..



This is with WiFi off, just for a baseline comparison. ~0.7V drop.



This is for AARF. ~1.5V drop. (ARF is similar)



This is SampleRate. ~1.0V drop.
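The airtime argument behind these numbers is easy to check with a back-of-envelope calculation. Assuming a 1500-byte frame and ignoring the PLCP preamble and MAC overhead, the on-air time per attempt at each 11b rate looks like this:

```python
# Back-of-envelope airtime for a 1500-byte frame at each 802.11b rate,
# ignoring PLCP preamble/header and MAC overhead for simplicity.
# Energy spent in the Tx circuit scales roughly with this on-air time,
# and every retry on a flaky link pays it again.

FRAME_BYTES = 1500

for mbps in [1, 2, 5.5, 11]:
    airtime_us = FRAME_BYTES * 8 / mbps  # bits / Mbps = microseconds
    print(f"{mbps:>4} Mbps: {airtime_us:7.1f} us on air")
```

At 1 Mbps the radio holds the Tx circuit 11x longer per attempt than at 11 Mbps, so a low rate only pays off if it avoids a lot of retries.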

Basically, SampleRate looks at a 10-second history and chooses the rate whose average per-packet transmission time (which folds in retries and losses) is lowest, while occasionally sampling other rates to keep the history fresh. In short, it's got an intelligent way of switching. (Read the thesis if you're really interested, or ask me :p)
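The core decision can be sketched like this. This is only the "pick the lowest average Tx time" part; the 10-second window bookkeeping and the periodic sampling of other rates are omitted, and the numbers are made up for illustration:

```python
# Toy sketch of SampleRate's core decision: among recently-tried rates,
# pick the one with the lowest average per-packet transmission time.
# Window expiry and the periodic sampling of other rates are omitted;
# the recorded times below are illustrative, not measured values.

from collections import defaultdict

class SampleRateLite:
    def __init__(self, rates=(1, 2, 5.5, 11)):
        self.rates = rates
        self.tx_times = defaultdict(list)  # rate -> recent tx times (us)

    def record(self, rate, tx_time_us):
        self.tx_times[rate].append(tx_time_us)

    def best_rate(self):
        # Lowest observed mean tx time wins; with no data yet,
        # fall back to the highest rate.
        tried = {r: sum(t) / len(t) for r, t in self.tx_times.items() if t}
        if not tried:
            return max(self.rates)
        return min(tried, key=tried.get)

sr = SampleRateLite()
# A lossy link: retries make 11 Mbps slower on average than 5.5 Mbps.
sr.record(11, 4000); sr.record(11, 3500)
sr.record(5.5, 2300); sr.record(5.5, 2400)
print(sr.best_rate())  # -> 5.5
```

Because the metric is total transmission time rather than raw loss rate, a rate that fails occasionally but finishes fast can still beat a "safer" slow rate, which is exactly the behaviour showing up in the voltage graphs above.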


This actually is quite important. Think embedded systems: every device out there has WiFi. Power-saving features directly affect the usability of these devices. My guess is that the differences in voltage drops will be even more significant on those kinds of devices.

More later ..
