Can the efficiency of computers that exploit idle and standby modes (which are more representative of “typical use”) be improved more rapidly than peak-output efficiency?
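To make the distinction concrete, the sketch below models typical-use efficiency as computations delivered per joule under a duty-cycle-weighted power draw. It is an illustration only, not a method from the source: all power figures, duty cycles, and the throughput number are hypothetical, chosen to show that cutting idle and standby power can raise typical-use efficiency even when peak (active) efficiency is unchanged.

```python
# Minimal sketch (hypothetical numbers throughout): typical-use efficiency
# under a duty-cycle-weighted power model, in computations per joule.

def typical_use_efficiency(ops_per_s_active: float,
                           duty: dict[str, float],
                           power_w: dict[str, float]) -> float:
    """Computations per joule, averaged over power states.

    duty: fraction of wall-clock time spent in each state (sums to 1).
    power_w: average power draw in each state, in watts.
    """
    assert abs(sum(duty.values()) - 1.0) < 1e-9
    avg_power = sum(duty[s] * power_w[s] for s in duty)   # watts (J/s)
    avg_throughput = duty["active"] * ops_per_s_active    # ops/s overall
    return avg_throughput / avg_power                     # ops per joule

# Hypothetical laptop-like profile: active only 10% of the time.
duty = {"active": 0.10, "idle": 0.60, "standby": 0.30}
power = {"active": 30.0, "idle": 8.0, "standby": 1.0}
print(typical_use_efficiency(1e9, duty, power))   # ~1.2e7 ops/J

# Halving idle and standby power (peak power and speed unchanged)
# improves typical-use efficiency by roughly 45% in this example.
power_low_idle = {"active": 30.0, "idle": 4.0, "standby": 0.5}
print(typical_use_efficiency(1e9, duty, power_low_idle))  # ~1.8e7 ops/J
```

Because the machine in this profile spends 90% of its time not computing at full output, typical-use efficiency is dominated by idle and standby power, which is why it can, in principle, improve faster than peak-output efficiency.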