First, some background. To the best of my knowledge, a single-pixel camera works as follows:
The lens focuses light onto an array of aimable
mirrors.
A pseudo-random subset of these mirrors reflects its light onto a single light sensor. The light sensor measures the summed brightness of the light reflected from those mirrors. After numerous samples, a compressed-sensing algorithm is used to reconstruct a probable set of light levels reaching each separate mirror, and thus produce a picture.
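
Here is a minimal sketch of that measurement model, under assumptions of my own: a small sparse test scene, pseudo-random 0/1 mirror patterns, and a simple ISTA (iterative soft-thresholding) solver standing in for whatever reconstruction algorithm a real single-pixel camera uses.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 64   # number of mirrors (scene pixels)
    m = 32   # number of pseudo-random samples, m < n

    # Sparse test scene: a few bright points on a dark background.
    x = np.zeros(n)
    x[rng.choice(n, size=5, replace=False)] = rng.uniform(0.5, 1.0, size=5)

    # Each row of Phi is one pseudo-random mirror pattern:
    # 1 = mirror aimed at the sensor, 0 = aimed away.
    Phi = rng.integers(0, 2, size=(m, n)).astype(float)

    # Each sample is the summed brightness reaching the single sensor.
    y = Phi @ x

    # Reconstruct with ISTA, which approximately solves
    #   min_x  0.5 * ||Phi x - y||^2 + lam * ||x||_1
    lam = 0.01
    L = np.linalg.norm(Phi, 2) ** 2   # step-size bound (spectral norm squared)
    x_hat = np.zeros(n)
    for _ in range(2000):
        grad = Phi.T @ (Phi @ x_hat - y)
        z = x_hat - grad / L
        x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    print("reconstruction error:", np.linalg.norm(x_hat - x))

The point is only the structure of the problem: far fewer samples than pixels, each sample mixing many pixels, and sparsity doing the rest.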
This idea eliminates the array of mirrors, leaving only a lens, a light sensor, and a pair of vibrators, which torque the camera (or maybe just the lens?) left and right and up and down in a pseudo-random manner.
Each sample, instead of being the sum of light levels from a scattered set of points, is the sum (differential?) of light levels along an arc.
It should then be possible to use compressed sensing with these samples to reconstruct brightnesses throughout the range of angles (field of view?) across which the camera vibrated, and thus produce a picture.
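
A rough sketch of how such samples might be simulated and reconstructed, under strong simplifying assumptions of mine: each vibration sweep is approximated as a straight pseudo-random chord across a small 2D scene rather than a true arc, each sample is the plain sum (not a differential) of the pixels the sweep crosses, and reconstruction reuses the same ISTA solver as above.

    import numpy as np

    rng = np.random.default_rng(1)

    side = 16
    n = side * side
    m = 200   # number of pseudo-random sweeps (samples)

    # Sparse test scene: a handful of bright pixels.
    scene = np.zeros((side, side))
    scene[rng.integers(0, side, 8), rng.integers(0, side, 8)] = 1.0
    x = scene.ravel()

    def sweep_row(rng, side):
        # One measurement row: 1s along a straight chord between two
        # pseudo-random points -- a crude stand-in for one vibration arc.
        r0, c0 = rng.integers(0, side, size=2)
        r1, c1 = rng.integers(0, side, size=2)
        steps = 2 * side
        rows = np.round(np.linspace(r0, r1, steps)).astype(int)
        cols = np.round(np.linspace(c0, c1, steps)).astype(int)
        row = np.zeros(side * side)
        row[rows * side + cols] = 1.0
        return row

    A = np.vstack([sweep_row(rng, side) for _ in range(m)])
    y = A @ x   # one sensor reading per sweep

    # Same ISTA reconstruction as in the first sketch.
    lam = 0.05
    L = np.linalg.norm(A, 2) ** 2
    x_hat = np.zeros(n)
    for _ in range(3000):
        grad = A.T @ (A @ x_hat - y)
        z = x_hat - grad / L
        x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

Whether this many sweep sums are enough would depend on how sparse the scene really is; the sketch only shows that each sample mixes many pixels, which is the same structure compressed sensing exploits in the mirror version.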